A new scam that weaponizes online reviews to extort money from small business owners is on the rise in the United States. Scammers flood platforms such as Google Maps, the review site Yelp, and TripAdvisor with fake malicious reviews, then demand payment to remove them.
As of the 14th (local time), according to industry sources, U.S. regulators, and the platforms themselves, this kind of review-based extortion is spreading across the United States. According to ConsumerAffairs, a private U.S. consumer group, in June the owner of a small interior construction company in Los Angeles received a WhatsApp message saying, "An order came in to post twenty fake one-star reviews about your company." The sender's country code was Pakistan. When pressed, the sender followed up: "If you pay, I'll block that order." The company had previously paid $150 to a contact using a Bangladesh number to have malicious reviews removed. Based on that experience, the owner sent $100 this time as well. But a few weeks later, a dozen or so fake reviews appeared on Google Maps all at once. The company's rating, once a perfect 5.0, plunged into the 3s. A reputation built over eight years was destroyed in a single day by one person. This is the classic pattern of a so-called "review attack": once the money is handed over, the scammers switch accounts and strike again.
Scammers mainly target small operations whose livelihoods depend on their online reputations, such as moving companies, roofing contractors, and appliance repair shops. According to ConsumerAffairs, they do not leave vague slams like "the service was bad." They write detailed, malicious accounts such as "a mover deliberately dropped a box right in front of me," sowing confusion among consumers who read the reviews.
Robert Reyes, who runs a moving company in Orlando, told the Florida broadcaster WKMG that dozens of fake reviews such as "total scam company" and "they arrived hours late for the reservation" had been posted in a row, adding, "I felt helpless because I had no idea when this review attack would stop." A Google account under the name Ezra Max that left bad reviews for his moving company posted a review calling a nearby roofing contractor "a bad company" the very next day, suggesting coordinated activity.
Experts say that while fake reviews used to be written by people, scammers have recently begun deploying artificial intelligence (AI). According to the watchdog group Fake Review Watch, online review scammers now use AI across multiple fake accounts to carry out these crimes in a tightly coordinated way.
The number of malicious reviews removed each year by review platforms alone shows the scale of the ecosystem on which "review extortion" feeds. Google said it blocked or deleted more than 240 million policy-violating reviews last year. Separately, it sanctioned 12 million fake business profiles and 900,000 accounts that repeatedly violated review rules. TripAdvisor also caught 2.7 million fake reviews over the past year. Trustpilot, a website where nearly 1 million reviews are posted every month, removed 4.5 million, nearly four months' worth. Even so, victims say the vicious cycle continues, with scammers posting malicious reviews again from different accounts.
Kelly Kurlizek, head of the online reputation management firm Better Reputation, told the business magazine Entrepreneur, "It turns out that more than 90% of potential consumers look up reviews before purchasing major services or products," adding, "Even having only bad or mediocre reviews means losing potential customers." For legal issues such as defamation or blackmail, the most common approach is the "legal removal request" process. Experts advise first submitting a legal notice directly to the platform to alert it that it is being used to commit a crime.
Recently, moves to curb malicious reviews at the national level have accelerated. The Federal Trade Commission (FTC) finalized and put into effect the Consumer Reviews and Testimonials Rule in October last year, banning paid-for fake reviews, AI-generated fake reviews, and reviews written without real experience. Intentional violations can draw civil penalties of up to $50,000 each, or about 69 million won.
In the United Kingdom, after an investigation by the Competition and Markets Authority (CMA), Google pledged to attach warning labels to suspected fake reviews and to introduce or expand measures such as restricting new reviews from the offending accounts.