
Why haven’t ubiquitous user reviews banished bad products from virtual shelves?

Amazon currently offers prospective toaster buyers a choice of several thousand different models. They range from adequate choices that cost $20 or less, to what must be the Maserati of toasters, which costs hundreds of dollars. This being Amazon, almost all of the toasters have been rated by people with plenty of free time, and the bell curve holds true; most of them land somewhere in the three- or four-star range. One five-star toaster apparently has an exceptionally vocal fan base.

With so much information readily available on which toasters perform and which ones stink, how do so many bad options survive? Why does anyone bother making a poor product anymore if they know they'll just be eviscerated on the Internet? As it turns out, user reviews haven't been a magic bullet for consumers, for several reasons.

The lowest-rated toaster on Amazon is a model by Rival that is as basic as you can possibly imagine. It has an average rating of 1.5 stars from six reviewers. What could inspire such vigorous toaster hate? Complaints range from inconsistent temperature settings to alleged safety issues. Several reviewers wonder why no recall has been issued yet.

What are the chances that anyone would buy this toaster from Amazon, considering these reviews? Pretty good, apparently. Amazon had only two left in stock at the time of this writing. The price of $6.23 for a brand-new one probably has something to do with that. At that price, most consumers can afford to gamble and see if it's really that bad.

So our desire for a deal seems to override the advice of our fellow consumers. But there's another factor that keeps user reviews from necessarily affecting our purchasing decisions: we're not sure if they're real. While the reviews of the Rival seem too detailed (and weirdly passionate, given the subject matter) to be fake, we've all seen plenty of one- or two-line user reviews that are so banal as to be suspicious. In fact, there's an entire economy built around fake reviews.

Visit freelance writing job boards like Craigslist, oDesk, or Elance and you'll find no shortage of writers, often overseas, willing to write fake reviews. Sometimes the rate is abysmal, but not always. As Suw Charman-Anderson explained in an article in Forbes last August, the going rate for fake book reviews on Amazon can run to $6,000 or more for a batch of 50. She further posits that Amazon has no incentive to weed out the fakes because they seem not to affect sales. If one product is receiving bad reviews, legitimately or not, a customer will simply pick another product from Amazon's vast database. Or simply roll the dice on a $6.23 fire hazard.

With this practice well established, it’s little wonder that we don’t simply look at the best-rated product and open our wallets. 

There have been some efforts to curb this trend. Yelp received some positive press in November for its efforts to expose businesses that pay for positive reviews on its site. Staffers went so far as to go undercover as potential writers on Craigslist. Yelp's entire business model rests on the legitimacy of its reviews, so it has a strong incentive to take the task of policing them more seriously than Amazon does. If enough people get tired of fishy reviews, Yelp simply goes out of business.

Since 2009, the Federal Trade Commission has tried to stick a finger in the dam and stem the tide of fake reviews. The Commission revised its Endorsement and Testimonial Guides that year to deter such practices, but the FTC only has the power to levy fines, and it simply has too much work on its docket to investigate all of the fake reviewing going on.

Perhaps the FTC should check with the researchers at Cornell who have published a paper outlining the possibilities for automatic detection of "opinion spam." It's a fascinating read, not only for people interested in online commerce but also for anyone interested in psychology.
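The Cornell work treats spotting fake reviews as a text-classification problem: deceptive reviews tend to use different language (lots of superlatives, few concrete product details) than honest ones. The paper's actual models and features are far more sophisticated, but the core idea can be sketched with a toy naive Bayes classifier over word counts. Everything below, from the sample reviews to the labels, is invented purely for illustration:

```python
import math
from collections import Counter

# Toy corpus: hypothetical reviews with made-up labels, for illustration only.
# Real training data (as in the Cornell study) would be thousands of reviews.
TRAIN = [
    ("amazing product best purchase ever highly recommend", "deceptive"),
    ("best ever amazing amazing must buy now", "deceptive"),
    ("toast came out uneven on the lowest setting", "truthful"),
    ("the left slot runs hotter than the right slot", "truthful"),
]

def train(examples):
    """Tally word frequencies per label and example counts per label."""
    counts = {}         # label -> Counter of word frequencies
    totals = Counter()  # label -> number of training examples
    for text, label in examples:
        totals[label] += 1
        counts.setdefault(label, Counter()).update(text.split())
    vocab = {word for c in counts.values() for word in c}
    return counts, totals, vocab

def classify(text, counts, totals, vocab):
    """Return the label with the highest smoothed log-probability."""
    n = sum(totals.values())
    scores = {}
    for label, c in counts.items():
        logp = math.log(totals[label] / n)     # class prior
        denom = sum(c.values()) + len(vocab)   # Laplace smoothing
        for word in text.split():
            logp += math.log((c[word] + 1) / denom)
        scores[label] = logp
    return max(scores, key=scores.get)

counts, totals, vocab = train(TRAIN)
print(classify("amazing best product ever buy now", counts, totals, vocab))  # deceptive
print(classify("the lowest setting runs hotter", counts, totals, vocab))     # truthful
```

Gushing, detail-free text scores higher under the "deceptive" word distribution, while concrete complaints about slots and settings score as "truthful," which is roughly the signal the real detectors exploit at much larger scale.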

Whether we like it or not, we live in a crowd-sourced economy with mobile technology making it possible to check reviews on a product even from brick-and-mortar stores. But unless we actually trust these reviews – and learn to look past single-digit price tags – the glut of duds that still clutter up online store shelves isn’t going anywhere.
