
Online User Ratings Are Useless, But We Believe Them Anyway

Average ratings correspond only weakly with objective measures of product quality, but there are still smart ways to read and use reviews.

Online user ratings, like those found on Amazon and every other shopping site, are pretty much useless. A product with a high rating is barely more likely to be good than one with a low rating.

"The likelihood that an item with a higher user rating performs objectively better than an item with a lower user rating is only 57 percent," Bart de Langhe of University of Colorado Boulder said in a news release "A correspondence of 50 percent would be random, so user ratings provide very little insight about objective product performance."

According to de Langhe’s paper, buyers place a lot of faith in these reviews, even when there are only a few reviews to go by. We’re just as likely to buy a product with a couple of five-star ratings as we are one with hundreds of good reviews.

"This is a mistake," de Langhe told CU News. "Oftentimes, there are just not enough ratings for a product or there is too much disagreement among reviewers. In this case, consumers should not trust the average very much, but they do nonetheless."

As part of the study, de Langhe’s team compared the reviews of 1,272 products with the products’ Consumer Reports scores and found that they don’t match up. De Langhe calls Consumer Reports "the most commonly used measure of objective quality in the consumer behavior literature," which makes it a decent-enough yardstick, although Consumer Reports itself is a pretty bad source for reviews. It may be objective, but that’s the problem—the tests rely more on feature checklists than on actual real-world, long-term use.

De Langhe also used resale value as a metric for quality, which might just be the smartest part of the paper. "Products with better reliability and performance retain more of their value over time," he says, adding that user ratings seldom correlate with resale value, further undermining their validity.

I used to be a professional gadget reviewer, a job that involved reading lots of customer reviews. After a while you spot patterns. Good reviews come from people who have just purchased a product they’re excited about and want to tell somebody about it. These "reviews" arrive hours or days after the package is opened. Most of the negative reviews are complaints about the features of the product rather than its performance (usually proving the buyer didn’t read the product description), or complaints about the shipping or packaging. Very few reviews are based on extended, real-world use. Occasionally a user will return after a year with an updated opinion, and those are the ones you should pay attention to.

The study also found that when comparing two items of similar objective quality, users tend to give a higher rating to the more expensive item, or to the item from a premium brand.

What can we, as consumers, do about this? After all, as de Langhe states rather dryly, "You can’t assume that people follow such a scientific approach before they rate products online." Your best bet is to find a reviewer you can trust, and follow their advice. The problem there is that if you’re buying in a product category you know nothing about, you don’t know who to trust. So you should probably do what all smart consumers do these days: Check to see what The Wirecutter recommends, and just buy that. You literally can’t go wrong.
