Current Landscape of Online Reviews

Guiding Question: Do user-generated reviews actually reflect “quality”?

The Breakdown: Consumer Perspective

Take a second to think about the following stats:

82% of adults consult online reviews before buying something for the first time, and 40% always do

In general marketing contexts (i.e., when we’re scrolling through our Facebook feeds or looking for a YouTube video to watch because Netflix is too much of a commitment), we have our guard up when we see an ad - we know we’re being sold to, which is why it’s harder to persuade us. However, when we get feedback from ordinary people with no profit motive, the opinion seems more genuine and trustworthy. That is one of the main reasons online reviews can be so powerful. That being said, one might wonder: do reviews actually help consumers make better purchasing decisions?

Fernbach et al. conducted a study analyzing 1,272 products across 120 vertically differentiated product categories, comparing their Amazon reviews with their Consumer Reports scores. Here were the main takeaways:

  • They observed a lack of convergence with Consumer Reports scores - the most commonly used measure of objective quality in the consumer behavior literature

There is low correspondence between Amazon stars and Consumer Reports ratings - they agree only ~57% of the time (a toy sketch of this kind of comparison follows the list).

  • The Amazon review scores were often based on insufficient sample sizes, which limits their informativeness.
  • The ratings did not predict resale prices in the used-product marketplace. One would expect a product with a five-star rating to have a higher resale value than a product with a four-star rating; in fact, there was no statistically significant relationship.
  • Amazon ratings were higher for more expensive products and premium brands (controlling for Consumer Reports scores).
  • When forming quality inferences and purchase intentions, consumers weight the average rating heavily compared to other quality cues like price and the number of ratings.
  • Consumers fail to moderate their reliance on the average user rating based on whether the sample size is sufficient.
  • Consumers’ trust in the average user rating as a cue for objective quality appears to be based on an “illusion of validity.”
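To make the correspondence finding concrete, here is a minimal sketch (in Python, with entirely made-up product names and numbers) of one way such a comparison could be run: for every pair of products in the same category, check whether the product with the higher Amazon average also has the higher Consumer Reports score. This only illustrates the general idea, not the paper’s actual methodology or data.

```python
from itertools import combinations

# Hypothetical products: (category, Amazon average star rating, Consumer Reports score)
products = [
    ("blender", 4.8, 71), ("blender", 4.3, 84), ("blender", 3.9, 77),
    ("toaster", 4.6, 65), ("toaster", 4.1, 80),
    ("vacuum",  4.7, 90), ("vacuum",  4.2, 62), ("vacuum",  4.0, 58),
]

agree = total = 0
for (cat_a, stars_a, cr_a), (cat_b, stars_b, cr_b) in combinations(products, 2):
    # Only compare distinct products within the same category, skipping ties.
    if cat_a != cat_b or stars_a == stars_b or cr_a == cr_b:
        continue
    total += 1
    # Does the product with more stars also have the higher Consumer Reports score?
    agree += (stars_a > stars_b) == (cr_a > cr_b)

print(f"Correspondence: {agree}/{total} pairs = {agree / total:.0%}")
```

If star ratings carried no information about objective quality at all, agreement would hover around 50% of pairs, which is why a figure in the high fifties is so underwhelming.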

Other Takeaways: Online reviews are subject to many biases:

  1. A more expensive product/service is viewed as being of better quality.
  2. Reviewers tend to rate products with better display pictures and stronger brand recognition more positively.
  3. “Diva Bias”: Reviewers with extreme opinions, whether really positive or really negative, are more likely to leave a review. Some reviewers also just want to show off and brag (wannabe influencers); accurate, objective information isn’t necessarily their main goal. (A toy simulation of this selection effect follows this list.)
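To see how this selection effect can distort ratings, here is a toy simulation (entirely my own; the opinion distribution and posting probabilities are invented for illustration): every customer forms an opinion on a 1-5 scale, but people at the extremes are far more likely to actually post a review.

```python
import random
from collections import Counter

random.seed(42)

# Hypothetical "true" satisfaction: most customers are moderately happy (mean ~3.5).
true_opinions = [min(5, max(1, round(random.gauss(3.5, 1.0)))) for _ in range(100_000)]

# Invented posting probabilities: extreme opinions are far more likely to become reviews.
post_prob = {1: 0.50, 2: 0.04, 3: 0.02, 4: 0.06, 5: 0.40}

posted = [stars for stars in true_opinions if random.random() < post_prob[stars]]

print("True mean opinion:   ", round(sum(true_opinions) / len(true_opinions), 2))
print("Mean of posted stars:", round(sum(posted) / len(posted), 2))
print("Posted distribution: ", dict(sorted(Counter(posted).items())))
```

The posted ratings pile up at 5 with a secondary bump at 1 and a dip in the middle - the familiar J-shaped distribution - and the posted average drifts well above the underlying average opinion.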

The Breakdown: Business Perspective

A bad review can indisputably hurt the bottom line

A one-star increase in Yelp rating leads to a 5-9 percent increase in revenue [7]

Some businesses try to combat this by subjecting users to a non-disparagement clause: basically, if a consumer wants to do business with an establishment, they have to agree not to talk negatively about it online. These clauses can be problematic for several reasons:

  1. They are hidden deep in Terms of Service agreements and not made readily available.

  2. They target only negative reviews - information that can actually be helpful to other users - creating a “lopsided database.”

  3. They give an unfair advantage to businesses that have these clauses, since most of their reviews will be positive.

Consequently, the Consumer Review Fairness Act of 2016 was passed, making it illegal to bury non-disparagement clauses in Terms of Service agreements; they need to be clearly communicated to the customer. The act protects honest negative reviews, not false ones.

What does this all mean?

  • Extreme reviewers are overrepresented
  • A smaller number of reviewers often leads to less reliable ratings (see the sketch after this list)
  • The effects of positive and negative reviews on businesses are very real
  • This is a complex problem with a lot of moving parts…
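One common way to factor sample size into a rating - offered here as a hedged sketch rather than anything from the cited papers - is to shrink each product’s average toward a global prior mean in proportion to how few reviews it has, the same idea behind IMDb-style weighted ratings. The prior mean and prior weight below are assumed values chosen purely for illustration.

```python
def smoothed_rating(avg: float, n: int, prior_mean: float = 3.5, prior_weight: int = 25) -> float:
    """Shrink a product's average rating toward a prior mean when it has few reviews.

    `prior_weight` acts like a count of "pseudo-reviews" at the prior mean; both
    defaults are illustrative assumptions, not tuned constants.
    """
    return (avg * n + prior_mean * prior_weight) / (n + prior_weight)

# A perfect 5.0 from 3 reviews ends up ranked below a 4.5 from 500 reviews.
print(round(smoothed_rating(5.0, 3), 2))    # 3.66
print(round(smoothed_rating(4.5, 500), 2))  # 4.45
```

The prior_weight parameter controls how much evidence it takes to move away from the prior: the larger it is, the more reviews a product needs before an extreme average is taken at face value.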

It is this complexity and intrigue around this phenomenon that led me to work on my project, The Language of Food: Improving Dining Experiences with NLP and Recommender systems.


Sources

Navigating by the Stars: Investigating the Actual and Perceived Validity of Online User Ratings

What We Know and Don’t Know About Online Word-of-Mouth: A Systematic Review and Synthesis of the Literature

Palmer v. KlearGear.com

H.R.5111 - Consumer Review Fairness Act of 2016

Understanding the Consumer Review Fairness Act of 2016

Doctored Reviews

Reviews, Reputation, and Revenue: The Case of Yelp.com

Why do Online Reviews have a J-Shaped Distribution

Write a negative online review and get sued? It can happen, but maybe not for long

Incentives Can Reduce Bias in Online Reviews

