Thanks for the responses to the post on what it means to be "under-rated" or "over-rated". I will summarize two strands of thought from the readers, and also offer some tentative ideas of my own based on your responses.
***
1. Us vs. Them
Several of you agreed with the last section of my post, where I defined "under-rated" to mean that my rating, or the rating of people like me, is higher than the average rating, presumably taken over a larger group of people or the entire population.
This definition implies that I, or people like me, are "outliers" in the population of raters. Then it's possible to use standardized scores and claim that anyone whose rating is, say, 3 SD above the average rating believes the object being rated is "under-rated". Taken further, "under-rating"/"over-rating" becomes a measure of dispersion of the individual ratings. If variability is low, we won't find such subgroups dissatisfied with the average rating; in other words, when there is consensus, the object can't be over-rated or under-rated.
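To make that concrete, here's a minimal sketch of the standardized-score idea. The crowd, the small subgroup of fans, and the rating scale are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a large crowd rates an object around 6,
# plus a small subgroup of fans who rate it near 9.5.
crowd = rng.normal(6.0, 0.8, size=1000).clip(1, 10)
fans = rng.normal(9.5, 0.3, size=20).clip(1, 10)
ratings = np.concatenate([crowd, fans])

mean, sd = ratings.mean(), ratings.std(ddof=1)
z = (ratings - mean) / sd

# By the definition above, anyone whose rating sits more than
# 3 SD above the average believes the object is "under-rated".
print(f"average rating = {mean:.2f}, SD = {sd:.2f}")
print(f"raters more than 3 SD above average: {(z > 3).sum()} of {len(ratings)}")
```

The fans get flagged; members of the crowd who merely gave generous scores do not.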
2. Popularity
Whether the rated object is popular (i.e. how many reviews it gets) is considered very important by many of you. Based on the above, it would seem that the key is not exactly popularity but rather variability. Popularity and variability are linked via the law of large numbers: if an object receives a lot of ratings, we expect the variability of the average rating to fall in proportion to one over the square root of the number of ratings. However, don't forget the intrinsic standard deviation: the spread of individual ratings doesn't shrink as ratings accumulate; only our uncertainty about the average does.
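A quick simulation makes the distinction clear. The parameters (true mean 7.0, intrinsic SD 1.5) are assumptions for illustration; notice that the intrinsic SD never changes, while the SD of the average shrinks like one over the square root of n:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 1.5  # assumed intrinsic SD of individual ratings

for n in (10, 100, 1000, 10000):
    # Draw 2000 sets of n ratings each and average every set.
    samples = rng.normal(7.0, sigma, size=(2000, n))
    sd_of_avg = samples.mean(axis=1).std(ddof=1)
    print(f"n={n:>5}: SD of the average = {sd_of_avg:.3f} "
          f"(theory: sigma/sqrt(n) = {sigma / np.sqrt(n):.3f})")
```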
Avner led us to the Anime News Network (ANN), where they use a Bayesian (or shrinkage) estimator to correct for low response rates. Here, we are talking about a set of ratings, say on a set of movies or restaurants, and the effect is to reduce the dispersion of average ratings across the set. This is definitely a must-do for any review site, but I don't see the relevance to under-rating or over-rating.
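ANN doesn't spell out its formula, but a typical shrinkage (Bayesian average) estimator looks something like the sketch below; the prior weight m and the numbers are my own illustrative choices:

```python
def bayesian_average(item_sum, item_count, global_mean, m=25):
    """Shrink an item's raw average toward the global mean.

    m acts as a prior sample size: an item with few ratings is
    pulled strongly toward global_mean; an item with many ratings
    keeps (roughly) its own raw average.
    """
    return (item_sum + m * global_mean) / (item_count + m)

# A title rated 9.0 on average by only 4 people, versus a title
# rated 8.2 by 5000 people, with a global mean of 7.0:
print(bayesian_average(9.0 * 4, 4, 7.0))        # ~7.28
print(bayesian_average(8.2 * 5000, 5000, 7.0))  # ~8.19
```

The fewer ratings a title has, the harder its average is pulled toward the global mean, which is exactly why the dispersion of average ratings across titles shrinks.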
I actually disagree with ANN's definition of under-rated: "Based on the premise that titles deserve to be seen in equal proportion to how well rated they are, the value in the "#" column is the number of positions by which this title would climb in the Most Viewed list if it was seen as much as it was liked (according to bayesian rating)."
The premise doesn't make sense. If there were perfect correlation between rating and popularity, then the "rating" would be just a popularity contest. It's basically saying that a bad movie should have few viewers, and the worst movie none at all. The reality is that viewership (like the number of reviews) is a function of many things, not just the quality of the movie: star power and marketing spend, for example.
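Premise aside, here is roughly what the "#" column computes, as I read the description. The titles, ratings, and view counts are hypothetical, and this is my reconstruction, not ANN's code:

```python
# Sketch of ANN's rank-climb measure, as I read their description.
titles = [
    ("Title A", 8.9,  50_000),   # (name, bayesian rating, views)
    ("Title B", 8.1, 900_000),
    ("Title C", 7.4, 400_000),
    ("Title D", 6.0, 700_000),
]

by_views = sorted(titles, key=lambda t: -t[2])
by_rating = sorted(titles, key=lambda t: -t[1])
view_rank = {name: i for i, (name, _, _) in enumerate(by_views)}
rating_rank = {name: i for i, (name, _, _) in enumerate(by_rating)}

for name, _, _ in titles:
    climb = view_rank[name] - rating_rank[name]
    print(f"{name}: would climb {climb} positions if seen as much as liked")
```

Title A, highly rated but rarely watched, climbs three spots; by ANN's definition it is the "under-rated" one.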
3. Conjoint analysis
Now, I come to Casey's comment about popularity as a function of quality and many other factors. The point is that something that is under-rated has high popularity and low quality.
This leads me down the following path. First, instead of talking about popularity, let's talk about the rating. The rating is a function of many attributes, including quality, star power, marketing prowess, etc. Second, assume that I (or people like me) have a value function (some would call it a utility function). In Casey's example, the value function depends on quality alone and nothing else. In general, the value function is also a function of many attributes, including quality, star power, marketing prowess, etc. This function describes how much weight an individual places on different attributes of the product being rated, and it will vary from person to person.
What this means is that differences in ratings between people are due to the different weights in our individual value functions. Also, different objects being rated have different combinations of attributes.
Since Casey cares only about quality while the average person weighs quality plus other factors, there will be objects for which those other factors drag down the average rating, opening a gap between Casey's rating and the average rating; those are the objects Casey would call under-rated.
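Here's a tiny numerical sketch of that gap. The attributes, weights, and movies are all invented:

```python
import numpy as np

# Hypothetical attributes for three movies:
# (quality, star_power, marketing), each on a 0-10 scale.
movies = {
    "Movie X": np.array([9.0, 2.0, 1.0]),  # great, but no stars or ads
    "Movie Y": np.array([5.0, 9.0, 9.0]),  # mediocre, heavily promoted
    "Movie Z": np.array([7.0, 7.0, 7.0]),
}

casey_weights = np.array([1.0, 0.0, 0.0])    # quality is all that matters
average_weights = np.array([0.5, 0.3, 0.2])  # crowd weighs other factors too

for name, attrs in movies.items():
    casey = casey_weights @ attrs
    crowd = average_weights @ attrs
    print(f"{name}: Casey {casey:.1f}, crowd {crowd:.1f}, gap {casey - crowd:+.1f}")
```

In this toy setup, Movie X is under-rated from Casey's vantage point and Movie Y is over-rated, even though the crowd gives Y and Z identical ratings.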
This setup reminds me of conjoint analysis.
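Conjoint analysis essentially runs this machinery in reverse: present people with objects whose attributes are known, collect their ratings, and estimate the weights in each person's value function. A least-squares sketch, again with invented data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical: 50 movies described by (quality, star_power, marketing),
# and one rater whose true (unknown) weights we try to recover.
X = rng.uniform(0, 10, size=(50, 3))
true_weights = np.array([0.5, 0.3, 0.2])
ratings = X @ true_weights + rng.normal(0, 0.3, size=50)  # noisy ratings

# Least-squares estimate of the rater's attribute weights.
est, *_ = np.linalg.lstsq(X, ratings, rcond=None)
print("estimated weights:", est.round(2))  # should be near [0.5, 0.3, 0.2]
```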
***
I love this topic and think it applies widely, from products to sports to movies. Oddly enough, I get tripped up on the definition of "under-rated".
Take movies as an example: there are readily available sources of information about popularity (box office receipts) and ratings (user ratings / critical ratings). By the definition given above ("The point is that something that is under-rated has high popularity and low quality."), we would conclude that Titanic is one of the most under-rated movies of all time, given its overwhelming popularity coupled with poor ratings.
But a movie critic would laugh at you if you said that Titanic was under-rated. It seems that when we use the term "under-rated", we mean the exact opposite: something is under-rated by society (thus, a low popularity level), but we (critics / experts) think it deserves a higher rating.
Interesting discussion. I'm anxious to see how others would attempt to quantify under-rated-ness...
Posted by: Ryan Bower | 02/24/2012 at 12:40 PM