I have often grumbled about "story time!", the practice of spinning grand stories based on tiny morsels of data. It's not that I disapprove of story-telling per se -- it's that the story-teller has to find evidence to support his or her stories.
A few days ago, I dissected the Trefis financial model used to support a $100 billion valuation for Facebook. That valuation requires believing a handful of aggressive assumptions. So it is a pleasure to find in Business Insider some actual data to help us assess the credibility of some of these assumptions.
For those not in this industry, CTR is the number of ads that get clicked on divided by the number of ads shown to Facebook users; and CPC is the average dollars paid by advertisers to Facebook for each click on their ads.
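These two definitions can be made concrete with a quick sketch. The numbers below are purely illustrative, not taken from the chart:

```python
# Illustrative numbers only, chosen to match the orders of magnitude
# discussed later in this post.
ads_shown = 1_000_000   # impressions: ads shown to Facebook users
clicks = 100            # how many of those ads were clicked
spend = 130.0           # total dollars the advertiser paid for those clicks

ctr = clicks / ads_shown   # clickthrough rate: clicks per impression
cpc = spend / clicks       # cost per click: dollars paid per click

print(f"CTR = {ctr:.2%}")    # prints: CTR = 0.01%
print(f"CPC = ${cpc:.2f}")   # prints: CPC = $1.30
```

Note that a CTR of 0.01% sits at the bottom of the range shown in the chart.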
The chart does not provide per-advertiser data. Instead, advertisers are grouped by the industry they are in (health care, internet, etc.), and the aggregate results are shown.
The one thing that should jump out at us is the range of clickthrough rates: it's mostly in the range of 0.01 to 0.1. (The last one -- Tabloids and Blogs -- seems mislabeled, and the last two rows are sufficiently different from the rest that one would want to check the numbers again.)
Mind you, that is 0.01% to 0.1%. What does 0.01% mean? Yes, that's 100 clicks per 1 million ads shown to Facebook users. (My friend Augustine long ago pointed out Facebook's abysmal metrics, relative to other advertising platforms. Look here for his perspective.)
Now, put yourself in the shoes of say Pfizer showing ads to Facebook users. Clicks don't equal revenues. Only some proportion of clicks would turn into sales. For illustration, say 5% of clicks lead to sales. With 100 clicks, they get 5 sales. They have to show 1 million ads to get 100 clicks. The clicks cost them $130 according to the data in the chart. If the value of each sale is more than $130/5 = $26, then Pfizer just about breaks even on the ads.
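The break-even arithmetic above can be laid out step by step. The 5% conversion rate is the illustrative assumption from the text, not a measured number:

```python
ads_shown = 1_000_000   # impressions needed at this clickthrough rate
ctr = 0.0001            # 0.01%, the low end of the chart's range
conversion = 0.05       # assumption: 5% of clicks turn into sales
cpc = 1.30              # $130 for 100 clicks, per the chart's data

clicks = ads_shown * ctr            # 100 clicks
sales = clicks * conversion         # 5 sales
ad_cost = clicks * cpc              # $130 spent on the campaign
breakeven_value = ad_cost / sales   # value per sale needed to break even

print(f"${breakeven_value:.2f}")    # prints: $26.00
```

If each sale is worth more than that $26 to the advertiser, the campaign at least pays for itself.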
What's on the advertiser's mind?
One strategy is to flood Facebook with ads. More ads mean more clicks, even if the clickthrough rate is tiny. Perhaps unexpectedly, this tactic works only to a limited extent: it bumps up against the law of diminishing returns. When you double the number of ad impressions, say, the clickthrough rate typically falls.
Another strategy is to use statistical models to selectively show ads only to Facebook users most likely to click on them. This raises the clickthrough rate. What it doesn't solve is the "quality" of Facebook users, or put differently, their tendency to pay attention to advertising.
Now, fancy yourself the person trusted to build these statistical models. You are trying to predict who's going to click on a given type of ad and who's not. What you have at your disposal is historical data on who got shown what ad, and whether they clicked or not. (You would typically want to grab any other data you can get your hands on, such as what the user has been doing on Facebook recently. You can get more creative still, such as what Facebook has been secretly doing, described here.)
Let's say you are given the data on 1 million ads that were displayed. According to the above, you will find 100 clicks in this data... and 999,900 non-clicks. Now, assume that you discover that there are some commonalities among the 100 users who clicked on the ad. Say, 50 out of the 100 signed on to Facebook after midnight, and live on the East Coast. That's a very strong signal.
Now, how many users would you find among the non-clicks who signed on after midnight and live on the East Coast? Oops, there would be many multiples of 50 such users who did not click on the ad. This is all due to the tiny clickthrough rate.
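A quick calculation shows how badly the base rate swamps the signal. The 5% share of non-clickers with the same trait is an assumption for illustration (the true share isn't given), but even that modest figure buries the clickers:

```python
clicks, non_clicks = 100, 999_900   # the 1 million impressions from above

clickers_with_trait = 50            # half the clickers share the trait
# Assumption for illustration: even just 5% of non-clickers
# also signed on after midnight and live on the East Coast.
non_clickers_with_trait = int(non_clicks * 0.05)   # 49,995 users

# Of everyone with the trait, what fraction actually clicked?
precision = clickers_with_trait / (clickers_with_trait + non_clickers_with_trait)
print(f"{precision:.2%}")   # prints: 0.10%
```

So a trait shared by fully half the clickers still picks out a group in which roughly 999 out of every 1,000 users did not click. That imbalance is what makes these models so hard to build.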
That, in brief, is the challenge of Web analytics. If this excites you, there are lots of opportunities out there.