Another month, another unemployment report, another round of peanut-gallery commentary from the business press. The August numbers apparently delighted quite a few; a sampling of headlines: "U.S. Stocks Advance After Employment Report Exceeds Estimates" (Bloomberg BusinessWeek); "Fewer Jobs Lost in August; Private Hiring Beats Forecast" (CNBC); "Private Hiring Surprises with 67,000 New Jobs" (Reuters).
The key reported results: overall employment fell by 54,000, a smaller drop than expected; government jobs decreased as temporary Census positions ended; private-sector jobs grew by 67,000; and both the June (+46,000) and July (+77,000) numbers were revised upward.
In Tip #1, I already discussed the idiocy of including once-every-10-year Census jobs in any of these numbers.
In this post, we shall hone our ability to see through the "noise".
***
First, notice that the revisions for the last couple of months are of the same order of magnitude (tens of thousands) as the reported change for the current month. This is a very strong sign that the current month's reported changes are just noise. When the August numbers are revised in September, what will the -54,000 become? It could land comfortably above zero, indicating an overall gain in employment, or it could be quite a bit more negative than reported today.
Now imagine that the revisions were of a different magnitude: say, 5,000 and 8,000 instead of 46,000 and 77,000. Then we could believe that the August numbers would stay directionally correct even after future revisions, and we would have more confidence in them.
What we have done here is use the historical fluctuations to form a mental picture of how accurate these estimates tend to be, and then apply that margin of error to judge the current estimates. This is a very important skill to have when looking at numbers, especially when looking for trends.
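Here is a minimal sketch of that mental check, using the figures above. The averaging rule and the threshold are mine, for illustration only, not anything official:

```python
# Treat the typical size of recent revisions as a rough noise gauge
# for the current month's estimate.
revisions = [46_000, 77_000]    # June and July revisions
current_change = -54_000        # reported August change

typical_revision = sum(abs(r) for r in revisions) / len(revisions)

if abs(current_change) <= typical_revision:
    print(f"{current_change:+,} is within the typical revision size "
          f"({typical_revision:,.0f}): likely noise")
else:
    print(f"{current_change:+,} exceeds the typical revision size: "
          f"more likely a real signal")
```

Run it and the -54,000 falls comfortably inside the typical revision size of about 61,500, which is the point of the paragraph above.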
In Chapter 1, I pointed out how important it is to know the variability around average values. Here, the reports give us only the average values. But by looking back at historical revisions, we can get a good sense of how variable the numbers are, and recover the information denied us.
***
A more rigorous way to do this is to look up the technical note for the margin of error, which is given as plus or minus 100,000 at 90% confidence. What this means is that when they report -54,000, any number between +46,000 and -154,000 is consistent with the observed data. So in fact, the statisticians have no idea whether employment grew or shrank in August. That is for the overall employment number; for individual breakouts (like private-sector jobs or mining jobs), the margin of error will be even greater, and they have even less of a handle on the trend.
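In code, the arithmetic looks like this (using the reported figures):

```python
# Reconstruct the 90% interval from the point estimate and the
# +/-100,000 margin of error quoted in the technical note.
estimate = -54_000
margin_of_error = 100_000   # at 90% confidence

low, high = estimate - margin_of_error, estimate + margin_of_error
print(f"90% interval: [{low:+,}, {high:+,}]")   # [-154,000, +46,000]

if low < 0 < high:
    print("Interval straddles zero: cannot tell growth from decline")
```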
Technically, this happens because the government does not have data on every business in the U.S. All of these estimates are made from survey samples. The so-called Establishment Survey, for example, is based on some 140,000 businesses and government agencies (minus the nonresponders, I believe). That is a very small proportion of the millions of businesses in the U.S., so some estimation error is inevitable.
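To make the mechanism concrete, here is a toy simulation in which every number is invented: survey 140,000 firms out of a made-up population of six million, scale up, and watch the estimates bounce around.

```python
# Toy demonstration of sampling error; not the BLS estimator.
import random

random.seed(1)
N_FIRMS = 6_000_000   # invented population of firms
SAMPLE  = 140_000     # roughly the Establishment Survey's sample size

# each firm's employment change this month: mostly nothing, occasionally big
changes = random.choices([-40, -5, 0, 0, 0, 5, 40], k=N_FIRMS)
true_total = sum(changes)

estimates = []
for _ in range(10):
    sampled = random.sample(changes, SAMPLE)
    estimates.append(round(sum(sampled) * N_FIRMS / SAMPLE))  # scale up

print(f"true total change: {true_total:+,}")
print(f"survey estimates:  {min(estimates):+,} to {max(estimates):+,}")
```

The ten estimates scatter across a range of hundreds of thousands even though they all describe the same underlying population, which is exactly why the report needs a margin of error.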
***
This may be the shocker: take the margin of error of 100,000, notice that almost every number in the employment report is smaller than that, and you can conclude that the entire report is pure noise. You'd be right. (We say none of the changes is statistically significant.) The sample survey was simply never designed to read changes of this scale.
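To see how little survives, here is a sketch that applies the overall 100,000 margin of error to each headline number. (A simplification: each series really carries its own margin, and as noted above the breakouts are even less precise.)

```python
# Flag which headline numbers clear the +/-100,000 margin of error.
MOE = 100_000

reported = {
    "overall change (Aug)": -54_000,
    "private sector (Aug)": +67_000,
    "June revision":        +46_000,
    "July revision":        +77_000,
}

for name, value in reported.items():
    verdict = "significant" if abs(value) > MOE else "noise"
    print(f"{name:<22} {value:+9,} -> {verdict}")
```

Every line prints "noise".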
Now that you know this, as you look around, you will find it's very, very noisy out there.
You made one very common error in your discussion: the precision of a survey-based estimate has almost _nothing_ to do with what proportion of the population is being sampled (as long as you are not sampling almost the entire population). I am sure you know the soup-tasting analogy. So the wide margin of error of the estimates is not because 140,000 is a small proportion of all the businesses, but because the business-to-business variability of the change in the number of employees is large.
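A quick sketch of that point, with an invented firm-to-firm standard deviation; the population size enters only through the finite population correction, and it barely moves the answer:

```python
# Standard error of a mean depends on variability (sigma) and sample
# size (n), not on the fraction of the population sampled.
import math

sigma = 20.0       # hypothetical firm-to-firm s.d. of employment change
n = 140_000        # sampled firms
N = 6_000_000      # hypothetical population of firms

se_plain = sigma / math.sqrt(n)
fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
print(f"SE ignoring population size: {se_plain:.4f}")
print(f"SE with correction:          {se_plain * fpc:.4f}")  # ~1% smaller
```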
Posted by: Aniko | 09/03/2010 at 01:56 PM
In discussing the confidence interval you state: "So in fact, the statisticians have no idea whether employment grew or shrank in August."

I think you are stretching it a bit to say that statisticians have "no idea" whether employment grew or shrank. They do have an idea: their best guess is that employment shrank by 54,000 jobs. Yes, their sample was noisy, and even if the true value were 0, they would get sample estimates like this one more than 10 percent of the time, but I think it is a statistically valid claim to state "our data suggest it is more likely than not that employment shrank."
Posted by: Aaron | 09/03/2010 at 04:59 PM
Aniko: You read a lot more into that sentence than I intended, and I realize that what I wrote could be misleading, so thanks for bringing this up. If everyone in the population were surveyed, then we would have complete information and there could be no sampling error. If we can only collect partial information, then the larger the sample, the smaller the error. But past a certain point, increasing the sample doesn't reduce the error enough to matter, which is why we like to say proportions don't matter. I hope that clarifies it.
I also want to address your statement that the large error is due not to sample size but to variability. The sample size is designed to filter out a certain level of noise (conversely, to read a certain level of signal); if the survey had been designed to read changes in employment of 10,000, the statisticians would have called for a lot more than 140K businesses to be surveyed.
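A rough sketch of that design logic, ignoring the finite population correction you mentioned:

```python
# Margin of error shrinks with the square root of the sample size, so
# reading a change 10x smaller takes roughly 100x the sample.
# Illustrative arithmetic only.
current_n   = 140_000
current_moe = 100_000
target_moe  = 10_000

required_n = current_n * (current_moe / target_moe) ** 2
print(f"required sample: roughly {required_n:,.0f} businesses")  # ~14,000,000
```

A sample of that size approaches a census of all U.S. businesses, which is precisely the regime where your caveat about sampling nearly the whole population kicks in.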
Aaron: A margin of error comes at a stated level of confidence, in this case 90%. A statement like "our data suggest it is more likely than not that employment shrank" is valid ONLY if we accept a lower confidence level. 90% is already a lower threshold than typically used, so one must be careful when issuing such statements.
I have a fundamental objection to this line of thinking because it is equivalent to saying that when the sampling error is large, we should just ignore the variability and use the average value (point estimate) as the most likely value. It is precisely when the sampling error is large that we must pay attention to it. Otherwise, we might as well declare all the research on confidence levels and margins of error useless!
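For a back-of-the-envelope check (a normal approximation that backs the standard error out of the stated 90% margin of error; not anyone's official methodology):

```python
# Back out the standard error from the 90% margin of error (z = 1.645),
# then ask how confident one could be that employment actually shrank.
from statistics import NormalDist

estimate, moe = -54_000, 100_000
se = moe / 1.645                              # roughly 60,800

p_shrank = NormalDist().cdf(-estimate / se)   # P(true change < 0)
print(f"P(employment shrank) is roughly {p_shrank:.0%}")  # about 81%
```

So "more likely than not" holds at roughly 81% confidence, short of even the 90% built into the report, let alone the usual 95%.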
Posted by: Kaiser | 09/03/2010 at 11:04 PM
In Australia, they include a trend line in the release, but most commentators ignore it. I'm going to use the accuracy of unemployment figures as an example in a basic stats course this semester.
Posted by: Ken | 09/04/2010 at 03:21 AM