


Dan Vargo

Testing positivity rate is interesting and can be used to draw some broad "conclusions" that guide further investigation.

You don't need pure randomness, as there is likely a predictable order in which people have gotten tested.

If you have 10 tests, you're going to give them to the 10 most likely to have it or the 10 where the result is most meaningful. If you have 10 more, it'll go to the next 10 by the same criteria, etc.

So, if you substantially increase an already large volume of testing, you should expect the positive test rate to decrease, ceteris paribus.
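The prioritization argument can be sketched as a toy simulation (my own illustration, not from the comment): give each person a true infection probability, test the likeliest people first, and watch the positivity rate fall as testing expands to lower-risk groups.

```python
import random

# Toy population (hypothetical numbers): each person has a true infection
# probability between 0 and 0.5. Sorting descending mimics a prioritization
# strategy that always tests the most likely cases first.
random.seed(0)
population = sorted((random.random() * 0.5 for _ in range(10_000)), reverse=True)

def positivity_rate(num_tests):
    """Expected share of positives when testing the num_tests likeliest people."""
    tested = population[:num_tests]
    return sum(tested) / num_tests

# Expanding testing reaches lower-risk people, so the rate falls
# even though nothing about the epidemic itself has changed.
print(positivity_rate(100))    # small, high-priority batch
print(positivity_rate(1_000))  # larger batch, lower rate
print(positivity_rate(5_000))  # even larger, lower still
```

Under this prioritized allocation, more testing mechanically lowers positivity; only random sampling would make the rate directly comparable across testing volumes.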

An increase in positive test rate does not mean that the virus is spreading, as it could be that our testing strategy was flawed all along. Multiple states or municipalities experiencing a similar result while using differing prioritization strategies would strengthen the conclusion that the virus is likely spreading and that the results are not due to increased testing.


DV: Thanks for the comment. Self-sorting by severity is certainly a factor. This is one of those things that are simple conceptually but hard to measure. One measure is a falling positivity rate, as you stated. But then we're using the same metric for both cause and effect. I agree that an increasing positivity ratio does not mean the virus is spreading - that's the reason for this post, because the media seem to believe so. The biggest determinant of the positivity ratio is what types of people are getting tested, and how this is changing over time. After learning about the fiasco in California, my trust in testing data went to zero. So far, no one has provided a satisfactory explanation for the fiasco (see here).


There is an interesting consequence of initially poor testing, which is that the number of daily new cases in some countries doubled as fast as every 2 days. This didn't mean a greater rate of infection; it was simply a result of catching up on previously untested cases.
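The catch-up effect described above can be illustrated with a toy calculation (my numbers, purely hypothetical): true infections grow slowly, but testing capacity ramps up fast, so reported cases track capacity, not infections, while the untested backlog is worked through.

```python
# Hypothetical illustration: infections grow ~5% per day, while testing
# capacity doubles every 2 days from a low base.
true_daily_infections = [100 * 1.05 ** day for day in range(14)]
test_capacity = [20 * 2 ** (day / 2) for day in range(14)]

backlog = 0.0
reported = []
for infections, capacity in zip(true_daily_infections, test_capacity):
    backlog += infections
    found = min(capacity, backlog)  # can only confirm what we have tests for
    backlog -= found
    reported.append(found)

# While the backlog lasts, reported cases double every 2 days - far faster
# than the underlying ~5%/day infection growth.
print([round(r) for r in reported[:8]])
```

The "doubling every 2 days" in the reported series here is entirely an artifact of the testing ramp-up, which is the commenter's point.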


Ken: Deliberately introducing bias into data collection is never wise. A lot of schools are repeating this mistake. If they don't do comprehensive or random testing (with high compliance), they are fooling themselves... some readers think that is by design, to keep collecting tuition. You see, when they publish those biased numbers, they do not draw any generalizations; they don't have to, because readers will do so themselves and get misled.


Kaiser Fung. Business analytics and data visualization expert. Author and Speaker.