
To be quite honest, if I were tracking a number at 0.7% and forecast it would be 73% in two weeks, I'd be feeling hugely proud of myself if it turned out to be 23%. Doubly proud if it went on to be 59% in week 3.

0.7% to 7% to 23% to 53% is x10, then x3.3, then x2.3 per week.

0.7% to 73% is x104 overall, or x10.2 per week.

So it was even accurate for week 1. (I know you can't treat percentages as exponential models because they stop at 100%, but crudely, below say 40%, it kinda works.)
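The week-over-week arithmetic above can be checked in a few lines (the figures are those quoted in this thread, not official data):

```python
# Shares of cases by week, as quoted in the comment (commenter's figures).
shares = [0.7, 7.0, 23.0, 53.0]

# Week-over-week growth factors: x10, x3.3, x2.3
multipliers = [round(b / a, 1) for a, b in zip(shares, shares[1:])]
print(multipliers)  # [10.0, 3.3, 2.3]

# The forecast implied 0.7% -> 73% over two weeks:
overall = 73 / 0.7           # ~x104 over two weeks
per_week = overall ** 0.5    # ~x10.2 per week if growth were steady
print(round(overall, 1), round(per_week, 1))  # 104.3 10.2
```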

The modellers should be congratulated, not mocked.
Whether the predictions should have been published the way they were is a different question.

The real target of mockery should be the idea that Omicron is a threat, when all the evidence points to very, very low hospitalisations and almost no deaths. England has reported 49 deaths to date (as of yesterday), but we know most Covid hospitalisations are admissions for other causes, so many of these 49 will be "with" Covid and not "by" it.
Omicron seems to be a blessing.

MD: I don't know what to make of your comment. Maybe you are being sarcastic. But you just made me realize I gave them too much credit. From 0.7% to 73% is not 10-fold, as I said, but 100-fold.

They are making the same mistake they have been committing since day 1 of this pandemic. You cannot compare the number of deaths and the number of cases at a single cross-section of time; doing so assumes zero lag between cases and deaths.

The U.S. has seen an average of 1,000 deaths per day for three consecutive months this far into the pandemic. That is a run rate of 360,000 deaths a year. They also told us that death rates of Delta were lower. Somehow this is not reflected in the aggregate numbers. Strange, no?
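The zero-lag mistake can be illustrated with a toy calculation (all numbers below are made up for illustration): when cases are growing fast and deaths trail cases by some periods, dividing today's deaths by today's cases understates fatality by the growth accumulated over the lag.

```python
# Hypothetical data: cases doubling each period, true CFR of 1%,
# and deaths lagging cases by 2 periods.
cases = [100, 200, 400, 800, 1600]
lag = 2
true_cfr = 0.01

# Deaths in period t come from cases in period t - lag.
deaths = [0] * lag + [c * true_cfr for c in cases[:-lag]]  # [0, 0, 1.0, 2.0, 4.0]

naive = deaths[-1] / cases[-1]          # same-day ratio: 4/1600 = 0.25%
lagged = deaths[-1] / cases[-1 - lag]   # lag-matched:    4/400  = 1.0%
print(naive, lagged)  # 0.0025 0.01
```

With 2x growth per period and a 2-period lag, the naive same-day ratio is off by exactly 4x.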

Does the US test also use just the S-gene dropout result to identify Omicron?
I remember this test pattern was also used to flag the UK variant last year.

But I also remember that an S dropout can happen at a higher Ct, meaning only the N and ORF (or E) targets are seen.

Quibble: You corrected the 10 -> 100, but forgot the 3 -> 30 in the third-to-last paragraph.

**

It really is terrible that they don't have any easily accessible documentation of the model - or even, dare I dream, the code. It may be a bad model, or it may be a case of small errors in the input leading to large errors in the output, like what we see happen sometimes with SEIR models.

I'm less concerned with it being bad; even "failures" can be informative, in that they can teach us what pitfalls to avoid in the future. It's hard to learn much of anything here. This would still apply even if the nowcast was accurate.

Kaiser: No. I think you want to consider what a good forecast would have been like and whether proportionately it would have been any better.
They predicted a rapid exponential rise in two weeks and a rapid exponential rise happened in three weeks.
To suggest that a 100x forecast is inaccurate misses the point. No one could have predicted a 30x rise accurately. To me, any prediction of >10x is both a bold call and a good one.

> They are making the same mistake they have been committing since day 1 of this pandemic. You cannot compare the number of deaths and the number of cases at a single cross-section of time; doing so assumes zero lag between cases and deaths.

Maybe I missed something - I can't see any reference to deaths in the forecast.

MD: They did not predict a "rapid exponential rise" in some kind of handwaving, qualitative sense. They made a numeric prediction, which is embarrassingly wrong. And they chose to hide the model so no one knows how they came up with such a horrible prediction.

What would a "good" prediction have been for something that went from 0.7% to 23%?
Would 5% have been more embarrassingly wrong? I think yes, 5% would have been worse.


Kaiser Fung. Business analytics and data visualization expert. Author and Speaker.
