
Comments


Ken

One way of solving this is to do an additional, smaller random sample of the vaccinated. Then you can treat it as a missing data problem and do multiple imputation of the COVID status of the remaining subjects. It is similar to a problem in diagnostic testing, where there is a screening test and only the positives are given a more accurate test. They are now doing something similar in data science, but without understanding what they are doing, so they end up with inflated ideas of their test accuracy.
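Ken's suggestion can be sketched roughly as follows. The numbers are entirely hypothetical, and the Beta-posterior draw is just one simple way to implement the multiple-imputation step he describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vaccinated cohort: COVID status is observed only for a
# small random audit sample; everyone else is treated as missing.
n_total, n_audit = 10_000, 500
true_rate = 0.08                            # illustrative positivity rate
audit = rng.random(n_audit) < true_rate     # observed statuses in the audit

# Multiple imputation: draw an imputation rate from the posterior of the
# audit sample (Beta with a flat prior), impute the missing statuses,
# and repeat M times so the estimates reflect imputation uncertainty.
M = 50
estimates = []
for _ in range(M):
    p = rng.beta(audit.sum() + 1, n_audit - audit.sum() + 1)
    imputed = rng.random(n_total - n_audit) < p
    estimates.append((audit.sum() + imputed.sum()) / n_total)

pooled = np.mean(estimates)   # pooled positivity estimate for the cohort
spread = np.std(estimates)    # between-imputation variability
```

The spread across imputations is what keeps the analyst from overstating precision, which is the inflated-accuracy trap Ken mentions.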

A Palaz

Hi Kaiser,


I think you will find these two papers interesting, because they carry some of the errors you point out in your postings.

The problem for me here is how best to make the two comparable.


http://dx.doi.org/10.1136/bmj.n2244

https://doi.org/10.1016/S2589-7500(21)00080-7


Of course it is not so easy, even though both have big samples, just to equalise results on sample size. Can you make any suggestion ... or tell me it is not possible?

The context is to see who is best suited for boosting.


Thanks!

Kaiser

AP: Thanks for those links. They look like interesting work, which I'll review later. But they don't address the issue of bias in data collection between vaccinated and unvaccinated, as the main model only does prediction for vaccinated people. (There are also biases in the determination of cause of death that I'd like to see some discussion of.)

Ken pointed out one possible fix, which is to run a small random sample on the side. (The React-1 study I reviewed before uses random samples.)
Another source of information is the time series for the vaccinated. They all transitioned from unvaccinated to vaccinated at some time. So we might find correlation between the time series of percent vaccinated and the time series of testing rate and/or positivity rate.
These lead only to crude adjustments of the averages, but that's better than not adjusting at all.
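The time-series idea above amounts to checking whether the vaccination series and the testing series move together. A minimal sketch, with made-up weekly numbers in which testing drifts down as vaccination rises:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly series over 30 weeks.
weeks = 30
pct_vaccinated = np.linspace(0.05, 0.80, weeks)
# Assume the testing rate falls as vaccination rises, plus noise.
testing_rate = 0.30 - 0.15 * pct_vaccinated + rng.normal(0, 0.01, weeks)

# Correlation between the two series.
r = np.corrcoef(pct_vaccinated, testing_rate)[0, 1]
```

A strongly negative correlation would suggest the vaccinated are tested less often, which is the kind of signal that could feed a crude adjustment to the raw positivity averages.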

The problem with boosting (and I'm foreshadowing the next post on the blog) is that the evidence for its usefulness is weak, and in particular, the idea that the booster shot works better for at-risk, older people is more of a belief than an empirical result. If the signal is weak, then it's hard to find a predictive model that works well. I'd consider using matching to balance your training dataset first before modeling.
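The matching step mentioned above could look something like this. It's a sketch under hypothetical data (vaccinated subjects assumed to skew older), using simple 1:1 nearest-neighbor matching on age without replacement; real matching would typically use a propensity score over many covariates:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: vaccinated subjects skew older.
age_vax = rng.normal(60, 12, 1000)
age_unvax = rng.normal(40, 12, 300)

# 1:1 nearest-neighbor matching on age without replacement:
# for each unvaccinated subject, take the closest unused vaccinated one.
available = np.ones(len(age_vax), dtype=bool)
matched_idx = []
for a in age_unvax:
    dist = np.abs(age_vax - a)
    dist[~available] = np.inf     # exclude already-matched subjects
    j = int(np.argmin(dist))
    available[j] = False
    matched_idx.append(j)

balanced_vax_ages = age_vax[matched_idx]

# The age gap between groups should shrink after matching.
gap_before = abs(age_vax.mean() - age_unvax.mean())
gap_after = abs(balanced_vax_ages.mean() - age_unvax.mean())
```

Training the predictive model on the matched subset reduces the risk that the model simply learns the age imbalance between groups.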

