
> and in most cases the test specificity is on par with the incidence rate itself,

We can't exclude that specificity may be as low as 97% (it's unlikely, based on our data, but it's at the edge of the confidence interval). A false-positive rate of up to 3% is unfortunately on par with the studies returning 2-4% positives... but it can't come close to explaining a return of 21%. Subtract off the 3% worst-case false positives from 21%, and you're still left with roughly 18% true positives.
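
To make that arithmetic concrete, here is a minimal sketch of the standard Rogan-Gladen correction for imperfect test accuracy. The function name and the assumption of ~100% sensitivity are mine for illustration, not from either study:

    # Rogan-Gladen estimator: back out true prevalence from an observed
    # positive rate, given the test's sensitivity and specificity.
    def corrected_prevalence(observed, sensitivity, specificity):
        return (observed + specificity - 1) / (sensitivity + specificity - 1)

    # Worst-case 97% specificity (3% false positives), assumed ~100% sensitivity:
    print(corrected_prevalence(0.21, 1.00, 0.97))  # ~0.186: 21% observed still implies ~18.6% infected
    print(corrected_prevalence(0.03, 1.00, 0.97))  # 0.0: a 3% observed rate could be entirely false positives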

Since you want to appeal to Gelman as an authority, here is what he said about exactly this on his blog: "– Those California studies estimating 2% or 4% infection rate were hard to assess because of the false-positive problem: if a test has a false positive rate of 1% and you observe 1.5% positive tests, your estimate’s gonna be super noisy. But if 20% of the tests you observe are positive, then the false-positive rate is less of a big deal." ... "– In any case, the 20% number seems reasonable. It’s hard for me to imagine it’s a lot higher, and, given the number of deaths we’ve seen already, I guess it can’t be much lower either."
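
His point about noise falls straight out of the same correction. A quick sketch, again with illustrative numbers and an assumed ~100% sensitivity:

    # How much an uncertain 0-1% false-positive rate moves the corrected estimate.
    def corrected_prevalence(observed, sensitivity, specificity):
        return (observed + specificity - 1) / (sensitivity + specificity - 1)

    for observed in (0.015, 0.20):
        noisy = corrected_prevalence(observed, 1.00, 0.99)  # 1% false positives
        clean = corrected_prevalence(observed, 1.00, 1.00)  # perfect specificity
        print(f"observed {observed:.1%}: true rate somewhere in {noisy:.2%}..{clean:.2%}")

At 1.5% observed the estimate swings threefold (0.51% to 1.50%) within that specificity range; at 20% observed it barely moves (19.19% to 20.00%), which is exactly Gelman's point.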

> so that pooling the studies does nothing to overcome the huge uncertainty they suffer in incidence rates.

The case count multiple from the serological study for New York state (and indeed, from the California counties) is right in line with what's expected from a variety of statistical estimates made without relying on the serological data. If you had even peeked at my source, you wouldn't be making this argument. https://www.medrxiv.org/content/10.1101/2020.04.18.20070821v...
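
For anyone unfamiliar with the term, the "case count multiple" is just estimated infections divided by confirmed cases. A toy calculation with hypothetical round numbers, not the preprint's actual figures:

    # Hypothetical round numbers for illustration only; see the linked preprint
    # for the real estimates.
    population = 19_450_000    # New York state, approximate
    seroprevalence = 0.14      # assumed statewide antibody-positive rate
    confirmed_cases = 260_000  # assumed confirmed case count at the time

    multiple = (population * seroprevalence) / confirmed_cases
    print(f"implied case count multiple: ~{multiple:.0f}x")  # ~10x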



From the same Gelman article:

> First off, 3% does not sound implausible. If they said 30%, I’d be skeptical, given how everyone’s been hiding out for awhile, but 3%, sure, maybe so.



