Ugh, wow, somehow I missed all this. I guess he joins the ranks of the scientists who made important contributions and then leveraged that recognition into a platform for unhinged diatribes.
Please don't lazily conclude that he's gone crazy because it doesn't align with your prior beliefs. His work on Covid was just as rigorous as anything else he's done, but it's been unfairly villainized by the political left in the USA. If you disagree with his conclusions on a topic, you'd do well to have better reasoning than "the experts said the opposite".
Ioannidis' work during Covid raised him in my esteem. It's rare to see someone in academia who is willing to set their own reputation on fire in search of truth.
“Most Published Research Findings Are False” → “Most Published COVID-19 Research Findings Are False” → “Uh oh, I did a wrongthink, let’s backtrack a bit”.
Yes, sort of. Ioannidis published a serosurvey during COVID that computed a lower fatality rate than the prior estimates. Serosurveys are a better way to compute this value because they capture many cases so mild that people didn't know they were infected, or thought it wasn't COVID. The public health establishment wanted to use an IFR as high as possible; e.g., the ridiculous Verity et al. estimate from Jan 2020 of a 1% IFR was still in use more than a year later, despite there being almost no data in Jan 2020, because high IFR = COVID is more important = more power for public health.
If IFR is low then a lot of the assumptions that justified lockdowns are invalidated (the models and assumptions were wrong anyway for other reasons, but IFR is just another). So Ioannidis was a bit of a class traitor in that regard and got hammered a lot.
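To make the arithmetic behind the CFR/IFR gap concrete, here is a minimal sketch with invented round numbers (not figures from any actual study or region):

```python
# Hypothetical numbers for illustration only (not from any real study).
population = 2_000_000
deaths = 200
confirmed_cases = 20_000      # PCR-confirmed, skewed toward severe cases
seroprevalence = 0.028        # fraction with antibodies in a serosurvey

# Case fatality rate: deaths over *detected* cases only.
cfr = deaths / confirmed_cases                # 1.0%

# Infection fatality rate: deaths over *all* infections,
# with the infection count estimated from the serosurvey.
infections = seroprevalence * population      # 56,000
ifr = deaths / infections                     # ~0.36%

print(f"CFR = {cfr:.1%}, IFR = {ifr:.2%}")
```

The point is just that the same death count divided by a much larger (serosurvey-derived) denominator gives a much smaller fatality rate.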
The claim that he's a conspiracy theorist isn't supported; it's just the usual ad hominem nonsense (not that there's anything wrong with pointing out genuine conspiracies against the public! That's usually called journalism!). Wikipedia gives four citations for this claim and none of them show him proposing a conspiracy, just arguing that, when used properly, the data showed COVID was less serious than others were claiming. One of the citations is actually an article written by Ioannidis himself. So Wikipedia is corrupt as per usual. Grokipedia's article is significantly less biased and more accurate.
He published a serosurvey that claimed to have found a signal in a positivity rate that was within the 95% CI of the false-positive rate of the test (and thus indistinguishable from zero to within the usual p < 5%). He wasn't necessarily wrong in all his conclusions, but neither were the other researchers that he rightly criticized for their own statistical gymnastics earlier.
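The "indistinguishable from the false-positive rate" point can be checked with a quick interval calculation. This is a sketch, not the paper's own analysis: it uses the specificity validation data discussed later in this thread (2 false positives in 401 known-negative samples), an assumed crude positivity of roughly 50/3330 ≈ 1.5%, and the Wilson score interval as a stand-in for whatever exact method one prefers:

```python
import math

def wilson_upper(successes, n, z=1.96):
    """Upper bound of the 95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre + margin) / denom

# Validation data: 2 false positives among 401 known-negative samples.
fp_rate_upper = wilson_upper(2, 401)      # ~1.8%

# Assumed crude positivity in the survey itself (~50/3330).
crude_positivity = 50 / 3330              # ~1.5%

print(f"FP-rate 95% upper bound: {fp_rate_upper:.2%}")
print(f"Crude positivity:        {crude_positivity:.2%}")
# The observed signal sits *inside* the plausible FP range, so by
# itself it cannot exclude a true prevalence of zero at p < 5%.
```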
That said, I'd put both his serosurvey and the conduct he criticized in "Most Published Research Findings Are False" in a different category from the management science paper discussed here. Those seem mostly explainable by good-faith wishful thinking and motivated reasoning to me, while that paper seems hard to explain except as a knowing fraud.
> He wasn't necessarily wrong in all his conclusions, but neither were the other researchers that he rightly criticized for their own statistical gymnastics earlier.
In hindsight, I can't see any plausible argument for an IFR actually anywhere near 1%. So how were the other researchers "not necessarily wrong"? Perhaps their results were justified by the evidence available at the time, but that still doesn't validate the conclusion.
I mean that in the context of "Most Published Research Findings Are False", he criticized work (unrelated to COVID, since that didn't exist yet) that used incorrect statistical methods even if its final conclusions happened to be correct. He was right to do so, just as Gelman was right to criticize his serosurvey--it's nice when you get the right answer by luck, but that doesn't help you or anyone else get the right answer next time.
It's also hard to determine whether that serosurvey (or any other study) got the right answer. The IFR is typically observed to decrease over the course of a pandemic. For example, the IFR for COVID is much lower now than in 2020 even among unvaccinated patients, since they almost certainly acquired natural immunity in prior infections. So high-quality later surveys showing lower IFR don't say much about the IFR back in 2020.
There were people saying right at the time in 2020 that the 1% IFR was nonsense and far too high. It wasn't something that only became visible in hindsight.
Epidemiology tends to conflate IFR and CFR, that's one of the issues Ioannidis was highlighting in his work. IFR estimates do decline over time but they decline even in the absence of natural immunity buildup, because doctors start becoming aware of more mild cases where the patient recovered without being detected. That leads to a higher number of infections with the same number of fatalities, hence lower IFR computed even retroactively, but there's no biological change happening. It's just a case of data collection limits.
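A toy illustration of that mechanism, with invented numbers: deaths are counted reliably and stay fixed, but the estimated infection count grows as milder cases come to light, so the computed IFR falls with no biological change at all.

```python
deaths = 500  # fixed: fatalities are reliably counted from the start

# Early in an outbreak, only the more severe infections are detected.
early_known_infections = 25_000
ifr_early = deaths / early_known_infections   # 2.0%

# Later, retrospective work surfaces many mild/undetected cases.
later_known_infections = 125_000
ifr_later = deaths / later_known_infections   # 0.4%

# Same deaths, same virus: the "decline" is a denominator artifact.
print(f"early IFR estimate: {ifr_early:.1%}, later: {ifr_later:.1%}")
```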
That problem is what motivated the serosurvey. A theoretically perfect serosurvey doesn't have such issues. So one would expect it to calculate a lower IFR and to be a valuable type of study to do well. Part of the background of that work, and why it was controversial, is that large parts of the public health community didn't actually want to know the true IFR, because they knew it would be much lower than their initial back-of-the-envelope calculations based on e.g. news reports from China. Surveys like that should have been commissioned by governments at scale, with enough data to resolve any possible complaint, but weren't, because public health bodies are just not incentivized that way. Ioannidis didn't play ball and the pro-lockdown camp gave him a public beating. I think he was much closer to reality than they were, though. The whole saga spoke to the very warped incentives that come into play the moment you put the word "public" in front of something.
From what I can gather, the best estimates for pre-vaccine, 2020 Wuhan/Alpha strain IFR are about 0.5% to 0.8%, approaching 1%, depending very much on the age structure (age 75+ had an IFR of 5-15%).
The current effective IFR (very often post-vaccination or post-exposure, and with weaker strains) is much lower. But a 1% IFR estimate in early 2020 was entirely justified and fairly accurate.
For what it's worth, epidemiologists are well aware of the distinction between IFR, CFR, and CMR (crude mortality rate = deaths/total population), and it is well known that CFR and CMR bracket IFR.
Yeah, I remember reading that article at the time. Agree they're in different categories. I think Gelman's summary wasn't really supportable. It's far too harsh - he's demanding an apology because the data set used for measuring test accuracy wasn't large enough to rule out the possibility that there were no COVID cases in the entire sample, and because he doesn't personally think some explanations were clear enough. But this argument relies heavily on a worst-case assumption about the FP rate of the test, one which is ruled out by prior evidence (we know there were indeed people infected with SARS-CoV-2 in that region at that time).
There's the other angle of selective outrage. The case for lockdowns was being promoted based on, amongst other things, the idea that PCR tests have a false positive rate of exactly zero, always, under all conditions. This belief is nonsense although I've encountered wet lab researchers who believe it - apparently this is how they are trained. In one case I argued with the researcher for a bit and discovered he didn't know what Ct threshold COVID labs were using; after I told him he went white and admitted that it was far too high, and that he hadn't known they were doing that.
Gelman's demands for an apology seem very different in this light. Ioannidis et al not only took test FP rates into account in their calculations but directly measured them to cross-check the manufacturer's claims. Nearly every other COVID paper I read simply assumed FPs don't exist at all, or used bizarre circular reasoning like "we know this test has an FP rate of zero because it detects every case perfectly when we define a case as a positive test result". I wrote about it at the time because this problem was so prevalent:
I think Gelman realized after the fact that he was being over the top in his assessment, because the article has since been amended with numerous "P.S." paragraphs which walk back some of his own rhetoric. He's not a bad writer, but in this case I think the overwhelming peer pressure inside academia to conform to the public health narratives got to even him. If the cost of pointing out problems in your field is that every paper you write has to be considered perfect by every possible critic from that point on, it's just another way to stop people flagging problems.
Ioannidis corrected for false positives with a point estimate rather than the confidence interval. That's better than not correcting, but not defensible when that's the biggest source of statistical uncertainty in the whole calculation. Obviously true zero can be excluded by other information (people had already tested positive by PCR), but if we want p < 5% in any meaningful sense then his serosurvey provided no new information. I think it was still an interesting and publishable result, but the correct interpretation is something like Figure 1 from Gelman's
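The point-estimate-vs-interval issue can be sketched with the standard Rogan-Gladen test correction. The 80% sensitivity and ~1.5% crude positivity here are assumed round numbers, not figures from the paper; the 399/401 specificity count is the one discussed in this thread:

```python
def rogan_gladen(apparent, sensitivity, specificity):
    """Test-adjusted prevalence estimate; can go negative when the
    apparent positivity is below the implied false-positive rate."""
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

apparent = 0.015          # assumed crude positivity (~1.5%)
sensitivity = 0.80        # assumed round number

# Point estimate of specificity from 399/401 validation negatives:
spec_point = 399 / 401
prev_point = rogan_gladen(apparent, sensitivity, spec_point)

# Specificity at the bottom of its ~95% interval (FP rate ~1.8%):
spec_low = 1 - 0.018
prev_low = rogan_gladen(apparent, sensitivity, spec_low)

print(f"prevalence (point-estimate spec): {prev_point:.2%}")
print(f"prevalence (worst-case spec):     {prev_low:.2%}")
# With the point estimate the prevalence looks solidly positive; at
# the edge of the specificity interval it goes negative, i.e. the
# survey data alone cannot rule out zero.
```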
I don't think Gelman walked anything back in his P.S. paragraphs. The only part I see that could be mistaken for that is his statement that "'not statistically significant' is not the same thing as 'no effect'", but that's trivially obvious to anyone with training in statistics. I read that as a clarification for people without that background.
We'd already discussed PCR specificity ad nauseam, at
These test accuracies mattered a lot while trying to forecast the pandemic, but in retrospect one can simply look at the excess mortality, no tests required. So it's odd to still be arguing about that after all the overrun hospitals, morgues, etc.
By "walked back", what I meant is that his conclusion starts by demanding an apology, saying reading the paper was a waste of time, that Ioannidis "screwed up", that he didn't "look too carefully", that Stanford has "paid a price" for being associated with him, etc.
But then in the P.P.P.S sections he's saying things like "I’m not saying that the claims in the above-linked paper are wrong." (then he has to repeat that twice because in fact that's exactly what it sounds like he's saying), and "When I wrote that the authors of the article owe us all an apology, I didn’t mean they owed us an apology for doing the study" but given he wrote extensively about how he would not have published the study, I think he did mean that.
Also bear in mind there was a followup where Ioannidis's team went the extra mile to satisfy critics like Gelman:
> They added more tests of known samples. Before, their reported specificity was 399/401; now it’s 3308/3324. If you’re willing to treat these as independent samples with a common probability, then this is good evidence that the specificity is more than 99.2%. I can do the full Bayesian analysis to be sure, but, roughly, under the assumption of independent sampling, we can now say with confidence that the true infection rate was more than 0.5%.
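Pooling the validation data as the follow-up suggests (399/401 plus 3308/3324, treated as independent draws from one common specificity), a quick interval check reproduces the ">99.2%" figure. The ~1.5% crude positivity is an assumed number, and the Wilson score interval stands in for the full Bayesian analysis:

```python
import math

def wilson_bounds(successes, n, z=1.96):
    """95% Wilson score interval (lower, upper) for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / denom, (centre + margin) / denom

# Pooled validation negatives: (401 - 399) + (3324 - 3308) = 18 FPs in 3725.
fp_lo, fp_hi = wilson_bounds(18, 3725)
spec_lower = 1 - fp_hi                       # lower bound on specificity

crude_positivity = 0.015                     # assumed, ~1.5%
prevalence_floor = crude_positivity - fp_hi  # signal above worst-case FP rate

print(f"specificity lower bound: {spec_lower:.2%}")        # ~99.2%
print(f"positivity above worst-case FPs: {prevalence_floor:.2%}")
```

With the pooled data, even the worst-case false-positive rate no longer swallows the observed positivity, which is what rescues the ">0.5% true infection rate" conclusion.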
After taking into account the revised paper, which raised the standard from high to very high, there's not much of Gelman's critique left tbh. I would respect this kind of critique more if he had mentioned the garbage-tier quality of the rest of the literature. Ioannidis' standards were still much higher than everyone else's at that time.
It's good that Ioannidis improved the analysis in response to criticism, but that doesn't mean the criticism was invalid; if anything, that's typically evidence of the opposite. As I read Gelman's complaint of wasted time and demand for an apology, it seems entirely focused on the incorrect analysis. He writes:
> The point is, if you’re gonna go to all this trouble collecting your data, be a bit more careful in the analysis!
I read that as a complaint about the analysis, not a claim that the study shouldn't have been conducted (and analyzed correctly).
Gelman's blog has exposed bad statistical research from many authors, including the management scientists under discussion here. I don't see any evidence that they applied a harsher standard to Ioannidis.
Does the IFR matter? The public thinks lives are infinitely valuable - lives that the public pays attention to, anyway. 0.1% or 1%, it doesn’t really matter, right? It gets multiplied by infinity in an ROI calculation, or whatever so-called “objective” criteria people try to concoct for policymaking. I like Ioannidis’s work, and his results on seroprevalence (or whatever) were good, but they were being co-opted to make a mostly political policy (for some Republicans: compulsory public interaction during a pandemic and, uncharitably, compulsory transmission of a disease) look “objective.”
I don’t think the general idea of co-opting is hard to understand; it’s quite easy to understand. But there is a certain personality type, common among people who earn a living by telling Claude what to do, with a compulsive need to “prove” people on the Internet “wrong,” and these people are constantly, blithely mobilized to further the political cause of someone who truly doesn’t give a fuck about them. Ioannidis is such a personality type, and, as you can see, a victim.
> The public thinks lives are infinitely valuable.
In rhetoric, yes. (At least, except when people are given the opportunity to appear virtuous by claiming that they would sacrifice themselves for others.)
In actions and revealed preferences, not so much.
It would be rather difficult to be a functional human being if one took that principle completely seriously, to its logical conclusion.
I can't recall ever hearing any calls for compulsory public interaction, only calls to stop forbidding various forms of public interaction.
The SHOW UP Act was congressional Republicans forcing an end to telework for federal workers, not on any rational basis. Teachers in Texas and Florida, where Republicans run things, were faced with showing up in person (no remote learning) or quitting.