Perhaps studies should be put into some kind of "scientific maturity level" ranking so lay people can better digest recent research?
SML1 - the first study on this topic, may be wrong, but could be pioneering
SML2 - the first confirmation study on an SML1 study, confirming the results
SML3 - one of multiple confirmation studies
SML4 - a consensus or meta-study that confirms the original SML1 study
SML5 - scientific consensus indicates the SML1 study is correct based on current scientific knowledge and is unlikely to be refuted in the near term.
Nobody outside the immediate scientific circle should care at all until a topic reaches SML3, and even then it should be treated as bleeding edge and only show up in harder-to-digest science popularization magazines (Sci-Am, for example). SML4 would be ready for the Pop-Sci "emerging science" column, and SML5 would be fit for newspapers and the general public.
Cochrane does systematic reviews; they take a class of studies and "summarize" them (a very loose term for what is actually careful synthesis).
I don't think this maps very well onto the publishing process of research in general (meaning a stream of individual studies).
Cochrane is one of the foundations of the book Bad Science, which you would probably find interesting (assuming you're interested in the subject).
When it comes to the media, the situation is more complex: media outlets (and institutions, and so on) have many interests, and research quality may not be the dominant one.
If, on a 0-to-10 scale of interest in quality, Cochrane is (say) a 10, the Daily Mail would be a -5, since sensationalized garbage is far more profitable for them than boring, low-key, truthful research.
[Of course, I don't mean the Daily Mail as a reference or a standard here.]
Your framework seems like a great idea, but I think it should also take into account whether a study was done on animals or humans. Even reproducible animal studies often don't translate well to humans:
And as a matter of fact, I just watched this on 60 Minutes tonight:
"Every once in a while, a story comes through 60 Minutes that has the potential to change lives. This week, Scott Pelley's report on a new medical treatment using the polio virus to kill cancer is that kind of story."
Serious reporting in television media is long gone. It's been all hype, sensationalism, drama, matinee-horror-show entertainment for at least the past 20 years.
Stop watching TV news. You won't learn anything and it's designed to make you anxious.
Much longer than that; 60 Minutes was this bad by the late 1970s. Among the somewhat famous examples: their bogus hatchet job on the Jeep CJ in 1980, and, most notoriously, Audi "unintended acceleration" in 1986 and Alar in apples in 1989. All three involved serious falsifications.
The Jeeps were driven by robots that moved the steering wheel twice as fast as a human could, and even then the four Jeeps they tested a zillion times were far more stable than you would think (I drove a ~1966 Kaiser CJ-5 from 1977-79). They over-pressurized an Audi's automatic transmission, and even plugged a pressure-relief valve, to demonstrate the failure they were claiming. Alar has never been shown to be a carcinogen, although if you feed near-LD50 doses of a breakdown product to lab rodents you can show a carcinogenic effect. If you drank over 5,000 gallons of apple juice a day, Alar might be a threat, but that's well over the LD50 (the dose lethal to 50% of a tested population) of plain water, which is around a gallon and a half at once.
In the end, we have to count on incremental improvements in publishing, and incremental improvements in reading by all of us, to prompt medical studies and all kinds of scientific studies to become more accurate and informative about important issues. We especially have to improve our own tendency to seize on a hopeful answer, whether or not it's a true answer, as we grapple with life's problems. (When I write "we" here I am including all of us, including myself, whether we are producers or consumers of scientific studies.) Medical studies specifically, and scientific studies in general, have helped humankind gain a lot of useful knowledge, but the process has been slow and unsteady, with lots of back and forth.
P.S. All the links I've posted here, including the one opening the thread, are from the "Weekend Reads" weekly post on the Retraction Watch group blog, which I think many Hacker News participants will find interesting.
This is true of all studies, not just medical studies. It would be nice if the people at Vox realized this as a lot of their explainers are based on one shot studies with small sample size, marginally significant p-values, and no replication.
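To see why "small sample size, marginally significant p-values, and no replication" is such a fragile combination, here is a quick stdlib-only simulation. Everything in it is an illustrative assumption (group size of 20, a z-approximation instead of a proper t-test, the conventional 5% threshold); it is not modeling any particular study discussed here:

```python
import math
import random
import statistics

random.seed(42)  # make the simulation reproducible

def two_sided_p(z):
    """Two-sided p-value for a z statistic (normal approximation)."""
    return math.erfc(abs(z) / math.sqrt(2))

def run_study(n=20, effect=0.0):
    """Simulate one small two-group study; return its p-value."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(effect, 1.0) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    z = (statistics.mean(b) - statistics.mean(a)) / se
    return two_sided_p(z)

# 10,000 small studies of a nonexistent effect: roughly 5% of them
# still come out "statistically significant" by pure chance.
studies = [run_study() for _ in range(10_000)]
significant = [p for p in studies if p < 0.05]
print(f"false positive rate: {len(significant) / len(studies):.3f}")

# Now try to replicate only the "significant" ones. The true effect is
# still zero, so replications succeed at roughly the same chance rate,
# which is why one-shot headline findings so often evaporate.
replications = [run_study() for _ in significant]
replicated = sum(1 for p in replications if p < 0.05)
print(f"replication rate among significant studies: {replicated / len(significant):.3f}")
```

The z-approximation slightly understates the uncertainty at n=20 compared to a real t-test, but the qualitative point survives: a pile of underpowered studies plus a hard p < 0.05 cutoff guarantees a steady supply of "findings" that won't replicate.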
It is true of just about every article, forget about studies; at least "studies" implies someone did some work.
Not a week goes by without a ton of these: revolutionary improvements in battery technology, solar power, and wind power; cures for cancer and AIDS; things that give you cancer; and on and on.
There must be money in it; I can't think of any other reason why systematically misinforming the public would be fun or useful.
No, I believe the motivating factor is not money but a short-lived burst of fame, or even notoriety, for the authors of the studies.
Think about it...you dedicate your life to your science, and grind day after day, year after year with no verifiable results, because, you know, in case you haven't noticed, curing cancer is motherfucking hard.
I can see where, at some point, you just want to feel some sense of accomplishment or to be noticed in your social group, so you find some questionably-causal but statistically-significant piece of research you did and just publish the damned thing.
Don't get me wrong, I find this sort of thing appalling, but in the end, most of us are fairly imperfect people.
That could explain researchers publishing such things (but I'd give more credit to this being just a result of publish-or-perish culture), but it doesn't explain why journalists write all that crap about those studies. Also note that often, the finding itself is reasonable but of little direct impact - "study found that X correlates with Y (r=0.67) under conditions Z, p<0.05" - and it's journalists who turn it into a bunch of sensationalist lies.
My guess would be that they are not trained to know better and that they have to write about something, anything really, that has some chance of grabbing reader attention and isn't related to celebrity culture.
What's more interesting than a headline that promises some scientific breakthrough is about to change the world?
The real issue, I believe, is that there is no accountability for almost anyone these days. If there is some upside in publishing or reporting on questionable science, and no downside, what is the incentive not to do it?
> My guess would be that they are not trained to know better and that they have to write about something, anything really, that has some chance of grabbing reader attention and isn't related to celebrity culture.
I fear that no training can be helpful if you're supposed to have 10 stories written by this afternoon, and they'd better be good at attracting readers, or else.
> The real issue, I believe, is that there is no accountability for almost anyone these days. If there is some upside in publishing or reporting on questionable science, and no downside, what is the incentive not to do it?
I agree. I think you captured the essence of the problem. There is no immediate cost to doing this (if people had an actual attention span, most news sites would go bankrupt quickly), and there are clear benefits. There is no direct price to be paid for misleading people, and the accumulated costs become apparent only years later, in the form of growing distrust towards everything, which destroys society from the inside.
Let us not forget the superiors of those scientists, who pressure them to publish because often that is the only way to be sure of further funding/grants.
At least that is what I take away from a lot of it.
This is why I flag almost all the health-related stories I see on the HN front page, especially those about supposed toxicity. Much of what passes for medical news nowadays seems calculated to breed neuroses, especially in parents.
Which reminds me of the old chestnut: being paranoid doesn't mean they aren't out to get you. How about assessing reports on their merits, if any, instead of making generalizations about "almost all the health-related stories"?
A neurosis is an unreasonable response to perceived reality which may indeed, as you say, not be what it appears. Solutions for neuroses must lie with the neurotic not with the stimuli held to be 'responsible' for them.
From what I've noticed as an outsider, it is very difficult for a layperson (especially one untrained in statistical methods, without full journal subscriptions, or without an understanding of the politics of grant applications) to judge the accuracy of reports, which often seem to be based on press releases (not even abstracts).
In 2013, a story made the front page of HN about high lead levels in imported rice (covered by some credible sources). For a few weeks, as a precaution, I avoided giving my children rice (and I'm sure I wasn't the only one), and I got into a fairly heated argument with my spouse, who thought I was crazy. It later transpired that the entire story was based on miscalibrated equipment. To this day, there are news sources that haven't corrected the story (Google "lead in rice"). This was an extreme example, but I don't doubt that a lot of health stories suffer from real flaws yet are tougher to debunk.
Science is important, but science reporting (especially medical reporting) seems awful. No parent wants their child to be harmed; indeed, I've read comments on HN where people say they give their kids glass bottles because of the putative risks of phthalates. While I don't blame them for being careful, imagine what happens when a glass bottle shatters. It is very tempting for parents to panic and to "parent by headline", and it is obviously profitable for researchers and their institutions to generate maximal alarm. However, HN is my primary source of tech news, and I would prefer it not be clogged with possibly dubious health stories that perpetuate this cycle.
> Science is important, but science reporting (especially medical reporting) seems awful.
I don't think it's limited to science reporting. The more I learn, the easier it is to see the numerous errors made by journalists across all areas. Take Vox: it's an older link, but they have been criticized for poor accuracy in reporting (and for failing to prominently note corrections).[1]
The Royal Society has it right with their motto, nullius in verba: the news media, much like Wikipedia, is not a reliable source for facts.
With all the fuss about phthalates and BPA, as you mention, I don't get why UV-hardened polyethylene isn't much more widely used. Anyone have an explanation?
The only health studies one should even notice are those not funded by those who sell related products. Since every nutrition study is so funded, I try not to even overhear any of those.
Just read them as any other study. They're usually done on tiny groups, with very high statistical error and minimal duration (compared to a human lifespan)... pretty much anecdotal evidence. A study done on 200,000 people over three generations is a bit more reliable.
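The intuition that a 200,000-person study is far more reliable than a small one can be made concrete: the standard error of a sample mean shrinks as 1/sqrt(n). A tiny sketch (sigma = 1.0 is an arbitrary illustrative spread, and the 1.96 multiplier for a 95% interval assumes a normal approximation):

```python
import math

# Standard error of a sample mean is sigma / sqrt(n), so all else equal,
# a 200,000-person study pins down an average sqrt(1000) ~ 31.6x more
# tightly than a 200-person one.
sigma = 1.0  # assumed per-person spread, purely illustrative
for n in (200, 20_000, 200_000):
    half_width = 1.96 * sigma / math.sqrt(n)
    print(f"n = {n:>7,}: 95% CI half-width ~ {half_width:.4f}")
```

This is only about sampling noise, of course; it says nothing about confounding or measurement quality, which a bigger sample doesn't fix.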
There is certain nutrition advice that is based on meta-studies and is at least at the SML3 level described in the comment above (https://news.ycombinator.com/item?id=9289124).