How scientists fool themselves and how they can stop (nature.com)
96 points by DavidSJ on Oct 14, 2015 | 24 comments


It's too bad that there's not a more robust discussion about the institutional incentives that drive scientists to fool themselves.


This is a key aspect. There is pressure (both institutional and self-imposed) to produce something original and unique, so trying to reproduce other people's results is seen as a waste of time.

Either you confirm previous results, which is nice but not something you can publish or write a thesis about; or you find they are wrong, and then you have to tell the authors (who may or may not appreciate it) and be exhaustive in proving it (which you may or may not be interested in doing).

What I would like to see is PhD students being required to reproduce some previous results as part of their thesis. This would increase the quality of scientific results and would prepare candidates to do their own research (unlike now, when they are put in front of a laboratory to get "interesting results" and don't know what to do with it).


Agreed -- would also love to see an incentive structure around publishing negative results. One of the areas of research I think is the most exciting is artificial photosynthesis, and it's currently at the phase where the researchers are trying to discover efficiency gains by just trying out a bunch of different materials.

Take this recent publication [1]:

> For water oxidation, the photoanode surface was protected from corrosion by a 62.5 nm layer of amorphous, hole-conducting TiO2 that was grown by atomic-layer deposition (ALD).

> The TiO2 layer significantly improves the stability of III-V photoanodes in a tandem structure for water oxidation while the tandem structure produces sufficient photovoltage to sustain the efficient, unassisted production of hydrogen by water splitting in aqueous alkaline electrolytes.

They discovered that coating the photoanode with 62.5 nm of TiO2 helps stabilize the reaction. But who knows how many materials they went through to get that one? And how many different coating thicknesses they tried before settling on 62.5 nm? This tech could be next-gen solar; it would be great to see the global rate of discovery increase. Perhaps YC Research can start this trend?

[1] Joint Center for Artificial Photosynthesis - http://authors.library.caltech.edu/59897/1/c5ee01786f.pdf


Exactly. It is very easy to fool yourself when your entire career is on the line if you don't.


Some things in science have become "immoral" to question. In a big way evolution, including macro evolution, has become the religion of science. If you even entertain the idea that macro evolution has not been happening, you might be labeled an evil heretic. You might be labeled anti-science. You might be laughed out of your field. You could get fired from your job. You might lose all your funding. All of this could happen even if you go about your research, testing, and analysis in a very objective, scientific way.


A scientist's first instinct is "get more funding". This cannot help but bias their conclusions.


Unnecessarily cynical. It's a problem that science funding makes scientists' careers so precarious, but nobody goes into science for the money.


"Get more funding" isn't the difference between making a little money vs. making a lot of money. It's the difference between making a little money and not having a job.


Depends on the scientist's position. Tenure-track and tenured faculty in the US typically have 9 months of their salary covered by the institution employing them. Only the remaining three months need be covered by grants (or teaching).


But going from tenure track to tenure in most research universities does require getting grants.


Maybe nobody goes into science for the money, but if you don't focus on the money you won't stay in science very long.


That's an oversimplification at best. Right now, NSF funding rates are so low that scientists feel they need to submit six good grant proposals to get one funded, so they are forced to spend most of the time allocated to research writing grants instead. But if this environment went away, most of these people would love nothing more than to do the actual science.
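As rough back-of-envelope arithmetic (the 1-in-6 figure is the commenter's estimate, not an official NSF statistic): if each proposal is funded independently, the number of submissions until the first award follows a geometric distribution, so the expected proposal-writing load per funded grant is simply the reciprocal of the funding rate.

```python
# Back-of-envelope sketch: with an independent per-proposal funding
# probability p, the number of submissions until the first success is
# geometrically distributed with mean 1/p.
def expected_submissions(funding_rate: float) -> float:
    """Mean number of proposals written per funded grant."""
    if not 0 < funding_rate <= 1:
        raise ValueError("funding_rate must be in (0, 1]")
    return 1 / funding_rate

# At the ~1-in-6 rate described above, that works out to about
# six proposals written for every one that gets funded.
print(expected_submissions(1 / 6))
```

The independence assumption is generous to the scientist; in practice rejected proposals are revised and resubmitted, which only adds to the time cost.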

I think a couple of bigger issues are (i) we reward the first person/group to reach a conclusion, so there's incentive to be sloppy and get your name on the research first and sort out the (hopefully minor) problems later and (ii) there's an expectation from the general public that scientists have to be "productive" all the time. Productive is of course defined as producing new and exciting results, rather than carefully verifying old results, or trying to fill the gaps in previous work. See how much pressure there is on scientists to do "transformative" rather than "incremental" research.

I don't think we as a society have a solution to these problems even outside of science. To pick a familiar example for HN, programmers who write overly-complicated messy code are often rewarded over those who write simpler, cleaner and well-architected code because the former looks like more work. Similarly, there's little interest in writing test benches or documentation in open source projects, because there's little reward or incentive for these things. OpenSSL pre-heartbleed era is a perfect example of all the wrong incentive systems working together to produce a steaming pile of bullshit that everyone involved in the field kept claiming as being the most secure solution out there.


That's not true. Funding is important, but once you have it you stop looking for more. I've been working in research for years, both in companies and universities, and I have always been involved in long-term projects, so I have never had to worry about funding.

Of course, other colleagues are not so lucky, and getting funding is an important part of their day-to-day jobs, especially once you reach higher positions. But it is a necessity, not an instinct.


Do you expect to see any differences in the quality of science from those few scientists who are self-funded?

(I heard a lecture once from a researcher who was on Wall Street, made millions, then switched to femtosecond lasers. He worked out of a university, and funded his own lab. So such people do exist.)

If there are differences, are they measurable?


The problem is that there is massive selection bias. If you're one of the very top researchers in your field, you are much more likely to A) produce world-class research and B) never have much trouble with funding.

Conversely many of those funding their own research are doing so because they cannot get any other funding, which may be an indicator that they aren't very good.


> because they cannot get any other funding

Really? I seriously question that "because". If I had FU money I would definitely start my own lab and pay for research out of pocket. Science is fantastically fun if you don't have to care about funding (and by extension tenure) and only publish because it's nice to get feedback from your peers (rather than because you need X papers to get your next grant and keep your job).

Conversely, writing grants is a total bore.

A "because" I would be more willing to buy: if you made that kind of money for yourself, odds are you spent your prime years achieving that and therefore haven't dedicated the time to science that's necessary to be at the top of your field.


> Conversely many of those funding their own research are doing so because they cannot get any other funding, which may be an indicator that they aren't very good.

... at selling. In my experience, the skill sets needed for acquiring funds and those for doing research don't have much in common.


I think it's official: we moderns owe an apology to medieval barbers and conjurers.


We don't. First, we have correcting mechanisms in place. Second, the capacity for self-deception is greater the softer/fuzzier the field is.


Who measures the softness/fuzziness of a field? Who makes sure that measurement isn't biased? Just because you use maths doesn't mean your conclusions and theories are as precise and clear-cut as 2+2.


I don't like the use of soft/fuzzy for certain fields, but I do know what it means in this context: it's shorthand for "fields so complicated that we aren't even sure if the questions we're asking mean anything." Take physics vs. psychology as an example. The top story on HN right now is about standardizing the kilogram by answering two questions: how many silicon-28 atoms you need to equal the mass of the current reference kilogram, and at what point a watt balance can support the 1 kg weight. This depends on a bunch of terminology and assumptions, but we're pretty clear what those are.
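For a sense of scale, the atom-counting question above can be estimated directly from standard constants (a crude sketch only; the actual Avogadro-project measurement, counting atoms in a near-perfect silicon sphere, is vastly more involved):

```python
# Rough count of silicon-28 atoms in 1 kg, from standard constants.
AVOGADRO = 6.02214076e23        # atoms per mole (exact in the 2019 SI)
SI28_MOLAR_MASS = 27.97692653   # g/mol for silicon-28 (CODATA value)

# moles in 1000 g of Si-28, times atoms per mole
atoms_per_kg = 1000.0 / SI28_MOLAR_MASS * AVOGADRO
print(f"{atoms_per_kg:.4e}")    # on the order of 2.15e25 atoms
```

The point of the example is the one the comment makes: the question is sharply posed, and every term in it has an agreed-upon definition.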

In psychology on the other hand (a field I have a huge amount of respect for, incidentally), we're trying to ask questions about human behavior; everything we ask about is taken within a cultural context, predicated on assumed levels of "normal" behavior, and even assumes that humans /are/ repeatable in some sense (and a lot of psychology studies may not be repeatable at all...)

That repeatability question is a big one, I think: we're a lot more sure about our ability to prepare equivalent circumstances with silicon-28 atoms for an experiment than with people.


> Who measures softness/fuzziness of a field?

There's an absolutely unambiguous distinction between the "hard" and "soft" sciences.

Social scientists hold themselves to fundamentally different standards on things like repeatability, predictive power, and methodology than other fields do. And whenever that difference can be quantified (e.g. in statistical results), "different" means "lower". It's just a fact, and it's one most sociologists will freely admit, for example. You simply can't get the same sorts of results on humans that you can get on, e.g., chemical reactions.

Unfortunately, soft/hard comes with a value judgement that's not necessarily appropriate. But pretending there isn't a clear difference between what are now referred to as the soft and hard sciences is disingenuous.


Generally agree with the sentiment, but I think the line is fuzzy along the periphery.

For instance, economics has, I would claim, far weaker levels of model validation than engineering fields, but often is presented as a hard or 'close-to-hard' science.


Until you have the technology to actually "see" something or measure it accurately, it is almost impossible to deduce reality from correlations.



