Agreed, once you know what to look for and how to reproduce it. What do you do if you can't reproduce? That may mean the original research paper didn't disclose everything (maliciously or not, but malicious omission is REALLY frequent) or missed an important factor (sometimes doing a reaction in a slightly scratched glass container changes the outcome entirely).
To come back to the malicious part, for many researchers, not publishing the exact way they do things is part of how they protect themselves from people reproducing their work. Some do it for money (they want to start a business from that research), others to avoid competition, others because they believe they own the publicly funded research...
And sometimes you fail to reproduce something because you failed to do it properly. I don't know how often that happens in the field or in the lab, but it's extremely common on the computational side.
Very often, the thing you are trying to reproduce isn't exactly what was published. You have to adapt the instructions to your specific case, which can easily go wrong. Or maybe you made a mistake in following the instructions. Or maybe you mixed up the instructions for two different cases, because you didn't fully understand the subtleties of the topic. Or maybe you made a mistake in translating the provided scripts to your institute's computational environment.
Part of the problem is that methods sections in contemporary journals don't provide enough information for exact replication, and in the most egregious cases journals let authors write stuff like "cultured cells prepared according to prevailing standards".
From the site: "One of the problems was that room temperature was too low elsewhere. Tumors don't grow when mice are chilly because the vessels vasoconstrict; it cuts off the blood supply and drugs don't circulate."
That means there are important validation/verification steps left out of the whole process. Sure, it's impossible to give every detail, and naturally there's always a time constraint, but if there's a hypothesis about the mechanism of action, it needs to be verified. (Again, easier said than done.)
That's awful. In any field, a writeup of a discovery should include enough information for a peer of the author to reproduce the results. Ideally, it would include enough detail for an enthusiastic amateur to do the same.
This is how we write pen testing reports at work. A pen testing report written that way ~20 years ago is one of the things that got me interested in pen testing. But I apply it to all of my technical writing.
If lack of reproducibility in science is as big a problem as it seems to be, maybe journals should impose a randomized "buddy system" where getting a paper published is conditional on agreeing to repeat at least 2-3 other experiments performed by peers. Have 3 peer researchers/labs repeat the work. If at least 2/3 are successful, publish the paper. If not, the original researchers can revise their instructions once or twice to account for things the peers did differently because the original instructions didn't discuss them.
Hopefully, needing to depend on the other organizations for future peer review would be sufficient to keep everyone honest, but maybe throw in a secret "we know this is reproducible" and a secret "we know this is not reproducible" set of instructions every once in a while, and ban organizations from the journal if they fail more than a few of those.
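To make the acceptance rule and the calibration idea concrete, here's a toy sketch in Python. Every name and number in it is made up for illustration (the 2-of-3 threshold comes from the proposal above; the ban threshold and the "rubber-stamp lab" behavior are my assumptions, not anything a journal actually does):

```python
# Toy model of the proposed "buddy system". All parameters are hypothetical.

ACCEPT_THRESHOLD = 2    # paper is published if >= 2 of 3 labs reproduce it
BAN_AFTER_FAILURES = 3  # labs failing this many calibration checks are banned


def lab_report(lab: dict, truly_reproducible: bool) -> bool:
    """An honest lab reports the truth; a rubber-stamp lab always says yes."""
    return True if lab["rubber_stamp"] else truly_reproducible


def submit(paper_reproducible: bool, labs: list[dict]) -> bool:
    """Accept the paper if enough of the assigned labs reproduce it."""
    reports = [lab_report(lab, paper_reproducible) for lab in labs]
    return sum(reports) >= ACCEPT_THRESHOLD


def calibration_check(lab: dict, known_reproducible: bool) -> None:
    """Secretly send instructions with a known outcome; count mismatches."""
    if lab_report(lab, known_reproducible) != known_reproducible:
        lab["calibration_failures"] += 1
        if lab["calibration_failures"] >= BAN_AFTER_FAILURES:
            lab["banned"] = True


if __name__ == "__main__":
    labs = [{"rubber_stamp": i == 2, "calibration_failures": 0, "banned": False}
            for i in range(3)]

    # Two honest labs outvote the rubber stamp: a bad paper is rejected.
    print(submit(paper_reproducible=False, labs=labs))  # False

    # The rubber-stamp lab passes "known reproducible" checks by accident,
    # but fails every "known NOT reproducible" check and eventually gets banned.
    for _ in range(BAN_AFTER_FAILURES):
        calibration_check(labs[2], known_reproducible=False)
    print(labs[2]["banned"])  # True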
For corner cases that require something truly impractical for even a peer to reproduce independently ("our equipment included the Large Hadron Collider and a neutral particle beam cannon in geosynchronous orbit"), the researchers that want to publish can supervise the peers as the peers reproduce the work using the original equipment.
This would obviously be costly up front, but I think it would be less costly in the big picture than thousands of scientists basing decades of research on a single inaccurate paper.
I also think that forcing different teams to work together might help build a collaborative culture instead of the hostile one that's described elsewhere in this discussion, but maybe I'm overly optimistic.
One problem with ideas like this is that academia is a meritocracy (or at least it attempts to be), and a meritocracy needs merits that can be assessed in a timely fashion. If people need publications for jobs, promotions, and grants, and proper peer review takes too long, they will create a parallel track of publications with lower standards. Over time, the parallel track will become dominant.
That already happened in CS, which inherited slow and thorough journals from mathematics. Because peer review was taking too long, people elevated abstracts in conference proceedings to the status of papers. The idea was that you submitted an extended abstract to a conference with limited peer review. After receiving feedback, you would write the actual paper and submit it to a journal for proper peer review. But because the work was already published in conference proceedings, people often didn't bother with the full paper.
In some countries, administrators resisted this and only considered journal papers real publications. Those administrators were universally reviled by the CS community. Over time, most of them budged and started accepting conference papers as merits. And so CS became a field with lower than average standards for peer review.
In biochem there are sometimes a lot of skills involved, to the point where it's almost magic: intangible qualities determine whether an experiment succeeds. Especially for more manual work.
For my master's research I spent 6 years refining a super niche technique until I was able to reproduce my own work.
No, if you want, you have virtually unlimited supplementary information you can attach to your publication. It's really a mix of doing as little as possible and hiding details so competitors can't do it.