‘Replication’ means that, given the same data and methods as the original research, repeating the experiment will produce the same result within a calculated variance. The problem is that a lot of published research doesn’t replicate, and the deeper problem is that this means a lot of the money spent on research grants has been wasted. Plenty of people know that, and it’s been ignored, because acknowledging it would likely make the money dry up and force these ‘scientists’ to find another way to fund the lifestyle they’ve grown accustomed to.
In other words: “Can you say ‘Fraud’? … I knew you could!”
The ‘Replication Crisis’ Could Be Worse Than We Thought, New Analysis Reveals.
The science replication crisis might be worse than we thought: new research reveals that studies with replicated results tend to be cited less often than studies that have failed to replicate.
Thus, based on the new analysis, research that is more interesting and different appears to garner more citations than research with a lot of corroborating evidence.
Behavioral economists Marta Serra-Garcia and Uri Gneezy from the University of California analyzed papers in some of the top psychology, economics, and general science journals; they found that studies that failed to replicate since their publication were on average 153 times more likely to be cited than those that had replicated – and that the influence of these papers is growing over time.
“Existing evidence also shows that experts predict well which papers will be replicated,” write the researchers in their published paper. “Given this prediction, why are non-replicable papers accepted for publication in the first place?”
“A possible answer is that the review team faces a trade-off. When the results are more ‘interesting’, they apply lower standards regarding their reproducibility.”
After analyzing 20,252 papers citing these studies across a variety of journals, they found that non-replicable papers are, on average, cited 16 times more per year.
In psychology journals, 39 percent of the 100 analyzed studies had been successfully replicated. In economics journals, it was 61 percent of 18 studies, and in the journals Nature and Science, 62 percent of 21 studies.
The difference was most striking in the prominent journals Nature and Science: there, non-replicable papers were cited 300 times more, on average, than replicable ones. These gaps remained even after accounting for the number of authors on a paper, the number of male authors, the details (such as location and language) of the experiments, and the field in which the paper was published.
Across all the journals and papers, citations of a non-replicable study only mentioned the non-replication 12 percent of the time. However, it’s not just paper authors and scientists who need to be more aware of the problem, the researchers say.
Problematic research can take a long time to put right, too: an infamous, now-retracted 1998 paper linking vaccines and autism turned many against the idea of vaccination as a safe and healthy option. It took 12 years for that particular paper to be retracted, and it has caused lasting damage to public perception of vaccine safety.
Retracting papers can make a difference, though, the researchers point out – statistics show that citations of a retracted paper tend to drop by almost 7 percent per year. This is perhaps one way of managing the current replication crisis and making sure that our scientific methods are as thorough as possible.
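To put that 7-percent-per-year figure in perspective, a short back-of-the-envelope sketch (the compounding assumption is ours, not the paper's) shows how slowly a retracted paper's citation rate actually fades:

```python
# Hypothetical illustration: if citations to a retracted paper fall
# roughly 7% each year, compounding that decline over a decade still
# leaves the paper collecting almost half its original citation rate.
annual_drop = 0.07
years = 10
remaining = (1 - annual_drop) ** years  # fraction of the original rate left
print(f"After {years} years: {remaining:.0%} of the original citation rate")
```

In other words, even a retraction only erodes a paper's influence gradually, which is consistent with the article's point that problematic research takes a long time to put right.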
The authors of the new study acknowledge that academics and journal editors alike feel pressure to publish ‘interesting’ findings that are more likely to attract attention, but want to see research into how the quality of scientific papers can be improved.
“We hope our research encourages readers to be cautious if they read something that is interesting and appealing,” says Serra-Garcia.
“Whenever researchers cite work that is more interesting or has been cited a lot, we hope they will check if replication data is available and what those findings suggest.”
The research has been published in Science Advances.