

Failures to replicate trial findings


Following the recent Pharmaceutical Journal review of Ben Goldacre's book ‘Bad Pharma’ (PJ 2012;289:526), I was interested to read about another problem with medical trials: an opinion piece in New Scientist suggesting that more than half of biomedical study results cannot be reproduced. In other words, if you conduct a trial once and then repeat it with the same methodology, the findings may differ.

Although we are all aware of the need to test scientific knowledge over and over again to build a robust evidence base, this article suggests that a huge proportion of the published literature is not reproducible.

Pharmaceutical companies identify some of these replication failures as they scour the scientific literature for promising leads in new drug development. The New Scientist author, Elizabeth Iorns, highlights two articles in Nature journals in which two pharmaceutical companies — Bayer and Amgen — report that they could not replicate about two-thirds and 88 per cent, respectively, of the published studies of interest that they examined.

Several factors contribute to these failures, says the article: the pressure to cut corners and get published quickly; the temptation to see what one wants to see and salvage a positive outcome from months or years of hard work; and the impossibility of being an expert in every experimental technique. But the cost is high, she notes. Researchers may spend vast amounts of money trying to replicate findings in the literature, and their failures may go unpublished, leading others to repeat the same studies over and over again.

Punishing investigators who publish studies that cannot be reproduced has apparently been suggested, but Iorns asks whether researchers could instead be rewarded for having their scientific results replicated independently before or shortly after publication.

To this end, she has set up the Reproducibility Initiative, run through the Science Exchange, in which scientists submit studies they would like to see replicated. The idea is for an independent advisory board to send studies to experts in the relevant fields; their findings are returned to the original investigators, and studies that are successfully replicated receive a certificate of reproducibility. The cost, expected to be about one tenth of the cost of the original study, will be borne by the original investigators.

The aim is not to police the entire scientific literature but to generate some guarantee of robustness to increase the efficiency of research and development. As Goldacre says in his book, and Iorns suggests here, there are ways of beginning to address some of the existing problems associated with medical trials.



From: Beyond pharmacy blog

