Dealing with biased reporting of the available evidence
Biased reporting of research occurs when the direction or statistical significance of results influences whether and how research is reported.
Avoiding biased comparisons entails identifying and taking account of all the relevant, reliable evidence in systematic reviews. This is challenging in many ways, particularly because some pertinent evidence is never published: biased decisions are made about which results of research are submitted and accepted for publication. Studies that have yielded ‘disappointing’ or ‘negative’ results are less likely to be reported than others. This is often called ‘publication bias’ or ‘reporting bias’. Reporting bias can also arise from biased analyses of studies carried out after their results are known.
These reporting biases have been recognized for centuries (Dickersin and Chalmers 2010). In 1792, for example, James Ferriar stressed the importance of recording treatment failures as well as treatment successes (Ferriar 1792). This principle was reiterated in an editorial published in the Boston Medical and Surgical Journal just over a century later (Editorial 1909).
There is now a large body of evidence confirming that reporting bias is a substantial problem. There is also evidence that reporting bias results principally from researchers failing to write up or submit reports of research for publication, rather than from biased rejection of submitted reports by journal editors (Dickersin 1997). More recent research has revealed an additional problem: when estimates of treatment effects on some of the outcomes studied do not support researchers’ conclusions, these data sometimes go unreported as well (Chan et al. 2004).
For example, had all the studies of the effects of giving drugs to reduce heart rhythm abnormalities in patients having heart attacks been reported, tens of thousands of deaths from these drugs could have been avoided. In 1993, Dr Cowley and his colleagues pointed out how an unpublished study done 13 years previously might have “provided an early warning of trouble ahead”. Nine patients had died among the 49 assigned to the anti-arrhythmic drug (lorcainide) compared with only one patient among a similar number given placebos. “When we carried out our study in 1980”, they reported, “we thought that the increased death rate was an effect of chance…The development of lorcainide was abandoned for commercial reasons, and this study was therefore never published; it is now a good example of ‘publication bias’” (Cowley et al. 1993).
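The arithmetic behind the lorcainide example can be made concrete with a short calculation. The sketch below is illustrative only: the essay gives 9 deaths among 49 patients on lorcainide but says only that ‘a similar number’ received placebo, so the placebo group size of 49 used here is an assumption. It computes the ratio of death rates and a one-sided Fisher exact probability, which indicates how unlikely such an imbalance would be under chance alone, given the assumed group sizes.

```python
from math import comb

# A rough sketch of the arithmetic behind the lorcainide example above.
# The essay reports 9 deaths among 49 patients given lorcainide and 1 death
# among "a similar number" given placebo; the placebo group size of 49 used
# below is an assumption made purely for illustration.
deaths_drug, n_drug = 9, 49
deaths_placebo, n_placebo = 1, 49  # assumed group size

# Death rate on lorcainide relative to placebo (risk ratio).
risk_ratio = (deaths_drug / n_drug) / (deaths_placebo / n_placebo)

# One-sided Fisher exact test: under the null hypothesis of no treatment
# effect, the probability that at least 9 of the 10 deaths fall in the
# lorcainide group (hypergeometric tail probability).
total = n_drug + n_placebo
total_deaths = deaths_drug + deaths_placebo
p_one_sided = sum(
    comb(total_deaths, k)
    * comb(total - total_deaths, n_drug - k)
    / comb(total, n_drug)
    for k in range(deaths_drug, total_deaths + 1)
)

print(f"Risk ratio: {risk_ratio:.1f}")                # 9.0 with the assumed denominators
print(f"One-sided Fisher exact p: {p_one_sided:.4f}")
```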
Reporting biases tend to lead to conclusions that medical treatments are more useful, and freer of side effects, than they actually are. They can therefore result in unnecessary suffering and death, and in resources wasted on ineffective or dangerous treatments (Chalmers 2004). People who agree to researchers’ requests to participate in tests of treatments assume that their participation will lead to an increase in knowledge. This implied contract between researchers and participants is breached by researchers who do not make the results of their research public.
Biased under-reporting of research is scientific misconduct and unethical (Chalmers 1990). Selective reporting of studies sponsored by the pharmaceutical industry is a particular problem (Hemminki 1980; Melander et al. 2003), although the problem is not limited to those with commercial vested interests. Research ethics committees, medical ethicists and research funders have so far not done enough to protect patients and the public from the adverse effects of reporting biases (Savulescu et al. 1996). Fair testing of treatments – particularly treatments in which there is a commercial interest – will remain compromised for as long as this form of research misconduct is tolerated by governments and others who should be protecting the interests of the public.
The World Health Organization has begun to coordinate efforts to address the problem of unidentifiable research and publication (or dissemination) bias. First, it has established standards for the registration of trials and for the exchange of trial data. Second, it proposes that research protocols be registered, before patient recruitment starts, in databases that meet these standards. Finally, it has established an open access portal (www.who.int/ictrp) that collates the data of all national and regional registers, allowing people to learn about planned, ongoing and completed studies. Since 2013, the AllTrials campaign (www.alltrials.net) has called for the registration and reporting of all trials. Monty Python’s take on selective reporting can be seen here.
The text in these essays may be copied and used for non-commercial purposes on condition that explicit acknowledgement is made to The James Lind Library (www.jameslindlibrary.org).
References
Chalmers I (1990). Under-reporting research is scientific misconduct. JAMA 263:1405-1408.
Chalmers I (2004). In the dark: drug companies should be forced to publish all the results of clinical trials. New Scientist 181:19.
Chan A-W, Hróbjartsson A, Haahr M, Gøtzsche PC, Altman DG (2004). Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to publications. JAMA 291:2457-2465.
Cowley AJ, Skene A, Stainer, Hampton JR (1993). The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction. International Journal of Cardiology 40:161-166.
Dickersin K (1997). How important is publication bias? A synthesis of available data. AIDS Educ Prev 9(1 Suppl):15-21.
Dickersin K, Chalmers I (2010). Recognising, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the World Health Organisation. JLL Bulletin: Commentaries on the history of treatment evaluation (www.jameslindlibrary.org).
Editorial (1909). The reporting of unsuccessful cases. Boston Medical and Surgical Journal 161:263-264.
Ferriar J (1792). Medical histories and reflexions. Vol 1. London: Cadell and Davies.
Hemminki E (1980). Study of information submitted by drug companies to licensing authorities. BMJ 280:833-6.
Melander H, Ahlqvist-Rastad J, Meijer G, Beermann B (2003). Evidence b(i)ased medicine – selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. BMJ 326:1171-3.
Savulescu J, Chalmers I, Blunt J (1996). Are research ethics committees behaving unethically? Some suggestions for improving performance and accountability. BMJ 313:1390-1393.
Read more about the evolution of fair comparisons in the James Lind Library.