‘Publish or perish’ and bias
Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data, by Daniele Fanelli (link). Abstract:
“The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for state’s per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions’ prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists’ productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.”
An important bit on ‘“negative” results’ from the paper:
“Words like “positive”, “significant”, “negative” or “null” are common scientific jargon, but are obviously misleading, because all results are equally relevant to science, as long as they have been produced by sound logic and methods [11,12]. Yet, literature surveys and meta-analyses have extensively documented an excess of positive and/or statistically significant results in fields and subfields of, for example, biomedicine, biology, ecology and evolution, psychology, economics, sociology.
Many factors contribute to this publication bias against negative results, which is rooted in the psychology and sociology of science. Like all human beings, scientists are confirmation-biased (i.e. tend to select information that supports their hypotheses about the world) [19,20,21], and they are far from indifferent to the outcome of their own research: positive results make them happy and negative ones disappointed. This bias is likely to be reinforced by a positive feedback from the scientific community. Since papers reporting positive results attract more interest and are cited more often, journal editors and peer reviewers might tend to favour them, which will further increase the desirability of a positive outcome to researchers, particularly if their careers are evaluated by counting the number of papers listed in their CVs and the impact factor of the journals they are published in.
Confronted with a “negative” result, therefore, a scientist might be tempted to either not spend time publishing it (what is often called the “file-drawer effect”, because negative papers are imagined to lie in scientists’ drawers) or to turn it somehow into a positive result. This can be done by re-formulating the hypothesis (sometimes referred to as HARKing: Hypothesizing After the Results are Known), by selecting the results to be published, by tweaking data or analyses to “improve” the outcome, or by willingly and consciously falsifying them. Data fabrication and falsification are probably rare, but other questionable research practices might be relatively common.”
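The file-drawer mechanism described above is easy to see in a toy simulation. The sketch below is illustrative only and is not the paper's methodology: it assumes every study tests a hypothesis that is actually false (so positives occur only by chance at the significance threshold), and the `alpha` and `file_drawer_rate` parameters are made-up assumptions.

```python
import random

random.seed(0)

def published_positive_share(n_studies=10_000, alpha=0.05, file_drawer_rate=0.8):
    """Share of 'positive' results in the published record under a true null.

    Each study's p-value is uniform on [0, 1] (the hypothesis is false),
    so only about `alpha` of all studies come out positive. Positive
    studies are always published; negative ones stay in the file drawer
    with probability `file_drawer_rate`.
    """
    published = positives = 0
    for _ in range(n_studies):
        p_value = random.random()      # uniform p-value under the null
        is_positive = p_value < alpha
        if is_positive or random.random() > file_drawer_rate:
            published += 1
            positives += is_positive
    return positives / published

# Only ~5% of all studies are positive, yet the published
# literature shows roughly four times that rate.
print(round(published_positive_share(), 2))
```

Even with no data tweaking at all, shelving 80% of negative results inflates the apparent positive rate from about 5% to about 20%, which is why an excess of positive results in a literature is taken as a bias signal.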