Just to Get Published: An Insight Into Scientific Lying
Article and illustration by Norbert Borski
Scientific journals exist for one reason – sharing knowledge. The core idea is that major discoveries are published so the world and other researchers can stay updated on recent advancements. But what if something that gets published is not true? What if our research turns out to be based on false data? We lose money, time, and trust in scientific communication. This is the consequence of publishing bullshit.
Effective communication in the scientific world is crucial. Published findings allow us to take the next step towards discovering something new or better understanding our observations and results. Publications pave the road to new discoveries, which makes it essential that only verified and confirmed information is shared. This allows other researchers to replicate the experiments, use the presented data, and build upon it. Unfortunately, this idea is not always the reality.
The biotechnology firm Amgen attempted to replicate the experiments of 53 promising pre-clinical studies. They succeeded in confirming the findings of only six – just 11% of all the studies (Begley & Ellis, 2012). This does not necessarily mean that the original authors engaged in research misconduct. Reproducing results can be challenging, particularly in pre-clinical trials. However, it highlights the importance of critical evaluation when reading scientific literature. If data is falsified or evidence is selectively presented to support a hypothesis, replication becomes impossible.
One of the most infamous cases of misinformation in scientific literature is Andrew Wakefield’s study linking the MMR (measles, mumps, and rubella) vaccine to autism. Published in The Lancet – one of the oldest and most renowned British medical journals – in February 1998, Wakefield’s paper garnered widespread attention. Within a month, a panel of 37 experts reviewed the alleged evidence and concluded that Wakefield’s claims were baseless (Edwards, 2001). In other words, it was all bullshit.
Despite the experts’ opinion and the article’s eventual retraction, Wakefield’s flawed study continues to fuel anti-vaccine movements, demonstrating the far-reaching consequences of allowing false information into journals.
One common form of scientific misconduct is image manipulation. In biomedical research, for example, this can involve photos of cells under a microscope or immunoblotting results – a technique for identifying proteins. Dishonest researchers might clone images, simply copy-pasting them into unrelated experiments; crop them to show only favorable results; or enhance them so that protein bands appear more prominent than they are in reality.
Detecting such manipulation requires a trained eye, and Elisabeth Bik surely has one. She is a true watchdog of the scientific literature. This Dutch microbiologist has made a remarkable impact by analyzing thousands of papers for image fraud. The level of detail she notices is impressive, but by her own account she “merely detects the very stupid scientists” (Voormolen, 2021).
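The crudest form of this fraud – outright duplication of a panel – can in principle also be flagged by software. As a rough illustration only (this is not Bik’s method, nor any established screening tool), the short Python sketch below compares two figure panels with a perceptual hash; it assumes the third-party Pillow and imagehash packages, and the file names are purely hypothetical.

    from PIL import Image    # Pillow, for loading the image files
    import imagehash         # perceptual hashing of images

    def looks_duplicated(panel_a, panel_b, threshold=5):
        # Compute 64-bit perceptual hashes of both panels.
        hash_a = imagehash.phash(Image.open(panel_a))
        hash_b = imagehash.phash(Image.open(panel_b))
        # Subtracting two hashes gives their Hamming distance; a small
        # distance means the panels are near-identical, even after
        # resizing or recompression.
        return (hash_a - hash_b) <= threshold

    # Hypothetical example: panels from two supposedly unrelated figures
    if looks_duplicated("figure2_panelA.png", "figure5_panelC.png"):
        print("Suspiciously similar panels - worth a closer look.")

A check like this would only catch lazy copy-pasting; rotated, mirrored, or partially reused images still demand the kind of human scrutiny Bik provides.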
What also makes Bik’s work unique is her transparency. She publishes her findings under her own name, openly pointing out suspicious images and potential misconduct on social media. Her dedication and persistence have resulted in at least 172 retractions and over 300 corrections (Shen, 2020).
We should all channel our inner Elisabeth Bik every time we read something. In the age of fake news, critical thinking and reading are more important than ever. For scientific literature, platforms like PubPeer allow users to discuss and comment on published studies, often highlighting questionable data.
What should you do if you suspect fraud in a paper? You can contact the authors directly or reach out to the journal’s editors and request an investigation, which may lead to the paper’s retraction. Unfortunately, this process can be slow – sometimes taking years. For instance, a paper by Sijen et al. (2007) received a comment on PubPeer in 2015 raising concerns about the credibility of one of the published images. However, no action was taken until Elisabeth Bik joined the discussion in March 2020. The paper was finally retracted in November 2020. It took 8 years for the first person to notice the modified image, and another 5 years to retract the paper. During these 13 years it remained accessible – 13 years of false information circulating and influencing further research.
Of course, not all inaccuracies in scientific publications are intentional. Mistakes do happen. Negligence or simply a lack of knowledge about certain experimental techniques can result in misleading data. What appears to be groundbreaking research may sometimes be the result of simple errors. However, intentional manipulation undermines the very foundation of scientific integrity.
One might ask: why do people do that? Why all this scientific lying? There are many possible reasons, but a major one is the pressure to publish. Without publications, finding a job or securing funding can be challenging. A researcher must establish a reputation in the scientific community – without publications you are often no one. Additionally, Begley and Ellis (2012) rightly observed that “journal editors, reviewers and grant-review committees often look for a scientific finding that is simple, clear and complete — a perfect story”. Journals prioritize groundbreaking, logical, coherent, and well-presented stories, so some researchers try to adjust their findings to fit these criteria.
All of this highlights why peer review is crucial. Typically, scientific journals require at least two reviewers per article. Their responsibility is to verify the quality of the work, identify mistakes, and, if necessary, request additional experiments before publication. But if two experts review each paper, how is it that some experiments remain irreproducible? Why do manipulated data, altered images – all that bullshit – get published? Is it carelessness on the part of the reviewers, or are there deeper issues at play? These are important questions we must ask. Such situations should be rare. Otherwise, how can we maintain trust in scientific communication?
References:
Begley, C. G., & Ellis, L. M. (2012). Raise standards for preclinical cancer research. Nature, 483(7391), 531–533. https://doi.org/10.1038/483531a
Edwards, C. (2001). Is the MMR vaccine safe? Western Journal of Medicine, 174(3), 197–198. https://doi.org/10.1136/ewjm.174.3.197
Shen, H. (2020). Meet this super-spotter of duplicated images in science papers. Nature, 581(7807), 132–136. https://doi.org/10.1038/d41586-020-01363-z
Sijen, T., Steiner, F. A., Thijssen, K. L., & Plasterk, R. H. A. (2007, January 12). RETRACTED: Secondary siRNAs Result from Unprimed RNA Synthesis and Form a Distinct Class. https://pubpeer.com/publications/2B00E5BEB5B75B499550D03C15EFA4
Voormolen, S. (2021, May 22). ‘Ik detecteer alleen maar de heel domme wetenschappers’ [I merely detect the very stupid scientists]. NRC. https://www.nrc.nl/nieuws/2019/10/04/een-arendsoog-voor-wetenschapsfraude-a3975678
Wakefield, A. J., Murch, S. H., Anthony, A., Linnell, J., Casson, D. M., Malik, M., Berelowitz, M., Dhillon, A. P., Thomson, M. A., Harvey, P., Valentine, A., Davies, S. E., & Walker-Smith, J. A. (1998). Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. Lancet (London, England), 351(9103), 637–641. https://doi.org/10.1016/s0140-6736(97)11096-0
(Retraction notices: Lancet, 2004, 363(9411), 750, https://doi.org/10.1016/S0140-6736(04)15715-2; and Lancet, 2010, 375(9713), 445, https://doi.org/10.1016/S0140-6736(10)60175-4)

Norbert Borski is a first-year Master’s student in Biomedical Sciences with a specialization in Oncology at UvA. He completed his Bachelor’s in Biology in Cracow, Poland, and then traveled through South America before settling in Amsterdam. Besides spending time in the lab, he is passionate about exploring new cultures, traveling, and learning languages. In his free time, he enjoys creating art through pottery, painting, and writing, as well as appreciating art through film and theater.