Can We Leave Politics out of Medical Research?

Bedcloud
6 min read · Jun 5, 2020

It can be tiring to see yet another painfully obvious pop-science article surface in our collective consciousness. An easy bet: everyone in the developed world has come across health articles, whether on their phone or in a checkout line, with headlines along the lines of “Is coffee bad for you?”, “Does fish oil prevent Alzheimer’s?”, or “Exercise improves health”, each followed by “Here is what scientists have to say.” We revere medical science; after all, where would we be without life-saving drugs and treatments like vaccines and cancer-detection tools? We grant scientists an almost dogmatic, unquestioning trust in our modern day and age. However, that unquestioning trust can unintentionally create political problems and pressure on the medical community to publish papers that are not wholly accurate. Scientists are not publishing papers that push innovation forward; they are publishing papers that will secure research funding and keep them afloat in their field. According to the article “Lies, Damned Lies, and Medical Science” by David H. Freedman, this problem produces a high rate of medical error and has prompted meta-researchers to review medical journals for accuracy.

In examining errors in medical research, David H. Freedman followed the work of John Ioannidis, a physician-researcher known for his contributions to metascience (evidence-based research on research itself). In the 1990s, as a graduate student at Harvard University, Ioannidis used his mathematical and medical background to support the medical research of his time with statistical analysis. Hard data was needed to back treatment decisions made in the medical community; however, when running the statistical analysis, Ioannidis found the data behind drug-company research insufficient and the results of medical treatments easy to manipulate. He also found that medical research suffered from a replication crisis: published studies were often difficult to reproduce. For instance, if a drug company wants to prove that a certain medication alleviates anxiety disorders, the trials are often built on soft data, such as self-reported symptoms or easily measured health markers (blood pressure, cholesterol), to detect any changes in the test subjects. One of the biggest criticisms of drug research was that randomized controlled trials did not adequately account for placebo effects and did not sufficiently eliminate the biases researchers brought into the work.

There is reason to believe this is a byproduct of the environment medical researchers work in, rather than intentional harm in publishing papers with untrustworthy conclusions. Freedman describes a research career as a volatile job position. For scientists to stay afloat in their careers, they constantly have to publish papers. But not just any published paper will do; they must have their work appear in a well-regarded journal with a high rejection rate. “To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent” [1]. In addition, the paper must not “undermine the work of a respected colleague” [1]. So naturally, in such a precarious environment, there is a tendency to publish papers on the same topic over and over again with slight variations, such as the numerous and repetitive health studies on olive oil lowering blood pressure or omega-3 fats not helping heart patients. In “Why Most Published Research Findings Are False,” published in the online journal PLoS Medicine, Ioannidis claims that a third to a half of acclaimed research in medicine is untrustworthy [2]. He cites imperfect research techniques, researcher bias, routine misanalysis (analysis software too complex for its users, leading to botched data entry and marginal errors), and financial conflicts of interest, given that published research has to bring its organization money or repute [2].

Ioannidis published his metascience work in PLoS Medicine, on data errors and the problems randomized controlled trials pose for accurate results, in 2005. By 2020, some of his criticism of medical journals publishing false research has been echoed in the current climate of distrust of science. Today, people doubt that climate change is real, vaccine hesitancy (the anti-vaxxer movement) has slightly increased childhood mortality [3], and some people believe the world is flat rather than spherical [4]. One could argue that some governments have failed to respond to the 2020 Covid-19 pandemic in part because of this distrust of science. There are still conflicting views, and conflicting scientific literature, on whether wearing a face mask prevents illness [5]. The problems unfolding today show how incredibly damaging distrust of science can be in the face of a life-threatening pandemic, and why scrutinizing and addressing unhelpful practices in the scientific community could save lives by magnitudes.

Science is not infallible. Ideally, medical research should be a collaborative environment in which researchers are free to refute errors; at the moment, however, it is not incentivized to be accurate and is only marginally productive in improving health outcomes. A famous example of the scientific environment being too prestige-obsessed, even at the cost of medical errors, is the case of Brian Wansink, a former head of the Food and Brand Lab at Cornell University. Wansink’s research has been retracted due to “misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship” [6]. His reputation has been dragged through the mud as the scientist who manipulated data to chase headlines [6]. He failed to uphold the “gold standard” of scientific research. When interviewed, he admitted to “getting caught up in a race for the next attention-grabbing conclusion,” given the incredible pressure to publish [6]. Learning of Wansink’s situation, it feels as though research can be a marketing game rather than genuine progress. It raises the question: how much productive research has been dismissed because it was not “popular” enough to be published?

It is understandable why medical researchers prioritize whether their work can be funded over accuracy or innovation. At the end of the day, researchers need to be paid and financially secure in order to innovate; otherwise their priorities shift to staying financially afloat rather than to the science itself. Research organizations need a way to establish their reputation when asking for funding, and they must be protective of their intellectual property. But amid a long-standing credibility crisis in the public’s trust of medical research, it is worth asking whether what is systematically occurring now is good, not only for researchers, but for society as a whole in the long term, as people start to dismiss scientific findings. We can improve by creating solutions that re-establish trust in medical science. Data dredging (the misuse of data analysis to hunt for patterns that can be presented as statistically significant) should not be acceptable. Imperfect research techniques should be refined to avoid errors that cost lives. And medical researchers should work in an environment that allows them to challenge colleagues’ work civilly, without repercussions to their own careers. We are seeing the cost of distrust of science right now, in the middle of a pandemic. Given the intellectually driven community found in science, it will be interesting to see what innovative solutions emerge to address this ongoing problem.
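To see why data dredging is so corrosive, here is a minimal Python simulation (my own illustration, not taken from any of the cited articles; the function names `t_statistic` and `dredge` are hypothetical). Both groups in each simulated “study” are drawn from the same distribution, so there is no real effect, yet roughly 5% of the studies still cross the usual significance threshold. A researcher who quietly runs enough analyses will therefore always stumble onto a publishable “finding.”

```python
import math
import random

def t_statistic(a, b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

def dredge(n_studies=1000, n=30, seed=0):
    """Run many 'studies' in which both groups come from the SAME
    distribution, so every 'significant' result is a false positive."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_studies):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        if abs(t_statistic(a, b)) > 2.0:  # roughly p < 0.05 at this sample size
            hits += 1
    return hits

# With 1000 studies of pure noise, expect on the order of 50 "significant" findings.
```

The point of the sketch is that significance testing only controls the false-positive rate per test; run a hundred tests on noise and a handful will look like discoveries, which is exactly the behavior pre-registration and multiple-comparison corrections exist to prevent.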

If you like what I do, please drop a tip! :)
https://ko-fi.com/bedcloud

Works Cited:

1. Freedman, David H. “Lies, Damned Lies, and Medical Science.” The Atlantic, Atlantic Media Company, Nov. 2010, www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/.

2. Ioannidis, John P. A. “Why Most Published Research Findings Are False.” PLOS Medicine, Public Library of Science, 2005, journals.plos.org/plosmedicine/article?id=10.1371%2Fjournal.pmed.0020124.

3. “Ten Health Issues WHO Will Tackle This Year.” World Health Organization, World Health Organization, www.who.int/news-room/feature-stories/ten-threats-to-global-health-in-2019.

4. Furze, Anders. “Why Do Some People Believe the Earth Is Flat?” Phys.org, Phys.org, 14 Jan. 2019, phys.org/news/2019-01-people-earth-flat.html.

5. Oaklander, Mandy. “Should You Wear a Mask to Prevent Coronavirus?” Time, Time, 6 Apr. 2020, time.com/5815251/should-you-wear-a-mask-coronavirus/.

6. Dahlberg, Brett. “Cornell Food Researcher’s Downfall Raises Larger Questions For Science.” NPR, NPR, 26 Sept. 2018, www.npr.org/sections/thesalt/2018/09/26/651849441/cornell-food-researchers-downfall-raises-larger-questions-for-science.
