Did that drug you just took flunk its clinical trial? How about the course of treatment your doctor just recommended? Of course you don’t know, because you have faith that regulatory bodies, academic journals and researchers are both competent and honest. And, of course, you know nothing of the perverse incentives to produce positive results. A professor once told me that journals have a bias toward positive results; negative results are rarely published. Knowing what does not work is valuable in science, but less interesting to most journals. In academia, published articles are what get raises and promotions, not failed experiments. According to an article in the journal Nature:
Medicine is plagued by untrustworthy clinical trials.
Investigations suggest that, in some fields, at least one-quarter of clinical trials might be problematic or even entirely made up, warn some researchers. They urge stronger scrutiny.
John Carlisle, the editor of the journal Anaesthesia (hence the British spelling), has experience analyzing dodgy data in randomized clinical trials (RCTs). He examined more than 500 studies over a three-year period and found that about one-fourth were fatally flawed for one reason or another.
For more than 150 trials, Carlisle got access to anonymized individual participant data (IPD). By studying the IPD spreadsheets, he judged that 44% of these trials contained at least some flawed data: impossible statistics, incorrect calculations or duplicated numbers or figures, for instance. And 26% of the papers had problems that were so widespread that the trial was impossible to trust, he judged — either because the authors were incompetent, or because they had faked the data.
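The “impossible statistics” Carlisle looks for can often be caught with simple arithmetic. One well-known published screen of this kind is the GRIM test: if a variable takes only integer values (counts, Likert scores), a mean computed from n observations must be a multiple of 1/n, so many reported means are arithmetically impossible for the stated sample size. The sketch below is a minimal, illustrative version of that idea, not a reconstruction of Carlisle’s actual tooling; the function name and example numbers are my own.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Check whether a mean reported to `decimals` places could arise
    from n integer-valued observations (the GRIM test)."""
    # The true mean must be k/n for some integer k; find the closest one.
    k = round(reported_mean * n)
    # Allow for rounding of the reported value by checking neighbors too.
    for candidate in (k - 1, k, k + 1):
        if round(candidate / n, decimals) == round(reported_mean, decimals):
            return True
    return False

# A mean of 5.19 from 28 integer scores is impossible: the multiples of
# 1/28 closest to it round to 5.18 and 5.21, never 5.19.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.21, 28))  # True (146/28 = 5.214... rounds to 5.21)
```

Checks like this need only the summary tables in a published paper; the deeper problems Carlisle describes (duplicated rows, faked records) require the individual participant data itself.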
How widespread is the problem of fake or unreliable data? It’s hard to say, but experts suggest it is common.
For years, a number of scientists, physicians and data sleuths have argued that fake or unreliable trials are frighteningly widespread. They’ve scoured RCTs in various medical fields, such as women’s health, pain research, anaesthesiology, bone health and COVID-19, and have found dozens or hundreds of trials with seemingly statistically impossible data. Some, on the basis of their personal experiences, say that one-quarter of trials being untrustworthy might be an underestimate. “If you search for all randomized trials on a topic, about a third of the trials will be fabricated,” asserts Ian Roberts, an epidemiologist at the London School of Hygiene & Tropical Medicine.
Academic medicine (and probably other disciplines) has what’s known as a paper-mill problem.
[O]ver the past decade, journals in many fields have published tens of thousands of suspected fake papers, some of which are thought to have been produced by third-party firms, termed paper mills.
But faked or unreliable RCTs are a particularly dangerous threat. They not only are about medical interventions, but also can be laundered into respectability by being included in meta-analyses and systematic reviews, which thoroughly comb the literature to assess evidence for clinical treatments. Medical guidelines often cite such assessments, and physicians look to them when deciding how to treat patients.
Other experts argue it is difficult to ascertain how many clinical trials are based on fake or bad data, and some doubt the problem is as large as the worst examples suggest.
Many research-integrity specialists say that the problem exists, but its extent and impact are unclear. Some doubt whether the issue is as bad as the most alarming examples suggest.
Maternal health is purportedly an area of medicine plagued by fraudulent research. An expert in the field examined studies of a drug used to treat postpartum hemorrhage, the most common cause of maternal death, and found 26 that looked suspect. When he asked to see the data behind those 26 studies, he ran into roadblocks.
When he followed up with individual authors to ask for more details and raw data, he generally got no response or was told that records were missing or had been lost because of computer theft.
The expert said these were most likely cases of copycat fraud: researchers hear of a large clinical trial and fake a much smaller one that is unlikely to be questioned. Adding fake trial data to the medical literature nonetheless spreads false information, skewing clinical guidelines when all published studies are taken into account.
When I was in grad school, I was asked to help a new professor move into his office (which, unfortunately, had been the graduate student lounge prior to his arrival). This professor had saved hundreds and hundreds of pounds of data from previous research projects; his early research data was on old-fashioned punch cards. He had kept all of it, moving it with him whenever he changed jobs or offices. Yet when investigators inquired about the data behind suspect research papers, they were often told it had been lost. Considering the lengths to which the professor I helped went to safeguard decades-old data, it’s doubtful a serious research scientist would simply lose his. One partial solution to the problem of fake data is for journal editors to require access to the raw data with every research paper submission.
It’s hard to fathom what ‘peer review’ is supposed to mean when the peers don’t have access to the data under review.
John Abramson has written much on this topic, although mainly in the area of big pharma fudging clinical trials and then restricting access to much of the actual trial data. I wasn’t aware that there was widespread fraud with no motive other than to get a paper published. If that’s really the case, then we are in trouble.
I too was a bit surprised it would happen with randomized controlled trials. I mean, how do you fake a clinical trial that is supposed to involve human subjects? Any facility with research activity has an institutional review board (IRB) that must approve human subjects testing. I ran into that once when we were merely trying to use patient data to correlate length of stay. The data were ex post facto, with names removed. Senior hospital executives still shut us down when they learned we hadn’t run the study by the IRB, even though we weren’t experimenting on human subjects.