Even if you don’t follow the news about the latest medical studies very closely, you may have noticed that sometimes they seem to contradict themselves.
One week red wine, or bread, or chocolate is good for you. The next, it increases your risk of disease.
Or take a 2013 study in the American Journal of Clinical Nutrition. Researchers found that many common cookbook ingredients had been linked to both an increased and a decreased risk of cancer.
It all depended on which medical study you looked at.
This can be confusing for the public, and for doctors. You may even be tempted to tune out whenever the “latest medical breakthrough” is announced.
A better approach might be to treat medical studies with a bit of healthy skepticism. And also to understand how things can go wrong as medical research moves from the lab, to the clinic, to the doctor’s office.
This can help you know which studies to trust and which to question.
Many Studies Published, Few Noticed
According to the Web of Science scientific citation database, about 12.8 million medical and health studies were published between 1980 and 2012.
Most university scientists read only 250 to 270 scientific papers per year. Nonuniversity scientists read about half that number.
By some estimates, that means about half of all scientific papers are read only by the authors, reviewers, and journal editors. Ninety percent are never cited by another medical study.
Even fewer studies make it into the media. When they do, however, they can sometimes generate an immense amount of hype.
While media outlets are primarily the ones overhyping medical studies, there’s plenty of blame to go around.
In a 2014 BMJ paper, researchers found that exaggerated reporting of medical studies can sometimes be traced back to the press releases put out by universities.
Forty percent of the press releases they looked at included health advice that was more direct or explicit than what was found in the actual paper. Thirty-six percent overinflated the relevance of animal or cell studies to humans.
The press releases put out by the medical journals themselves have also been accused of overhyping study findings.
“I do not enjoy this – repeatedly calling out The BMJ for its misleading news releases on observational studies, but I’m going to keep doing it until I see a change,” Gary Schwitzer, a journalism researcher at the University of Minnesota School of Public Health in Minneapolis, wrote on his Health News Review blog in 2014.
Scientists also bear some responsibility.
A 2012 PLOS Medicine study found that overhyped medical news stories were “probably related to the presence of ‘spin’ in conclusions of the scientific article's abstract.”
However, that hardly absolves the media of passing on overhyped information to the public.
“Journalists who blame poor or misleading press releases for their own poor or misleading reports are rather like athletes who blame positive drug tests on contaminated supplements,” Mark Henderson, head of communications at the Wellcome Trust and former science editor of the U.K.’s The Times, wrote on the Wellcome Trust website. “They should take better care.”
Knowing what kind of study is being reported can cut through much of the hype. It can take years for research in mice or chimpanzees to make its way to human clinical trials. Also, observational studies are not enough to say that a treatment works. For that you need a randomized clinical trial, which is the gold standard of medical research.
Also, it’s useful to remember that science is a cumulative process. If you look at one data point, or one medical study, you can never be sure if that is the way things really are.
Systematic reviews, like the ones found in the Cochrane Library, can provide a bigger picture. These reviews pool the existing studies on a topic to assess where the evidence currently stands.
Pressure to Publish
Even without hype, medical studies can still lead the public astray, sometimes at the hands of the researchers themselves.
Earlier this month in Australia, neuroscientist Bruce Murdoch, Ph.D., received a two-year suspended sentence for fraud related to a study of a treatment for Parkinson’s disease. During the sentencing, the judge stated that she found no evidence that Murdoch had even conducted the clinical trial.
Several papers written by Murdoch and his colleague Caroline Barwood, Ph.D., were retracted by journals.
Reputable journals try to ensure the quality and accuracy of studies by sending them through a peer review process in which other researchers in the same field review the paper before publication.
This is meant to flag major concerns, but it may not catch blatant fraud by the researchers because peer reviewers don’t have access to all of the study’s data. Also, even the peer review process can be faked.
Although peer review is not perfect, many scientists stand by it as the best way to ensure the quality of medical studies.
Not every journal, though, is peer-reviewed. And the rise of internet-only journals has opened the floodgates.
Jeffrey Beall, an academic librarian at the University of Colorado Denver, maintains a list of what he calls “predatory” journals. The papers in these journals are not necessarily fake or wrong, but without some sort of review by other researchers familiar with the science, it’s difficult to know if the papers are worth reading.
Funding Can Shape Study Results
Even peer-reviewed journals have their problems.
Some of these issues are subtle, like the influence of funding on a study’s results.
In the United States, most scientific research is funded by government agencies like the National Institutes of Health (NIH) or the National Science Foundation (NSF).
However, private companies also fund studies, often ones that are testing their drug or product.
One study found that clinical trials that favored a new treatment over a traditional therapy were more likely to be funded by pharmaceutical companies. Even nutrition-related studies about soft drinks, juice, or milk can favor the product of the company sponsoring the study.
This doesn’t mean that companies are deliberately altering the results. Something as simple as the way a study is designed, including which products or treatments are compared, can influence the outcome.
That’s why it’s important to know who is paying for a study. Most journals include this information in the paper, but it may not always be mentioned in a news story.
Many Studies Are Wrong
Other experts see even bigger problems with medical studies, and some suspect that most of them are wrong.
That might sound extreme, but all scientific studies have some flaw or bias in their design. That’s why science emphasizes repeating or replicating experiments to confirm the results. A single positive result might just be a fluke.
Not every published study, though, can be replicated.
Recently, social psychologist Brian Nosek, Ph.D., and his colleagues repeated 98 studies published in three psychology journals to see if they would get the same results. They succeeded in only 39 cases.
This problem is not unique to the field of psychology.
Biotechnology company Amgen found that they could not replicate 47 out of 53 “landmark” cancer studies.
Drug company Bayer had a similar problem. They were able to repeat only one-fifth of 67 important papers in oncology, women’s health, and cardiovascular medicine.
However, like other medical studies, even systematic reviews have their limitations, especially when they are based on poorly designed or poorly run studies, of which some experts believe there are many.
Dr. John Ioannidis, a professor of medicine at Stanford University School of Medicine, argues that as much as 90 percent of the published medical information that doctors use to make their decisions is flawed.
In addition, a service that reviews new studies for doctors and other clinicians found that only 3,000 of about 50,000 medical papers published each year are well-designed enough to be used to guide patient care.
Ioannidis identified problems with the way scientists do research — all the way from designing a study to publishing their findings in a medical journal.
“At every step in the process, there is room to distort results, a way to make a stronger claim, or to select what is going to be concluded,” Ioannidis said in an interview with The Atlantic in 2010. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
In spite of the obvious failings of many medical studies, Ioannidis sees a way forward.
In a 2014 paper in PLOS Medicine, he proposed treating scientific research the way you might a disease — by finding an intervention that will make research more structured and rigorous.
“The achievements of science are amazing, yet the majority of research effort is currently wasted,” wrote Ioannidis. “Interventions to make science less wasteful and more effective could be hugely beneficial to our health, our comfort, and our grasp of truth and could help scientific research more successfully pursue its noble goals.”