
Meat Is Unhealthy, Meat Is Okay: Why Science Keeps Overturning What We Thought We Knew

Don’t be surprised that scientists keep updating their advice.

For years, health experts have been saying that to decrease the risk of heart attacks and cancer, it’s wise to cut back on red meat, and especially processed red meat, like bacon.

This week, that conventional wisdom was upended. Five systematic reviews, published Monday in the journal Annals of Internal Medicine, found that the unhealthful effects of regular meat consumption are negligible. (There remains a strong environmental, and ethical, case for reducing meat consumption — that’s just not what these reviews looked at.)

But while the new red meat decree might feel jarring, it’s not actually a bad thing for nutrition — or even science generally. In fact, this is how science is supposed to work.

The real story behind the meat news is that a widespread understanding about nutrition was changed by better science and stronger methodology. And it’s not just nutrition science that’s experiencing this kind of reckoning.

Other influential research in psychology has also been toppled by more refined scientific methods lately — that’s what the “replication crisis” is all about. It’s a big deal, and a pattern worth looking at if we want to understand why the things we thought we knew keep turning out to be wrong.

Why nutrition science is getting better

A growing chorus of critics has been pointing out that large observational studies — the bedrock of nutrition science — are often hopelessly limited in their ability to give us clear answers about which foods are beneficial for health.

For example, with case-control studies — a type of observational research — researchers start with an endpoint (say, people who already have cancer). For each person with a disease (a case), they find a match (a control): someone who doesn’t have the disease. They then look backward in time and try to determine whether patterns of exposure (in this case, eating meat) differed between those with cancer and those without.

But since meat eaters differ so fundamentally from those who don’t eat meat, as we’ve explained, the reasons the two groups have varying health outcomes could have nothing to do with eating meat. Researchers try to control for “confounding factors,” the other variables that may lead to one person getting cancer and another staying healthy. But they can’t capture all of them.
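To see how confounding can manufacture a “meat effect” out of nothing, here’s a toy simulation (our own sketch; every number in it is invented) in which smoking drives both meat eating and disease, while meat itself does nothing:

```python
import random

random.seed(0)

# Invented toy world: smokers are more likely to eat meat AND more likely
# to get the disease. Meat consumption itself has zero effect.
def simulate_person():
    smoker = random.random() < 0.3
    eats_meat = random.random() < (0.8 if smoker else 0.4)
    disease = random.random() < (0.20 if smoker else 0.05)  # meat plays no role
    return smoker, eats_meat, disease

people = [simulate_person() for _ in range(200_000)]

def disease_rate(group):
    return sum(d for _, _, d in group) / len(group)

meat_eaters = [p for p in people if p[1]]
non_meat_eaters = [p for p in people if not p[1]]

# Naive comparison: meat eaters look nearly twice as likely to get sick,
# even though meat is harmless by construction.
print(f"meat eaters:     {disease_rate(meat_eaters):.3f}")
print(f"non-meat eaters: {disease_rate(non_meat_eaters):.3f}")

# Stratify by the confounder and the spurious "meat effect" vanishes.
for smoker in (True, False):
    meat = [p for p in people if p[0] == smoker and p[1]]
    no_meat = [p for p in people if p[0] == smoker and not p[1]]
    print(f"smoker={smoker}: meat {disease_rate(meat):.3f} vs "
          f"no meat {disease_rate(no_meat):.3f}")
```

Stratifying by smoking makes the illusion disappear. But a real study can only adjust for the confounders it thought to measure, which is exactly the limitation critics point to.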

So these relatively weak study designs are not meant to be a source for definitive statements about how a single food or nutrient increases or decreases the risk of a disease by a specific percentage.


Why have so many of these studies been done? Because they can give nutrition researchers a sense of what they might study in a more rigorous (and expensive) randomized trial. One observational study can’t tell you much. But if many of the best-quality observational studies (such as cohort studies) find a large effect on a disease, they’re probably pointing to something real.
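For a rough sense of how researchers combine such studies, here’s a minimal fixed-effect meta-analysis sketch (the relative risks and standard errors below are made up for illustration, not taken from any actual review): more precise studies get more weight, and the pooled estimate is tighter than any single study’s.

```python
import math

# Hypothetical cohort-study results: (relative risk, standard error of log RR).
# All numbers invented for illustration.
studies = [
    (1.15, 0.10),
    (1.22, 0.08),
    (1.10, 0.12),
    (1.18, 0.09),
]

log_rrs = [math.log(rr) for rr, _ in studies]
weights = [1 / se**2 for _, se in studies]  # inverse-variance weighting

pooled_log_rr = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

low = math.exp(pooled_log_rr - 1.96 * pooled_se)
high = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"pooled relative risk: {math.exp(pooled_log_rr):.2f} "
      f"(95% CI {low:.2f} to {high:.2f})")
```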

Yet guidelines in the past haven’t taken a nuanced approach to evaluating the strengths and weaknesses of different types of nutrition studies.

Instead, they’ve relied on a broad range of research, including animal evidence and case-control studies. Just four years ago, the World Health Organization’s International Agency for Research on Cancer announced that people should cut back on processed meats if they wanted to avoid certain types of cancer. The American Heart Association and the US government’s dietary guidelines panel, meanwhile, have been beating the drum about a plant-rich diet for years.

The new meat studies attempted to hold nutrition research to a higher standard.

The 14 researchers behind the papers sorted through the noise of observational studies — picking out only the strongest among them (i.e., the large cohort studies) — while also relying on higher-quality evidence from randomized controlled trials to draw their conclusions. The authors were making a deliberate effort to ensure nutrition advice is based only on the best available research, with conclusions that are more reliable.

The result isn’t perfect. One can argue that nutrition science is so flawed we shouldn’t be making guidelines at all. Or one can argue that people need guidance about what to eat, and that reviews like the meat studies at least expose the holes in our knowledge and point to the studies we’d need to make even stronger guidelines.

Nutrition science crusaders haven’t just been picking on weak observational studies. They’ve also been challenging some of the most respected randomized trials in nutrition by looking back at trial data using sophisticated statistical tests to pick out flaws.

The PREDIMED study was one target. Conducted in Spain, it tracked more than 7,400 people at high risk of cardiovascular disease. The researchers stopped the trial early after finding that the Mediterranean diet, when supplemented with lots of olive oil or nuts, could cut a person’s risk of cardiovascular disease by a third. A later review of the data showed the trial was poorly run, and PREDIMED’s conclusions have since been called into question.

Ideas in social science are being overturned and debated, too

That nutrition science is updating old findings with new evidence does not mean the science is fatally flawed. Science moves along incrementally. It’s a long, grinding process involving false starts, dead ends, and studies that in hindsight may turn out to be poorly executed.

If anything, the meat studies remind us the science is getting better.

A similar trend can be seen in social science, where researchers have been reevaluating classic textbook findings with more rigorous methodology, and discovering many are flawed.

The “replication crisis” in psychology started around 2010, when a paper using completely accepted experimental methods was published purporting to find evidence that people can perceive the future, which is impossible. This prompted a reckoning: Common practices, like drawing on small samples of college students, were found to be insufficient for detecting true experimental effects.

Scientists thought if you could find an effect in a small number of people, that effect must be robust. But often, significant results from small samples turn out to be statistical flukes.
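A quick simulation shows why. In this sketch (ours, assuming a modest true effect of 0.2 standard deviations and just 20 people per group), only the experiments that happen to overshoot the truth clear the significance bar:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.2      # assumed true difference, in standard deviations
N_PER_GROUP = 20       # a typical small-sample study
N_EXPERIMENTS = 20_000

all_effects, significant_effects = [], []

for _ in range(N_EXPERIMENTS):
    control = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(treated) - statistics.mean(control)
    all_effects.append(diff)
    # Simple z-test with known unit variance; SE of a difference in means.
    se = (2 / N_PER_GROUP) ** 0.5
    if abs(diff) / se > 1.96:  # "statistically significant" at p < .05
        significant_effects.append(diff)

print(f"share reaching significance:      "
      f"{len(significant_effects) / N_EXPERIMENTS:.1%}")
print(f"average effect, all experiments:  {statistics.mean(all_effects):.2f}")
print(f"average effect, significant only: "
      f"{statistics.mean(significant_effects):.2f}")
```

In a run like this, only about one experiment in ten reaches significance, and the ones that do report an average effect several times the truth. That “winner’s curse” is one reason a flashy result from a small study so often shrinks or vanishes on replication.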

The crisis intensified in 2015, when a group of psychologists that included Brian Nosek, co-founder of the Center for Open Science, published a report in Science with evidence of an overarching problem: When 270 psychologists tried to replicate 100 experiments published in top journals, only around 40 percent of the studies held up. The remainder either failed or yielded inconclusive data. Even the replications that did work showed weaker effects than the original papers reported. (The “crisis” has also inspired investigations revealing outright scientific malpractice, not just methodological errors.)

Many textbook psychology findings have either failed to replicate or are in the midst of a serious reevaluation. Among them:

  • Social priming: People who read “old”-sounding words (like “nursing home”) were more likely to walk slowly — showing how our brains can be subtly “primed” with thoughts and actions.
  • The facial feedback hypothesis: Merely activating muscles around the mouth caused people to become happier — demonstrating how our bodies tell our brains what emotions to feel.
  • Stereotype threat: Minorities and maligned social groups didn’t perform as well on tests due to anxiety about confirming a stereotype about their group.
  • Ego depletion: The idea that willpower is a finite mental resource.
  • The “marshmallow test”: A series of studies from the early ’90s suggested that the ability to delay gratification at a young age is correlated with success later in life. New research finds that if the original authors had had a larger sample size and tighter research controls, their results would not have been the showstoppers they were in the ’90s.
  • The Stanford Prison Experiment: Recent investigations into the experiment’s archives greatly undermine its conclusion — that bad behavior is the result of environments. It turns out many of the participants acting as guards in the simulated prison were coached into being cruel, and the prisoners acted out, in part, because they simply wanted to leave the experiment.

Again, these reevaluations aren’t evidence that science is doomed. They can be seen as a sign of progress (and, like everything in science, even the severity of the replication crisis is hotly debated). Nor should we doubt every scientific finding out there in public. Scientists have put in the painstaking work to prove that climate change is caused by humans, for instance, and that conclusion is not the result of a single study but of thousands of good ones.

Part of this reckoning is recognizing that evidence can be strong or weak, and that not all published findings should be treated as equal. In many ways, human beings are simply harder to study than other natural phenomena.

Too often in science, the first demonstration of an idea becomes the lasting one — in both pop culture and academia. That isn’t how science is supposed to work at all.

So next time you read about some kernel of conventional wisdom being questioned, know there’s a reason: It’s probably part of the quest to make science better.


Julia Belluz is Vox’s senior health correspondent, focused on medicine, science, and public health. She’s covered topics as varied as the anti-vaccine movement, America’s staggering maternal mortality problem, how dark chocolate became a health food, and what makes America’s sickest county so unhealthy. She has also debunked numerous medical misinformation peddlers, such as Dr. Oz, Gwyneth Paltrow, and Alex Jones.

Brian Resnick is a science reporter at Vox.com, covering social and behavioral sciences, space, medicine, the environment, and anything that makes you think "whoa that's cool." Before Vox, he was a staff correspondent at National Journal where he wrote two cover stories for the (now defunct) weekly print magazine, and reported on breaking news and politics.
