Wednesday, October 02, 2019

What the new studies REALLY say about red and processed meat

New studies this week in the Annals of Internal Medicine have generated much fiery news coverage.

For example, Time's headline says: "Should You Stop Eating Red Meat? A New Paper Has a Controversial Answer." As always, the nutrition reporter portrays nutrition science as fickle, endlessly reversing itself.

It's not true. The actual scientific content in the new studies confirms what we already knew.

The best available evidence suggests that reducing red and processed meat consumption will reduce risk of death from cardiovascular disease and cancer.

Nutrition scientists and communicators must always accomplish two tasks: 
  1. understand the available evidence; and 
  2. reflect on what burden of proof should be used for nutrition policy decision making.
The new studies made atypical decisions about Task #2. For reasons that are not clear to me -- perhaps out of a scientific sense of caution or perhaps out of a bias in favor of red and processed meat -- they ramped up the burden of proof applied to recommendations that advise less red and processed meat. Citing research guidelines that give highest scores to pharmaceutical trials [edited slightly 4pm], they rate most of the available evidence in any direction as weak.

This is not how I would have communicated the evidence. In my view, it is understandable that randomized controlled trials cannot be widely used on this topic, because one would have to wait too long for a sufficient number of cancers or heart attacks attributable to a meat intake intervention in a well-powered study. So, I have long accepted cohort and observational studies as the best available evidence on this topic. The new studies do pretty much the same, but they label each piece of evidence as "weak." They are free to apply these labels, and I feel free to ignore these labels.

Turning to Task #1, the studies confirm what I already knew. For example, here are my statistical interpretation sentences for several main results from the new studies.

On average, reducing weekly unprocessed red meat intake by 3 servings is associated with:
  • 7% lower risk of death, 
  • 10% lower risk of death from cardiovascular disease,
  • 6% lower risk of stroke,
  • 10% lower risk of type 2 diabetes,
  • 7% lower risk of death from cancer.
The list of results continues for processed meat.

If you want to describe these effects as "small" and you want dietary changes that reduce your risk by twice as much, then knock yourself out. You may reduce your weekly red meat intake by perhaps approximately [note: qualifying adjective added 4pm] 6 servings.
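The "twice as much" extrapolation can be made concrete with a little arithmetic. Relative risks compose multiplicatively, so under the assumption of a roughly log-linear dose-response (an assumption, not something the studies guarantee), a 6-serving weekly reduction corresponds to squaring the relative risk reported for a 3-serving reduction:

```python
# Sketch of the extrapolation, using the ~7% lower risk of death per
# 3-serving weekly reduction summarized in the post. Assumes the
# dose-response is log-linear, so relative risks multiply.
rr_per_3_servings = 0.93  # relative risk: 7% lower risk per 3 fewer servings/week

rr_per_6_servings = rr_per_3_servings ** 2  # two 3-serving reductions
risk_reduction = 1 - rr_per_6_servings

print(f"RR for 6 fewer servings/week: {rr_per_6_servings:.4f}")   # 0.8649
print(f"Approximate risk reduction:   {risk_reduction:.1%}")      # 13.5%
```

So "twice as much" is slightly generous: compounding gives about a 13.5% reduction rather than a full 14%, though the difference is well inside the uncertainty of the underlying estimates.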

I recognize that these results are accompanied by blistering disparagement of recommendations to eat less red and processed meat, but I don't trust these authors enough to place credence on their rhetorical choices. Their scientific results are what matters. 

Reading these scientific results, even acknowledging the limits of our knowledge, I will continue to support existing recommendations to eat less red and processed meat. No better evidence exists.


FYF said...

"Citing research guidelines that originally were designed for pharmaceutical trials, they rate most of the available evidence in any direction as weak." Please provide a reference for this statement, as it seems, at face-value, to be inaccurate according to the GRADE system itself.

The GRADE system rates clinical trials (which is what pharmaceutical trials would be) as "high-quality evidence" to begin with, but downgrades them according to specific flaws. The GRADE system also rates observational studies (which immediately includes something other than "pharmaceutical trials") as "low-quality evidence" to begin with, but upgrades them according to specific criteria. Why would the developers of the system create a system that includes how to rate observational studies, if the system was designed to *only* evaluate clinical trials?

Some observational studies do include tracking use of pharmaceuticals, see for instance observational studies that tracked HRT use to find whether it reduced CVD in postmenopausal women. And, as the developers of GRADE note, many women might have been saved from poor health outcomes if a better system for developing guidance for individuals from weak observational data had been used in that case.

For that matter, the Nurses Health Study was originally designed to track contraceptive use. I guess, according to your logic above, we should disregard all studies that use the NHS data regarding diet-chronic disease relationships because the original purpose of the study was not to investigate diet and chronic disease, but to follow women using a pharmaceutical intervention.

And, for that matter, "most of the available evidence" relating diet to chronic disease *is* weak, in any direction. It is time the public knew that.

usfoodpolicy said...

I acknowledge both of your points.

First, I made a slight edit to the sentence about the grading system (and, as always, noted the edit). There is more to the story than I had time to explain. Critics have said the new articles used a protocol that screened out some studies that would have scored higher, and then dinged the remaining studies for not scoring higher.

Second, the evidence may be the best available, and yet I can understand why you prefer to call it weak. I think the new articles stung more for researchers who overstate the confidence in nutrition science evidence, but they don't sting me, because I've long understood and bluntly explained the shortcomings.

FYF said...

If "highest scores" are given to pharmaceutical trials, perhaps it is because these trials are administered, and their data collected and analyzed, with more rigor and safeguards against potential conflicts of interest than are dietary studies (of any sort). We get very exercised if we think a drug company may take advantage of the scientific process to bias results in its favor, and drug trials are designed to prevent such advantage being taken.

We have no such safeguards in place for academic departments that may take advantage of their control over a dataset to bias studies released using that dataset in favor of the theory they champion. Nor do we ask the groups who administer a dataset to have their results analyzed by an outside third party (as is often the case in drug trials).

Ideological conflicts of interest are as real and powerful as financial COIs. In the virtually theory-less land of nutritional epidemiology of chronic disease, what variables epidemiologists include or exclude from a model may be treated as "self-evident, requiring no analysis, or else simply a matter of idiosyncratic inspiration (or ideological proclivities)" (Krieger, 2011). Why shouldn't those studies receive much lower ratings than pharmaceutical trials?

For that matter, if we agree that the "best available" evidence is nevertheless quite weak, why issue guidance on meat consumption at all?

From Marantz, Bird & Alderman, 2008, which also called for higher standards of evidence for dietary guidance: "...when adequate evidence is not available, the best option may be to issue no guideline."