Problems with Nutrition Research

Appearances to the mind are of four kinds.
Things either are what they appear to be;
Or they neither are, nor appear to be;
Or they are, and do not appear to be;
Or they are not, and yet appear to be.
Rightly to aim in all these cases
Is the wise man’s task.
Epictetus, 2nd century AD

If you pay attention to the news then you don’t go a day without hearing about nutrition research. Alcohol, chocolate, meat, fat, carbs, protein, fiber, sugar, this diet, that diet, and a galaxy of supplements are under constant scrutiny. You may also notice that studies seem to frequently contradict one another. (The health effects of alcohol are a notable example.) It’s easy to become confused and frustrated as you search for accurate information. (And that’s just with the valid research out there. Throw in the junk “research” behind bogus supplements and snake oil and you may simply want to give up being informed.)

I’m neither a researcher nor a statistician, but I respect the need for solid research into health, fitness, nutrition, and the like. I understand that valid research requires a large number of study subjects, that the best studies are double-blind and placebo-controlled, and that results must be replicated several times over before they’re worth taking seriously. Beyond that, I don’t have a good grasp of statistical methods, so I can’t always tell whether the conclusions drawn from the research are accurate. Thus I’m often confused by what I see and hear about nutrition research.

If you consider yourself a well-informed, educated, healthy person but still find yourself confused by conflicting nutrition studies, then an article in the New York Times, More Evidence That Nutrition Studies Don’t Add Up, may help you understand your frustration. The story discusses the shoddy research practices of Cornell University researcher Dr. Brian Wansink.

The article goes beyond Dr. Wansink’s malpractice to discuss general, widespread nutrition research problems:

“Dr. Wansink’s lab was known for data dredging, or p-hacking, the process of running exhaustive analyses on data sets to tease out subtle signals that might otherwise be unremarkable. Critics say it is tantamount to casting a wide net and then creating a hypothesis to support whatever cherry-picked findings seem interesting — the opposite of the scientific method. For example, emails obtained by BuzzFeed News showed that Dr. Wansink prodded researchers in his lab to mine their data sets for results that would ‘go virally big time.’”

“‘P-hacking is a really serious problem,’ said Dr. Ivan Oransky, a co-founder of Retraction Watch, who teaches medical journalism at New York University. ‘Not to be overly dramatic, but in some ways it throws into question the very statistical basis of what we’re reading as science journalists and as the public.’”

“Data dredging is fairly common in health research, and especially in studies involving food. It is one reason contradictory nutrition headlines seem to be the norm: One week coffee, cheese and red wine are found to be protective against heart disease and cancer, and the next week a new crop of studies pronounce that they cause it. Marion Nestle, a professor of nutrition, food studies and public health at New York University, said that many researchers are under enormous pressure to churn out papers. One recent analysis found that thousands of scientists publish a paper every five days.”

Further:

“In 2012, Dr. John Ioannidis, the chairman of disease prevention at Stanford, published a study titled ‘Is Everything We Eat Associated With Cancer?’ He and a co-author randomly selected 50 recipes from a cookbook and discovered that 80 percent of the ingredients — mushrooms, peppers, olives, lobster, mustard, lemons — had been linked to either an increased or a decreased risk of cancer in numerous studies. In many cases a single ingredient was found to be the subject of questionable cancer claims in more than 10 studies, a vast majority of which ‘were based on weak statistical evidence,’ the paper concluded.”

“Nutrition epidemiology is notorious for this. Scientists routinely scour data sets on large populations looking for links between specific foods or diets and health outcomes like chronic disease and life span. These studies can generate important findings and hypotheses. But they also have serious limitations. They cannot prove cause and effect, for example, and collecting dietary data from people is like trying to catch a moving target: Many people cannot recall precisely what they ate last month, last week or even in the past 48 hours. Plenty of other factors that influence health can also blur the impact of diet, such as exercise, socioeconomic status, sleep, genetics and environment. All of this makes the most popular food and health studies problematic and frequently contradictory.”

“In one recent example, an observational study of thousands of people published in The Lancet last year made headlines with its findings that high-carb diets were linked to increased mortality rates and that eating saturated fat and meat was protective. Then in August, a separate team of researchers published an observational study of thousands of people in a related journal, The Lancet Public Health, with contrasting findings: Low-carb diets that were high in meat increased mortality rates.”

“‘You can analyze observational studies in very different ways and, depending on what your belief is — and there are very strong nutrition beliefs out there — you can get some very dramatic patterns,’ Dr. Ioannidis said.”

Read the article to learn more.
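To make the data-dredging problem concrete, here’s a minimal Python sketch of my own (a toy simulation with made-up numbers, not anything from the article or from Dr. Wansink’s lab). It generates purely random “food intake” data for 100 hypothetical foods and a random “health score” for 500 hypothetical people, then tests every food against the outcome and counts how many correlations clear p < 0.05 by chance alone.

```python
# Toy illustration of data dredging / p-hacking: test many food-outcome pairs
# drawn from pure noise and see how many look "significant" anyway.
# All numbers here are made up; this is not a model of any real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_people = 500   # hypothetical survey participants
n_foods = 100    # hypothetical foods asked about

# Random "intake" data for each food and a random "health score":
# by construction, no food has any real effect on the outcome.
foods = rng.normal(size=(n_foods, n_people))
health = rng.normal(size=n_people)

false_positives = []
for i in range(n_foods):
    r, p = stats.pearsonr(foods[i], health)
    if p < 0.05:
        false_positives.append(i)

print(f"{len(false_positives)} of {n_foods} foods 'linked' to the outcome "
      "at p < 0.05, even though every association is pure chance.")
```

With 100 comparisons you’d expect roughly five spurious “links” even though nothing real is there, and each one is a potential headline. A lab that runs enough of these comparisons will always find something that seems to ‘go virally big time.’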

If this topic interests you, you should also read Congratulations. Your Study Went Nowhere, another article from the New York Times. Among other things, it discusses an interesting problem with research publication: studies with positive findings get published far more often than studies with negative findings.

For instance, let’s say my study finds evidence that eating peanut butter increases IQ. Meanwhile, six other studies find no relationship between peanut butter and IQ: “Nothing to see here, folks!” My positive study is more likely to be published than any of the negative studies. This is a type of publication bias. Positive studies are thus more likely to be mentioned in the news even when they’re outnumbered by negative studies. The article describes two types of bias:

Publication bias refers to deciding whether to publish results based on the outcomes found. Of the 105 antidepressant studies the article examines, half were considered “positive” by the F.D.A. and half were considered “negative.” Ninety-eight percent of the positive trials were published; only 48 percent of the negative ones were.

Outcome reporting bias refers to writing up only the results in a trial that appear positive while failing to report those that appear negative. In 10 of the 25 negative studies, results the F.D.A. considered negative were reported as positive by the researchers, either by swapping a secondary outcome in for the primary one and presenting it as if that had been the study’s original intent, or simply by not reporting the negative results at all.

We never hear a TV news reporter say, “Nine studies found absolutely no relationship between food X and cancer.” In other words, we only hear the bell that’s rung, never all the bells that aren’t rung. The obvious problem is that when shoddy research findings are reported (the supposed link between vaccines and autism is a prominent example), sometimes by multiple credible-sounding sources, we start to believe false information. There are serious consequences to this problem.
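Here’s a similar toy sketch of publication bias in Python (again my own illustration with made-up numbers; it does not model the antidepressant trials the article describes). It simulates many small studies of a treatment that truly does nothing, “publishes” only the ones that come out positive and statistically significant, and then looks at what the published literature alone would suggest.

```python
# Toy illustration of publication bias: simulate many small studies of a
# treatment with zero true effect, "publish" only the positive, significant
# ones, and compare the published picture with reality.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_studies = 1000
n_per_arm = 30        # small hypothetical studies
true_effect = 0.0     # the treatment genuinely does nothing

published_effects = []
for _ in range(n_studies):
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    t, p = stats.ttest_ind(treated, control)
    effect = treated.mean() - control.mean()
    if p < 0.05 and effect > 0:   # only "positive" findings get written up
        published_effects.append(effect)

print(f"'Published': {len(published_effects)} of {n_studies} simulated studies")
print(f"True effect of the treatment: {true_effect:.2f}")
if published_effects:
    print(f"Average effect in the 'published' studies: "
          f"{np.mean(published_effects):.2f}")
```

Only a handful of the simulated studies get “published,” but every one of them reports a benefit, so a reader who sees only the published results, or only the news stories about them, would conclude the treatment works.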
