Applying research to practice
The headline of the study was simple and clear: healthier diets linked to better mood and less depression.
But what does that actually mean?
In writing about this article, I want to talk about a larger issue: how do doctors continue to learn medicine?
As a soon-to-be doctor, I realize I don't know the best way to learn and use new knowledge and information. How do I balance my learning between:
1) textbooks (the established knowledge, and the party line)
2) professors (real-life knowledge with a heavy dose of personal idiosyncrasy)
3) industry-sponsored medical education programs (easy access, cutting edge research, but often heavily biased)
4) research journals (the "highest and most pure" form of knowledge acquisition, but with hundreds of new articles published every month, how am I to consistently and reliably learn this exponentially growing body of knowledge?)
So, with these questions in mind, I read the abstract.
The take-home point of this article is the following: middle-aged Australian women who ate a "healthy" diet had lower rates of depression and anxiety compared to those women who ate an "unhealthy" diet. To quote the article:
"After adjustments for age, socioeconomic status, education, and health behaviors, a "traditional" dietary pattern characterized by vegetables, fruit, meat, fish, and whole grains was associated with lower odds for major depression or dysthymia and for anxiety disorders. A "western" diet of processed or fried foods, refined grains, sugary products, and beer was associated with a higher [rate of depressive symptoms]." However, as the authors point out in science-speak: "These results demonstrate an association between habitual diet quality and the high-prevalence mental disorders, although reverse causality and confounding cannot be ruled out as explanations."
In other words, we don't know if better diets cause better moods, or if better moods cause better diets, or if some third unknown variable causes both better moods and better diets.
This is a well-known problem in science and medicine: It is "easy" to design a study that finds correlations between two variables. It is "very hard and expensive" to design a study that proves causation. We would need a "prospective, randomized controlled" study that does the following:
Take a group of depressed people. Randomly split them into two groups. One group gets psychiatric treatment and continues to eat its normal diet. The other group gets psychiatric treatment PLUS a healthy diet. Control for as many variables as you can think of, and see whether one group becomes less depressed than the other.
Simple in theory, incredibly hard in practice.
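To make the confounding problem concrete, here is a small, purely hypothetical simulation (not based on the study's data): a hidden third variable, which I'll call "conscientiousness" for illustration, independently drives both diet quality and mood. Neither one affects the other, yet the two end up strongly correlated:

```python
import random

random.seed(0)

# Hypothetical confounder scenario: "conscientiousness" (an invented
# variable for this sketch) raises both diet quality and mood scores.
# Diet never touches mood and mood never touches diet.
n = 10_000
diet, mood = [], []
for _ in range(n):
    conscientiousness = random.gauss(0, 1)          # hidden confounder
    diet.append(conscientiousness + random.gauss(0, 1))  # diet quality score
    mood.append(conscientiousness + random.gauss(0, 1))  # mood score

# Pearson correlation, computed by hand to stay dependency-free
mean_d = sum(diet) / n
mean_m = sum(mood) / n
cov = sum((d - mean_d) * (m - mean_m) for d, m in zip(diet, mood)) / n
var_d = sum((d - mean_d) ** 2 for d in diet) / n
var_m = sum((m - mean_m) ** 2 for m in mood) / n
r = cov / (var_d * var_m) ** 0.5
print(f"correlation between diet and mood: {r:.2f}")
```

An observational study of this simulated population would report a clear diet–mood association, even though the "true" causal effect is zero by construction. Randomization breaks this trap precisely because it severs the link between the hidden variable and group assignment.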
So how does that affect me?
If I were a researcher, I would say: Okay, let's try to design that prospective randomized study.
But I am not a researcher. I am a consumer of medical research. So when I read this, I get conflicted:
My instinct says: of course better diets would help with depression. That just makes sense.
My logical mind says: I cannot let the results of this study affect my clinical practice. It would be wrong to use these results in the way I treat patients, because I don't know if they are true. In fact, doing so may even be harmful. Though it may seem hard to believe, a perfect example of "jumping to conclusions" is the selenium / prostate cancer story. For several years, higher selenium levels were correlated with lower rates of prostate cancer. But when researchers randomized men to either selenium supplements or nothing, they found, surprisingly, that selenium supplementation actually resulted in a HIGHER likelihood of prostate cancer. In other words, even though the correlations suggested that selenium was protective, an actual randomized controlled study revealed that it was harmful.
With this in mind, I think it is very important to think critically about how medical doctors consume medical information. Studies of correlation abound in medical research articles. But I believe that if we let these studies influence our practice, we are doing a disservice to our patients. As a thought experiment: Imagine a drug company publishes a study showing that people who took its drug also happened to be healthier. Would you start prescribing this drug? No! You would demand a randomized controlled study. So why shouldn't we hold nutrition and "alternative therapy" research to the same scientific standard? Yes, it is hard to do these studies, but if we want to be fair to our patients, we need to be honest about how we consume the research.
Finally, to those readers who are not in medical school: I want to reassure you that there are resources to help doctors keep up with the literature. Non-profit, non-industry organizations like The Cochrane Database and Essential Evidence spend significant time and energy combing through the medical research and publishing the meaningful, scientifically accurate, and clinically relevant findings for medical professionals. Furthermore, I can treat the vast research universe not as a book I need to read from beginning to end, but as a database I can access and evaluate when I have a clinical question. I have a decent understanding of how to evaluate the research. I just need to be consistent in how I choose to consume this information.
I am curious whether you agree with me that it is wrong to apply the results of correlational studies like this one to clinical practice.
Mike is a 4th year medical student going into psychiatry.