Reflections on two articles that cover stress and its relationship to human disease, both psychological and physical.
The following was originally posted to the HCPLive network blog Thought Broadcast.
Two interesting articles caught my attention this week, on the important subject of “stress” and its relationship to human disease—both psychological and physical. Each offers some promising ways to prevent stress-related disease, but they also point out some potential biases in precisely how we might go about doing so.
A piece by Paul Tough in the New Yorker profiled Nadine Burke, a San Francisco pediatrician (the article is here, but it’s subscription-only; another link might be here). Burke works in SF’s poverty-stricken Bayview-Hunters Point neighborhood, where health problems are rampant. She recognized that in this population, the precursors of disease are not just the usual suspects like poor access to health care, diet/lifestyle, education, and high rates of substance use, but also the impact of “adverse childhood experiences” or ACEs.
Drawing upon research by Vincent Felitti and Robert Anda, Burke found that patients who were subjected to more ACEs (such as parental divorce, physical abuse, emotional neglect, being raised by a family member with a drug problem, etc.) had worse outcomes as adults. These early traumatic experiences had an effect on the development of illnesses such as cancer, heart disease, respiratory illness, and addiction.
The implication for public health, obviously, is that we must either limit exposure to stressful events in childhood, or decrease their propensity to cause long-term adverse outcomes. The New Yorker article briefly covers some biological research in the latter area, such as how early stress affects DNA methylation in rats, and how inflammatory markers like C-reactive protein are elevated in people who were mistreated as children. Burke is quoted as saying, “In many cases, what looks like a social situation is actually a neurochemical situation.” And a Harvard professor claims, “this is a very exciting opportunity to bring biology into early-childhood policy.”
With words like “neurochemical” and “biology” (not to mention “exciting”) being used this way, it doesn’t take much reading-between-the-lines to assume that the stage is being set for a neurochemical intervention, possibly even a “revolution.” One can almost hear the wheels turning in the minds of academics and pharmaceutical execs, who are undoubtedly anticipating an enormous market for endocrine modulators, demethylating agents, and good old-fashioned antidepressants as ways to prevent physical disease in the children of Hunters Point.
To its credit, the article stops short of proposing that all kids be put on drugs to eliminate the effects of stress. The author emphasizes that Burke’s clinic engages in biofeedback, child-parent therapy, and other non-pharmacological interventions to promote secure attachment between child and caregiver. But in a society that tends to favor the “promises” of neuropharmacology—not to mention patients who might prefer the magic elixir of a pill—is this simply window-dressing? A way to appease patients and give the impression of doing good, until the “real” therapies, medications, become available?
More importantly, are we expecting drugs to reverse the effects of social inequities, cultural disenfranchisement, and personal irresponsibility?
***
The other article is a study published this month in the Journal of Epidemiology and Community Health, in which researchers from Sweden measured “psychological distress” and its effects on long-term disability in more than 17,000 “average” Swedish adults. The subjects completed a baseline questionnaire in 2002, and researchers followed them over a five-year period to see how many received new disability benefits for medical or psychiatric illness.
Not surprisingly, there was a direct correlation between high “psychological distress” and high rates of disability. It is, of course, quite possible that people who had high baseline distress were distressed about a chronic and disabling health condition, which worsened over the next five years. But the study also found that even low levels of psychological distress at baseline were significantly correlated with the likelihood of receiving a long-term disability benefit, for both medical and psychiatric illness.
The questionnaire the researchers used was the General Health Questionnaire (GHQ-12), a deceptively simple, 12-question survey of psychological distress (a typical question is “Have you recently felt like you were under constant strain?” with four possible answers ranging from “not at all” to “much more than usual”), scored on a 12-point scale. Interestingly, people who scored only 1 point out of 12 were twice as likely to receive a disability award as those who scored zero, and the rates only went up from there.
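(For those curious how four answer choices per item map onto a 12-point total, here is a minimal sketch assuming the standard GHQ “binary” scoring convention, in which the two least-symptomatic answers count as 0 and the two most-symptomatic count as 1; the function name and example responses below are purely illustrative, not taken from the study itself.)

```python
# Minimal sketch of GHQ-12 "binary" scoring (an assumption, not the study's
# documented method): each item is answered on a four-option scale coded 0-3,
# and the two more-distressed options each contribute one point, so totals
# run from 0 to 12.

def score_ghq12(responses):
    """Return a 0-12 GHQ score from twelve item responses coded 0-3."""
    if len(responses) != 12:
        raise ValueError("GHQ-12 expects exactly 12 item responses")
    return sum(1 for r in responses if r >= 2)

# Illustrative only: answering "more than usual" to a single item such as
# "Have you recently felt like you were under constant strain?" and giving
# the two mildest answers to everything else yields a score of 1 out of 12 --
# the group the study found roughly twice as likely to end up on long-term
# disability as those scoring zero.
print(score_ghq12([0, 1, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0]))  # -> 1
```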
I won’t delve into other details of the results here, but as Sweden resembles the US in its high rates of psychiatric “disability” (between 1990 and 2007, the percentage of disability awards due to psychiatric illness rose from ~15% to over 40%), the implication is clear: even mild psychological “distress” is a risk factor for future illness—both physical and mental—and to reverse this trend, the effects of this distress must be treated or prevented in some way.
***
Both of these articles—from different parts of the world, using different measurement instruments, and looking at somewhat different outcomes—nevertheless reach the same conclusion: early life stress is a risk factor for future disease. This is a long-recognized phenomenon (for an easily accessible exploration of the topic, read Why Zebras Don’t Get Ulcers, by Stanford’s Robert Sapolsky, a former mentor of mine).
But what do we do with this knowledge? My fear is that, rather than looking at ways to minimize “stress” in the first place (through social programs, education, and other efforts to raise awareness of the detrimental effects of stress), we as a society are instead conditioned to think about how we can intervene with a drug or some other way to modulate the “neurochemical situation,” as Nadine Burke put it. In other words, we’re less inclined to act than to react, and our reactions are essentially chemical in nature.
As a psychiatrist who has worked with an inner-city population for many years, I’m already called upon to make diagnoses and prescribe medications not for what are obviously (to me) clear-cut cases of significant and disabling mental illness, but rather for the accumulated effects of stress and trauma. (I’ll write more about this fascinating interface of society and biology in the future.) True, sometimes the diagnoses do “fit,” and indeed sometimes the medications work. But I am doing nothing to prevent the initial trauma, nor do I feel that I am helping people cope with their stress by telling them to take a pill once or twice a day.
We as a society need to make sure we don’t perpetuate the false promises of biological determinism. I applaud Nadine Burke and I’m glad epidemiologists (and the New Yorker) are asking serious questions about precursors of disease. But let’s think about what really helps, rather than looking solely to biology as our savior.
(Thanks to Michael at The Trusting Heart for leading me to the New Yorker article.)