The use of anecdotal evidence in psychopharmacology is commonplace, but why, and is this acceptable?
The following was originally posted to the HCPLive network blog Thought Broadcast.
Sometimes I feel like a hypocrite.
As a practicing psychiatrist, I have an obligation to understand the data supporting my use of prescription medications. In my attempts to do so, I’ve found examples of clinical research that, unfortunately, may be irrelevant or even misleading. Many other writers and bloggers have taken this field to task (far more aggressively than I have) for clinical data that, in their eyes, are incomplete, inconclusive, or downright fraudulent.
In fact, we all like to hold our clinical researchers to an exceedingly high standard, and we complain indignantly when they fail to meet it.
At the same time, I’ll admit I don’t always hold myself to that standard in my own day-to-day practice. In other words, I demand precision in clinical trials, but several times a day I’ll rely on anecdotal evidence (or even a “gut feeling”) in my prescribing, abandoning the very rigor I expect from the companies that market their drugs to me.
Of all fields in medicine, psychopharmacology is the one where this is not merely common; it’s the status quo.
“Evidence-based” practice is about making a sound diagnosis and using published clinical data to make a rational treatment decision. Unfortunately, subjects in clinical trials of psychotropic drugs rarely—if ever—resemble “real” patients, and the real world often throws us curveballs that force us to improvise. If an antipsychotic is only partially effective, what do we do? If a patient doesn’t tolerate his antidepressant, then what? What if a drug interferes with my patient’s sleep? Or causes a nasty tremor? There are no hard-and-fast rules for dealing with these types of situations, and the field of psychopharmacology offers wide latitude in how to handle them.
But then it gets really interesting. Nearly all psychiatrists have encountered the occasional bizarre symptom, the unexpected physical finding, or the unexplained lab value (if labs are being checked, that is). Psychopharmacologists like to look at these phenomena and concoct an explanation for what might be happening, based on their knowledge of the drugs they prescribe. In fact, I’ve always thought that the definition of an “expert psychopharmacologist” is someone who understands the properties of drugs well enough to construct a plausible (albeit potentially wrong) molecular or neurochemical explanation of a complex human phenotype, and then prescribe a drug to fix it.
The psychiatric literature is filled with case studies of interesting encounters or “clinical pearls” that illustrate this principle at work.
For example, consider this case report in the Journal of Neuropsychiatry and Clinical Neurosciences, in which the authors describe a case of worsening mania during slow upward titration of a Seroquel dose and hypothesize that an intermediate metabolite of quetiapine might be responsible for the patient’s mania. Here’s another one, in which Remeron is suggested as an aid to benzodiazepine withdrawal, partly due to its 5-HT3 antagonist properties. And another small study purports to explain how nizatidine (Axid), an H2 blocker, might prevent Zyprexa-induced weight gain. And, predictably, such “hints” have even made their way into drug marketing, as in the ads for the new antipsychotic Latuda, which suggest that its 5-HT7 binding properties might be associated with improved cognition.
Of course, for “clinical pearls” par excellence, one need look no further than Stephen Stahl, particularly in his book Essential Psychopharmacology: The Prescriber’s Guide. Nearly every page is filled with tips (and cute icons!) such as these: “Lamictal may be useful as an adjunct to atypical antipsychotics for rapid onset of action in schizophrenia,” or “amoxapine may be the preferred tricyclic/tetracyclic antidepressant to combine with an MAOI in heroic cases due to its theoretically protective 5HT2A antagonist properties.”
These “pearls,” or hypotheses, are interesting suggestions, and they might work, but they have never been proven true. At best, they are educated guesses. In all honesty, no self-respecting psychopharmacologist would say that any of these “pearls” represents the absolute truth until the findings have been replicated (ideally in a proper controlled clinical trial). But that has never stopped a psychopharmacologist from “trying it anyway.”
It has been said that “every time we prescribe a drug to a patient, we’re conducting an experiment with n=1.” It’s amazing how often we throw caution to the wind: just because we think we know how a drug might work, and can visualize in our minds all the pathways and receptors we think it’s affecting, we add a drug or change a dose and profess to know what it’s doing. Unfortunately, once we enter the realm of polypharmacy (not to mention the enormous complexity of human physiology), all bets are usually off.
What’s most disturbing is how often our assumptions are wrong—and how rarely we admit it. For every published case study like the ones mentioned above, there are dozens—if not hundreds—of failed “experiments.” (Heck, the same could be said even when we’re doing something appropriately “evidence-based,” like using a second-generation antipsychotic for schizophrenia.) In psychopharmacology, we like to take pride in our successes (“I added a touch of cyproterone, and his compulsive masturbation ceased entirely!”) but conveniently excuse our failures (“She didn’t respond to my addition of low-dose N-acetylcysteine because of flashbacks from her childhood trauma”). In that way, we can always be right.
Psychopharmacology is a potentially dangerous playground. It’s important that we follow some well-established rules—like demanding rigorous clinical trials—and, if we’re going to veer from that path, that we put the right safeguards in place. At the same time, we should exercise some humility, because sometimes we have to admit we just don’t know what we’re doing.