Randomized controlled trials remain the gold standard, but a researcher at Kidney Week 2012 says that new approaches and study designs are needed that reflect real-world drug use and account for confounding variables.
The time has arrived to try a new approach to clinical research that can still provide the valuable results of randomized controlled trials but is more affordable and more inclusive of patient subgroups, said M. Alan Brookhart, PhD, a researcher from the University of North Carolina, Chapel Hill, who spoke about new study designs during a lecture at Kidney Week 2012, held in San Diego and sponsored by the American Society of Nephrology.
There’s little question that randomized controlled trials, the gold standard of clinical research, provide valuable information, said Brookhart. “We still reap the benefits of a trial from a decade ago that established hypertension guidelines,” he noted. But large trials like that one, which cost $120 million and took eight years to complete, also burn through resources, he said. “We’re not going to be able to afford to do these big kinds of studies for all the important clinical questions that we have,” said Brookhart. “Also, we’d like to get results in a more timely fashion.”
Big trials often exclude whole subgroups of people, such as elderly people or patients with dementia, end-stage renal disease, or chronic kidney disease, noted Brookhart. “Even if these trials do include patients with chronic kidney disease, they probably occur in such small numbers that we can’t rely on the estimated treatment effects for these patient groups,” he said. “It’s reasonable to expect that drugs will have different effects on patients with chronic kidney disease… So whether we like it or not we’re going to have to do nonexperimental, so-called observational studies.”
That means passively observing the treatments physicians prescribe and how patients respond to and comply with them. For example, the group of patients who start a medication may later turn out to be quite different from those who have been on a drug for a long time or, conversely, have just stopped taking it.
When considering study design, Brookhart said, it is critical to account for the “ever present threat” of confounding bias, such as the “sick stopper effect” and compliance bias, in which patients who comply with treatment do better even in areas unrelated to the study’s focus. Patients who lead extra-healthy lives and are prevention conscious can also skew the numbers, he said.
It has been speculated that the healthy user effect contributed to the problems with hormone replacement therapy (HRT) studies from the 1980s, which were first interpreted as showing that HRT greatly reduced cardiovascular problems. Later research found that HRT actually increased risks.
“Typically we see that observational studies suggest big benefits and then trials find that in fact there’s no effect. We saw this with vitamin E, vitamin C, beta-carotene, and folic acid. There are many examples of this,” said Brookhart.
A new-user design, intended to eliminate many of these biases, compares new users of a drug of interest with new users of a comparator drug, Brookhart said. The design requires that patients have some period during which they are not taking either drug, but it allows researchers to study events that occur shortly after exposure, such as sensitivity reactions.
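To make the idea concrete, the following is a minimal sketch (not from Brookhart’s talk) of how a new-user cohort might be assembled from pharmacy claims data. The DataFrame layout, drug names, and 180-day washout window are all illustrative assumptions: each patient’s first fill of either study drug becomes the index date, and patients are kept only if they have a fully observed, drug-free washout period before that date.

```python
import pandas as pd

# Hypothetical dispensing records: one row per pharmacy fill.
fills = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 4],
    "drug":       ["drug_a", "drug_a", "drug_b", "drug_b", "drug_a", "drug_a"],
    "fill_date":  pd.to_datetime([
        "2011-03-01", "2011-06-01", "2011-04-15",
        "2011-02-10", "2011-09-01", "2011-05-20",
    ]),
})

# Hypothetical enrollment file giving the start of each patient's observable history.
enrollment = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "enroll_start": pd.to_datetime(
        ["2010-01-01", "2011-03-01", "2010-06-01", "2010-01-01"]
    ),
})

WASHOUT_DAYS = 180  # illustrative washout window, not a recommendation

# The first fill of either study drug defines the index date for each patient.
first_fill = (fills.sort_values("fill_date")
                   .groupby("patient_id", as_index=False)
                   .first())

cohort = first_fill.merge(enrollment, on="patient_id")

# Keep only patients whose index date is preceded by a fully observed,
# drug-free washout period (drug-free by construction, since this is the
# first fill of either drug in the data).
cohort = cohort[cohort["fill_date"] - cohort["enroll_start"]
                >= pd.Timedelta(days=WASHOUT_DAYS)]

print(cohort[["patient_id", "drug", "fill_date"]])
```

In this toy example, patient 2 is dropped because only 45 days of history precede the first fill, while the other patients enter the cohort as new users of drug_a or drug_b and can then be followed forward from their index dates.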
Natural experiments, in which outside circumstances effectively randomize patients to treatment, may also help produce better estimates of treatment effects. “I think the field is full of this and we just haven’t been using this approach too often,” Brookhart said. “You essentially try to identify some kind of retrospective occurrence that creates random allocation that can be caused by policy change or reimbursement change.”
“I’ve been very interested in using variation in treatment preferences across providers as the basis of a natural experiment,” said Brookhart.
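One common way this kind of provider-preference analysis is carried out (a sketch under assumed variable names, not Brookhart’s own method) is an instrumental-variable analysis: the provider’s preference shifts which drug a patient receives but, by assumption, has no direct effect on the outcome, so a two-stage least squares estimate can recover the treatment effect even when an important confounder is unmeasured. The simulation below illustrates the contrast with a naive comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only): each patient sees a provider whose
# prescribing "preference" influences treatment choice but has no direct
# effect on the outcome -- the defining property of an instrument.
n_providers, patients_per_provider = 200, 20
n = n_providers * patients_per_provider

provider = np.repeat(np.arange(n_providers), patients_per_provider)
preference = rng.normal(size=n_providers)[provider]   # instrument
frailty = rng.normal(size=n)                          # unmeasured confounder

# Treatment depends on provider preference and on patient frailty (confounding).
treatment = (preference + frailty + rng.normal(size=n) > 0).astype(float)

true_effect = -0.5
outcome = true_effect * treatment + 1.0 * frailty + rng.normal(size=n)

def ols(y, x):
    """Least-squares fit of y on x with an intercept; returns [intercept, slope]."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Naive comparison of treated vs. untreated patients is biased by frailty.
naive = ols(outcome, treatment)[1]

# Two-stage least squares: stage 1 predicts treatment from the instrument,
# stage 2 regresses the outcome on the predicted treatment.
intercept, slope = ols(treatment, preference)
predicted_treatment = intercept + slope * preference
iv_estimate = ols(outcome, predicted_treatment)[1]

print(f"naive: {naive:.2f}, IV: {iv_estimate:.2f}, truth: {true_effect}")
```

In this simulation the naive estimate is pulled away from the true effect by the unmeasured frailty, while the instrumental-variable estimate lands near the truth; the price is the assumption that provider preference affects outcomes only through the treatment choice.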
Brookhart closed with a word of caution: researchers and those who consume research should always be suspicious of results that appear too good to be true, because, as the saying goes, they probably are.