Cardiology Review® Online
The first installment of this series on clinical research briefly discussed the history of clinical research, its definition, and our quest for “universal truth.”* This installment addresses some of the central concepts related to clinical research and what is meant by the strength of scientific evidence. We also begin to discuss the different clinical research designs along with their respective strengths and weaknesses.
An essential characteristic of clinical research, and one of its most important weaknesses, is that inferences about how an intervention or exposure (exposure is the more general term; an intervention is one type of exposure) should be applied in the general population of patients who may require it in the future are drawn from a limited sample (a sample is a select subset of a population that the investigator hopes represents the general population, but which is unlikely to do so fully). This limitation is further compounded by the fact that disease is not distributed randomly, whereas samples tend to be, and that the causes of disease are multifactorial.
Another important concept of clinical research is the fact that most, if not all, biological variables have a linear or semilinear relationship between intervention and outcome, whereas clinical medicine is replete with “cut points” (figure). A cut point presumes that there is some value or range of values that determines what is normal and what is abnormal.
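To illustrate the tension between continuous biology and dichotomous clinical decisions, consider the following minimal sketch; the threshold of 130 and the function name are hypothetical, chosen only for illustration and not taken from any guideline.

# Illustrative only: a hypothetical cut point applied to a continuous measurement.
# Biology varies continuously, but a cut point forces a binary label.

def classify(value: float, cut_point: float = 130.0) -> str:
    """Label a continuous measurement 'normal' or 'abnormal' at a single threshold."""
    return "abnormal" if value >= cut_point else "normal"

for v in (128.0, 129.9, 130.0, 131.0):
    print(v, classify(v))

# Values of 129.9 and 130.0 are biologically indistinguishable,
# yet they receive different labels; this is the limitation a cut point introduces.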
Another important issue relates to what we mean when we talk about “the strength of evidence.” The highest strength of evidence is often attributed to the randomized clinical trial (RCT). Indeed, when asked which clinical research design is best, most will answer “the RCT,” when the correct answer is “it depends,” a point that will be discussed further later in this series. What is actually meant by “the highest level of evidence” is how certain we are, on the basis of the study findings, that an intervention and outcome are causally related (ie, how certain we are that an effect is the result of a cause, and that the observation is not merely an association without a causal relationship).
Let’s return to our question: “What is the best study design?” This is a different question from “Given a specific research question, which study design leads to the highest level of evidence?”, which in turn may differ from “Which study design, for a given question, will yield the greatest certainty that the results reflect cause and effect?” Other important factors in choosing the most appropriate study design, beyond the most important factor (ethics), include the natural history of the disease being studied, the prevalence of the exposure, the frequency of the disease, the characteristics and availability of the study population, measurement issues, and cost.
Let us now return to our quest for universal truth. What are the steps we need to take in order to achieve it? The fact is that truth is at best elusive and is not actually achievable, since it is more a function of our interpretation of data, which is mostly dictated by our past experiences, than of any finite, absolute information. The steps needed to achieve this uncertain end begin with a research question, perhaps the result of a question asked during teaching rounds, stimulated by contact with a patient, or provoked while reading a book or journal, and so on. The research question is usually some general statement such as, “Is there an association between coffee drinking and myocardial infarction?” or “Is passive smoke harmful to a fetus?”
Let us examine this last research question and consider its limitations in terms of a testable hypothesis. One needs first to ask a few questions, such as, “What is the definition of ‘harmful’? What is passive smoke, ie, how is it to be defined in the proposed study? And how will such smoke be measured?” Answering these questions comes closer to something that is testable and begins to define the clinical research design that would have the highest level of evidence for the specific question in mind. For the proposed question, for example, it would be best, from a research design perspective, to randomly assign pregnant women to exposure to either passive smoke or “placebo passive smoke.” But on the ethics issue alone this would not be acceptable; thus, an RCT would not be the best study for this research question, even if it would lead to the highest level of evidence.
The common clinical research designs are listed in the table. The difference between experimental and observational studies is that in experimental studies, the investigator controls the intervention/exposure. In observational studies, the intervention/exposure occurs in nature, and the investigator determines the effect of that exposure but cannot directly control it.
Following is a brief discussion of the different study designs; a more detailed discussion will follow in the next installment of this series. Ecologic studies use available population data to determine associations. For example, to examine an association between coronary heart disease (CHD) and the intake of saturated fat, one could access public records of beef sales in different states (or counties or regions of the country) and determine whether an association exists between sales and the prevalence of CHD.
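As a purely illustrative sketch, such an ecologic analysis amounts to a correlation across regional aggregates; the figures and variable names below are hypothetical and not drawn from any actual registry.

from statistics import correlation  # available in Python 3.10+

# Hypothetical regional aggregates (exposure proxy and outcome), one value per region
beef_sales_per_capita = [41.2, 38.5, 45.0, 30.1, 36.7]   # eg, pounds sold per person per year
chd_prevalence = [6.1, 5.8, 6.6, 4.9, 5.5]               # eg, percent of adults with CHD

# Pearson correlation across regions: a group-level (ecologic) association
r = correlation(beef_sales_per_capita, chd_prevalence)
print(f"Ecologic correlation across regions: r = {r:.2f}")

# Caveat: an association seen in aggregated data does not necessarily hold
# for individuals (the "ecologic fallacy"), one reason such studies provide
# a lower level of evidence.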
Case reports and case series are potential ways to suggest an association; although limited in this regard, they should not be deemed unimportant. In cross-sectional studies, one defines and describes disease status (or outcome), exposure(s), and other characteristics at a point in time (point in time is the operative phrase) in order to evaluate associations between them.
Cross-sectional studies are different from cohort studies in that the latter observe the association between a naturally occurring exposure and outcome (eg, between health and a disease or between disease and an event) over a period of time rather than at a point in time. This further contrasts with a case-control study, wherein the investigator identifies a certain outcome in the population, then matches the “diseased group” to a “healthy group,” and finally identifies differences in exposures between the two groups. In the randomized controlled trial, the exposure is controlled by the investigator, which makes it different from all the other study designs.
Study designs will be explored in more detail in a later issue, but for now, one should begin to understand the key differences, and therefore limitations, of each study design, and the circumstances in which one design may be preferable to another.