
Certain Vaccine-Effectiveness Studies Lack Context, Precision

Each flu season, the race is on to determine how well the flu vaccine is working. But researchers caution that many of the studies used to calculate effectiveness have potential pitfalls.

Danuta Skowronski, MD

As flu season rolls on, one of the most pressing issues for public health officials is vaccination, both because it’s our best defense against influenza hospitalizations and deaths, and because we’ve yet to perfect the vaccine-development process. When it comes to influenza, vaccine improvement is a moving target.

Danuta Skowronski, MD, of the University of British Columbia’s School of Population and Public Health, told MD Magazine® that epidemiologists are eager to understand how well each year’s vaccine is performing.

“There’s real pressure to get estimates out,” she said. “Influenza vaccine isn’t like any other vaccine; it changes every year.”

For the past 15 years, the so-called “test-negative” design (TND) has proven to be a relatively quick and increasingly popular method for quantifying vaccine effectiveness. TND studies are observational studies in which the study population is drawn from patients who visit clinics or hospitals with acute respiratory infections or similar concerns and are then tested for the flu. Those who test positive become the cases; those who test negative become the controls. Vaccination status is then compared between the two groups to estimate how much the vaccine reduces the odds of coming down with the flu.
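In its simplest form, the estimate is typically derived from the odds ratio of vaccination among cases versus controls. The sketch below is a minimal illustration of that arithmetic using hypothetical counts; real analyses adjust for age, calendar time, and other confounders.

    # Minimal sketch of the arithmetic behind a test-negative estimate.
    # All counts are hypothetical, for illustration only.
    vaccinated_cases = 40        # tested flu-positive, vaccinated
    unvaccinated_cases = 160     # tested flu-positive, unvaccinated
    vaccinated_controls = 120    # tested flu-negative, vaccinated
    unvaccinated_controls = 180  # tested flu-negative, unvaccinated

    # Odds ratio of vaccination among cases versus controls
    odds_ratio = (vaccinated_cases * unvaccinated_controls) / (
        unvaccinated_cases * vaccinated_controls
    )

    # Vaccine effectiveness is commonly reported as (1 - odds ratio) x 100%
    vaccine_effectiveness = (1 - odds_ratio) * 100
    print(f"Estimated vaccine effectiveness: {vaccine_effectiveness:.0f}%")

With these invented counts the estimate works out to roughly 62%; the point is simply that the number falls out of a comparison of vaccination odds among tested patients, not of attack rates in a randomized trial.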

The TND approach is quicker than prospective studies and can offer epidemiologists a snapshot of vaccine performance. However, Skowronski said TND studies warrant careful scrutiny. “Just because you can generate a number, which you will, doesn’t mean that number is valid or reliable without a heavy dose of caution applied,” she said.

Skowronski and colleagues presented their concerns in a letter published this month in Clinical Infectious Diseases. They are not arguing that the test-negative design is invalid; rather, they say its estimates need to be interpreted with considerable context.

Skowronski told MD Magazine that because it’s relatively easy for people with access to clinical data to come up with vaccine effectiveness rates using TND, “they may forget...to question how reliable those data sets are for secondary research purposes when they were originally established for other clinical purposes.”

The test-negative design can introduce a number of biases. For instance, variations in how individual clinicians and healthcare centers decide whom to test for the flu can have a major impact on who is included in the study and, therefore, on the results.
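As a rough sketch of that mechanism (the counts and testing rates below are entirely hypothetical and are not drawn from any study discussed here): if clinicians suspect flu less often in vaccinated patients and therefore test them less often, vaccinated cases are undercounted and the test-negative estimate can overstate effectiveness.

    # Hypothetical illustration of testing-driven selection bias in a TND estimate.
    # All counts and testing rates are invented for illustration only.

    # "True" counts among all patients presenting with influenza-like illness
    vax_cases, unvax_cases = 100, 200        # truly flu-positive
    vax_controls, unvax_controls = 300, 300  # truly flu-negative

    true_or = (vax_cases * unvax_controls) / (unvax_cases * vax_controls)
    print(f"True vaccine effectiveness: {(1 - true_or) * 100:.0f}%")  # 50%

    # Suppose clinicians test 40% of vaccinated but 80% of unvaccinated flu cases,
    # while testing 30% of flu-negative patients regardless of vaccination.
    tested_vax_cases = vax_cases * 0.4
    tested_unvax_cases = unvax_cases * 0.8
    tested_vax_controls = vax_controls * 0.3
    tested_unvax_controls = unvax_controls * 0.3

    observed_or = (tested_vax_cases * tested_unvax_controls) / (
        tested_unvax_cases * tested_vax_controls
    )
    print(f"Observed vaccine effectiveness: {(1 - observed_or) * 100:.0f}%")  # ~75%

In this invented scenario the observed estimate drifts well above the true effectiveness purely because of who gets tested, which is the kind of distortion Skowronski and colleagues urge investigators to examine and disclose.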

In a 2018 study of pregnant women, the original data set contained nearly 20,000 women who were hospitalized with acute respiratory or febrile illness. However, only 1,030 were tested for influenza. Among those who were tested, more than half (58%) were positive for the flu. That high positivity rate, along with the inclusion of a nonspecific range of illnesses, suggests that selection bias likely affected the vaccine-effectiveness findings, Skowronski and colleagues wrote. These and similar data sets could be heavily influenced by individual physician behaviors, hospital policies, and patients’ healthcare-seeking behaviors, among other factors.

That’s not to say such studies ought to be disregarded, Skowronski said. Rather, she argues the investigators in such studies must clearly demonstrate that they’ve considered these concerns, and then provide enough data and information for outside readers to properly interpret and understand the findings.

“That’s not to say that these issues are insurmountable,” she said. “But as part of due diligence, investigators then have to present in detail what may be driving their estimates and provide reassurance or caution about the interpretation of their findings.”

Skowronski said that, even with their flaws, TND studies can yield important insights and signals for those developing vaccines, and that imprecise numbers can still be valuable. However, she also said that when comparing estimates across different studies, the details matter. If investigators do not make such data and considerations public, it is tough to make a meaningful comparison.

“If you are going to compare estimates across studies then you need to wade into the weeds,” she said.
