- Address for Correspondence: W. Jon Windsor, Colorado School of Public Health, wjwindsor58@yahoo.com
ABSTRACT
The Influenza Hospitalization Surveillance Network (IHSN or FluSurv-NET) was evaluated using the Centers for Disease Control and Prevention’s (CDC) guidelines for evaluating a public health surveillance system. The IHSN was evaluated for usefulness, simplicity, flexibility, data quality, acceptability, sensitivity, positive predictive value (PPV), representativeness, timeliness, and stability. The IHSN was found to use a broad range of sources for influenza surveillance that can be openly accessed via the CDC’s “FluView” online application. The IHSN is highly adaptable, with the capacity to accommodate additional data sources when needed. The overinclusiveness of different laboratory diagnostic methodologies was found to be detrimental to the overall data quality of the IHSN, in the form of variable sensitivity and PPV measures among the CDC’s acceptable testing methods. Overall, the IHSN is a robust system that allows public health officials timely access to influenza data. However, the inclusivity of the IHSN causes it to fall short with respect to consistency in data collection practices, and the IHSN fails to take into account several factors that could artificially increase or decrease case counts. We recommend that the IHSN integrate a more streamlined and reliable data collection process and standardize its expectations across all of its reporting sites.
ABBREVIATIONS
- CDC - Centers for Disease Control and Prevention
- DFA - direct fluorescent antibody
- DOB - date of birth
- EIP - Emerging Infections Program
- FDA - Food and Drug Administration
- FN - false negative
- FP - false positive
- ID - identification
- IFA - indirect fluorescent antibody
- IHSN - Influenza Hospitalization Surveillance Network
- NCHS - National Center for Health Statistics
- PPV - positive predictive value
- RIDT - rapid influenza diagnostic test
- RT-PCR - reverse transcription-polymerase chain reaction
- TN - true negative
- TP - true positive
- WHO - World Health Organization
STAKEHOLDERS
The stakeholders of the Influenza Hospitalization Surveillance Network (IHSN) include the Emerging Infections Program (EIP) and all of its affiliates, the United States Centers for Disease Control and Prevention (CDC), the World Health Organization (WHO), local and state health departments, educators, healthcare officials, and the public.
SYSTEM DESCRIPTION
Importance
Annually, influenza disseminates worldwide, causing widespread illness and, in severe cases, death. In the United States, laboratory-confirmed influenza-associated hospitalizations reached approximately 65 cases per 100,000 persons in the 2014–15 season, 30 in 2015–16, 60 in 2016–17, and 102 in 2017–18.1 Influenza-associated hospitalization cases are organized by age, underlying medical conditions, virus subtype, and cumulative/weekly rates.1,2 Severity is indexed by accumulating influenza-associated hospitalization case counts and calculating cumulative and weekly (unadjusted) incidence rates, using population estimates from the National Center for Health Statistics (NCHS) as the denominators for hospitalization rates in the United States.1
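To make the rate calculation concrete, the following is a minimal Python sketch of how weekly and cumulative hospitalization rates per 100,000 persons can be derived from case counts and a catchment population estimate. The weekly case counts and the 27 million catchment population used here are illustrative placeholders, not actual IHSN or NCHS figures.

```python
# Minimal sketch: weekly and cumulative influenza-associated hospitalization
# rates per 100,000 persons. The catchment population and weekly case counts
# below are illustrative placeholders, not actual IHSN/NCHS data.

CATCHMENT_POPULATION = 27_000_000  # approximate persons covered by EIP sites

weekly_case_counts = [120, 310, 450, 980, 1500]  # hypothetical weekly counts


def rate_per_100k(cases: int, population: int) -> float:
    """Unadjusted incidence rate per 100,000 persons."""
    return cases / population * 100_000


cumulative = 0
for week, cases in enumerate(weekly_case_counts, start=1):
    cumulative += cases
    print(
        f"Week {week}: weekly rate = {rate_per_100k(cases, CATCHMENT_POPULATION):.2f}, "
        f"cumulative rate = {rate_per_100k(cumulative, CATCHMENT_POPULATION):.2f}"
    )
```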
Influenza infection results in time away from work and other societal obligations. The economic losses from the effects of influenza are considerable, and the cost of hospitalization because of influenza is substantial. A study published in June 2018 estimated the average annual total economic burden of influenza to the healthcare system and society at $11.2 billion, with direct medical costs estimated at $3.2 billion and indirect costs at $8.0 billion.3 Influenza infection can be largely, but not completely, prevented by vaccination. The CDC’s 2017–18 influenza season vaccine effectiveness study showed that children between 6 months and 8 years old who were vaccinated had a 68% lower incidence of influenza (subtype A or B) than those who were unvaccinated, while in the elderly population (>65 years old) there was only a 17% reduction in influenza among those who were vaccinated compared to those unvaccinated.4 The contents (or viral subtype targets) of influenza vaccines are based on recommendations by the WHO, which carefully analyzes sentinel surveillance and viral genotyping data each year.5 Beyond vaccination, there is no cure for the infection outside of physician-prescribed antiviral drugs and basic symptom management. Influenza surveillance benefits the public by outlining the severity of each influenza season in near real time to help drive public health entities’ intervention strategies within the United States.
Purpose
The purpose of the IHSN within the EIP of the CDC is to conduct population-based surveillance for laboratory-confirmed influenza-associated hospitalizations.5 The objectives of the IHSN are to determine when and where influenza activity is occurring, track influenza-related illness, determine which influenza virus subgroups are circulating, detect influenza virus mutation events, and measure the influence influenza has on hospitalizations and deaths in the US population.4
IHSN-gathered data is used to estimate age-specific hospitalization rates on a weekly basis and display characteristics of persons hospitalized with influenza. Cases are identified by reviewing hospital laboratory and admission databases and infection control logs for patients hospitalized during the influenza season with a documented positive influenza test (ie, viral culture, direct/indirect fluorescent antibody assay [DFA/IFA], rapid influenza diagnostic test [RIDT], or molecular assays, including reverse transcription-polymerase chain reaction [RT-PCR]).4 There is no legal requirement to submit influenza-associated hospitalization data to the CDC because it is not a nationally notifiable disease;7 however, participation is a condition for each participating state to receive funding from the CDC. The IHSN facilitates integration with other systems by aggregating data collected from individual EIP state surveillance systems (Figure 1).
The IHSN conducts surveillance on the individual populations of the 10 EIP-participating states. Data is collected each season and published weekly, beginning in early October and ending as late as May. Each of the EIP states has designated counties that contribute data to the IHSN.4 Among the 10 states, there are approximately 70 counties whose hospitals contribute data to the IHSN. The IHSN accumulates data from 267 acute care hospitals and laboratories in counties varying in socioeconomic status within the 10 EIP sites. All sites within the EIP are geographically distributed throughout the United States and encompass approximately 27 million people.8 Surveillance officers (usually through EIP-participating public health departments) are trained to collect laboratory-confirmed influenza cases from laboratory logs, infection control practitioner logs, weekly calls to data collection sites (hospitals), or (depending on the state) state-reportable condition logs.6 Data is then compiled and sent on a weekly basis to the CDC for analysis and eventual input into the FluView application.1,2 Patient information is recorded with each case in all EIP-participating states because, in contrast to the CDC’s notifiable conditions, laboratory-confirmed influenza (subtype A) is a reportable condition in all EIP states (Table 1), and that same information is required for use at the CDC (Figure 1). However, unique patient information (name, date of birth [DOB], patient identification [ID] number) is encrypted and securely sent; it is not published in weekly surveillance reports, nor is it entered into the FluView application.
EVALUATION DESIGN
The overall purpose is to evaluate the performance of the IHSN (FluSurv-NET) by assessing the reliability of laboratory-confirmed influenza-related hospitalizations in the United States. The previously mentioned stakeholders can use the evaluation to drive improvement or reinforce the IHSN’s strengths. Information gathered by the evaluation can be used to highlight noted strengths and weaknesses of the IHSN and to improve overall quality assurance of data collection. An evaluation of the IHSN will consider whether the data collection methods require improvement, determine the efficiency of case report flow, identify any discrepancies between the 10 EIP-participating sites, and determine any implications of variable state-level data accumulation. The IHSN will be assessed by determining its overall usefulness for detecting trends and associations of influenza occurrences and how they can be used to prompt further research and prevention efforts. The IHSN will also be assessed by investigating each individual system attribute and its level of contribution to the overall performance of the IHSN. System attributes will include simplicity (structure and ease of operation), flexibility (adaptability to evolving information and public needs), data quality (validity of gathered data), acceptability (participation rate of EIP states), sensitivity (ability to identify cases and monitor changes), positive predictive value (PPV) (confidence that reported cases are “actual” cases), representativeness (accuracy of influenza occurrence and population distribution), timeliness (turnaround time between data collection steps), and stability (overall reliability of the IHSN).
CREDIBLE EVIDENCE
Usefulness
Through the FluView Interactive application, the IHSN uses laboratory, hospital admission database, and infection control logs to capture hospitalized cases with a documented positive influenza test result during the regular influenza season.1,2 This is a comprehensive approach to accumulating data. The IHSN addresses the variability of testing methods by outlining the Food and Drug Administration (FDA)-cleared or Clinical Laboratory Improvement Amendments–waived influenza testing methods, which include, but are not limited to, viral culture, DFA/IFA, RIDT, and nucleic acid–detecting molecular assays.2
SYSTEM ATTRIBUTES
Simplicity
The FluView application allows for real-time data access and can differentiate cumulative rates based on age group, EIP state, and influenza season. Data is gathered through weekly reports to the CDC Influenza Division by each EIP-participating state (Figure 1). The 10 states participating in the EIP that contribute data to the IHSN FluView application are California, Colorado, Connecticut, Georgia, Maryland, Minnesota, New Mexico, New York, Oregon, and Tennessee. Georgia, Maryland, and Tennessee only require that influenza subtype A be reported to the state health department. All other states require all hospital-confirmed influenza cases to be reported to their state health department authorities (subtypes A and B).11-20
Flexibility
Influenza can undergo antigenic drift, in which mutations gradually change its circulating subtypes. Because of antigenic drift, previous vaccination targets (subtypes) become less effective at preventing infection in the population, making influenza difficult to control each year.21 Considering the unpredictable nature of influenza, the IHSN has a high degree of flexibility between influenza seasons. The IHSN can adjust to each influenza season by adding additional reporting sites outside of the EIP states (sites).6 The 2009–10 H1N1 pandemic prompted this change in the IHSN’s surveillance capacity. Additionally, the IHSN can also remove sites as needed. This has the potential to compromise the longitudinal validity of data gathering and analysis. Each EIP-participating state has its own unique criteria for reportable conditions (Table 1), which can also compromise the validity of IHSN data. However, aggregation of data at the CDC level is simplified because of the CDC’s strict criteria for each case report (Figure 1).8
Data Quality
Consistent surveillance officer training at EIP sites mitigates variability in the data accumulation process at the state level. The IHSN uses NCHS data to form the population estimates used when calculating weekly and cumulative influenza-associated hospitalization rates.1 However, each test method outlined within the CDC’s “Information for Clinicians on Influenza Virus Testing” has variable sensitivity and PPV measures (Table 2).22 This variability has the potential to compromise the overall reliability of the rate calculations used in the FluView application via underreporting caused by inaccurate test results (false negatives).
Acceptability
For the IHSN EIP sites to receive funding from the CDC, they are required to comply with basic reporting standards for the CDC’s national notifiable conditions. Having trained surveillance officers collect the relevant information (and paying them to do so) allows EIP sites to participate in the IHSN and ensures that as much data as possible is provided. Apart from 3 participating sites (Table 1), laboratory-confirmed influenza (A and B subtypes) is a state-reportable condition, ensuring compliance at the site level. Failure to report a “reportable” or “notifiable” condition subjects a hospital or physician office to potential revocation of individual medical licenses or the operating licenses of the institutions (hospitals) at fault.23
Sensitivity and Positive Predictive Value
Table 2 includes a compilation of 3 tests each selected from the “Available FDA-Cleared Rapid Influenza Diagnostic Tests (Antigen Detection Only)” and the “FDA-Cleared Nucleic Acid Detection Based Tests for Influenza Viruses” pages on the CDC’s website,22,24 along with the sensitivity/PPV calculations for each test. Test selections were made by numbering each test in each table and choosing among them with a random number generator. Calculations were performed using “Nasopharyngeal Swab” sample data.
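For reference, the sensitivity and PPV figures in Table 2 follow the standard definitions built from true positives (TP), false positives (FP), false negatives (FN), and true negatives (TN). The short Python sketch below shows how such values are computed; the counts used are hypothetical and do not correspond to any specific FDA-cleared assay.

```python
# Minimal sketch of the standard sensitivity and PPV calculations used for
# Table 2. The TP/FP/FN/TN counts below are hypothetical, not values taken
# from any specific assay's package insert or FDA clearance data.


def sensitivity(tp: int, fn: int) -> float:
    """Proportion of true influenza cases the test detects: TP / (TP + FN)."""
    return tp / (tp + fn)


def positive_predictive_value(tp: int, fp: int) -> float:
    """Proportion of positive results that are true positives: TP / (TP + FP)."""
    return tp / (tp + fp)


# Hypothetical nasopharyngeal swab results for one assay
tp, fp, fn, tn = 95, 7, 5, 893

print(f"Sensitivity: {sensitivity(tp, fn):.1%}")                # 95.0%
print(f"PPV:         {positive_predictive_value(tp, fp):.1%}")  # 93.1%
```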
The clinical sensitivity of all 3 nucleic acid testing methodologies ranges from 90% to 100%, while for antigen detection methods it ranges from approximately 84% to 97% for influenza subtype A. The confidence that a detected positive value is actually positive within the patient is nearly universally 100% for nucleic acid testing methods, whereas antigen detection tests had only approximately 75%–93% confidence in positive values for influenza subtype A.
The IHSN is heavily reliant on the accuracy of influenza testing methods at the individual laboratories within the EIP states’ participating counties. Sensitivity and positive predictive values were determined at the individual test level to address this at the IHSN level. There are currently no criteria for confirming positive influenza tests within the IHSN; confirmation testing for positive results is left to the discretion of the EIP-participating states. Table 1 indicates that only 3 EIP-participating state health departments require confirmation testing on all positive influenza tests. The lack of confirmation testing could lead to an inflation of false positive test results from methods with a lower positive predictive value. Table 2 outlines the differences in sensitivity and positive predictive values between the 6 selected tests; there is considerable variability in sensitivity and PPV among the different test types.
Representativeness
The IHSN has a high degree of representativeness in terms of the geographic distribution of counties within the EIP-participating states and of the EIP states themselves. This allows for a stratified approach to IHSN data collection, which helps published data to be more generalizable to the rest of the United States.
A key challenge is accurate representation of a grossly underreported disease like influenza.32,33 The CDC has struggled for decades to adjust and refine its models for determining epidemic thresholds and seasonal severity because of changes in diagnostic technology, access to diagnostics, and modeling techniques.34-37 It is important to note that population-based estimates of influenza are based on census data, which is itself based on statistical models that have evolved over the decades. More reported cases may stimulate media coverage, which in turn stimulates patient demand, which in turn prompts healthcare providers to order influenza testing. Because of the increase in molecular influenza testing options, physicians’ greater access to testing can lead to overscreening, which can artificially inflate positive influenza cases that may or may not be contributing to patient hospitalizations.38 The IHSN counts all hospitalizations that have a laboratory-confirmed positive influenza test. Artificial inflation of positive cases in the form of overscreening, combined with the IHSN case definition, can lead to a misrepresentation of the population’s influenza-associated hospitalization rates. This raises concerning questions about the scientific basis upon which severity is claimed: is it based on antigenic shift (ie, a pandemic), or on more accurate statistics for an underreported disease?
Timeliness
Each EIP IHSN state has variable reporting conditions and timelines for influenza (Table 1). All participating states require laboratory-confirmed influenza cases to be reported to the state health department. The reporting timeframe for influenza in each state ranges from immediate to reporting “within 7 days” (Table 1). The CDC estimates a median 7-day lag time from the time a case is identified to when the CDC receives the report for the IHSN.6 It is unclear whether the IHSN logs influenza cases using the identification date at the laboratory level or the date the CDC received the data. However, a 7-day lag time between identification and reporting to the CDC is fairly rapid considering the geographical distribution of EIP sites and the frequency of influenza cases.
Stability
There have been no significant events or available evidence suggesting that the stability of the IHSN and its FluView application has ever been compromised. The IHSN provides weekly updates, and there had been no notable delays in updates as of 2018.
CONCLUSIONS/RECOMMENDATIONS
The IHSN uses a broad range of sources to identify influenza-associated hospitalization cases. This, combined with a narrow case definition, affords the IHSN the benefit of having reliable sources of data collection.13 The added benefit of each EIP state having at least some degree of required reporting for influenza (Table 1) and near-identical reporting requirements (Figure 1) indicates that some effort has been made to mitigate underreporting from participating EIP states. The FluView application is user-friendly and easily accessed by the public, ensuring widespread use of IHSN-accumulated data.13 The adaptability of the IHSN allows for timely and appropriate reactions to the constant shifts in influenza activity between seasons. The quality of IHSN data can be either strong or weak, depending on which data points are being considered. The stability of the IHSN has proven adequate in the past, but vigilance must be maintained to preserve that reliability.
By using NCHS data, universal determination of population estimates for each participating county within the EIP states allows for consistent population estimates in rate calculations.12 However, laboratory testing methodologies and individual physician testing behaviors are not universal. Each reporting laboratory uses different testing methodologies that vary in sensitivity and PPV (Table 2). Certain testing methodologies are more reliable than others in terms of sensitivity. Methodologies with lower sensitivity can artificially decrease case counts, while testing platforms with a lower PPV can artificially increase case counts. All of this can potentially confound site-specific data and lead to inaccurate predictions or comparisons when used for research. Lower rates in certain areas could be a product of less accurate testing methods (eg, RIDT) and not an accurate reflection of the status of influenza in that area. Molecular testing has proven to be one of the most reliable methods of identifying influenza.4 By incentivizing hospital laboratories to adopt more molecular testing for influenza identification, the IHSN can ensure a higher degree of accuracy in its data sources. Furthermore, state health departments can address artificial increases in case counts by implementing confirmation testing on positive influenza results from methods that do not exceed a certain PPV threshold.
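As an illustration of how such a rule could work, the Python sketch below flags positive results from testing methods whose PPV falls below a chosen threshold for confirmatory molecular testing. The method names, PPV values, and the 95% threshold are hypothetical assumptions for demonstration only, not IHSN, CDC, or state health department policy.

```python
# Minimal sketch of a PPV-threshold confirmation rule. The method names, PPV
# values, and threshold are hypothetical; actual values would come from
# package inserts or state health department policy.

PPV_THRESHOLD = 0.95  # hypothetical cutoff below which confirmation is requested

method_ppv = {
    "RIDT (antigen detection)": 0.85,
    "DFA/IFA": 0.90,
    "RT-PCR (molecular)": 0.99,
}


def needs_confirmation(method: str) -> bool:
    """Flag positive results for confirmatory testing when the reporting
    method's PPV is below the chosen threshold."""
    return method_ppv[method] < PPV_THRESHOLD


for method in method_ppv:
    action = "confirm with molecular assay" if needs_confirmation(method) else "accept as reported"
    print(f"{method}: {action}")
```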
The IHSN ensures EIP state participation by making weekly influenza case reporting a condition of receiving funding from the CDC.26 This further diminishes the likelihood of cases not being reported to the state health departments for IHSN use. Population-specific socioeconomic status and demographics are well represented in the IHSN dataset because of the wide geographic distribution of participating counties and EIP states.1,2 However, the IHSN fails to take into account individual hospital policies on screening patients for influenza, which are made possible by the increasing number of affordable influenza testing methods on the market.38 Policies that favor overscreening can artificially increase case counts, deteriorating the quality of IHSN rate estimates. This can potentially be addressed by narrowing the case definition so that laboratory-confirmed influenza-associated hospitalizations encompass only hospitalizations that are a result of influenza.
Each EIP state has its own reporting timeframe for influenza, which can result in reporting delays and lower weekly case counts. This can be addressed by proposing a more universal reporting timeframe among the EIP states. Nevertheless, the IHSN is still able to provide weekly updates to the FluView application, which is fairly rapid considering the scope of the IHSN (Table 1). The variability of influenza each year requires that the United States be vigilant in its evaluation and improvement of influenza-associated hospitalization surveillance to adapt to ongoing changes in the severity, morbidity, and mortality of influenza.
LESSONS LEARNED
Overall, the IHSN provides a fairly reliable data source when considering its flexibility, usefulness, and timeliness. The IHSN’s ability to add states to its data pool based on need makes it highly adaptable to the unpredictability of the influenza virus, but at the cost of introducing more variability into its dataset. IHSN data can be used to establish incidence rates and trends over time. The FluView application that uses IHSN data can stratify data based on age, underlying conditions, and viral subtypes to help determine measures of association during each influenza season. Data is updated on a weekly basis, allowing analysts and public health officials to implement control and prevention measures in a timely manner. The IHSN is extremely stable and experiences few, if any, noticeable system outages.
The IHSN data collection process requires a more streamlined and reliable approach. Coupled with a lack of confirmation testing, variability in the clinical sensitivity and positive predictive values of each test method deteriorates the overall reliability of the data. Measures that ensure confirmation testing for positive influenza results obtained by analytically unreliable tests are paramount to enhancing the overall quality of the data. The representativeness of IHSN data could be determined more accurately in a future study by comparing the influenza screening policies of individual hospital-based laboratories to differentiate testing volume and potentially rule out overtesting as a source of case inflation.
The question remains of how to manage communications in the context of increased accuracy in representing a historically underreported disease like influenza. There are ethical considerations when interpreting data amid continually changing data collection processes and assessment methods, and in the context of ongoing vaccine skepticism. On the one hand, we are improving awareness of the importance of influenza as a potentially serious disease for which early treatment can reduce the cost of care, morbidity, and mortality. On the other hand, overstating severity without providing key disclaimers about changes made over time to improve surveillance may impair credibility with patients and providers.
ACKNOWLEDGMENTS
A special thank you to Ian Wallace for providing unique clinical laboratory testing insights and addressing the implications of the variable sensitivities among different testing methods. Thank you, Dr. Dawn Comstock, for providing the education that facilitated this evaluation and for the timely feedback on earlier drafts. Thank you, Dr. James Wilson, for providing invaluable insights into health security intelligence and your unique perspective on the impact surveillance systems can have on driving public health responses. And thank you to Alicia Cronquist, whose willingness to discuss her experience with electronic disease reporting systems provided great direction when discussing the strengths and limitations of the IHSN.
The opinions expressed by authors contributing to this article do not necessarily reflect the opinions of the CDC or the institutions with which the authors are affiliated.
- Received July 10, 2019.
- Accepted September 23, 2019.
American Society for Clinical Laboratory Science