Chapter 6: The COVID-19 Evidence Accelerator

Learnings from the Parallel Approach to Analysis

The Evidence Accelerator Parallel Analysis taught us many lessons that can broaden the use and improve the effectiveness of real-world evidence beyond COVID-19:

  1. Collaboration: The willingness to collaborate with other stakeholders is a key component of the Evidence Accelerator. Finding ways to carry this attitude beyond COVID-19, so that knowledge of resources spreads across the diverse groups in the health care and laboratory data ecosystem, is imperative. Panels at large health care conferences, including stakeholders from the pharmaceutical, regulatory, and health care sectors,8 have noted that the willingness of groups working with the Evidence Accelerator to share information and resources is unprecedented. Such collaboration may not translate easily to other disease areas, but it should be considered, especially in areas of high unmet need. The COVID-19 experience also exposed processes that are duplicative or unnecessary, and we can now begin to eliminate them or perform them more efficiently.
  2. Data Heterogeneity: Claims data and electronic health records (EHRs) capture the patient experience in inherently different ways, which necessitates different approaches to identifying comorbidities and events in order to reconstruct the full clinical history around the event of interest. Researchers often have extensive experience with only one data source. In parallel analysis, we must consider multiple data sources and write protocols that are flexible enough to accommodate the specifics of each source while aligning where appropriate (a minimal sketch of one such shared definition follows this list).9–11 The FDA Foundation and Accelerators have built catalogues that outline the different study design approaches taken to accommodate these different data sources. We will make this resource publicly available.
  3. Unifying Case Definitions: Multiple definitions representing the same clinical construct abound in the scientific literature. Parallel analysis offers a unique opportunity to align definitions across studies, intentionally point out differences, and describe how those differences might explain discrepancies in results. Through this work we have catalogued different case definitions (using validated algorithms where available) and will post them publicly when available.1 The first sketch after this list shows one way such a definition can be expressed once and then applied to each data source.
  4. Ascertainment: The ability to fully capture events for a given patient differs by data source. EHRs capture only the care delivered within a specific health system and have limited visibility into events after discharge if those events occur in other systems. Health care claims typically capture only billable events, such as procedures and prescription medications, and generally lack the clinical granularity available in the EHR; some integrated systems, however, can draw on both EHR and claims data. In general, detail about a patient's care and medical history is more available from EHRs than from claims.
  5. Disclosure Limitations: Identifying independence across data partners is complicated when data aggregators are prohibited from disclosing which health systems are included in their networks. Disclosure agreements between aggregators and participating health systems that allow the systems to be identified for public health activities, or on request, would improve transparency. Alternatively (though less helpfully), independence of samples could be assessed by mapping the coverage areas of all partners included in a parallel analysis to estimate how much they overlap (a simple overlap calculation is sketched after this list).
  6. Data Flow: In a fragmented health care system that often involves siloed technologies, assembling the data is the first task in building real-world evidence. In diagnostics, for example, test manufacturer information is often not integrated with the laboratory and clinical data generated by the instrument. This lack of interoperability impedes public health reporting and the ability to assess a test's performance post-market. For devices such as COVID-19 tests, the experience during emergency situations demonstrates the need for regulatory authorities to require and incentivize the use of device identifiers, and to integrate these data for public health reporting so that post-market safety and effectiveness can be more readily assessed (the third sketch after this list illustrates the kind of summary that device identifiers make possible).
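
The following sketches are illustrative only. First, for lessons 2 and 3: a minimal example of how a single case definition might be expressed once and then mapped onto claims and EHR data separately. Every code list, field name, and record layout here is a hypothetical placeholder rather than an entry from the Evidence Accelerator's catalogue (U07.1, LOINC 94500-6, and place-of-service 21 are real codes, used only as examples).

```python
# A shared case definition ("hospitalized COVID-19") applied to two data
# sources. All field names and record layouts are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class CaseDefinition:
    name: str
    icd10_codes: set[str]                                    # shared: diagnosis codes
    ehr_lab_loinc: set[str] = field(default_factory=set)     # EHR-only: lab results
    claims_pos_codes: set[str] = field(default_factory=set)  # claims-only: place of service

COVID_HOSPITALIZATION = CaseDefinition(
    name="hospitalized COVID-19",
    icd10_codes={"U07.1"},      # ICD-10: COVID-19, virus identified
    ehr_lab_loinc={"94500-6"},  # example LOINC: SARS-CoV-2 RNA by NAA
    claims_pos_codes={"21"},    # place of service: inpatient hospital
)

def matches_claims(record: dict, cd: CaseDefinition) -> bool:
    """Claims view: qualifying diagnosis plus an inpatient place-of-service code."""
    return (bool(set(record["dx_codes"]) & cd.icd10_codes)
            and record["place_of_service"] in cd.claims_pos_codes)

def matches_ehr(record: dict, cd: CaseDefinition) -> bool:
    """EHR view: qualifying diagnosis, or a positive lab result, during an inpatient stay."""
    has_dx = bool(set(record["dx_codes"]) & cd.icd10_codes)
    has_lab = bool(set(record["positive_loinc"]) & cd.ehr_lab_loinc)
    return record["inpatient"] and (has_dx or has_lab)

claims_row = {"dx_codes": ["U07.1", "J96.01"], "place_of_service": "21"}
ehr_row = {"dx_codes": [], "positive_loinc": ["94500-6"], "inpatient": True}
print(matches_claims(claims_row, COVID_HOSPITALIZATION))  # True
print(matches_ehr(ehr_row, COVID_HOSPITALIZATION))        # True
```

Writing the definition once and the source-specific mappings separately makes differences between data sources explicit rather than buried inside each partner's query logic.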
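Second, for lesson 5: a minimal sketch of the coverage-mapping alternative, assuming each partner's network can be summarized as a set of geographic areas (three-digit ZIP prefixes here; all values are invented). Low pairwise overlap suggests, but does not prove, that samples are independent.

```python
# Estimate pairwise coverage overlap between data partners. Partner names
# and ZIP3 areas are hypothetical placeholders.
from itertools import combinations

coverage = {  # partner -> set of ZIP3 areas served
    "partner_A": {"100", "101", "112", "331"},
    "partner_B": {"606", "607", "331"},
    "partner_C": {"940", "941", "945"},
}

def jaccard(a: set[str], b: set[str]) -> float:
    """Share of the combined coverage area that both partners serve."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

for (p1, z1), (p2, z2) in combinations(coverage.items(), 2):
    print(f"{p1} vs {p2}: overlap = {jaccard(z1, z2):.2f}")
```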
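Third, for lesson 6: a sketch of the kind of post-market summary that becomes possible once a device identifier travels with each test result. The field names, device IDs, and reference-standard column are assumptions for illustration; real device identification and reporting pipelines are far richer than a simple agreement count.

```python
# Summarize per-device agreement with a reference standard, which is only
# possible if a device identifier is recorded alongside each result.
# All identifiers and results below are hypothetical.
from collections import defaultdict

test_results = [  # (device_id, reported_result, reference_result)
    ("DEV-001", "positive", "positive"),
    ("DEV-001", "negative", "positive"),
    ("DEV-002", "negative", "negative"),
    ("DEV-002", "positive", "positive"),
]

agree = defaultdict(lambda: [0, 0])  # device_id -> [agreements, total]
for device_id, reported, reference in test_results:
    agree[device_id][0] += reported == reference
    agree[device_id][1] += 1

for device_id, (hits, total) in agree.items():
    print(f"{device_id}: {hits}/{total} results agree with reference")
```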