UPMC Children’s Hospital of Pittsburgh, Faculty Pavilion, 4401 Penn Avenue, Suite 0200, Pittsburgh, PA 15224, USA; Department of Critical Care Medicine, University of Pittsburgh School of Medicine, 3550 Terrace Street, 603A, Pittsburgh, PA 15261, USA
• “Living, breathing” trials are ushering in a new era of rapid learning, accelerating care improvements, and providing a solution to some of the biggest challenges in health care.
• Modern health care information systems can support more efficient trial data collection and facilitate the incorporation of advanced trial analytics at the point of care, helping to realize the vision of the learning health care system.
• “Living, breathing” trials are a rapidly evolving clinical research paradigm, and trial teams will often need to “learn while doing.”
• Leveraging the electronic record for trial workflow implementation and data collection requires health system leadership support.
• The workload associated with harmonizing data collection strategies and trial protocols is commonly underestimated; both efforts require ample planning and sufficient resources.
Introduction
The practice of medicine is characterized by uncertainty. “Life is short and the Art long; the occasion fleeting; experience fallacious; and judgement difficult,” posited Hippocrates.
Randomized clinical trials (RCTs) and amalgamations of RCT findings, such as systematic reviews and meta-analyses, are stationed at the top of the hierarchy of medical evidence meant to aid clinicians in wrangling uncertainty.
This position at the top is earned by way of the quality of information generated through the power of randomization, which, unlike any other method, controls for both measured and unmeasured confounders. When properly executed, randomly assigning patients to prespecified treatment regimens provides data affording as close a vantage as possible to assess the effect of an intervention compared with the counterfactual scenario. Consequently, the findings of well-conducted trials constitute the bedrock of contemporary “evidence-based medicine,” are especially influential in determining which recommendations are put forward in consensus guidelines, and play a large role in shaping the delivery of bedside care.
RCTs also have limitations. Conducting an RCT is a cumbersome endeavor, which many argue contributes to longer than necessary times between the identification of a potential therapeutic breakthrough and actionable clinical results. The infrastructure necessary to run a large RCT is specialized and expensive, commonly confining implementation to academic health systems with sufficient economies of scale to successfully enroll patients while reaping adequate returns to make ongoing participation sustainable.
However, differences between academic RCT sites and community health care settings, such as varied staffing patterns, case mixes, and trainee volumes, limit generalizability, as do extensive lists of RCT eligibility criteria. Moreover, RCT findings are traditionally analyzed using frequentist statistical comparisons of the average treatment effect of an intervention in the enrolled population, obscuring heterogenous treatment effects corresponding with substantial benefit (or harm) in patient subpopulations.
To overcome some of these limitations, new trial paradigms rooted in the origins of evidence-based medicine are beginning to disrupt the traditional mold.
Anti-thrombotic therapy to ameliorate complications of COVID-19 (ATTACC): study design and methodology for an international, adaptive Bayesian randomized controlled trial.
UPMC REMAP-COVID Group, on behalf of the REMAP-CAP Investigators Implementation of the randomized embedded multifactorial adaptive platform for COVID-19 (REMAP-COVID) trial in a US health system-lessons learned and recommendations.
These new designs recognize uncertainty permeates medical decision making and aim to capitalize on modern health system infrastructure to integrate investigation as a component of care delivery. Such designs leverage the computational capabilities of modern health care information systems, present opportunities for more efficient trial data collection and monitoring, draw from quality and performance improvement toolkits, promote greater equity in enrollment, and can enmesh dynamically with existing clinical workflows. “Living, breathing” trials represent a major movement in the direction of the National Academy of Medicine’s vision of a large-scale learning health care system,
Committee on the Learning Health Care System in America, Institute of Medicine. In: Smith M, Saunders R, Stuckhardt L, eds. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. National Academies Press (US); 2013.
Olsen L, Aisner D, McGinnis JM. Roundtable on Evidence-Based Medicine. The Learning Healthcare System: Workshop Summary. National Academies Press (US); 2007.
and such trials have gained equal or greater traction around the globe. The COVID-19 pandemic accelerated adoption of such “living, breathing” trials, with several successful studies demonstrating partial or near-complete integration into clinical workflows and rapidly generating landmark treatment insights.
UPMC REMAP-COVID Group, on behalf of the REMAP-CAP Investigators Implementation of the randomized embedded multifactorial adaptive platform for COVID-19 (REMAP-COVID) trial in a US health system-lessons learned and recommendations.
Appreciating the relevance of this tectonic shift in the clinical trial landscape necessitates a brief history of the term “evidence-based medicine,” a concise overview of quality improvement science, and a broad-stroke description of both the benefits and the drawbacks of relying on modern information systems for clinical investigation. The rapid evolution of this new paradigm means that investigator teams must frequently “learn while doing,” looking to recent examples of “living, breathing” trials, several of which are presented in this review as guideposts.
Finally, investigators should be aware of anticipated developments in the space of data standards and information systems that are expected to shape how “living, breathing” trials are deployed, as well as areas of controversy regarding the ethics of intertwining clinical trials and care delivery.
“Living, breathing” clinical trials and the COVID-19 pandemic
Apart from established supportive care measures, there were no known treatments for COVID-19 when the World Health Organization declared a pandemic in March 2020. The worldwide desperation to rapidly learn about the disease and determine effective therapies prompted innumerable investigations of varied designs. Multiple randomized trials sprouted up, although many were too small or ineffectively designed to provide meaningful findings.
In contrast, the COVID-19 pandemic highlighted the utility of pragmatic trials to quickly identify effective treatments that were readily translatable into real-world clinical settings. Pragmatic trials can take multiple forms but share the commonality of a design that meshes with clinical workflows. In a seminal work defining what constitutes a pragmatic trial, French statisticians Schwartz and Lellouch
contrasted “normal,” or the everyday clinical environment, with the “laboratory” context of a traditional clinical trial to distinguish “pragmatic” from what they called “explanatory” trials. Explanatory trials test the effects of an intervention in a select population, whereas pragmatic trials examine the effect of the intervention in real-world conditions. This model helps distinguish pragmatic from traditional trial designs; in reality, however, these 2 descriptions sit at opposite ends of a continuum, with no single point of demarcation.
Master protocols are a more recent complement to pragmatic designs that allow investigators to simultaneously assess multiple interventions, multiple diseases, or both within a single, overarching trial.
There are 3 types of trials that rely on a master protocol: umbrella, basket, and platform trials. An umbrella trial simultaneously studies treatment effects within subgroups of a single disease. A basket trial studies a therapy for multiple diseases within a single trial. A platform trial studies multiple therapies for a disease in perpetuity, allowing study treatments to dynamically enter and exit the platform throughout the trial’s lifespan. Realizing the advantages of master protocols necessitates strategies to reduce the ostensible complexities of these trial designs so that trialists and staff responsible for day-to-day operations are not overwhelmed. Notably, although trial design and analyses can be complex, proper trial implementation should render these complexities invisible to clinical staff and end-users. Modern health care information systems, emerging clinical informatics tools, and cutting-edge bioinformatics techniques are increasingly essential to conduct these trials efficiently. Technology can be leveraged to deploy the trial within an electronic record workflow, incorporate Bayesian inference into randomization and trial analyses, facilitate secure data transfer to support adaptive updates to trial design, and support trial data collection by obviating some or all manual data abstraction. Although the vision of the learning health care system outlined by the National Academy of Medicine
Olsen L, Aisner D, McGinnis JM. Roundtable on Evidence-Based Medicine. The Learning Healthcare System: Workshop Summary. National Academies Press (US); 2007.
includes a seamless integration of patient care, electronic workflows, clinical investigation, and system priorities, many COVID-19 trials have been successful by featuring some, but not necessarily all, of these ingredients. Consider the Randomised Evaluation of COVID-19 Therapy (RECOVERY) platform trial conducted in the United Kingdom, which identified dexamethasone as an effective therapy that was rapidly incorporated as standard of care for hospitalized, hypoxemic patients within weeks of the start of the pandemic.
RECOVERY was not seamlessly integrated into existing electronic record workflows at that time, but it did rely on a Web service for randomization and a simple online case report form for minimal data collection, while leaning on the structure of the UK’s National Health Service (NHS) to accommodate trial workflow within care delivery processes, as well as the NHS’s data sets and national registries to complement manually collected data.
Other examples of platform trials designed and implemented to address the global public health emergency of COVID-19 included the Investigation of Serial Studies to Predict Your Therapeutic Response with Imaging and moLecular Analysis (I-SPY 2 TRIAL) for COVID-19, coordinated in the United States,
Anti-thrombotic therapy to ameliorate complications of COVID-19 (ATTACC): study design and methodology for an international, adaptive Bayesian randomized controlled trial.
Each of these trials offers its own case study providing insight into both the promise and the challenges of the “living, breathing” paradigm of clinical trials. The present discussion centers on the authors’ experience with the implementation of REMAP-CAP during the COVID-19 pandemic in the United States. To date, REMAP-CAP has provided insights regarding multiple therapies for COVID-19, including corticosteroids,
Effect of hydrocortisone on mortality and organ support in patients with severe COVID-19: the REMAP-CAP COVID-19 corticosteroid domain randomized clinical trial.
Writing Committee for the REMAP-CAP Investigators Effect of convalescent plasma on organ support–free days in critically ill patients with COVID-19: a randomized clinical Trial.
Evidence-based medicine and Bayesian inference in clinical decision making
REMAP-CAP relies on Bayesian inference to increasingly favor randomization into better performing trial arms as outcome data accrue, an approach often termed “response adaptive randomization.”
Bayesian analyses are also used to report the primary outcomes, presented as posterior probabilities of benefit (or harm). The use of Bayesian, instead of frequentist, statistical analyses is an increasingly common design feature of new living, breathing trials. To implement response adaptive randomization in REMAP-CAP, most domains are conducted with an initial run-in period of balanced randomization. A multifactorial Bayesian inference model accounts for patient age, trial site and region, time era, tested regimen, and stratum (eg, moderate vs severe illness), as well as interactions between different interventions and strata. Monthly data updates allow the statistical analysis committee to calculate posterior probabilities for each regimen being tested by stratum and adjusted for sample size. A priori thresholds are defined for superiority (eg, >99% probability that an intervention is superior), equivalence (eg, >90% probability that odds of death for 2 interventions differ by <0.2), or inferiority (eg, <1% probability that an intervention is superior). Monte Carlo simulation is used to assess trial performance under a variety of conditions.
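The mechanics of response-adaptive randomization can be illustrated with a deliberately simplified sketch: a two-arm comparison with a binary outcome, Beta-Bernoulli posteriors, and Monte Carlo estimation of the probability that each arm is best. The arm counts, the uniform priors, and the single superiority threshold below are illustrative assumptions; REMAP-CAP's actual model is multifactorial and far more elaborate.

```python
import random

def posterior_prob_best(successes, failures, n_draws=20000, seed=0):
    """Estimate each arm's probability of having the highest response rate
    by sampling from Beta(1 + successes, 1 + failures) posteriors
    (a uniform prior on each arm's response rate)."""
    rng = random.Random(seed)
    n_arms = len(successes)
    wins = [0] * n_arms
    for _ in range(n_draws):
        draws = [rng.betavariate(1 + successes[i], 1 + failures[i])
                 for i in range(n_arms)]
        wins[draws.index(max(draws))] += 1
    return [w / n_draws for w in wins]

# Hypothetical interim data: arm 0 (control) 40/100 responders,
# arm 1 (new therapy) 55/100 responders.
successes, failures = [40, 55], [60, 45]
p_best = posterior_prob_best(successes, failures)

# One simple response-adaptive rule: randomization weights proportional to
# each arm's probability of being best.
weights = [p / sum(p_best) for p in p_best]

# Illustrative a priori superiority threshold, per the design described above.
superior = any(p > 0.99 for p in p_best)
```

With these counts the new therapy is favored but does not yet cross the superiority threshold, so randomization continues with weights tilted toward the better performing arm.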
The application of Bayesian inference to medical decision making is foundational to evidence-based medicine and can be traced back to its very inception. The term “evidence-based medicine” was first coined by Eddy
in workshops on physician decision making beginning around 1985 and published in 1990. As a concept, evidence-based medicine was borne out of increasing recognition that physician judgment frequently did not align with available objective data, or in many instances, lacked supporting data altogether. This insight coincided with breakthrough work by psychologists Kahneman and Tversky,
who characterized commonplace flaws in human decisions made under risk. In contrast to the long-held theory that risk-laden decisions are made in a manner that maximizes individual utility, Kahneman and Tversky demonstrated that people routinely rely on heuristic, or pattern-based, assessments in making decisions, rather than an objective assessment of available data and probabilities. One solution Eddy put forward for medical decision making required application of the centuries-old Bayes theorem, which considers the prior probability of an outcome and accounts for the objective weight of influential factors, such as the result of a diagnostic test, to calculate a posterior probability. This proposed Bayesian approach to medical decision making, also articulated by other leaders of analytical thinking at the time, such as Feinstein,
was accordingly installed as a cornerstone of evidence-based medicine.
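Eddy's proposal is easiest to appreciate with a toy diagnostic calculation (the test characteristics and prevalence below are hypothetical): even a fairly accurate test yields a modest posterior probability of disease when the prior probability is low, a result that heuristic reasoning tends to miss.

```python
def posterior_probability(prior, sensitivity, specificity):
    """Bayes theorem for a positive diagnostic test:
    P(disease | +) = P(+ | disease) P(disease) /
                     [P(+ | disease) P(disease) + P(+ | no disease) P(no disease)]"""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 90% sensitive, 95% specific, 2% disease prevalence.
post = posterior_probability(prior=0.02, sensitivity=0.90, specificity=0.95)
# Roughly a 1-in-4 chance of disease despite the positive result.
```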
Several challenges are apparent, however, when the practicalities of applying Bayesian decision making in the real world are considered. Generating prior probability distributions can amount to a coarse, variable estimate when adequate data are not available.
Although the probabilistic output of Bayes theorem may represent the range of decisional uncertainty facing a clinician, the decisions themselves are commonly binary or at least discrete (eg, a treatment is administered or not, or a test is considered positive or negative). When designing a clinical trial, traditional sample size calculations are not readily applicable to a Bayesian design, lending uncertainty to both the required number of patients to enroll and the trial costs. Furthermore, calculating probabilities based on multidimensional input can be computationally intensive. Limitations to a Bayesian approach to practicing medicine have been recognized for decades and offer some explanation as to why this paradigm is not regularly used explicitly in modern practice.
Frequentist statistics, in which parameters are considered fixed and an assertion about either accepting or rejecting a null hypothesis is made on the basis of a P value, have long dominated trial design.
In part, this convention arose because software for conducting frequentist analyses has long been widely available. However, advances in both computer hardware and software in recent decades have overcome some of the barriers favoring the frequentist route and have made Bayesian analyses much more accessible. In addition, an advantage of using a Bayesian approach is that data can be analyzed and expressed as posterior probabilities, which is more quantitative and potentially more useful than a dichotomous frequentist analysis based on an arbitrary P-value threshold. As a result, an increasing number of Bayesian analyses are appearing in the medical literature,
Secondary Bayesian analyses have also called into question previous conclusions of frequentist trials, such as in the study of extracorporeal membrane oxygenation for severe acute respiratory distress syndrome
Extracorporeal membrane oxygenation for severe acute respiratory distress syndrome and posterior probability of mortality benefit in a post hoc bayesian analysis of a randomized clinical trial.
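The contrast between the two framings can be sketched with hypothetical event counts, a flat prior, and normal approximations throughout: the same data that miss a conventional significance threshold can still carry a high posterior probability of benefit.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def contrast(events_control, n_control, events_treat, n_treat):
    """Frequentist two-sided z-test vs Bayesian posterior probability of
    benefit for two event proportions (flat prior, unpooled SE)."""
    p_c, p_t = events_control / n_control, events_treat / n_treat
    se = math.sqrt(p_c * (1 - p_c) / n_control + p_t * (1 - p_t) / n_treat)
    z = (p_c - p_t) / se
    p_value = 2 * (1 - norm_cdf(abs(z)))   # dichotomized at, eg, 0.05
    prob_benefit = norm_cdf(z)             # P(treatment lowers the event rate)
    return p_value, prob_benefit

# Hypothetical trial: 30-day mortality 80/250 (control) vs 62/250 (treatment).
p_value, prob_benefit = contrast(80, 250, 62, 250)
# "Not significant" by the 0.05 convention, yet ~96% posterior probability of benefit.
```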
By measuring uncertainty and allowing for continuous learning, Bayesian trials allow investigators to make iterative updates to a trial while it is underway. These updates, or adaptations, can include halting enrollment, adding or dropping interventions and arms, and updating prior probabilities and associated randomization weights to assign patients to better performing therapies. Given the resource-intensive computation required for Bayesian analyses, this approach currently appears best suited to deployment within larger data wrangling infrastructures capable of readily gathering and curating the variables that must be incorporated a priori into the determination of posterior probabilities. With these traits in mind, learning Bayesian trials can be seen as a natural evolution of an earlier, more embryonic state of evidence-based medicine.
Quality and performance improvement in medicine and the learning health care system
The overlap between increasingly rigorous, cyclical performance improvement strategies in health care, modern health care information systems, and clinical investigation has been acknowledged in the learning health care system framework.
Committee on the Learning Health Care System in America, Institute of Medicine. In: Smith M, Saunders R, Stuckhardt L, eds. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. National Academies Press (US); 2013.
In combination with skilled clinical investigators, modern quality improvement implementation, data, and scientific elements provide the necessary ingredients to launch clinical trials embedded within care delivery systems. Implementation of REMAP-CAP across the University of Pittsburgh Medical Center (UPMC) system in the United States relied heavily on supervisory structures typically dedicated to overseeing operations rather than research. For example, at the outset of the pandemic, certain medications that were possible treatments for COVID-19, such as hydroxychloroquine, could only be used within the context of the clinical trial. The UPMC pharmacy and therapeutics committee instituted this policy to limit unwarranted variation in COVID-19 treatment practices while promoting system-level learning by relying on the REMAP-CAP platform to assess treatment effectiveness.
Health system and local culture were also shaped by relying on UPMC marketing and education teams to generate enthusiasm about the trial, by crafting reference materials and launching awareness campaigns through both internal and external media outlets.
Information system tools, such as near-real-time dashboards used to track COVID-19 disease burden in the health system, were accessed by clinical leaders focused on resource allocation as well as by trial teams working to identify patients for study enrollment. In some cases, separate from REMAP-CAP, the act of randomization to determine treatment allocations was performed out of necessity owing to resource scarcity. As an example, a weighted lottery system governed by the UPMC quality improvement committee determined the distribution and administration of remdesivir while promoting both equity and an opportunity for causal inference.
In addition, UPMC conducted a pragmatic comparative effectiveness platform trial of monoclonal antibodies, simultaneous with expanding access to these therapies across the system, yielding both new knowledge and increased treatment with these therapies.
The UPMC OPTIMISE-C19 (optimizing treatment and impact of monoclonal antIbodieS through Evaluation for COVID-19) trial: a structured summary of a study protocol for an open-label, pragmatic, comparative effectiveness platform trial with response-adaptive randomization.
Effectiveness of Casirivimab-Imdevimab and Sotrovimab during a SARS-CoV-2 delta variant surge: a cohort study and randomized comparative effectiveness trial.
Together, these efforts created a culture of learning and rapid improvement that was reflected in an estimated 5% lower odds of 30-day COVID-19 mortality across UPMC each month between March 2020 and June 2021.
In acknowledgment of the quality improvement function that REMAP-CAP played at UPMC during the COVID-19 pandemic, the American Board of Internal Medicine approved maintenance of certification quality improvement credit for physicians who played a substantial role in the conduct of the trial.
Leveraging health care information systems to deploy randomized, embedded, multifactorial, adaptive, platform trial for community-acquired pneumonia in the United States
Modifying electronic workflows and data capture to accommodate trial design (even simple designs) often requires substantial effort on the part of health care information technology (IT) teams. At UPMC, implementation of REMAP-CAP required navigating a complex network of IT systems and governance structures across 30 hospitals in the Western and Central Pennsylvania regions. The rapidly growing clinical informatician workforce, which consists of physicians, nurses, pharmacists, and other clinicians with additional training in health information systems and human factors, proved essential to bridge the communication gap between clinical investigators and IT teams.
Depending on trial context, existing electronic workflows may span several arenas of care and related IT systems. Implementation of REMAP-CAP in the United States leveraged IT for virtual screening, recruitment, enrollment, deploying assigned treatment regimens, direct data collection from the electronic health record (EHR), long-term follow-up, iterative submission of outcome updates to the trial statistical analysis committee, and iterative receipt of randomization weights to support adaptive randomization, as previously described.
UPMC REMAP-COVID Group, on behalf of the REMAP-CAP Investigators Implementation of the randomized embedded multifactorial adaptive platform for COVID-19 (REMAP-COVID) trial in a US health system-lessons learned and recommendations.
Completing this effort in the midst of a pandemic was only possible with the support of top-level health system leadership, which allocated IT resources and promoted awareness of the REMAP-CAP efforts, and within a university ecosystem experienced in the execution of clinical trials. Even with such pillars in place, the US REMAP-CAP investigative team had to draft detailed blueprints for trial implementation while also building the software infrastructure for coordinating treatment assignments at the bedside and associated, automated data extraction, a process aptly named “learning while doing.”
When implementing learning IT platforms across multicenter organizations, obtaining the necessary site-level IT governance approvals can consume a substantial share of the deployment effort.
The need for expediency demanded by the pandemic prompted accommodations and updates to governance structures across UPMC to occur efficiently. In turn, this allowed the trial team to move quickly to intertwine trial operations with the system’s larger efforts to address COVID-19.
Throughout the pandemic, UPMC relied on patient data sourced from multiple different EHRs across the system, including products from vendors such as Cerner and Epic. Data extracted from these installations had to be harmonized both locally and globally to integrate with REMAP-CAP data collected from around the world. Fig. 1 displays the major steps of the associated extract, transform, and load (ETL) of data from UPMC to support trial analyses performed by the international data-coordinating center in partnership with contracted biostatistical support. Stage 1 refers to source data in their respective systems, such as the EHR production databases. To avoid negatively impacting patient care systems, portions of these databases are commonly duplicated in institutional warehouses to support business intelligence and research efforts, which is displayed as stage 2. In the case of REMAP-CAP, many international sites relied on a traditional electronic data capture system with online case report forms and an associated data dictionary. Stage 3 represents the curation of EHR data elements needed to fulfill the variables per a study’s case report form. Stage 4 demonstrates the transformation of the curated data to achieve both the structure and the function of the final data model used for analyses. For example, several data elements, such as vital signs, laboratory values, and comorbidities, are necessary to calculate risk adjustment variables, such as the acute physiology and chronic health evaluation (APACHE) score. These elements were curated for enrolled patients in stage 3 and then organized into a composite, calculated score arranged in a format common to other data providers for the trial in stage 4.
Fig. 1The stages of data extraction, transformation, and analysis of real-world data sources, such as the EHR. The dashed red line refers to local access barriers, such as governance policies that may make it difficult for research teams to access these data sources. Stages 2 through 4 are resource-intensive and may consume as much as 90% of the time and effort associated with leveraging these types of data.
Data derived from different systems are not readily interoperable (eg, heart rates might be stored with different variable names, requiring different queries for different EHR instances within a given health system). Navigating data governance structures and transforming real-world data into a harmonized endpoint therefore requires adequate local expertise; resources for data analyst, software developer, and clinical informatician time; and a secure IT infrastructure for data storage, transformation, and transfer. These ETL steps apply to all clinical investigations aiming to leverage data from the EHR and highlight steps that may be overlooked by investigators who have not previously worked with real-world data provisioned from source systems. Stages 2 to 4 should be expected to demand approximately 90% of the effort spent on acquiring and analyzing data. Each stage can be made substantially more efficient for multicenter studies by engaging clinical informaticians during trial design. With their knowledge of what patient data are available and how they are represented in EHR databases, clinical informaticians can inform the design of trial outcome measures and case report forms to reduce the difficulty of automating case reporting from real-world data. The experience of using multicenter EHR data for trial reporting during a public health emergency provided the insight to formulate a set of recommendations for future pragmatic trials aiming to leverage EHR data (Table 1).
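A miniature sketch of stages 3 and 4 makes the harmonization problem concrete. The local field names, source labels, and valid ranges below are hypothetical; only the "Bas_HeartRate" reporting convention follows the data dictionary example in Table 1.

```python
# Per-source mappings from local EHR field names to trial reporting names
# (all source and field names here are hypothetical).
FIELD_MAPS = {
    "hospital_a_cerner": {"HR_FIRST": "Bas_HeartRate", "NA_SERUM": "Bas_Sodium"},
    "hospital_b_epic": {"PULSE_BASELINE": "Bas_HeartRate", "SODIUM_LVL": "Bas_Sodium"},
}

# Expected attributes per the data dictionary (illustrative ranges).
VALID_RANGES = {"Bas_HeartRate": (0, 300), "Bas_Sodium": (100, 200)}

def harmonize(source, record):
    """Stages 3/4 in miniature: rename local fields to the trial's data
    dictionary labels and flag out-of-range values for review."""
    mapping = FIELD_MAPS[source]
    out, flags = {}, []
    for local_name, value in record.items():
        std_name = mapping.get(local_name)
        if std_name is None:
            continue  # field not required by the case report form
        lo, hi = VALID_RANGES[std_name]
        if not (lo <= value <= hi):
            flags.append(std_name)
        out[std_name] = value
    return out, flags

row_a, flags_a = harmonize("hospital_a_cerner", {"HR_FIRST": 92, "NA_SERUM": 138})
row_b, flags_b = harmonize("hospital_b_epic", {"PULSE_BASELINE": 415, "SODIUM_LVL": 141})
```

In practice the same renaming and range-checking logic runs inside site-specific extraction scripts, so that every EHR instance emits the single schema the data-coordinating center expects.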
Table 1. Recommendations for leveraging electronic health record data in pragmatic trials

Recommendation: Define variable labels and associated attributes.
Explanation: Combining multiple data sources requires harmonizing multiple databases. Each variable, such as baseline heart rate, should be assigned a reporting name (eg, “Bas_HeartRate”), and the expected attributes should be clearly defined (eg, integer variable ranging from 0 to 300). Discrete fields should include value sets of acceptable responses.
How it was addressed in REMAP-CAP: Early in the pandemic, the expected variable names and discrete responses for the REMAP-CAP database were shared with the US data-coordinating center team to serve as a guidepost.

Recommendation: Define the reporting schema.
Explanation: The expected format of reported data should be clearly defined at trial outset, such as whether data will be captured in a single table or in multiple, relational tables.
How it was addressed in REMAP-CAP: See above.

Recommendation: Establish accepted formats and methods of data transfer.
Explanation: Data formats, such as csv files or SQL exports, should be established, and methods of transfer, such as secure file transfer protocol servers, should be defined early in the trial.
How it was addressed in REMAP-CAP: Globus secure file exchange has been used to send trial data between the regional and international data-coordinating centers.

Recommendation: Include standard vocabulary codes in data definitions.
Explanation: When defining variables, value sets of acceptable standardized codes should be included. For example, the LOINC codes 2951-2, 42570-2, and 77139-4 all refer to sodium measurements and may be included in a value set of acceptable codes.
How it was addressed in REMAP-CAP: In the United States, reported variables were mapped to LOINC codes where appropriate to aid in local standardization of EHR data.

Recommendation: Do not leave logic gaps in the data dictionary.
Explanation: Reporting guidelines written and reviewed by clinicians may not be clear to nonclinicians, who cannot fill in logic gaps that require medical knowledge and experience. For example, if methods of supplemental oxygen delivery, such as simple facemask and nonrebreather facemask, are expected to be mapped to fraction of inspired oxygen (FiO2) values, the expected FiO2 values associated with each delivery device should be defined in the data dictionary.
How it was addressed in REMAP-CAP: Close partnerships between programmers, biomedical informaticists, and clinical informaticists helped to ensure that any logic gaps were addressed and translated appropriately into data extraction and transformation scripts.

Recommendation: Provide explicit sequence and time bounds for composite variables.
Explanation: Composite variables, such as PaO2/FiO2 ratios, may necessitate combining fields documented in close proximity but without identical timestamps. Consider providing guidance on the expected sequence and time bounds for programming such variables (eg, an FiO2 value must have been documented within 4 hours before a PaO2 value to calculate a ratio).
How it was addressed in REMAP-CAP: In cases of uncertainty, decisions regarding programming requirements related to composite variables were discussed between the regional and international data-coordinating centers to ensure agreement.

Recommendation: Use technology to facilitate open communication and harmonization among data providers.
Explanation: Online tools, such as shared Word documents, spreadsheets, and GitHub repositories, can facilitate both communication and version control of data dictionaries, schemas, and reporting scripts.
How it was addressed in REMAP-CAP: Shared spreadsheets containing the expected data schema were created by the international data-coordinating center to facilitate a harmonized data set and served as a guidepost for the regional data-coordinating centers. GitHub repositories were maintained by the US data-coordinating center to aid in version control of extraction and transformation scripts.
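The composite-variable recommendation above can be made concrete with a small sketch that pairs a PaO2 with the most recent FiO2 documented within a 4-hour lookback window; the timestamps and values are invented for illustration.

```python
from datetime import datetime, timedelta

# Per the data dictionary guidance: FiO2 must precede PaO2 by <= 4 hours.
MAX_LOOKBACK = timedelta(hours=4)

def pf_ratio(pao2_time, pao2_value, fio2_observations):
    """Pair a PaO2 with the most recent FiO2 documented within the
    allowed lookback window; return the PaO2/FiO2 ratio, or None if
    no FiO2 is recent enough."""
    eligible = [(t, v) for t, v in fio2_observations
                if timedelta(0) <= pao2_time - t <= MAX_LOOKBACK]
    if not eligible:
        return None
    _, fio2 = max(eligible)  # latest eligible FiO2 (tuples sort by time first)
    return pao2_value / fio2

fio2_obs = [
    (datetime(2021, 3, 1, 6, 0), 0.40),
    (datetime(2021, 3, 1, 9, 30), 0.60),
]
ratio = pf_ratio(datetime(2021, 3, 1, 10, 0), 90.0, fio2_obs)   # pairs with 0.60
stale = pf_ratio(datetime(2021, 3, 1, 23, 0), 90.0, fio2_obs)   # no FiO2 within 4 h
```

Encoding the sequence and time bounds explicitly, rather than leaving them to programmer judgment, is exactly the kind of logic gap Table 1 warns against.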
Although REMAP-CAP adopted a set of practical, need-based steps to exchange data and rapidly address a global public health emergency, seamless and timely exchange of research data between different systems may one day be substantially easier because of the emergence of interoperability standards and requirements. In the United States and much of the world, health care information systems have emerged without prioritizing interoperability between health care organizations. Lack of interoperability is a major barrier to leveraging real-world data from different sources and poses a substantial challenge to achieving the vision of large-scale learning health care systems.
This landscape is changing by way of legislation in the United States and abroad that enforces adherence to common data standards. In the United States, the 21st Century Cures Act, signed in 2016 and finalized in 2020,
includes provisions requiring that EHR vendors support interoperable data transfer in order to be certified by the Office of the National Coordinator. This includes support for application programming interfaces capable of transferring data in Health Level Seven International’s (HL7) Fast Healthcare Interoperability Resources (FHIR) standard, which facilitates the exchange of health information using third-party applications. The HL7 FHIR standard adheres to a community-involved, staged development cycle and is currently on its fourth released version, known as R4. FHIR defines the format and syntax of data exchange based on widely used Internet standards. The Cures Act also requires mapping core data within a hospital or health system to standard terms, such as Logical Observation Identifiers Names and Codes (LOINC) for laboratory results and RxNorm for medications. Lastly, use of common data elements, which are clearly defined variables with specific response values common to multiple data sets and trials, such as the National Heart, Lung, and Blood Institute COVID-19 common data elements, can aid data harmonization.
By adopting standardized value sets and vocabularies, transmitted data carry a shared meaning between source and recipient information systems. For clinical trials, adoption of these standards may eventually give rise to easy-to-install applications capable of deploying a customized trial workflow in the EHR while simultaneously collecting pertinent data to support the intended analyses.
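To make the notion of standardized exchange concrete, the following sketch hand-rolls a minimal FHIR R4 Observation resource for an arterial PaO2 result coded with LOINC. It illustrates the resource shape only; it is not a validated FHIR profile, and the patient identifier and timestamp are hypothetical.

```python
import json

# Minimal, illustrative FHIR R4 Observation for an arterial PaO2 result.
# Identifiers below are hypothetical; this is not a validated profile.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "2703-7",  # LOINC: Oxygen [Partial pressure] in Arterial blood
            "display": "Oxygen [Partial pressure] in Arterial blood",
        }]
    },
    "subject": {"reference": "Patient/example-123"},  # hypothetical patient id
    "effectiveDateTime": "2021-03-01T08:30:00Z",
    "valueQuantity": {
        "value": 90,
        "unit": "mm[Hg]",   # UCUM unit code
        "system": "http://unitsofmeasure.org",
        "code": "mm[Hg]",
    },
}

print(json.dumps(observation, indent=2))
```

Because the LOINC code and UCUM unit travel with the value, a recipient system can interpret the result without bilateral negotiation over local field names, which is the core interoperability promise of the standard.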
Although the vision of a robust EHR app store stocked with clinical trial applications will take at least several years to materialize, of more immediate benefit is the growing number of health systems that are transforming their real-world data warehouses into a common data model that facilitates easy exchange of code for extracting and analyzing data elements of interest. The Observational Health Data Sciences and Informatics (OHDSI) program’s Observational Medical Outcomes Partnership (OMOP) common data model and related tools include a common schema for arranging health care data, adherence to standard vocabularies, and an increasingly robust, open-source toolkit for exchanging analytic code.
One international network participating in REMAP-CAP has developed a modular registry to promote both quality improvement and clinical investigation across several low- and middle-income countries, relying on the OMOP common data model to support participation in multicenter research.
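The practical payoff of a common data model is that one query can run unchanged at every participating site. The sketch below builds a toy, heavily simplified subset of the OMOP "measurement" table in an in-memory database; real OMOP instances carry many more columns, and the concept identifiers used here are assumed for illustration rather than taken from the OMOP vocabulary.

```python
import sqlite3

# Illustrative, simplified subset of the OMOP CDM "measurement" table.
# Concept ids below are placeholders standing in for standard concepts.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE measurement (
        person_id INTEGER,
        measurement_concept_id INTEGER,
        measurement_datetime TEXT,
        value_as_number REAL
    )
""")
rows = [
    (1, 3027315, "2021-03-01 08:30:00", 90.0),   # assumed concept id: PaO2
    (2, 3027315, "2021-03-01 09:00:00", 60.0),
    (1, 3004249, "2021-03-01 08:00:00", 120.0),  # assumed concept id: systolic BP
]
conn.executemany("INSERT INTO measurement VALUES (?, ?, ?, ?)", rows)

# Because every site arranges data in the same schema with the same
# standard vocabulary, this query can be distributed across a network as-is.
query = """
    SELECT person_id, value_as_number
    FROM measurement
    WHERE measurement_concept_id = ?
    ORDER BY person_id
"""
results = conn.execute(query, (3027315,)).fetchall()
print(results)
```

Exchanging analytic code rather than patient-level data is what allows multicenter networks, including those in low- and middle-income settings, to participate without exporting protected health information.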
Ethical considerations of “living, breathing” trials
The COVID-19 pandemic highlighted the tensions intrinsic to the unresolved ethics debates surrounding the vision of large-scale learning health care systems.
Limited supplies of novel medications, such as vaccines and monoclonal antibodies, forced health systems to develop rationing systems relying on lotteries that incorporated inclusion and exclusion criteria. These steps were taken out of necessity but also took the form of natural experiments providing an opportunity to evaluate a given intervention’s effectiveness under randomized conditions outside of a formal trial.
Similarly, reliance on therapeutic interchange became commonplace with different formulations of mechanistically identical medications substituted by pharmacists on the basis of availability.
Controversy exists regarding whether informed consent processes warrant revision to support some living, breathing trial designs in the era of the learning health care system. Bioethicists Faden, Beauchamp, and Kass
have previously dissected whether traditional consent procedures should be uniformly required, offering that “[o]ne major question is whether informed consent should always be required for randomized comparative-effectiveness studies, particularly studies conducted in a learning health care system. Our answer to this question is no.” However, this viewpoint is not unanimously shared.
For the last half-century, clinical care and clinical research have coexisted as related but separate enterprises. This division represents an overarching effort to ensure that patients’ rights are protected in the course of both care and investigation, according to ethics principles, such as those outlined in The Belmont Report.
The principles of respect for persons, beneficence, and justice are unquestionably just as applicable today as they were when originally outlined by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in the 1970s. Less clear is how to balance the desire to design studies that answer questions about existing, alternative standards of usual care against the potential need for consent practices more resource-intensive than those of typical care delivery. Ambiguity in this area is amplified by relatively vague language in major legislation intended to protect patient rights. The Health Insurance Portability and Accountability Act (HIPAA), which includes the Privacy Rule governing protected health information (PHI), and the Common Rule, which includes regulations relating to informed consent processes, provide only broad guidance on how PHI should be accessed by research teams before consent for study participation. Uncertainty in this space has resulted in substantial variation in “cold-calling” practices, or how research teams approach patients for study participation, around the country.
Although separation of research and care has helped create rigorous procedures for informed consent, assessment of risks and benefits, and selection of subjects in clinical investigations, the need for distinct, parallel infrastructures also contributes to slower investigation, increased costs, and less generalizable insights. These considerations have given rise to proposed updates to the clinical research ethics framework that include an increasing role for investigation as part of care delivery, as well as a contextualized approach to informed consent that accounts for scenarios in which individual informed consent may not be necessary despite randomization.
Whether examples of the learning health care system in action during the pandemic will help motivate greater consensus on an updated ethics framework for the design and conduct of living, breathing clinical trials remains to be seen.
Summary
Since the first description of the learning health care system, a growing number of clinical trials have demonstrated increasing integration with clinical care, leveraging medical informatics infrastructure and analytic approaches. This phenomenon appears to have been accelerated by the COVID-19 pandemic. Weaving systematic learning into the delivery of health care is a promising approach to improve the efficiency of clinical trials and more rapidly optimize disease management; however, controversies exist regarding how best to protect and respect patients within the learning health care system framework. Experiences gained during the pandemic are invaluable for advancing the concept of “living, breathing” trials and can be expected to yield insights for years to come.
Clinics care points
•
Integrating randomized clinical trials into the delivery of care has the potential to overcome many limitations of traditional clinical trials.
•
Modern information systems can be leveraged to promote efficiency in trial patient identification, workflows, and data collection.
•
“Living, breathing” clinical trials are woven into the fabric of health care delivery and promote rigorous, continuous learning for a range of therapies and diseases.
•
More work is necessary to achieve consensus regarding clinical research ethics frameworks that promote seamless integration of research into the delivery of care.
Funding
C.M. Horvat is supported by K23 HD099331.
Disclosure
The authors have nothing to disclose.
References
Weissler A.M. The Hippocratic ethic in a contemporary era of clinical uncertainty.
Anti-thrombotic therapy to ameliorate complications of COVID-19 (ATTACC): study design and methodology for an international, adaptive Bayesian randomized controlled trial.
UPMC REMAP-COVID Group, on behalf of the REMAP-CAP Investigators. Implementation of the randomized embedded multifactorial adaptive platform for COVID-19 (REMAP-COVID) trial in a US health system-lessons learned and recommendations.
Committee on the Learning Health Care System in America, Institute of Medicine. In: Smith M., Saunders R., Stuckhardt L., eds. Best care at lower cost: the path to continuously learning health care in America. National Academies Press (US); 2013.
Olsen L., Aisner D., McGinnis J.M., eds. Roundtable on evidence-based medicine. The learning healthcare system: workshop summary. National Academies Press (US); 2007.
Effect of hydrocortisone on mortality and organ support in patients with severe COVID-19: the REMAP-CAP COVID-19 corticosteroid domain randomized clinical trial.
Extracorporeal membrane oxygenation for severe acute respiratory distress syndrome and posterior probability of mortality benefit in a post hoc bayesian analysis of a randomized clinical trial.
The UPMC OPTIMISE-C19 (optimizing treatment and impact of monoclonal antIbodieS through Evaluation for COVID-19) trial: a structured summary of a study protocol for an open-label, pragmatic, comparative effectiveness platform trial with response-adaptive randomization.
Effectiveness of Casirivimab-Imdevimab and Sotrovimab during a SARS-CoV-2 delta variant surge: a cohort study and randomized comparative effectiveness trial.
Operationalisation of the randomized embedded multifactorial adaptive platform for COVID-19 trials in a low and lower-middle income critical care learning health system.