The hospital is an increasingly complex environment. It is challenging to deliver high-quality care to an ever-more comorbid population. Despite the difficulties, physicians and nurses do a good job almost all the time. Occasional communication breakdowns and human error are inevitable. Adverse events occur.
If a predictive model can provide correct, timely, and new information, and that information can be integrated into the clinical workflow of health care providers, then patient care can be improved and adverse events can be avoided.
When introducing predictive models, hospitals have a responsibility to both clinical staff and patients to ensure that any new solutions and tools are clinically validated and enhance care. The only way to demonstrate the value of new solutions and tools is to test them prospectively – to see if they actually help improve the care that clinicians provide.
Epic introduced their “cognitive computing models” several years ago. They developed a number of models (Epic Sepsis Model, Deterioration Index, etc.) and tested them retrospectively.
It is up to clinical informaticists and researchers to prospectively validate these models.
Why then have many hospitals implemented these models without prospective testing? A thoughtful commentary on this important point is given in: “Opinion: Amid a Pandemic, a Health Care Algorithm Shows Promise and Peril.”
Where is the standard of evidence-based medicine?
What can happen when institutions short-circuit the process? Recently, there has been press coverage of the Epic Sepsis Model (ESM). From what has been published, it appears that the model does not satisfy the "new information" criterion: it is not providing new information to physicians.
According to Karandeep Singh, MD, the author of "External Validation of a Widely Implemented Sepsis Prediction Model in Hospitalized Patients," published in JAMA Internal Medicine:
“In this external validation study, we found the ESM to have poor discrimination and calibration in predicting the onset of sepsis at the hospitalization level… it identifies only 7% of patients with sepsis who were missed by a clinician.”
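"Discrimination" and "calibration" are standard, quantifiable properties of a risk model: discrimination asks whether patients who develop sepsis receive higher scores than those who do not (commonly summarized as the AUROC), and calibration asks whether predicted risks match observed event rates. As an illustrative sketch only – using synthetic toy data and hypothetical scores, not the ESM or the study's actual evaluation – these two measures can be computed as follows:

```python
# Illustrative sketch of "discrimination" (AUROC) and "calibration"
# for a binary risk model. The data below is synthetic, not from any
# real sepsis model.

def auroc(labels, scores):
    """Probability that a randomly chosen positive case outranks a
    randomly chosen negative case (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_bins(labels, scores, n_bins=4):
    """Mean predicted risk vs. observed event rate within score bins.
    A well-calibrated model has pred ~= obs in every bin."""
    bins = [[] for _ in range(n_bins)]
    for y, s in zip(labels, scores):
        bins[min(int(s * n_bins), n_bins - 1)].append((y, s))
    results = []
    for b in bins:
        if b:
            pred = sum(s for _, s in b) / len(b)
            obs = sum(y for y, _ in b) / len(b)
            results.append((pred, obs))
    return results

# Synthetic cohort: 1 = developed sepsis, 0 = did not,
# with hypothetical model scores in [0, 1].
labels = [0, 0, 0, 0, 1, 0, 1, 1, 0, 1]
scores = [0.1, 0.2, 0.15, 0.3, 0.8, 0.35, 0.6, 0.9, 0.4, 0.7]

print(auroc(labels, scores))            # 1.0: perfect ranking on this toy data
print(calibration_bins(labels, scores))  # (mean predicted, observed rate) per bin
```

On this toy data the ranking is perfect (AUROC of 1.0); the study's finding of "poor discrimination and calibration" means the ESM's real-world AUROC was much lower and its predicted risks did not track observed sepsis rates.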
And in an interview for JAMA Internal Medicine, he added,
“The Epic sepsis model, in the way that it was developed, in my view, is designed to tell clinicians what they already know.”
Further, Dr. Singh notes in the interview that there should be a second level of scrutiny for predictive models that may affect care:
“for Class 2 Medical Devices… you have to file what’s called a 510K which is a pre-market notification. … an example of one of these is the Rothman Index which is made by a company called PeraHealth. It’s a deterioration index that tries to predict which patients are going to deteriorate in the hospital. And this is an example of a model that because it is being marketed to customers actually has been reviewed by the FDA and approved… The Epic Sepsis model… hasn’t actually been even put under this level of minimal scrutiny.”
What did FDA clearance entail for the Rothman Index?
FDA review is an extensive process that rigorously assesses the technical, safety, and performance data of a medical device. For the Rothman Index, this included a detailed review by FDA scientists, as well as an end-to-end deep dive into the entire product development life cycle of the software that takes electronic medical record data, calculates Rothman Index scores, and delivers scores and warnings to end users. Data from third-party usability tests (also called human factors evaluations) was an important element. The FDA further requires that a quality management system be in place, specifying robust design control processes to ensure software reliability and quality assurance. In fact, the FDA cited the Rothman Index as an exemplar of a digital health technology that based its FDA filing on real-world evidence (Examples of Real-World Evidence Used in Medical Device Regulatory Decisions, pg. 107).
In addition to complying with FDA regulation, the Rothman Index has been extensively validated in peer-reviewed publications, and in conjunction with appropriate clinical protocols, has yielded improved patient outcomes, leading to reductions in both adverse events and costs of care.
While thoughtful model development, peer reviewed third-party validation, and regulatory clearance are important factors to consider, these are truly only the first steps toward improving care. To drive improved patient outcomes, predictive models need to be effectively incorporated into clinical workflow through education and change management. Software and predictive models in isolation are not a solution. The information must flow seamlessly to the point of need. To that end, the clinicians at PeraHealth work with nurses and physicians at client hospitals to develop tailored protocols that make sense for clinicians and positively impact outcomes.
There are no shortcuts to responsible model deployment.
Epic’s researchers have taken a first step in developing models, but this was never thought to be sufficient by the clinical informatics community.
Hospitals have a responsibility to provide evidence-based medicine.
Publish, test, validate prospectively, and partner with clinicians to integrate the new information into the workflow; then develop and share clinical best practices to ensure that a predictive model actually helps the physician or the nurse. That is both the scientific process and the pragmatic process.
When you can help physicians and nurses, when you can give them new information, and when you can deliver it in a way that is easily assimilated, then communication improves, human error decreases, adverse events become rarer, and patient care improves.