Advance care planning, which often begins with a simple, structured conversation, can help patients make decisions and settle ahead of time what will be done, relieving some of the chaos and confusion that accompany end-of-life care. But knowing when to begin can be difficult: families and even doctors can be so optimistic about a loved one's future that a patient misses the chance to make their wishes clear.
Dr. Stephanie Harman, the clinical chief of palliative care at Stanford Health Care, is leading a pilot program at Stanford Medicine that explores how artificial intelligence (AI) can help doctors guide patients through these decisions. The tool isn't designed to predict a specific time of death; rather than giving a precise number of months or years, the predictive analytics model identifies patients who have a high probability of dying in three to 12 months. One day this type of model might transform clinical care: death is notoriously difficult to predict, at great cost to the health care system and, of course, to anyone with a loved one nearing the end of life.
“[Doctors are] terrible at predicting prognosis,” said Harman. “If that information is there [from AI], hopefully that raises the likelihood that the care this patient receives from their health care team matches what they have prioritized. To have care that aligns with what matters most to patients and families — that’s the ultimate goal.”
AI Paving the Way to Better Endings
The Stanford AI project began when Dr. Nigam Shah, an associate professor of biomedical informatics at Stanford, came to one of Harman’s clinical quality meetings.
“He said, ‘We have built a predictive analytics model that identifies patients who have a high probability of dying in three to twelve months. Would that be something useful, in terms of a decision, a door for clinical care?’ And that was where we said, ‘Wow, yes, that would be useful!’”
Drawing on the patient's medical history and millions of other patient records, the AI model calculates the probability that someone will die within 12 months. According to Stanford Medicine, Harman receives a daily report of newly admitted Stanford Hospital patients with a 90 percent or higher probability of dying in three to 12 months. She reviews the report and the patients' records, and from there advises patients on palliative care.
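The reporting step described above, taking model scores for newly admitted patients and surfacing those at or above a probability threshold, can be sketched in a few lines. This is a minimal illustration only: the function names, record format, and sample data are hypothetical, not Stanford's actual pipeline.

```python
# Hypothetical sketch of a daily-report filter over model output.
# Each record pairs a patient identifier with the model's predicted
# probability of death in three to 12 months (made-up values).

REPORT_THRESHOLD = 0.90  # "90 percent or higher," per the article

def daily_report(admissions):
    """Return newly admitted patients whose predicted mortality
    probability meets the reporting threshold."""
    return [
        patient for patient in admissions
        if patient["mortality_prob"] >= REPORT_THRESHOLD
    ]

# Example with invented records:
admissions = [
    {"id": "A", "mortality_prob": 0.95},
    {"id": "B", "mortality_prob": 0.40},
    {"id": "C", "mortality_prob": 0.91},
]

flagged = daily_report(admissions)
print([p["id"] for p in flagged])  # ['A', 'C']
```

The threshold acts as a simple triage filter: the model scores everyone, but only the highest-probability cases reach the daily report for human review.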
“The tool helps [Harman] spend more time with patients and less on record reviews and, most importantly, it leads to better endings,” a Stanford Medicine article says.
Learning About Machine Learning
To assist physicians working with patients nearing the end of their lives, researchers have developed prognostic tools calibrated to particular cancer types and time horizons.
David Hui, a physician in the palliative care department at The University of Texas MD Anderson Cancer Center in Houston, co-authored a study that found a validated prognostic tool, the Palliative Prognostic Index, was more accurate than doctors' estimates at determining whether a patient with advanced cancer would die within 30 days, though not within 100 days.
“The current prognostic scales tend to be a bit crude. They only focus on particular populations,” Hui said.
In a paper currently under peer review, Hui studied patients in the palliative care unit, where the average survival is 10 to 14 days.
“These patients are much sicker, and they have a somewhat more homogeneous kind of survival pattern,” Hui said.
The team compared the model's scores with the doctors' scores. Hui believes that differences in patient populations, cancer types, palliative care settings and more can affect a tool's accuracy. Machine learning depends entirely on the data it is given: inputs that can carry inherent bias from the start.
We Still Need Physicians
Hui said there is a need for caution about using machine learning in health care: AI reflects only the data fed into it, and the systems are difficult to validate.
“It will certainly take a lot more validation to really try to understand: Are these tools useful? Can they be applied beyond the setting in which they were developed, to other institutions?” he noted.
There are also few regulations and standards for using AI in health care, and few are working to change that, for example by educating physicians on how to use AI ethically.
“Technology has a very important role, but I think the humanity aspect should not be forgotten. We still need physicians,” Hui said.
It’s too early to tell when this tool might be available for use in other hospitals, but the Stanford team is hopeful. They’re currently working to get their pilot off the ground.