AI-Based Tech Could Speed Patient Documentation Process

Posted on August 27, 2018 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

A researcher with Google Brain, one of Google’s AI teams, has published a paper describing how AI could help physicians complete patient documentation more quickly. The author, software engineer Peter Liu, contends that AI technology can speed up patient documentation considerably by predicting its content.

On my initial reading of the paper, it wasn’t clear to me what advantage this approach offers over pre-filled templates or even letting physicians cut and paste text from previous patient encounters. Still, judge for yourself as I outline what author Liu has to say, and by all means, check out the write-up.

In its introduction, the paper notes that physicians spend a great deal of time and energy entering patient notes into EHRs, a process which is not only taxing but also demoralizing for many of them. Citing just one of countless data points underscoring this conclusion, Liu points to a 2016 study which found that physicians spend almost two hours on administrative work for every hour of patient contact.

However, it might be possible to reduce the number of hours doctors spend on this dreary task. Google Brain has been working on technologies which can speed up the process of documentation, including a new medical language modeling approach. Liu and his colleagues are also looking at how to represent an EHR’s mix of structured and unstructured text data.
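For the technically curious, the sketch below shows one way structured fields and free-text notes might be flattened into a single sequence that a language model can read. To be clear, this is my own greatly simplified illustration rather than the paper’s actual representation, and every field name and value in it is invented.

```python
# Toy illustration only: one way structured EHR fields and a free-text note
# could be flattened into a single text sequence for a language model.
# The field names, values, and tagging scheme are invented for this example;
# the paper's actual representation may differ.
record = {
    "demographics": {"age": 67, "sex": "F"},
    "medications": ["vancomycin", "metoprolol"],
    "labs": {"creatinine": 1.4, "wbc": 11.2},
    "prior_note": "patient admitted with mrsa bacteremia, started on vancomycin",
}

def serialize(rec):
    """Flatten structured fields and free text into one model-readable string."""
    parts = [
        f"<demographics> age {rec['demographics']['age']} sex {rec['demographics']['sex']}",
        "<medications> " + " ".join(rec["medications"]),
        "<labs> " + " ".join(f"{name} {value}" for name, value in rec["labs"].items()),
        "<note> " + rec["prior_note"],
    ]
    return " ".join(parts)

print(serialize(record))
# <demographics> age 67 sex F <medications> vancomycin metoprolol <labs> creatinine 1.4 wbc 11.2 <note> ...
```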

The net of all of this? Google Brain has been able to create a set of systems which, by drawing on previous patient records, can predict most of the content a physician will use the next time they see that patient.

The heart of this effort is the MIMIC-III dataset, which contains the de-identified electronic health records of 39,597 patients from the ICU of a large tertiary care hospital. The dataset includes patient demographics, medications, lab results, and notes written by providers. The system’s models are “trained” on this data to predict the text a physician will use in the latest note for a given patient.
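To give a rough sense of what “predicting note text” means in practice, here is a toy sketch that suggests likely next words based on a patient’s prior notes. It has nothing like the scale or sophistication of the neural models the Google Brain team describes, and the sample notes are invented, but it illustrates the basic idea of letting past documentation drive predictions.

```python
# Toy illustration only: a tiny bigram "language model" built from a patient's
# prior notes that suggests likely next words while a new note is typed.
# This is NOT the model described in the Google Brain paper; the sample notes
# below are made up for demonstration.
from collections import Counter, defaultdict

prior_notes = [
    "patient remains on vancomycin for mrsa bacteremia",
    "patient afebrile overnight continue vancomycin and monitor renal function",
    "renal function stable vancomycin trough within range",
]

# Count word-to-next-word transitions across the prior notes.
transitions = defaultdict(Counter)
for note in prior_notes:
    words = note.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word][next_word] += 1

def suggest_next(word, k=3):
    """Return the k words that most often followed `word` in prior notes."""
    return [w for w, _ in transitions[word].most_common(k)]

# As the physician types, the system proposes likely continuations.
print(suggest_next("patient"))     # e.g. ['remains', 'afebrile']
print(suggest_next("vancomycin"))  # e.g. ['for', 'and', 'trough']
```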

In addition to making predictions, the Google Brain AI also seems able to pick out certain kinds of errors in existing notes, such as incorrect patient ages and drug names, and to offer autocorrect suggestions for corrupted words.
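Again, purely for illustration, here is a small sketch of that kind of autocorrect behavior, using simple string similarity against a known drug list. The drug names and the note text are made up, and the paper’s system derives its suggestions from a trained model rather than a hand-built vocabulary like this one.

```python
# Toy illustration only: spotting likely misspelled drug names by comparing
# each word against a known vocabulary. The drug list and note text are
# invented; the actual system's corrections come from its trained model.
import difflib

known_drugs = ["vancomycin", "metoprolol", "lisinopril", "heparin", "insulin"]

def autocorrect(word, vocabulary, cutoff=0.8):
    """Suggest the closest known term, or return the word unchanged."""
    matches = difflib.get_close_matches(word.lower(), vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else word

note = "started on vancomycn and metropolol this morning"
corrected = " ".join(autocorrect(w, known_drugs) for w in note.split())
print(corrected)  # "started on vancomycin and metoprolol this morning"
```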

By way of caveats, the paper notes that the research used only data generated within 24 hours of the current note. Liu points out that while this may be a wide enough window for ICU notes, where things happen fast, it would be better to draw on data spanning longer periods of time for non-ICU patients. In addition, Liu concedes that it won’t always be possible to predict the content of notes, even if the system has absorbed all of the existing documentation.

Still, none of these problems seems insurmountable, and Liu understandably describes the results as “encouraging.” That is also a way of conceding, though, that this is only an experimental conclusion: these predictive capabilities are not a done deal by any means. That being said, it seems likely that his approach could prove valuable.

I am left with at least one question, though. If the Google Brain technology can predict physician notes with great fidelity, how does that differ from having the physician cut and paste previous notes on their own? I may be missing something here, because I’m not a software engineer, but I’d still like to know how these predictions improve on existing workarounds.