
Recording Doctor-Patient Visits Shows Great Potential

Posted on June 1, 2018 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as an editor-in-chief, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Doctors, how would you feel if a patient recorded their visit with you? Would you choose to record them if you could? You may soon find out.

A new story appearing in STAT suggests that both patients and physicians are increasingly recording visits, with some doctors sharing the audio recording and encouraging patients to check it out at home.

The idea behind this practice is to help patients recall their physician’s instructions and adhere to treatment plans. According to one source, patients forget 40% to 80% of physician instructions immediately after leaving the doctor’s office. Sharing such recordings could increase patient recall substantially.

What’s more, STAT notes, emerging AI technologies are pushing this trend further. Using speech recognition and machine learning tools, physicians can automatically transcribe recordings, then upload the transcription to their EMR.

Then, health IT professionals can analyze the texts using natural language processing to gain more knowledge about specific diseases. Such analytics are likely to be even more helpful than processes focused on physician notes, as voice recordings offer more nuance and context.
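As a minimal sketch of the kind of analysis described above, the snippet below counts condition mentions across visit transcripts using plain Python. The condition vocabulary and function names here are hypothetical; a real pipeline would draw terms from a clinical ontology such as SNOMED CT and use a proper NLP toolkit rather than a toy keyword list.

```python
import re
from collections import Counter

# Hypothetical condition vocabulary; a production system would use a
# clinical ontology rather than this toy list.
CONDITION_TERMS = {"diabetes", "hypertension", "asthma", "copd"}

def condition_mentions(transcripts):
    """Count condition-term mentions across a list of visit transcripts."""
    counts = Counter()
    for text in transcripts:
        # Lowercase and split into alphabetic tokens.
        tokens = re.findall(r"[a-z]+", text.lower())
        counts.update(t for t in tokens if t in CONDITION_TERMS)
    return counts

mentions = condition_mentions([
    "Patient reports her diabetes is well controlled.",
    "Discussed hypertension medication; diabetes diet reviewed.",
])
```

Aggregating counts like these across thousands of transcripts is what lets analysts learn about specific diseases from recorded visits, with the added nuance the article describes.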

The growth of such recordings is being driven not only by patients and their doctors, but also by researchers interested in how to best leverage the content found in these recordings.

For example, Paul Barr, a researcher and professor at the Dartmouth Institute for Health Policy and Clinical Practice, is leading a project focused on creating an artificial intelligence-enabled system that allows for routine audio recording of conversations between doctors and patients.

The project, known as ORALS (Open Recording Automated Logging System), will develop and test an interoperable system to support routine recording of patient medical visits. The fundamental assumption behind this effort is that storing such content on smartphones is inappropriate: if the patient loses their phone, their private healthcare information could be exposed.

To avoid this potential privacy breach, researchers are storing voice information on a secure central server allowing both patients and caregivers to control the information. The ORALS software offers both a recording and playback application designed for recording patient-physician visits.

Using the system, patients record visits on their phones, have them uploaded to a secure server, and then have the recordings automatically removed from the phone. In addition, ORALS offers a web application allowing patients to view, annotate and organize their recordings.
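The record, upload, then delete-from-phone flow can be sketched in a few lines. The real ORALS components are not public, so everything below is a hypothetical stand-in: in-memory dictionaries play the roles of the phone's storage and the secure central server.

```python
import hashlib

# Toy stand-ins for the phone's storage and the secure central server;
# the actual ORALS architecture is not public, so this is illustrative only.
phone_storage = {}   # filename -> audio bytes on the device
secure_server = {}   # recording id -> audio bytes under patient control

def upload_and_wipe(filename):
    """Upload a recording to the secure server, then remove the local copy,
    mirroring the record -> upload -> delete-from-phone flow described above."""
    audio = phone_storage[filename]
    recording_id = hashlib.sha256(audio).hexdigest()
    secure_server[recording_id] = audio
    del phone_storage[filename]   # nothing sensitive remains on the phone
    return recording_id

phone_storage["visit_2018_06_01.wav"] = b"...audio bytes..."
rid = upload_and_wipe("visit_2018_06_01.wav")
```

The design choice worth noting is that deletion happens only after the upload succeeds, so a lost phone can never be the sole copy of a sensitive recording.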

As I see it, this is a natural outgrowth of the trailblazing OpenNotes project, which was perhaps the first effort to encourage doctors to share their notes with patients. What makes this different is that we now have the technology to make better use of what we learn. I think this is exciting.

Google, Stanford Pilot “Digital Scribe” As Human Alternative

Posted on November 29, 2017 | Written By Anne Zieger

Without a doubt, doctors benefit from the face-to-face contact with patients restored to them by scribe use; also, patients seem to like that they can talk freely without waiting for doctors to catch up with their typing. Unfortunately, though, putting scribes in place to gather EMR information can be pricey.

But what if human scribes could be replaced by digital versions, ones which interpreted the content of office visits using speech recognition and machine learning tools which automatically entered that data into an EHR system? Could this be done effectively, safely and affordably? (Side Note: John proposed something similar happening with what he called the Video EHR back in 2006.)

We don’t know the answer yet, but we may find out soon. Working with Google, a Stanford University doctor is piloting the use of digital scribes at the family medicine clinic where he works. Dr. Steven Lin is conducting a nine-month study of the concept at the clinic, which will include all nine doctors currently working there.

Patients can choose whether to participate or not. If they do opt in, researchers plan to protect their privacy by removing their protected health information from any data used in the study.

To capture the visit information, doctors will wear a microphone and record the session. Once the session is recorded, team members plan to use machine learning algorithms to detect patterns in the recordings that can be used to complete progress notes automatically.
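To make the idea concrete, here is a minimal, rule-based sketch of pulling progress-note fields out of a visit transcript. The phrase patterns and field names are hypothetical; the pilot's actual approach uses machine learning to detect such patterns rather than hand-written rules like these.

```python
import re

# Hypothetical phrase patterns; a production digital scribe would learn
# these with machine learning instead of hand-coding them.
NOTE_PATTERNS = {
    "chief_complaint": re.compile(r"complains of ([^.]+)\."),
    "plan": re.compile(r"we(?:'ll| will) ([^.]+)\."),
}

def draft_progress_note(transcript):
    """Pull candidate progress-note fields out of a visit transcript."""
    text = transcript.lower()
    note = {}
    for field_name, pattern in NOTE_PATTERNS.items():
        match = pattern.search(text)
        if match:
            note[field_name] = match.group(1).strip()
    return note

note = draft_progress_note(
    "The patient complains of chest tightness. We'll order an ECG today."
)
```

Even this crude version hints at why quality control matters: a pattern that fires on the wrong sentence silently puts the wrong text into the note.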

As one might imagine, the purpose of the pilot is to see what challenges doctors face in using digital scribes. Not surprisingly, Dr. Lin (and doubtless Google as well) hopes to develop a digital scribe tool that can be used widely if the test goes well.

While the information Stanford is sharing on the pilot is intriguing in and of itself, there are a few questions I’d hope to see project leaders answer in the future:

  • Will the use of digital scribes save money over the cost of human scribes? How much?
  • How much human technical involvement will be necessary to make this work? If the answer is “a lot” can this approach scale up to widespread use?
  • How will providers do quality control? After all, even the best speech recognition software isn’t perfect. Unless there’s some form of human oversight of the content, mistranscribed words could end up in patient records indefinitely, and that could lead to major problems.

Don’t get me wrong: I think this is a super idea, and if this approach works it could conceivably change EHR information gathering for the better. I just think it’s important that we consider some of the tradeoffs that we’ll inevitably face if it takes off after the pilot has come and gone.

Artificial Intelligence in Healthcare: Medical Ethics and the Machine Revolution

Posted on July 25, 2017 | Written By

Healthcare as a Human Right. Physician Suicide Loss Survivor. Janae writes about artificial intelligence, virtual reality, data analytics, engagement and investing in healthcare. Twitter: @coherencemed

Artificial intelligence could be the death of us all. I heard that roughly a hundred times last week through shares on Twitter. While this theme may be premature, whether we can teach machines ethics and the value of human life is a relevant question for machine learning, especially in healthcare. Can we teach a machine ethical bounds? As Elon Musk calls for laws and boundaries, I found myself wondering what semantics and mathematics would be needed to frame that question. I had a Facebook friend who told me he knew a lot about artificial intelligence and wanted to warn me about the coming robot revolution.

He did not, in fact, know a lot about artificial intelligence. He could not code, nor did he have any knowledge of mathematical theory; he was simply familiar with the worst-case scenario in which the robots we create find humanity superfluous and eliminate us. I was so underwhelmed that I messaged my friend who builds robots for Boston Dynamics and asked him about his latest project.

I was disappointed with that Facebook interaction. The disappointment was offset last week when the API changed and a cheery Facebook message asked me if I wanted to revisit an ad I had previously liked. I purposefully like ads when I like the company. Good old Facebook predictive ads. Sometimes I also comment on a picture or tag someone in an ad to see if I can change which advertisements appear in my feed.

One of the happiest moments with this feature was finding great new running socks. I commented on a friend's picture that I liked her running socks, and within an hour I saw my first ad for those very same socks. While I'm not claiming to have seen the predictive and advertising algorithms behind Facebook advertising, machine learning is behind that ad. Photo recognition through image processing can identify the socks my friend Ilana was wearing while running a half marathon. Simple keyword scans can read my positive comments about the socks, which tells the platform what I like. Pair this with photos from advertisers, and within an hour of "liking" those socks they seamlessly show up in my feed as a buying option. Are there ethical considerations in knowing exactly what my buying power, buying patterns and personal history are? Yes. Similarly, there will be ethical considerations when insurance companies can predict exactly which patients will and won't be able to pay for their healthcare. While I appreciate great running socks, I have mixed feelings about anyone assessing my ability to pay for the best medical care.

Can a machine be taught to value the best medical care and ethics? We seem to hear a lot of debate about whether machines can be taught not to kill us. Teaching a machine ethics will be complicated, even as machines learn to show how poor nutrition directly changes how long patients live. Some claim these are dangerous things to create; others say the difference will be human intuition. Can human intuition be replicated, and what application will that have for medicine? I have always considered intuition to be connections our brain recognizes without our direct awareness, so a machine should be able to learn intuition through deep learning networks.

Creating “laws” or “rules” for ethics in artificial intelligence, as Elon Musk calls for, is difficult because ethical bounds are hard to teach machines. In a recent interview, Musk claimed that artificial intelligence is the biggest risk we face as a civilization. Ethical rules and bounds are difficult even for humanity. Once, when we were looking at data patterns, trust patterns and disease prediction, someone turned to me and said: but insurance companies will use this information to deny people coverage. If they could read your genes, people would die. One weakness in teaching a machine ethics, or adding outer bounds, is that trained learning systems can get very good within a narrow domain but don't do transfer learning across domains; reasoning by analogy, for instance, is something machines are terrible at. I spoke with Adam Pease about how ontology can increase the benefits of machine learning in healthcare outcomes. Ontology creates a way to process meaning that is more robust in a semantic view. He shared his open-source ontology projects and suggested we should be talking with philosophers and computer science experts about ontology and meaning.

Can a machine learn empathy? Will a naturally learning and evolving system teach itself different bounds and hold life sacred, and how will it interpret the challenge to every doctor to “do no harm”? The medical community should come together to agree on ethical bounds and collaborate with computer scientists to assess the capacity to teach those bounds and the possible outliers in motivation.

Most natural language processing today powers applications that are pretty shallow in terms of what the machine understands. Machines are good at matching patterns: if your pattern is a bag of words, it can be connected with another bag of words within a set of documents. Many companies have done extensive work training systems that will work with patients, learning what words mean and what patterns are common in patient care. Health Navigator, for example, has done extensive work to form the clinical bounds for a telemedicine system. When patients ask about their symptoms, they get clinically relevant information paired with those symptoms, even if they use non-medical language to describe their chief complaint. Clinical bounds create a very important framework for an artificial intelligence system to process large amounts of inbound data and help triage patients to appropriate care.

With the help of Symplur Signals, I looked at ethics and artificial intelligence in healthcare and who was having conversations online in those areas. Wen Dombrowski MD, MPH led much of the discussion. Appropriately, part of what Catalaize Health does is artificial intelligence due diligence for healthcare organizations. Joe Babian, who leads the #HCLDR discussion, was also a significant contributor.

Symplur Signals looked at the stakeholders for Artificial Intelligence and Ethics in Healthcare

A “smart” system needs to be able to make the same inferences a human can. Teaching inferences and standards within semantics is the first step toward teaching a machine “intuition” or “ethics.” Predictive pattern recognition appears more developed than the ethical and meaning boundaries for sensitive patient information. We can teach a system to recognize an increased risk of suicidal behavior from rushed words, dramatically altered behavior or higher-pitched speech, but is it ethical to monitor at-risk patients through their phones? I attended a mental health provider meeting about how that knowledge would affect patients with paranoia. What are the ethical lines between protection and control? Can the meaning of those lines be taught to a machine?

I’m looking forward to seeing what healthcare providers decide those bounds should be, and how they train machines on those ontologies.

The Digital Health Biography: There’s A New Record In Town

Posted on January 18, 2017 | Written By Anne Zieger

Few of us would argue that using EMRs is not a soul-sucking ordeal for many clinicians. But is there any alternative in sight? Maybe so, according to Robert Graboyes and Dr. Darcy Nikol Bryan, who are touting a new model they’ve named a “digital health biography.”

In a new article published in Real Clear Health, Graboyes, an economist who writes on technology, and Bryan, an OB/GYN, argue that DHBs will be no less than “an essential component of 21st century healthcare.” They then go on to describe the DHB, which has several intriguing features.

Since y’all know what doctors dislike (hate?) about EMRs, I probably don’t need to list the details the pair shares on why they generate such strong feelings. But as they rightly note, EMRs may take away from patient-physician communication, may be unattractively designed and often disrupt physician workflow.

Not only that, they remind us that third parties like insurance companies and healthcare administrators seem to get far more benefits from EMR content than clinicians do. Over time, data analytics efforts may identify factors that improve care, which eventually benefits clinicians, but on a transactional level it’s hard to dispute that many physicians get nothing but aggravation from their systems.

So what makes the DHB model different? Here’s how the authors lay it out:

* Patients own the DHB and data it contains
* Each patient should have only one DHB
* Patient DHBs should incorporate data from all providers, including PCPs, specialists, nurse practitioners, EDs, pharmacists and therapists
* The DHB should incorporate data from wearable telemetry devices like FitBits, insulin pumps and heart monitors
* The DHB should include data entered by patients, including family history, recollections of childhood illness, fears and feelings
* DHB data entry should use natural language rather than structured queries whenever possible
* The DHB should leverage machine learning to extract and organize output tailored to individual providers or the patient
* In the DHB model, input and output software are separated into different categories, with vendors competing for both ends separately on functionality and aesthetics
* Common protocols should minimize the difficulty and cost of shifting from one input or output vendor to the other
* The government should not mandate or subsidize any specific vendors or data requirements
* DHB usage should be voluntary, forcing systems to keep proving their worth or risk being dumped
* Clinical applications shouldn’t be subservient to reimbursement considerations

To summarize, the DHB model calls for a single, patient-controlled, universal record incorporating all available patient health data, including both provider and patient inputs. It differs significantly from existing EMR models, particularly in separating data input from output and cutting vendors out of the database business.
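A minimal sketch of the DHB's structural ideas might look like the following. The class and method names are my own invention for illustration; the key points from the authors' list are that the patient owns one record, entries flow in from many sources in natural language, and the input side is separated from audience-specific output views.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    source: str   # e.g. "PCP", "pharmacist", "FitBit", "patient"
    text: str     # natural-language content, per the authors' preference

@dataclass
class DigitalHealthBiography:
    owner: str                       # the patient owns the record
    entries: list = field(default_factory=list)

    def add(self, source, text):
        """Input side: any provider, device or the patient writes entries."""
        self.entries.append(Entry(source, text))

    def view_for(self, audience):
        """Output side: an audience-specific view. A real output vendor might
        apply machine learning here; this sketch just filters by source."""
        return [e.text for e in self.entries
                if audience == "patient" or e.source != "patient"]

dhb = DigitalHealthBiography(owner="Jane Doe")
dhb.add("PCP", "BP 128/82 at annual physical.")
dhb.add("patient", "Felt dizzy after gardening on Tuesday.")
```

Because input and output are separate methods on a patient-owned record, competing vendors could in principle supply either side independently, which is exactly the competition the authors envision.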

As described, this model would eliminate the need for separate institutions to own and maintain their own EMRs, which would of course stand existing health IT structures completely on their head. Instead of dumping information into systems owned by providers, the patient would own and control the DHB, perhaps on a server maintained by an independent intermediary.

Unfortunately, it’s hard to imagine a scenario in which providers would be willing to give up control to this great an extent, even if this model were more effective. Still, the article makes some provocative suggestions which are worth discussing. Do you think this approach is viable?

The Value Of Pairing Machine Learning With EMRs

Posted on January 5, 2017 | Written By Anne Zieger

According to Leonard D’Avolio, the healthcare industry has tools at its disposal, known variously as AI, big data, machine learning, data mining and cognitive computing, which can turn the EMR into a platform which supports next-gen value-based care.

Until we drop the fuzzy rhetoric around these tools – which have offered superior predictive performance for two decades, he notes – it’s unlikely we’ll generate full value from using them. But if we take a hard, cold look at the strengths and weaknesses of such approaches, we’ll get further, says D’Avolio, who wrote on this topic recently for The Health Care Blog.

D’Avolio, a PhD who serves as assistant professor at Harvard Medical School, is also CEO and co-founder of AI vendor Cyft, and clearly has a dog in this fight. Still, my instinct is that his points on the pros and cons of machine learning/AI/whatever are reasonable and add to the discussion of EMRs’ future.

According to D’Avolio, some of the benefits of machine learning technologies include:

  • The ability to consider many more data points than traditional risk scoring or rules-based models
  • The fact that machine learning-related approaches don’t require that data be properly formatted or standardized (a big deal given how varied such data inflows are these days)
  • The fact that if you combine machine learning with natural language processing, you can mine free text created by clinicians or case managers to predict which patients may need attention
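The third point, mining free text to predict which patients may need attention, can be illustrated with a deliberately simple sketch. The risk terms and weights below are hypothetical; a real system such as the ones D'Avolio describes would learn them from labeled outcomes rather than assign them by hand.

```python
import re

# Hypothetical risk vocabulary with hand-assigned weights; a real model
# would learn these from labeled outcome data.
RISK_WEIGHTS = {"missed": 2, "fall": 3, "confused": 3, "eviction": 2}

def risk_score(note):
    """Score a free-text case-manager note by summing weights of risk terms."""
    tokens = re.findall(r"[a-z]+", note.lower())
    return sum(RISK_WEIGHTS.get(t, 0) for t in tokens)

def flag_patients(notes, threshold=3):
    """Return ids of patients whose notes score at or above the threshold."""
    return [pid for pid, note in notes.items()
            if risk_score(note) >= threshold]

flagged = flag_patients({
    "pt01": "Missed two appointments; seemed confused on the phone.",
    "pt02": "Doing well, walking daily.",
})
```

Note that this works on raw clinician text as-is, which echoes the second bullet: nothing here requires the notes to be formatted or standardized first.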

On the flip side, he notes, this family of technologies comes with a major limitation as well. To date, he points out, such platforms have only been accessible to experts, as interfaces are typically designed for use by specially trained data scientists. As a result, the results of machine learning processes have traditionally been delivered as recommendations, rather than datasets or modules which can be shared around an organization.

While D’Avolio doesn’t say this himself, my guess is that the new world he heralds – in which machine learning, natural language processing and other cutting-edge technologies are common – won’t be arriving for quite some time.

Of course, for healthcare organizations with enough resources, the future is now, and cases like the predictive analytics efforts going on within Paris public hospitals and Geisinger Health System make the point nicely. Clearly, there’s much to be gained from performing advanced, fluid analyses of EMR data and related resources. (Geisinger has already seen multiple benefits from its investments, though its data analytics rollout is relatively new.)

On the other hand, independent medical practices, smaller and rural hospitals and ancillary providers may not see much direct impact from these projects for quite a while. So while D’Avolio’s enthusiasm for marrying EMRs and machine learning makes sense, the game is just getting started.