
E-Patient Update: Alexa Nowhere Near Ready For Healthcare Prime Time

Posted on February 9, 2018 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Folks, I just purchased an Amazon Echo (Alexa) and I’ll tell you up front that I love it. I’m enjoying the heck out of summoning my favorite music with a simple voice command, ordering up a hypnotherapy session when my back hurts and tracking Amazon packages with a four-word request. I’m not sure all of these options are important but they sure are fun to use.

Being who I am, I’ve also checked out what, if anything, Alexa can do to address health issues. I tested it out with some simple but important comments related to my health. I had high hopes, but its performance turned out to be spotty. My statements included:

“Alexa, I’m hungry.”
“Alexa, I have a migraine.”
“Alexa, I’m lonely.”
“Alexa, I’m anxious.”
“Alexa, my chest hurts.”
“Alexa, I can’t breathe.”
“Alexa, I need help.”
“Alexa, I’m suicidal.”
“Alexa, my face is drooping.”

Running these informal tests made it pretty clear what the Echo was and wasn’t set up to do. In short, it offered brief but appropriate responses to statements involving certain conditions (such as feeling suicidal) but drew a blank when confronted with some serious symptoms.

For example, when I told the Echo that I had a migraine, she (yes, it has a female voice and I’ve given it a gender) offered general but helpful suggestions on how to deal with headaches, while warning me to call 911 if the pain suddenly got much worse. She also responded appropriately when I said I was lonely or that I needed help.

On the other hand, some of the symptoms I asked about drew the response “I don’t know about that.” I realize that Alexa isn’t a substitute for a clinician and it can’t triage me, but even a blanket suggestion that I call 911 would’ve been nice.

It’s clear that part of the problem is the Echo’s reliance on “skills,” third-party apps that plug into its core systems. It can’t offer much in the way of information or referral unless you invoke one of these skills with an “open” command. (The Echo can tell you a joke, though. A lame joke, but a joke nonetheless.)
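For the curious, here’s roughly what sits behind one of those skills. Below is a minimal sketch of an AWS Lambda handler for a hypothetical health-tips skill; the skill name, intents and responses are all invented, but the request/response shape follows the Alexa Skills Kit JSON format.

```python
# Minimal sketch of an Alexa skill backend (an AWS Lambda handler).
# Hypothetical "health helper" skill: the intents and responses are
# invented, but the JSON envelope follows the Alexa Skills Kit format.

def build_response(speech_text, end_session=True):
    """Wrap plain-text speech in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # Fires on "Alexa, open health helper."
        return build_response("Health helper here. What's bothering you?",
                              end_session=False)
    if request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "HeadacheIntent":
            return build_response(
                "Try resting in a dark, quiet room and staying hydrated. "
                "If the pain suddenly gets much worse, call 911."
            )
        if intent == "EmergencyIntent":
            return build_response("That sounds urgent. Please call 911 now.")
    return build_response("I don't know about that.")
```

Until a skill like this is explicitly invoked, the device falls back on its generic “I don’t know about that.”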

While I’m sure I missed some things, the selection of skills seems relatively thin for such a prominent platform, particularly one backed by a giant like Amazon. That’s especially true of health-related skills. Visualize where chatbots and consumer-oriented AI were a couple of years ago and you’ll get the picture.

Ultimately, my guess is that physicians will prescribe Alexa alongside connected glucose meters, smart scales and the like, but not very soon. As my colleague John Lynn points out, information shared via the Echo isn’t confidential, as Alexa isn’t HIPAA-compliant, and that’s just one of many difficulties the healthcare industry will need to overcome before deploying this otherwise nifty device.

Still, like John, I have little doubt that the Echo and its siblings will eventually support medical practice in one form or another. It’s just a matter of how quickly the platform moves from an embryonic stage to a fully-fledged technology ecosystem linked with the excellent tools and apps that already exist.

Partners AI System Gives Clinicians Better Information

Posted on January 25, 2018 | Written By Anne Zieger


While HIT professionals typically understand AI technology, clinicians may not. After all, using AI usually isn’t part of their job, so they can be forgiven for ignoring all of the noise and hype around it.

Aware of this problem, Partners Connected Health and Hitachi have teamed up to create an AI-driven process that isolates data physicians can use. The new approach, dubbed ‘explainable AI,’ is designed to list the key factors the system relied upon in making its projections, making it easier for physicians to make relevant care decisions.

Explainable AI, as the two organizations use the term, refers not only to the work being done on the Partners system, but also to a broader vision in which machines can explain their decisions and actions to human users. Ultimately, explainable AI should help users trust and use AI tools effectively, according to a Hitachi statement.

Initially, Partners will use the AI system to predict the risk of 30-day readmissions for patients with heart failure. Preventing such readmissions can potentially save $7,000 per patient per year.

The problem is, how can organizations like Partners make AI results useful to physicians? Most AI-driven results are something of a black box for clinicians, who don’t know what data contributed to the score. After all, the algorithm analyzes about 3,000 variables that might factor into readmissions, drawing on both structured and unstructured data. Without help, there’s little chance physicians can isolate ways to improve their own performance.

But in this case, the AI system offers much better information. Having calculated the predictive score, it isolates factors that physicians can address directly as part of the course of care. It also identifies which patients would be the best candidates for a post-discharge program focused on preventing readmissions.
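Partners and Hitachi haven’t published their method, but a minimal sketch shows the general idea. In the toy example below, a logistic regression trained on synthetic data predicts readmission risk, and for one patient the per-feature contributions (coefficient times feature value) are ranked to show what drove that score. The feature names are invented, and real explainable-AI systems use far more sophisticated attribution.

```python
# Toy sketch of "explainable" readmission prediction on synthetic data.
# Not the Partners/Hitachi system: the features are invented, and the
# per-patient attribution is simply coefficient * standardized value.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
features = ["prior_admissions", "ejection_fraction", "missed_med_refills",
            "age", "bnp_level"]

# Synthetic cohort: 1,000 heart failure patients, standardized features.
X = rng.normal(size=(1000, len(features)))
true_w = np.array([1.2, -0.8, 0.9, 0.3, 0.7])             # hidden "true" effects
y = (X @ true_w + rng.normal(size=1000) > 0).astype(int)  # 30-day readmission

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]
print(f"AUC vs. actual readmissions: {roc_auc_score(y, risk):.2f}")

# The "explainable" part: rank the factors behind one patient's score.
patient = X[0]
contributions = model.coef_[0] * patient
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"{name:>20}: {c:+.2f}")
```

A production system weighing some 3,000 structured and unstructured variables would need heavier machinery (SHAP values, for instance), but what the physician sees is the same in spirit: a ranked list of actionable factors behind each score.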

All of this is well and good, but will it actually deliver the results that Partners hoped for? As it turns out, the initial results of a pilot program are promising.

To conduct the pilot, the Partners Connected Health Innovation team drew on real-life data from heart failure patients under its care. The patients were part of the Partners Connected Cardiac Care Program, a remote monitoring and education program focused on managing their care effectively and reducing the risk of hospitalization.

The test compared the results calculated by the AI system with real-life results drawn from about 12,000 heart failure patients hospitalized and discharged from the Partners HealthCare network in 2014 and 2015. As it turned out, there was a high correlation between actual patient readmissions and the level predicted by the system. Next, Partners will share with physicians a list of the variables that played the biggest role in the AI’s projections. It’s definitely a move in the right direction.

AI Project Could Prevent Needless Blindness

Posted on January 11, 2018 | Written By Anne Zieger


At this point, you’re probably sick of hearing about artificial intelligence and the benefits it may offer as a diagnostic tool. Even so, there are still some AI stories worth telling, and the following is one of them.

Yes, IBM Watson Health recently had a well-publicized stumble when it attempted to use “cognitive computing” to detect cancer, but that may have more to do with the pressure on Watson to produce quick results from something that could take a decade to complete. Other AI-based diagnostic projects seem to be making far more progress.

Consider the following, for example. According to a story in WIRED magazine, Google is embarking on a project which could help eye doctors detect diabetic retinopathy and prevent blindness, basing its efforts on technologies it already has in-house.

The tech giant reported last year that it had trained image recognition algorithms to detect the tiny aneurysms that suggest a patient is in the early stages of retinopathy. The system uses the same technology that allows Google’s image search and photo storage services to distinguish between various objects and people.
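Google’s published retinopathy work relied on a large Inception-style network trained on a big corpus of retinal photographs graded by ophthalmologists. The toy sketch below, with an untrained model and a random input, only illustrates the shape of the approach: fundus image in, probability of referable retinopathy out.

```python
# Toy sketch of image-based retinopathy screening in PyTorch.
# Google's production model is a large Inception-style network trained on
# graded fundus photographs; this tiny untrained CNN just shows the shape
# of the approach: image in, probability of referable disease out.
import torch
import torch.nn as nn

class TinyRetinaNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 128x128 -> 64x64
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.AdaptiveAvgPool2d(1),               # global average pool
        )
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(x))  # P(referable retinopathy)

model = TinyRetinaNet()
fundus = torch.rand(1, 3, 128, 128)  # stand-in for one retinal photograph
print(f"Predicted risk: {model(fundus).item():.2f}")  # ~0.5, it's untrained
```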

To take things to the next step, Google partnered with the Aravind Eye Care System, a network of eye hospitals based in India. Aravind helped Google develop the retinal screening system by contributing images it already had on hand to train Google’s image-parsing algorithms.

The two have just finished a clinical study of the technology in India. Now they are working to bring it into routine use with patients, according to a Google executive who spoke at a recent conference.

The Google exec, Lily Peng, a product manager with the Google Brain AI research group, said tools like Google’s could take over screening and leave doctors free for more specialized work. “There is not enough expertise to go around,” she said. “We need to have a specialist working on treating people who are sick.”

Obviously, we’ll learn far more about the potential of Google’s retinal scanning tech once Aravind begins using it on patients every day. In the meantime, however, one can only hope that it emerges as a viable and safe tool for overstressed eye doctors worldwide. The AI revolution may be overhyped, but projects like this can have an enormous impact on a large group of patients, and that can’t be bad.

AI and Machine Learning Humor – Fun Friday

Posted on December 15, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Two of the biggest buzzwords in healthcare right now are AI and machine learning. The problem is that both are real and will impact healthcare in really significant ways, yet everyone is applying them to everything. Once an industry starts doing that, the words lose their meaning.

That said, I still couldn’t help but laugh at this AI and machine learning cartoon (credit to Andrew Richards for sharing it with me).

The sad reality is that this is what many companies are doing. They look for the answers they want instead of looking at what answers the data provides. That’s a hard concept for many to grasp, and it takes a real expert to do the latter effectively.

Google, Stanford Pilot “Digital Scribe” As Human Alternative

Posted on November 29, 2017 | Written By Anne Zieger


Without a doubt, doctors benefit from the face-to-face contact with patients that scribes restore to them; patients, for their part, seem to like being able to talk freely without waiting for the doctor to catch up on typing. Unfortunately, though, putting scribes in place to gather EMR information can be pricey.

But what if human scribes could be replaced by digital versions, ones which interpreted the content of office visits using speech recognition and machine learning tools which automatically entered that data into an EHR system? Could this be done effectively, safely and affordably? (Side Note: John proposed something similar happening with what he called the Video EHR back in 2006.)

We don’t know the answer yet, but we may find out soon. Working with Google, a Stanford University doctor is piloting the use of digital scribes at the family medicine clinic where he works. Dr. Steven Lin is conducting a nine-month study of the concept at the clinic, which will include all nine doctors currently working there.

Patients can choose whether to participate or not. If they do opt in, researchers plan to protect their privacy by removing their protected health information from any data used in the study.

To capture the visit information, doctors will wear a microphone and record the session. Once the session is recorded, team members plan to use machine learning algorithms to detect patterns in the recordings that can be used to complete progress notes automatically.
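Stanford hasn’t described the algorithms it will use, but the overall pipeline is easy to picture: transcribe the audio, then route each utterance into the section of the progress note where it belongs. Below is a minimal, keyword-based sketch of that second step; the section keywords are invented stand-ins for what would really be trained machine learning models.

```python
# Minimal sketch of the "digital scribe" idea: route transcript sentences
# into progress-note sections. A real system would pair a speech-to-text
# service with trained models; these keywords are invented stand-ins.
import re
from collections import defaultdict

SECTION_KEYWORDS = {
    "subjective": ["complains of", "reports", "denies", "feels"],
    "objective": ["blood pressure", "temperature", "exam shows", "lungs"],
    "assessment": ["consistent with", "likely", "diagnosis"],
    "plan": ["prescribe", "follow up", "refer", "order"],
}

def draft_note(transcript: str) -> dict:
    """Assign each sentence to the first note section whose keywords match."""
    note = defaultdict(list)
    for sentence in re.split(r"(?<=[.!?])\s+", transcript.strip()):
        lowered = sentence.lower()
        section = next(
            (s for s, kws in SECTION_KEYWORDS.items()
             if any(kw in lowered for kw in kws)),
            "unclassified",
        )
        note[section].append(sentence)
    return dict(note)

visit = ("Patient reports shortness of breath on exertion. "
         "Blood pressure is 142 over 90. Findings are consistent with "
         "worsening heart failure. I will order a BNP and follow up in two weeks.")
print(draft_note(visit))
```

Even this toy version hints at the hard part: sentences that match no pattern, or the wrong one, still need a human eye, which leads directly to the quality-control question below.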

As one might imagine, the purpose of the pilot is to see what challenges doctors face in using digital scribes. Not surprisingly, Dr. Lin (and doubtless Google as well) hopes to develop a digital scribe tool that can be used widely if the test goes well.

While the information Stanford is sharing on the pilot is intriguing in and of itself, there are a few questions I’d hope to see project leaders answer in the future:

  • Will the use of digital scribes save money over the cost of human scribes? How much?
  • How much human technical involvement will be necessary to make this work? If the answer is “a lot,” can this approach scale up to widespread use?
  • How will providers do quality control? After all, even the best voice recognition software isn’t perfect. Unless there’s some form of human content oversight, misrecognized words could end up in patient records indefinitely – and that could lead to major problems.

Don’t get me wrong: I think this is a super idea, and if this approach works it could conceivably change EHR information gathering for the better. I just think it’s important that we consider some of the tradeoffs that we’ll inevitably face if it takes off after the pilot has come and gone.

EHRs and Keyboarding: Is There an Answer?

Posted on November 28, 2017 | Written By

When Carl Bergman isn’t rooting for the Washington Nationals or searching for a Steelers bar, he’s Managing Partner of EHRSelector.com. For the last dozen years, he’s concentrated on EHR consulting and writing. He spent the 80s and 90s as an itinerant project manager doing his small part for the dot-com bubble. Prior to that, Bergman served a ten-year stretch in the District of Columbia government as a policy and fiscal analyst, a role he recently repeated for a Council member.

One of the givens of EHR life is that users, especially physicians, spend excessive time keying into EHRs. The implication is that much keyboarding is due to excessive data demands, poor usability or general app cussedness. There’s no end of studies that support this. For example, a recent study at the University of Wisconsin-Madison’s Department of Family Medicine and Community Health in the Annals of Family Medicine found that:

Primary care physicians spend more than one-half of their workday, nearly 6 hours, interacting with the EHR during and after clinic hours.

The study broke out time spent on various tasks and found, unsurprisingly, that documentation and chart review took up almost half of it.

Figure 1. Percent of Physicians’ Time Spent on the EHR

This study is unique among those looking at practitioners and EHRs. The authors note:

Although others have suggested work task categories for primary care, ours is the first taxonomy proposed to capture routine clinical work in EHR systems.

They also make the point that they captured physician EHR use, not total time spent with patients. Other studies have reached similar conclusions. The consensus: there’s too much time spent keyboarding and not enough spent one-on-one with the patient. So, what can be done? Here, I think, are the choices:

  1. Do Nothing. Assume that this is a new world and tough it out.
  2. Use Scribes. Hire scribes to do the keyboarding for physicians.
  3. Make EHRs Easier. Improve EHRs’ usability.
  4. Make EHRs Smarter. Adapt EHRs to physicians’ needs through artificial intelligence (AI) solutions.
  5. Offload to Patients. Use patient apps to input data, rather than physician keyboarding.

Examining the Alternatives

1. Do Nothing. Making no change in either the systems or practitioners’ approach means accepting the current state as the new normal. It doesn’t mean that no changes will occur, only that they will continue at an incremental, perhaps glacial, pace. What this says more broadly is that the focus on the keyboard, per se, is wrong. The question is not what’s going in so much as what’s coming out compared to the old, manual systems. For example, when PCs first became office standards, keyboarding largely displaced pen-and-paper notation, but PCs produced great increases in both the volume and quality of office work, and that quickly became the new norm. That hasn’t happened with EHRs. There’s an assumption that the old days were better. Doing nothing acknowledges that you can’t go back. Instead, it takes a stoic approach and assumes things will get better eventually, so just hang in there.

2. Scribes. The idea of using a scribe is simple. As a doctor examines a patient, the scribe enters the details. Scribes allow the physician to offload the keyboarding to someone with medical knowledge who understands their documentation style. There is no question that scribes can decrease physician keyboarding. This approach is gaining in popularity and is marketed by various medical societies and scribe services companies.

However, using scribes raises a host of questions. How are they implemented? The most important, I think, is how a scribe fits into a practice’s workflow. For example, how does an attending review a scribe’s notes to confirm that they convey the attending’s clinical findings? The attending is the responsible party, and anything that degrades or muddies that oversight is a danger to patient safety. Then there are questions about patient privacy, and about just how passive an actor a scribe really is.

If you’re looking for dispositive answers, you’ll have to wait. There are many studies showing scribes improve physician productivity, but few about the quality of the product.

3. Make EHRs Easier. Improving EHR usability is the holy grail of health IT and about as hard to find. ONC’s usability failings are well known and ongoing, but it isn’t alone. Vendors know that usability is something they can claim without having to prove. That doesn’t mean usability and its good friend productivity aren’t important; they are, and improvements are grossly overdue. As AHRQ recently found:

In a review of EHR safety and usability, investigators found that the switch from paper records to EHRs led to decreases in medication errors, improved guideline adherence, and (after initial implementation) enhanced safety attitudes and job satisfaction among physicians. However, the investigators found a number of problems as well.

These included usability issues, such as poor information display, complicated screen sequences and navigation, and the mismatch between user workflow in the EHR and clinical workflow. The latter problems resulted in interruptions and distraction, which can contribute to medical error.

Additional safety hazards included data entry errors created by the use of copy-forward, copy-and-paste, and electronic signatures, lack of clarity in sources and date of information presented, alert fatigue, and other usability problems that can contribute to error. Similar findings were reported in a review of nurses’ experiences with EHR use, which highlighted the altered workflow and communication patterns created by the implementation of EHRs.

Improving EHR usability is not a metaphysical undertaking. What’s wrong and what works have been known for years. What’s lacking is the regulatory and corporate will to act. If all EHRs had to demonstrate their practical usability, users would rejoice. Your best bet here may be to become active in your EHR vendor’s user group. You may not get direct relief, but you’ll have a place, albeit small, at the table. Otherwise, given vendor and regulatory resistance to usability improvements, you’re better off pushing for a new EHR or writing your own EHR front end.

4. Make EHRs Smarter. If Watson can outsmart Ken Jennings, can’t artificial intelligence make EHRs smarter? As one of my old friends used to tell our city council, “The answer is a qualified yes and a qualified no.”

AI takes many forms, and EHRs can and do use it. Primarily, these are dictation and transcription assistant systems built on natural language processing (NLP): scribes without bodies, in effect. NLP takes a text stream, either live or from a recording, parses it, and puts it in the EHR in its proper place. These systems combine the freedom of dictation with AI’s ability to create clinical notes, allowing a user, the theory maintains, to keep patient contact while creating the note, thus solving the keyboarding dilemma.
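To make “parses it and puts it in its proper place” concrete, here’s a much-simplified sketch that pulls medication orders out of dictated text with a regular expression and drops them into structured, EHR-style fields. Real clinical NLP engines are vastly more capable; the pattern and field names here are invented for illustration.

```python
# Simplified illustration of NLP "parse and place": extract medication
# orders from dictated text into structured fields. Real clinical NLP is
# far more capable; this regex and these field names are invented.
import re

ORDER_PATTERN = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+(?P<dose>\d+)\s*(?P<unit>mg|mcg|units)\s+"
    r"(?P<freq>once daily|twice daily|every \d+ hours)",
    re.IGNORECASE,
)

def extract_orders(dictation):
    """Return each medication order as a structured record."""
    return [m.groupdict() for m in ORDER_PATTERN.finditer(dictation)]

dictation = ("Start lisinopril 10 mg once daily and continue "
             "metformin 500 mg twice daily.")
for order in extract_orders(dictation):
    print(order)
# {'drug': 'lisinopril', 'dose': '10', 'unit': 'mg', 'freq': 'once daily'}
# {'drug': 'metformin', 'dose': '500', 'unit': 'mg', 'freq': 'twice daily'}
```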

The best-known NLP system is Nuance’s Dragon Medical One, and several EHR vendors have integrated Dragon or similar systems into their offerings. As with most complex technical systems, though, NLP implementation requires a full-scale tech effort. Potential barriers include implementation or training shortcuts, workflow integration, and staff commitment. NLP’s ability to quickly gather information and place it is a given. What’s not so certain is its cost-effectiveness or the quality of its product. In those respects, it is similar to scribes and subject to much the same scrutiny.

One interesting and wholly unexpected NLP result came from a study by University of Washington researchers. The study group used VGEENS, an Android-based NLP dictation app that captured notes at the bedside. Here’s what startled the researchers:

…Intern and resident physicians were averse to creating notes using VGEENS. When asked why this is, their answers were that they have not had experience with dictation and are reluctant to learn a new skill during their busy clinical rotations. They also commented that they are very familiar with creating notes using typing, templates, and copy paste.

The researchers forgot that medical dictation is just that, a skill, and one that doesn’t come without training and practice. Dictation belongs to older generations; for today’s residents, keyboarding is the given.

5. Offload to Patients. I hadn’t thought of this one until I saw an article in the Harvard Business Review. In a wide-ranging review, the authors saw physicians as victims of medical overconsumption and information overload:

In our recent studies of how patients responded to the introduction of a portal allowing them to e-mail health concerns to their care team, we found that the e-mail system that was expected to substitute for face-to-face visits actually increased them. Once patients began using the portal, many started sharing health updates and personal news with their care teams.

One of their solutions is to offload data collection and monitoring to patient apps:

Mightn’t we delegate some of the screening work to patients themselves? Empowering customers with easy-to-use tools transformed the tax reporting and travel industries. While we don’t expect patients to select what blood-pressure medications to be on, we probably can offload considerable amounts of the monitoring and perhaps even some of the treatment adjustment to them. Diabetes has long been managed this way, using forms of self-care that have advanced as self-monitoring technology has improved.

This may be where we’re going; however, it ignores the already crowded app field, where every app seems to have its own data protocol. Health apps are a good way to capture and incorporate health data, and they may be a good way to offload physicians’ keyboarding, but right now they are a tower of protocol Babel. This solution is about as practical as saying that the way to curb double entry of data in EHRs is to just make them interoperable.

What’s an EHR User to Do?

Each current approach to reducing keyboarding has problems, but none of them is fatal. I think physician keyboarding is a real problem, and one that is subject to amelioration, if not solution.

For example, here’s Nordic’s Joel Martin on EHR usability:

… In reality, much of this extra work is a result of expanded documentation and quality measure requirements, security needs, and staffing changes. As the healthcare industry shifts its focus to value-based reimbursement and doing more with less, physician work is increasing. That work often takes place in the EHR, but it isn’t caused by the EHR’s existence.

Blaming the EHR without optimizing its use won’t solve the problem. Instead, we should take a holistic view of the issues causing provider burnout and use the system to create efficiencies, as it’s designed to do.  

The good news is that optimizing the EHR is very doable. There are many things that can be done to make it easier for providers to complete tasks in the EHR, and thereby lower the time spent in the system.

Broadly speaking, these opportunities fall into two categories.

First, many organizations have not implemented all the time-saving features that EHR vendors have created. There are features that dramatically lower the time required to complete EHR tasks for common, simple visits (for instance, upper respiratory infections). We rarely see organizations that have implemented these features at the time of our assessments, and we’re now working with many to implement them.

In addition, individual providers are often not taking advantage of features that could save them time. When we look at provider-level data, we typically see fewer than half of providers using speed and personalization features, such as features that let them rapidly reply to messages. These features could save 20 to 30 minutes a day on their own, but we see fewer than 50 percent of providers using them.

Optimization helps physicians use the EHR the way it was intended – in real-time, alongside patient care, to drive better care, fewer mistakes, and higher engagement. Ultimately, we envision a care environment where the EHR isn’t separate from patient care, but rather another tool to provide it. 

What does that mean for scribes or NLP? Recognize they are not panaceas, but tools. The field is constantly changing. Any effort to address keyboarding should look at a range of independent studies to identify their strengths and pitfalls. Note not only the major findings but also what skills, apps, etc., they required. Then, recognize the level of effort a good implementation always requires. Finally, as UW’s researchers found, surprises are always lurking in major shake-ups.

Join us for this week’s #HITsm chat on Using Technology to Fight EHR Burnout to discuss this topic more.

IBM Works To Avoid FDA Oversight For Its Watson Software

Posted on October 25, 2017 | Written By Anne Zieger


I live in DC, where the fog of politics permeates the air. Maybe that’s given me more of a taste for political inside baseball than most. But lest the following story seem to fall into that category, put that aside – it actually involves some moves that could affect all of us.

According to Stat News, IBM has been lobbying hard to have its “cognitive computing” (read: AI) superbrain Watson exempted from FDA oversight. Earlier this year, eight of its employees were registered to lobby Congress on this subject, the site reports.

Through its Watson Health subsidiary, IBM has joined the crowded clinical decision support arena, a software category which could face FDA regulation soon.

Historically, the agency has dragged its heels on issuing CDS review guidelines. In fact, as of late last year, one-third of CDS developers were abandoning product development due to their uncertainty about the outcome of the FDA’s deliberations, according to a survey by the Clinical Decision Support Alliance.

Now, the agency is poised to issue new guidelines clarifying which software types will be reviewed and which will be exempt under the provisions of the 21st Century Cures Act. Naturally, IBM wants Watson to fall into the latter category.

According to Stat News, IBM spent $26.4 million lobbying Congress between 2013 and June 2017. While IBM didn’t disclose how much of that was spent on the CDS regulation issue, it did tell the site that it was “one of many organizations, including patient and physician groups, that supported a common sense regulatory distinction between low-risk software and inherently higher-risk technologies.”

IBM also backed a bill known as the Software Act, which was opposed by the FDA office in charge of device regulation but backed enthusiastically by many software makers. The bill, which was first introduced in 2013, specified that health software would be exempted from FDA regulation unless it provided patient-specific recommendations and posed a significant risk to patient safety. It didn’t pass.

Now, executives with the computing giant will soon learn what fruit their lobbying efforts bore. The FDA said it intends to issue guidance documents explaining how it will implement the exemptions in the 21st Century Cures Act during the first quarter of next year.

No matter what direction it takes, the new FDA guidance is likely to be controversial, as key regulatory language in the 21st Century Cures Act remains open to interpretation. The law includes exemptions for advisory systems, but only if they allow a “health care professional to independently review the basis for such recommendations.” Lawyers representing software makers told Stat News that no one’s sure what the phrase “independently review the basis” means, and that’s a big deal.

Regardless, it’s the FDA’s job to figure out what key provisions in the new law mean. In the meantime, the wait will be a bit stressful even for giants like IBM. Big Blue has made a huge bet on Watson Health, and if the FDA doesn’t rule in its favor, it might need a new strategy.

Virtual Reality Offers New Options For Healthcare Data Analysis

Posted on September 21, 2017 | Written By Anne Zieger


I don’t know about you, but I’ve always been interested in virtual reality. In fact, given my long-time gaming habit, I’ve been waiting with bated breath for the time when VR-enabled games become part of the consumer mainstream.

Until I read the following article, however, I hadn’t given much thought to how VR technology could be used outside of the consumer sphere. In the article, the author makes a compelling case that VR tools may be the next frontier in big data analytics.

The author’s arguments include the following:

  • VR allows big data users to analyze data dynamically, letting them “reach out and touch” the data they are studying.
  • Using an approach known as immersive data visualization, coupled with haptic or kinesthetic interfaces, users can understand data intuitively and discover patterns.
  • VR allows users to view and manipulate huge amounts of data simply by looking at them. “VR enables you to capably stack relevant data, pare it and create visual cues so that you can cross-refer instantly,” the author writes.
  • With VR tools, users can interact naturally with data. Rather than glancing at reports, or reviewing spreadsheets, they can “manipulate data streams, push windows around, press buttons and actually walk around data worlds,” the article says.
  • VR makes multi-dimensional data analysis simpler. By using your hands and hearing, you can pin down the subject, location and significance of specific data sources.

Though these concepts have been percolating for quite a while, I haven’t found any robust use cases for VR-based big data analytics either in or outside of healthcare. (They may well exist, and if you know of one, I’d love to hear about it.)

Still, a wide range of healthcare-related VR applications are emerging, including both inpatient care and medical education. I don’t think it will be long now before smart health IT leaders like yourselves begin to apply this approach to healthcare data visualization.

Ultimately, it seems likely that some of the healthcare data technologies now in play will converge with VR applications. By combining immersive or partially-immersive VR technologies with AI and big data analytics tools, healthcare organizations will be able to transform their data-guided outcomes efforts far more easily. And future use cases abound.

Hospitals could use VR to model throughput within the ED and, by layering clinical and transactional data over traffic statistics, do a much better job of boosting efficiency.

I imagine health insurers combining claims records and clinical performance data, then using VR as a next-gen tool to predict how value-based care contracting will play out in certain markets.

We may even see a time when surgeons wear VR glasses and, when perplexed in mid-procedure, can summon big data-driven feedback on options that improve patient survival.

Of course, VR is just a set of technologies, and it can’t offer answers to questions we don’t know to ask. However, by helping people use their intuition more effectively, VR-based data analysis may extract new and valuable insights from existing data sets. It may take a while for this to happen, but I believe that it will.

AI Making Doctors Better Is the Right Approach

Posted on September 6, 2017 | Written By John Lynn


This summer Bertalan Meskó, MD, PhD posted 10 ways that AI (Artificial Intelligence) could make him a better doctor. Here are his 10 ways:

1) Eradicate waiting time
2) Prioritize my emails
3) Find me the information I need
4) Keep me up-to-date
5) Work when I don’t
6) Help me make hard decisions rational
7) Help patients with urgent matters reach me
8) Help me improve over time
9) Help me collaborate more
10) Do administrative work

This is a great list of ways that AI can make doctors better. No doubt there are even more ways that we’ll discover as AI continues to improve. However, I love this list because it looks at AI from the appropriate point of reference. AI is going to be something that improves the doctor, not replaces the doctor.

I know that many people have talked about how technology is going to replace the doctor, most famously Vinod Khosla. However, you have to remember that Vinod is an investor, so he needs to drum up companies with ambitious visions. I believe his statement was as much about finding companies that will push the bounds of healthcare as it was a prediction for healthcare’s future. It no doubt created a lot of fear for doctors, though.

The reality is that some aspects of what a doctor does will be replaced by technology, but as the list above illustrates, that can be a very good thing for doctors.

AI is coming to healthcare. In some ways, it’s already here. However, the AI that’s coming today isn’t about replacing the doctor, it’s about making the doctor better. Honestly, any doctor that can’t embrace this idea is a doctor that shouldn’t have a medical license.

Should doctors be cautious about how quickly they adopt the technology, and should they take the time to make sure the AI won’t have adverse impacts on their patients? Absolutely. However, there’s a tipping point past which not using AI will be far more damaging to patients than the risks that are likely to make headlines and scare many. Doctors need to be driven more by what’s best for their patients than by fear-inducing headlines.

AI will make doctors’ lives better in a wide variety of ways. It won’t replace the doctor but will enhance the doctor. That’s exciting!