
Real-world Health AI Applications in 2018 and Beyond

Posted on August 29, 2018 | Written By

The following is a guest blog post by Inga Shugalo, Healthcare Industry Analyst at Itransition.

In contrast to legacy systems, which are simply algorithms performing strictly defined tasks, artificial intelligence can extend the task itself, generating new insights from the information fed to it. Current healthcare AI is powerful enough to take on complex challenges such as automated diagnosis, medical image analysis, virtual patient assistance, and risk analysis, supporting health specialists in making swifter, better-informed decisions.

In 2016, Frost & Sullivan predicted that the healthcare AI market would reach $6.6 billion by 2021. Meanwhile, a 2017 Accenture report estimated that AI could save the U.S. healthcare economy $150 billion annually by 2026. “At hyper-speed, AI is re-wiring our modern conception of healthcare delivery,” researchers from Accenture say.

Midway through 2018, the industry is already hinting at its course for further AI expansion. Spoiler alert: as with blockchain, AR, VR, and any other kind of innovative custom medical software, the adoption challenges persist.

Current and prospective AI directions in healthcare

Diagnosis support

One of the most fascinating and valuable directions for AI is its ability to help providers diagnose patients more accurately and more quickly. It is thrilling to see 2018 erupt with healthcare organizations adopting artificial intelligence and creating unprecedented examples of assisted diagnostics.

Geisinger specialists applied AI to analyze CT scans of patients’ heads and detect intracranial hemorrhage early. Intracranial hemorrhage is a life-threatening form of internal bleeding, affecting about 50,000 patients per year, with 47% dying within 30 days.

Geisinger was able to automatically pinpoint and prioritize the cases of intracranial hemorrhage, focusing the attention of radiologists on them and thus allowing for timely interventions. This approach reduced the time to diagnosis by 96%.

Mayo Clinic currently uses IBM Watson’s superpowers to match patients with suitable clinical trials. The clinic’s officials note that only about 5% of U.S. patients enroll in clinical trials, which significantly hinders clinical research and innovation in cancer therapies. At the same time, manual patient-trial matching is a time-consuming process.

Watson runs this process in the background, comparing patients’ conditions with available trials and suggesting appropriate trials for providers and patients to consider including in a treatment plan. Since its implementation in 2016, Watson has delivered an increase of about 80% in enrollment in Mayo’s breast cancer trials.

Patient risk analysis

“…Healthcare is one of the most important fields AI is going to transform,” Google CEO Sundar Pichai noted during the Google I/O 2018 keynote. Last year, the event presented Google AI, a “collection of our teams and efforts to bring the benefits of AI to everyone.”

In 2018, Google is using its AI to tap into critical patient risks, such as mortality, readmission, and prolonged length of stay (LOS). Cooperating with UC San Francisco, The University of Chicago Medicine, and Stanford Medicine, the company analyzed over 46 billion anonymized, retrospective EHR data points collected from more than 216,000 adult patients hospitalized for at least 24 hours at two US academic medical centers.

The deep learning model built by researchers reviewed each patient’s chart as a timeline, from its creation to the point of hospitalization. This data allowed clinicians to make various predictions on patient health outcomes, including prolonged length of stay, 30-day unplanned readmission, upcoming in-hospital mortality, and even a patient’s final discharge diagnosis. Remarkably, the model achieved an accuracy level that significantly outperformed traditional predictive models.
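Google has not released the model itself, but a deliberately simplified sketch can make the “chart as a timeline” idea concrete: treat each patient’s record as an ordered sequence of event codes and feed it to a small recurrent network that outputs a readmission probability. Everything below, from the vocabulary size to the toy batch, is invented for illustration and is in no way the researchers’ actual architecture.

```python
# Illustrative sketch only (not Google's model): represent each patient's EHR
# as a time-ordered sequence of event codes and score 30-day readmission risk
# with a small LSTM. Vocabulary size, dimensions, and data are all made up.
import torch
import torch.nn as nn

EVENT_VOCAB_SIZE = 5000          # hypothetical number of distinct EHR event codes
EMBED_DIM, HIDDEN_DIM = 64, 128

class ReadmissionModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(EVENT_VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, 1)

    def forward(self, event_codes):
        # event_codes: (batch, seq_len) integer IDs, ordered from chart creation
        # up to the point of hospitalization.
        x = self.embed(event_codes)
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

# Toy usage: two patients, each represented by a short sequence of event codes.
model = ReadmissionModel()
batch = torch.tensor([[12, 87, 403, 19, 2250], [5, 5, 77, 1500, 68]])
print(model(batch))  # per-patient readmission probabilities in [0, 1]
```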

According to Pichai, “If you go and analyze over 100,000 data points per patient, more than any single doctor could analyze, we can actually quantitatively predict the chance of readmission 24 to 48 hours earlier than traditional methods. It gives doctors time to act.”

Of course, researchers don’t claim that their approach is ready for implementation in clinical settings, but they are looking forward to collaborating with providers to test this model further. Hopefully, we will see field trials and, who knows, even early adoption in 2019.

EHRs “on steroids”

HIMSS18 was all about artificial intelligence and machine learning. Surprisingly, all major EHR vendors – Allscripts, Cerner, athenahealth, Epic, and eClinicalWorks – promised to include AI in upcoming iterations of their platforms.

At the event, Epic announced a new partnership with Nuance to integrate Nuance’s AI-powered conversational virtual assistant into the Epic EHR workflow. In particular, the assistant will let health specialists access patient information and lab results, record patient vitals, check schedules, and manage patient appointments by voice.

Similarly, eClinicalWorks puts AI to work on voice control but also prioritizes telemedicine, pop health, and clinical decision support. According to the company’s CEO Girish Navani, “We spent the last decade putting data in EHRs. The next decade is about intelligence and creating inferences that improve care outcomes. We can have the computer do things for the clinician to make them aware of actions they can take.” The new EHR’s launch is expected in late 2018 or early 2019.

Athenahealth has also added a virtual assistant to its EHR to improve mobile connectivity and welcomed NoteSwift’s AI-based Samantha technology to enhance clinical workflows with robust automation. Samantha can parse free text and natural language, process and structure the information, assign ICD-10, SNOMED, or CPT codes, and prepare e-prescriptions and orders.
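NoteSwift has not published how Samantha works under the hood, but the code-assignment step it describes can be illustrated with a deliberately crude keyword lookup. The note text and the two-entry code map below are invented; a real coding engine relies on full clinical NLP and complete terminologies rather than string matching.

```python
# Toy illustration only (not NoteSwift's Samantha): map phrases found in a
# free-text note to candidate ICD-10 codes with a simple keyword lookup.
NOTE = "Follow-up for type 2 diabetes; blood pressure remains elevated, essential hypertension."

KEYWORD_TO_ICD10 = {
    "type 2 diabetes": "E11.9",        # type 2 diabetes mellitus without complications
    "essential hypertension": "I10",   # essential (primary) hypertension
}

suggested_codes = {
    code: phrase
    for phrase, code in KEYWORD_TO_ICD10.items()
    if phrase in NOTE.lower()
}
print(suggested_codes)  # {'E11.9': 'type 2 diabetes', 'I10': 'essential hypertension'}
```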

Pre-existing challenges for healthcare AI adoption

Gartner has predicted that by 2020, 50% of organizations will lack the AI and data literacy skills needed to achieve business value. Certainly, a lot of healthcare organizations will land in that 50%, and there are two reasons for that.

Regulations and security concerns are the main pre-existing challenges that delay practically any technology adoption in healthcare, and they bring an array of new challenges along with them.

First, an AI application or device has to be approved by the FDA. The catch is that the existing process focuses on the hardware or the way the algorithms work, but not on the data they should or would interact with.

Speaking of data, another challenge is security breaches. Safeguarding sensitive information is a must for healthcare because patient data is a constant target for identity theft and reimbursement fraud. In Accenture’s new report, nearly 25% of healthcare execs admitted experiencing “adversarial AI behaviors, like falsified location data or bot fraud.” While this doesn’t mean AI threatens patient data, such claims do increase the concerns related to its adoption.

Still, artificial intelligence is growing in healthcare and will continue to do so. Maybe not at rocket speed, but the most recent cases show consistent improvements in major care delivery gaps. Healthcare AI’s future appears bright.

About Inga Shugalo
Inga Shugalo is a Healthcare Industry Analyst at Itransition. She focuses on Healthcare IT, highlighting the industry’s challenges and the technology solutions that tackle them. Inga’s articles explore the diagnostic potential of healthcare IoT, the opportunities of precision medicine, robotics and VR in healthcare, and more.

AI-Based Tech Could Speed Patient Documentation Process

Posted on August 27, 2018 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

A researcher with a Google AI team, Google Brain, has published a paper describing how AI could help physicians complete patient documentation more quickly. The author, software engineer Peter Liu, contends that AI technology can speed up patient documentation considerably by predicting its content.

On my initial reading of the paper, it wasn’t clear to me what advantage this has over pre-filling templates or even allowing physicians to cut-and-paste text from previous patient encounters. Still, judge for yourself as I outline what author Liu has to say, and by all means, check out the write-up.

In its introduction, the paper notes that physicians spend a great deal of time and energy entering patient notes into EHRs, a process which is not only taxing but also demoralizing for many physicians. To choose just one of countless data points underscoring this conclusion, Liu cites a 2016 study noting that physicians spend almost two hours on administrative work for every hour of patient contact.

However, it might be possible to reduce the number of hours doctors spend on this dreary task. Google Brain has been working on technologies which can speed up the process of documentation, including a new medical language modeling approach. Liu and his colleagues are also looking at how to represent an EHR’s mix of structured and unstructured text data.

The net of all of this? Google Brain has been able to create a set of systems which, by drawing on previous patient records, can predict most of the content a physician will use the next time they see that patient.

The heart of this effort is the MIMIC-III dataset, which contains the de-identified electronic health records of 39,597 patients from the ICU of a large tertiary care hospital. The dataset includes patient demographic data, medications, lab results, and notes written by providers. The system includes AI capabilities which are “trained” to predict the text physicians will use in their latest patient note.
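The Google Brain models are far more sophisticated than anything that fits in a few lines, but a toy example conveys the basic idea of predicting note content from a patient’s prior notes. Everything here, the notes, the bigram “model,” and the suggestion function, is invented for illustration.

```python
# Toy illustration of note-content prediction (not the Google Brain approach):
# build a bigram model from a patient's prior notes and use it to suggest a
# likely continuation of a new note. All text is made up.
from collections import Counter, defaultdict

prior_notes = [
    "patient with type 2 diabetes on metformin blood pressure controlled",
    "type 2 diabetes stable continue metformin follow up in three months",
]

# Count word-to-next-word transitions across all prior notes.
transitions = defaultdict(Counter)
for note in prior_notes:
    words = note.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word][next_word] += 1

def suggest_continuation(seed, length=6):
    """Greedily extend `seed` with the most frequent next word seen so far."""
    words = [seed]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(suggest_continuation("type"))  # e.g. "type 2 diabetes on metformin blood pressure"
```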

In addition to making predictions, the Google Brain AI seems to have been able to pick out some forms of errors in existing notes, including patient ages and drug names, as well as providing autocorrect options for corrupted words.

By way of caveats, the paper warns that the research used only data generated within 24 hours of the current note content. Liu points out that while this may be a wide enough range of information for ICU notes, as things happen fast there, it would be better to draw on data representing larger windows of time for non-ICU patients. In addition, Liu concedes that it won’t always be possible to predict the content of notes even if the system has absorbed all existing documentation.

However, none of these problems are insurmountable, and Liu understandably describes these results as “encouraging,” but that’s also a way of conceding that this is only an experimental conclusion. In other words, these predictive capabilities are not a done deal by any means. That being said, it seems likely that his approach could be valuable.

I am left with at least one question, though. If the Google Brain technology can predict physician notes with great fidelity, how does that differ from having the physician cut and paste previous notes on their own? I may be missing something here, because I’m not a software engineer, but I’d still like to know how these predictions improve on existing workarounds.

Let Vendors Lead The Way? Are You Nuts?

Posted on August 13, 2018 | Written by Anne Zieger

Every now and then, a vendor pops up and explains how the next-gen EHR should work. It’s easy to ask yourself why anyone should listen, given that you’re the one dishing out the care. But bear with me. I’ve got a theory working here.

First of all, let’s start with a basic assumption, that EHRs aren’t going to stay in their current form much longer. We’re seeing them grow to encompass virtually every form of medical data and just about every transaction, and nobody’s sure where this crazy process is going to end.

Who’s going to be our guide to this world? Vendors. Yup, the people who want to sell you stuff. I will go out on a limb and suggest that at this point in the health data revolution, they’re in a better position to predict the future.

Sure, that probably sounds obnoxious. While vendors may employ reputable, well-intended physicians, the vast majority of those physicians don’t provide care themselves anymore. They’re rusty. And unless they’re in charge of the company they serve, their recommendations may be overruled by people who have never touched a patient.

On the flip side, though, vendor teams have the time and money to explore emerging technologies, not just the hip stuff but the ones that will almost certainly be part of medical practice in the future. The reality is that few practicing physicians have time to keep up with their progress. Heck, I spend all day researching these things, and I’m going nuts trying to figure out which tech has gone from a nifty idea to a practical one.

Given that vendors have the research in hand, it may actually make sense to let them drive the car for a while. Honestly, they’re doing a decent job of riding the waves.

In fact, it seems to me that the current generation of health data management systems is coming closer to where it should be. For example, far more of what I’d call “enhanced EHR” systems include care management tools, integrating support for virtual visits and modules that help practices pull together MIPS data. As always, they aren’t perfect (for example, few ambulatory EHRs are flexible enough to add new functions easily), but they’re getting better.

I guess what I’m saying is that even if you have no intention of investing in a given product, you might want to see where developers’ ideas are headed. Health data platforms are at an especially fluid stage right now, tossing blockchain, big data analytics, AI and genomic data together and creating new things. Let’s give developers a bit of slack and see what they can do to tame these beasts.

Physicians Lack IT Tools Needed For Value-Based Care

Posted on July 23, 2018 | Written by Anne Zieger

A new study sponsored by Quest Diagnostics has concluded that progress toward value-based care has slowed because physicians lack the IT tools they need.  In fact, the survey of health plan executives and physicians found that both groups see the progress of VBC as backsliding, with 67% reporting that the U.S. still has a fee-for-service system in place.

The study, which was conducted by Regina Corso Consulting between April 26 and May 7, 2018, included 451 respondents, 300 of whom were primary care physicians in private practice. The other 151 were health plan executives holding director-level positions.

More than half (57%) of health plan respondents said that a lack of tools is preventing doctors from moving ahead with VBC, compared with 45% last year. Also, 72% of physicians and health plan leaders said that doctors don’t have all the information they need about their patients to proceed with VBC.

A minority of doctors (39%) reported that EHRs provide all the data they need to care for their patients, though 86% said they could provide better care if their EHRs were interoperable with other technologies. Eighty-eight percent of physicians and health plan execs said that such data can provide insights that prescribing and claims data typically can’t.

Overall, survey respondents agreed that making better use of existing health IT tools beats spending more. Fifty-three percent said that optimizing existing health information technology made sense, compared with 25% recommending investing in some new information technology and just 11% suggesting that large information technology infrastructure investments were a good idea.

Survey respondents said that a lack of interoperability between health IT systems was the biggest barrier to investing in new technology, followed by the perception that it would create more work while producing little or no benefit.

On the other hand, respondents named several technologies which could help speed VBC adoption. They include bioinformatics (73%), AI (68%), SMART app platform (65%), FHIR (64%), machine learning (64%), augmented reality (51%) and blockchain (47%). In its commentary, the report noted that SMART app platform use and FHIR might offer near-term benefits, as they allow companies to plug new technologies into existing platforms.

Bottom line: new ideas and technologies can make a difference. Eighty-nine percent of health plan execs and physicians said that healthcare organizations need to be more innovative and integrate more options and tools that support patient care.

Some Alexa Health “Skills” Don’t Comply With Amazon Medical Policies

Posted on July 18, 2018 | Written by Anne Zieger

It’s becoming predictable: A company offering an AI assistant for scheduling medical appointments thinks that consumers want to use Amazon’s Alexa to schedule appointments with their doctor. The company, Nimblr, is just one of an expanding number of developers that see Alexa integration as an opportunity for growth.

However, Nimblr and its peers have stepped into an environment where the standards for health applications are a bit slippery. That’s no fault of theirs, but it might affect the future of Amazon Alexa health applications, which can ultimately affect every developer that works with the Alexa interface.

Nimblr’s Holly AI has recently begun to let patients book and reschedule appointments using Alexa voice commands. According to its prepared statement, Nimblr expects to integrate with other voice command platforms as well, but Alexa is clearly an important first step.

The medical appointment service is integrated with a range of EHRs, including athenahealth, Care Cloud and DrChrono.  To use the service, doctors sign up and let Holly access their calendar and EHR.

Patients who choose to use the Amazon interface go through a scripted dialogue allowing them to set, change or cancel an appointment with their doctor. The patient uses Alexa to summon Holly, then tells Holly the doctor with whom they’d like to book an appointment. A few commands later, the patient has booked a visit. No need to sit at a computer or peer at a smartphone screen.
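As a way of picturing that scripted dialogue, here is a minimal slot-filling sketch of the pattern such voice flows typically follow. It is not Nimblr’s code and does not use the Alexa SDK; the prompts, answers, and doctor name are all invented.

```python
# Toy slot-filling dialogue (not Nimblr's implementation): collect the doctor,
# day, and time from simulated patient utterances, then "book" the visit.
def booking_dialogue(answers):
    """answers: iterator of simulated patient utterances, in order."""
    prompts = ["Which doctor would you like to see?",
               "What day works for you?",
               "Morning or afternoon?"]
    booking = {}
    for prompt, key in zip(prompts, ["doctor", "day", "time"]):
        print(f"Holly: {prompt}")
        booking[key] = next(answers)
        print(f"Patient: {booking[key]}")
    return f"Booked {booking['time']} on {booking['day']} with {booking['doctor']}."

print(booking_dialogue(iter(["Dr. Rivera", "next Tuesday", "morning"])))
```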

For Amazon, this kind of agreement is the culmination of a long-term strategy. According to an article featured in Quartz, Alexa is now in roughly 20 million American homes and owns more than 70% of the US market for voice-driven assistants. Recently, Amazon has made some power moves in healthcare, including the acquisition of online pharmacy PillPack. It has also worked to build connections with healthcare partners, including third-party developers that can enrich the healthcare options available to Alexa users.

Most of the activity that drives Alexa comes from “skills,” which resemble smartphone apps and are made available on the Alexa skills store by independent developers. According to Quartz, the store hosted roughly 900 skills in its “health and fitness” category as of mid-April.

In theory, externally developed health skills must meet three criteria: they may not collect personal information from customers, they cannot imply through their names or descriptions that they are life-saving, and they must include a disclaimer stating that they are not medical devices and that users should consult their providers if they believe they need medical attention.

However, according to Quartz, as of mid-April there were 65 skills in the store that didn’t provide the required disclaimer. If so, this raises questions as to how stringently Amazon supervises the skills uploaded by its third-party developers.

Let me be clear that I’m not criticizing Nimblr in any way. As far as I know, the company is doing everything the right way. My only critiques would be that it’s not clear to me why its Alexa tool is much more useful than a plain old portal, and that, if the demo video is any indication, the interactions between Alexa and the consumer are a trifle awkward. On the whole, it seems like a useful tool and will likely get better over time.

However, with a growing number of healthcare developers featuring apps in Alexa’s skills store, it will be worth watching to see whether Amazon enforces its own rules. If not, reputable developers like Nimblr might not want to go there.

AMA Hopes To Drive Healthcare AI

Posted on July 6, 2018 | Written by Anne Zieger

Last month, the AMA adopted a new policy setting standards for its approach to the use of AI. Now, the question is how much leverage it will actually have over AI’s use in the practice of medicine.

In its policy statement, the trade group said it would work to set standards on how AI can improve patient outcomes and physicians’ professional satisfaction. It also hopes to see physicians get a say in the development, design, validation, and implementation of healthcare AI tools.

More specifically, the AMA said it would promote the development of well-designed, clinically-validated standards for healthcare AI, including that they:

  • Are designed and evaluated using best practices in user-centered design
  • Address bias and avoid introducing or exacerbating healthcare disparities when testing or deploying new AI tools
  • Safeguard patients’ and other individuals’ privacy and preserve security and integrity of personal information

That being said, I find myself wondering whether the AMA will have the chance to play a significant role in the evolution of AI tools. It certainly has a fair amount of competition.

It’s certainly worth noting that the organization is knee-deep in the development of digital health solutions. Its ventures include the MATTER incubator, which brings physicians and entrepreneurs together to solve healthcare problems; biotech incubator Sling Health, which is run by medical students; Health2047, which helps healthcare organizations and entrepreneurs work together; and Xcertia, an AMA-backed non-profit which has developed a mobile health app framework.

On the other hand, the group certainly has a lot of competition for doctors’ attention. Over the last year or two, the use of AI in healthcare has gone from a nifty idea to a practical one, and many health systems are deploying platforms that integrate AI features. These platforms include tools helping doctors collaborate with care teams, avoid errors and identify oncoming crises within the patient population.

If you’re wondering why I’m bringing all this up, here’s why. Ordinarily, I wouldn’t bother to discuss an AMA policy statement — some of them are less interesting than watching grass grow — but in this case, it’s worth thinking about for a bit.

When you look at the big picture, it matters who drives the train when it comes to healthcare AI. If physicians take the lead, as the AMA would obviously prefer, we may be able to avoid the deployment of user-hostile platforms like many of the first-generation EHRs.

If hospitals end up dictating how physicians use AI technology, it might mean that we see another round of kludgy interfaces, lousy decision-support options and time-consuming documentation extras which will give physicians an unwanted feeling of deja-vu. Not to mention doctors who refuse to use it and try to upend efforts to use AI in healthcare.

Of course, some hospitals will have learned from their mistakes, but I’m guessing that many may not, and things could go downhill from there. Regardless, let’s hope that AI tools don’t become the next albatross hung around doctors’ necks.

This Futurist Says AI Will Never Replace Physicians

Posted on June 6, 2018 | Written by Anne Zieger

Most of us would agree that AI technology has amazing — almost frightening — potential to change the healthcare world. The thing is, no one is exactly sure what form those changes will take, and some fear that AI technologies will make their work obsolete. Doctors, in particular, worry that AI will undercut their decision-making process or even take their jobs.

Their fears are not entirely misplaced. Vendors in the healthcare AI world insist that their products are intended solely to support care, but of course, they need to say that. It’s not surprising that doctors fret as AI software starts to diagnose conditions, triage patients and perform radiology readings.

But according to medical futurist Bertalan Mesko, MD, Ph.D., physicians have nothing to worry about. “AI will transform the meaning of what it means to be a doctor; some tasks will disappear while others will be added to the work routine,” Mesko writes. “However, there will never be a situation where the embodiment of automation, either a robot or an algorithm, will take the place of a doctor.”

In the article, Mesko lists five reasons why he takes this position:

  1. Empathy is irreplaceable: “Even if the array of technologies will offer brilliant solutions, it would be difficult for them to mimic empathy,” he argues. “… We will need doctors holding our hands while telling us about life-changing diagnoses, their guide to therapy and their overall support.”
  2. Physicians think creatively: “Although data, measurements and quantitative analytics are a crucial part of a doctor’s work…setting up a diagnosis and treating a patient is not a linear process. It requires creativity and problem-solving skills that algorithms and robots will never have,” he says.
  3. Digital technologies are just tools: “It’s only doctors together with their patients who can choose [treatments], and only physicians can evaluate whether the smart algorithm came up with potentially useful suggestions,” Mesko writes.
  4. AI can’t do everything: “There are responsibilities and duties which technologies cannot perform,” he argues. “… There will always be tasks where humans will be faster, more reliable — or cheaper than technology.”
  5. AI tech isn’t competing with humans: “Technology will help bring medical professionals towards a more efficient, less error-prone and more seamless healthcare,” he says. “… The physician will have more time for the patient, the doctor can enjoy his work, and healthcare will move in an overall positive direction.”

I don’t have much to add to his analysis. I largely agree with what he has to say.

I do think he may be wrong about the world needing physicians to make all diagnoses – after all, a sophisticated AI tool could access millions of data points in making patient care recommendations. However, I don’t think the need for human contact will ever go away.

Recording Doctor-Patient Visits Shows Great Potential

Posted on June 1, 2018 | Written by Anne Zieger

Doctors, do you know how you would feel if a patient recorded their visit with you? Would you choose to record them if you could? You may soon find out.

A new story appearing in STAT suggests that both patients and physicians are increasingly recording visits, with some doctors sharing the audio recording and encouraging patients to check it out at home.

The idea behind this practice is to help patients recall their physician’s instructions and adhere to treatment plans. According to one source, patients forget between 40% and 80% of physician instructions immediately after leaving the doctor’s office. Sharing such recordings could increase patient recall substantially.

What’s more, STAT notes, emerging AI technologies are pushing this trend further. Using speech recognition and machine learning tools, physicians can automatically transcribe recordings, then upload the transcription to their EMR.

Then, health IT professionals can analyze the texts using natural language processing to gain more knowledge about specific diseases. Such analytics are likely to be even more helpful than processes focused on physician notes, as voice recordings offer more nuance and context.
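Production systems rely on full clinical NLP, but even a crude keyword pass over a transcript hints at the kind of structure that can be pulled out of these recordings. The transcript and cue words below are invented for illustration.

```python
# Toy illustration only: surface instruction-like sentences from a visit
# transcript with a simple keyword heuristic. Real systems use clinical NLP.
import re

transcript = (
    "Your blood pressure looks better. Take lisinopril 10 mg once daily. "
    "I want you to walk thirty minutes a day. Schedule a follow-up in six weeks."
)

INSTRUCTION_CUES = re.compile(r"\b(take|schedule|avoid|follow[- ]up|i want you to)\b", re.I)

instructions = [
    sentence.strip()
    for sentence in re.split(r"(?<=[.!?])\s+", transcript)
    if INSTRUCTION_CUES.search(sentence)
]
print(instructions)  # the three instruction sentences from the transcript
```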

The growth of such recordings is being driven not only by patients and their doctors, but also by researchers interested in how to best leverage the content found in these recordings.

For example, Paul Barr, a researcher and professor at the Dartmouth Institute for Health Policy and Clinical Practice, is leading a project focused on creating an artificial intelligence-enabled system allowing for routine audio recording of conversations between doctors and patients.

The project, known as ORALS (Open Recording Automated Logging System), will develop and test an interoperable system to support routine recording of patient medical visits. The fundamental assumption behind this effort is that keeping such recordings on smartphones is inappropriate, because if the patient loses their phone, their private healthcare information could be exposed.

To avoid this potential privacy breach, researchers are storing voice information on a secure central server allowing both patients and caregivers to control the information. The ORALS software offers both a recording and playback application designed for recording patient-physician visits.

Using the system, patients record visits on their phone, upload them to a secure server, and then have the recordings automatically removed from the phone. In addition, ORALS offers a web application allowing patients to view, annotate and organize their recordings.
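The researchers describe this workflow rather than publish client code, but the upload-then-delete step might look roughly like the sketch below. The endpoint URL, bearer token, and file name are hypothetical placeholders, not part of ORALS.

```python
# Hypothetical sketch of the client-side flow: upload a visit recording to a
# secure server and delete the local copy only after the server confirms receipt.
import os
import requests

UPLOAD_URL = "https://recordings.example.org/api/visits"  # invented endpoint

def upload_and_remove(recording_path: str, auth_token: str) -> None:
    with open(recording_path, "rb") as audio:
        response = requests.post(
            UPLOAD_URL,
            files={"audio": audio},
            headers={"Authorization": f"Bearer {auth_token}"},
            timeout=30,
        )
    response.raise_for_status()   # keep the local file if the upload failed
    os.remove(recording_path)     # the recording now lives only on the server

# Example (with an invented file and token):
# upload_and_remove("visit_2018_06_01.m4a", auth_token="...")
```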

As I see it, this is a natural outgrowth of the trailblazing OpenNotes project, which was perhaps the first initiative to encourage doctors to share their notes with patients. What makes this different is that we now have the technology to make better use of what we learn. I think this is exciting.

Competition Heating Up For AI-Based Disease Management Players

Posted on May 21, 2018 | Written by Anne Zieger

Working in collaboration with a company offering personal electrocardiograms to consumers, researchers with the Mayo Clinic have developed a technology that detects a dangerous heart arrhythmia. In so doing, the two are joining the race to improve disease management using AI technology, a contest which should pay the winner off handsomely.

At the recent Heart Rhythm Scientific Sessions conference, Mayo and vendor AliveCor shared research showing that by augmenting AI with deep neural networks, they can successfully identify patients with congenital Long QT Syndrome (LQTS) even when their ECG is normal. The results were achieved by applying AI to lead I of a 12-lead ECG.

While Mayo needs no introduction, AliveCor might. While it started out selling a heart rhythm product available to consumers, AliveCor describes itself as an AI company. Its products include KardiaMobile and KardiaBand, which are designed to detect atrial fibrillation and normal sinus rhythms on the spot.

In their statement, the partners noted that as many as 50% of patients with genetically confirmed LQTS have a normal QT interval on a standard ECG. It’s important to recognize underlying LQTS, as such patients are at increased risk of arrhythmias and sudden cardiac death. They also note that the inherited form affects 160,000 people in the US and causes 3,000 to 4,000 sudden deaths in children and young adults every year. So obviously, if this technology works as promised, it could be a big deal.

Aside from its medical value, what’s interesting about this announcement is that Mayo and AliveCor’s efforts seem to be part of a growing trend. For example, the FDA recently approved a product known as IDx-DR, the first AI technology capable of independently detecting diabetic retinopathy. The software can make basic recommendations without any physician involvement, which sounds pretty neat.

Before approving the software, the FDA reviewed data from parent company IDx, which performed a clinical study of 900 patients with diabetes across 10 primary care sites. The software accurately identified the presence of diabetic retinopathy 87.4% of the time and correctly identified those without the disease 89.5% of the time. I imagine an experienced ophthalmologist could beat that performance, but even virtuosos can’t get much higher than 90%.

And I shouldn’t forget the 1,000-ton presence of Google, which according to analyst firm CBInsights is making big bets that the future of healthcare will be structured data and AI. Among other things, Google is focusing on disease detection, including projects targeting diabetes, Parkinson’s disease and heart disease, among other conditions. (The research firm notes that Google has actually started a limited commercial rollout of its diabetes management program.)

I don’t know about you, but I find this stuff fascinating. Still, the AI future is fuzzy. Clearly, it may do some great things for healthcare, but even Google is still in the experimental stage. Don’t worry, though. If you’re following AI developments in healthcare, you’ll have something new to read every day.

AI Software Detects Diabetic Retinopathy Without Physician Involvement

Posted on April 27, 2018 | Written by Anne Zieger

The FDA has approved parent company IDx to market IDx-DR, the first AI technology which can independently detect diabetic retinopathy. The software can make basic recommendations without any physician involvement.

Before approving the software, the FDA reviewed data from a clinical study of 900 patients with diabetes across 10 primary care sites. IDx-DR accurately identified the presence of diabetic retinopathy 87.4% of the time and accurately identified those without the disease 89.5% of the time. In other words, it’s not perfect but it’s clearly pretty close.

To use IDx-DR, providers upload digital images of a diabetic patient’s eyes, taken with a retinal camera, to the IDx cloud server. Once the images reach the server, IDx-DR uses an AI algorithm to analyze them, then tells the user whether the patient has anything more than mild retinopathy.

If it finds significant retinopathy, the software suggests referring the patient to an eye care specialist for an in-depth diagnostic visit. On the other hand, if the software doesn’t detect retinopathy, it recommends a standard rescreen in 12 months.
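IDx has not published its algorithm, but the referral logic described above reduces to a simple decision rule. The sketch below uses invented names and stands in for the step that follows the image analysis, not the analysis itself.

```python
# Minimal sketch of the triage step (not IDx's code): given a retinopathy grade
# returned by an image-analysis model, choose referral or routine rescreening.
from enum import Enum

class Retinopathy(Enum):
    NONE = 0
    MILD = 1
    MORE_THAN_MILD = 2

def triage(grade: Retinopathy) -> str:
    if grade is Retinopathy.MORE_THAN_MILD:
        return "Refer to an eye care specialist for an in-depth diagnostic visit."
    return "No more-than-mild retinopathy detected; rescreen in 12 months."

print(triage(Retinopathy.MORE_THAN_MILD))
print(triage(Retinopathy.NONE))
```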

Apparently, this is the first time the FDA has allowed a company to sell a device which screens and diagnoses patients without involving a specialist. We can expect further AI approvals by the FDA in the future, according to Commissioner Scott Gottlieb, MD. “Artificial Intelligence and Machine Learning hold enormous promise for the future of medicine,” Gottlieb tweeted. “The FDA is taking steps to promote innovation and support the use of artificial intelligence-based medical devices.”

The question this announcement must raise in the minds of some readers is “How far will this go?” For both personal and clinical reasons, doctors are likely to worry about this sort of development. After all, putting aside any impact it may have on their careers, they may be concerned that patients will get short-changed.

They probably don’t need to worry, though. According to an article in the MIT Technology Review, a recent research project done by Google Cloud suggests that AI won’t be replacing doctors anytime soon.

Jia Li, who leads research and development at Google Cloud, told a conference audience that while applying AI to radiology imaging might be a useful tool, it can automate only a small part of radiologists’ work. All it will be able to do is help doctors make better judgments and make the process more efficient, Li told conference attendees.

In other words, it seems likely that for the foreseeable future, tools like IDx-DR and its cousins will help doctors automate tasks they didn’t want to do anyway. With any luck, using them will both save time and improve diagnoses. Not at all scary, right?