
Competition Heating Up For AI-Based Disease Management Players

Posted on May 21, 2018 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Working in collaboration with a company offering personal electrocardiograms to consumers, researchers with the Mayo Clinic have developed a technology that detects a dangerous heart arrhythmia. In so doing, the two are joining the race to improve disease management using AI technology, a contest that should pay off handsomely for the winner.

At the recent Heart Rhythm Scientific Sessions conference, Mayo and vendor AliveCor shared research showing that by augmenting AI with deep neural networks, they can successfully identify patients with congenital Long QT Syndrome (LQTS) even if their ECG is normal. The results were achieved by applying the AI to lead I of a 12-lead ECG.
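The underlying model hasn’t been published in detail, so purely as an illustration of the general approach, here is a minimal sketch of a 1-D convolutional network that takes a fixed-length, single-lead ECG strip and outputs a probability of concealed LQTS. The input length, layer sizes and all names are assumptions for illustration, not the Mayo/AliveCor model.

```python
# A minimal sketch, NOT the Mayo/AliveCor model: a 1-D convolutional network
# that classifies a fixed-length lead-I ECG strip as showing concealed LQTS or not.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SAMPLES = 5000  # e.g., a 10-second strip sampled at 500 Hz (assumption)

def build_lqts_classifier() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(SAMPLES, 1)),       # one channel: lead I voltage
        layers.Conv1D(16, kernel_size=16, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(32, kernel_size=16, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of concealed LQTS
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Synthetic stand-in data, purely to show the expected shapes.
    x = np.random.randn(8, SAMPLES, 1).astype("float32")
    y = np.random.randint(0, 2, size=(8, 1))
    model = build_lqts_classifier()
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x[:1], verbose=0))
```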

While Mayo needs no introduction, AliveCor might. Though it started out selling a heart rhythm product available to consumers, AliveCor describes itself as an AI company. Its products include KardiaMobile and KardiaBand, which are designed to detect atrial fibrillation and normal sinus rhythms on the spot.

In their statement, the partners noted that as many as 50% of patients with genetically-confirmed LQTS have a normal QT interval on standard ECG. It’s important to recognize underlying LQTS, as such patients are at increased risk of arrhythmias and sudden cardiac death. They also note that the inherited form affects 160,000 people in the US and causes 3,000 to 4,000 sudden deaths in children and young adults every year. So obviously, if this technology works as promised, it could be a big deal.

Aside from its medical value, what’s interesting about this announcement is that Mayo and AliveCor’s efforts seem to be part of a growing trend. For example, the FDA recently approved a product known as IDx-DR, the first AI technology capable of independently detecting diabetic retinopathy. The software can make basic recommendations without any physician involvement, which sounds pretty neat.

Before approving the software, the FDA reviewed data from IDx, the software’s developer, which performed a clinical study of 900 patients with diabetes across 10 primary care sites. The software accurately identified the presence of diabetic retinopathy 87.4% of the time and correctly identified those without the disease 89.5% of the time. I imagine an experienced ophthalmologist could beat that performance, but even virtuosos can’t get much higher than 90%.

And I shouldn’t forget the 1,000-ton presence of Google, which according to analyst firm CBInsights is making big bets that the future of healthcare will be structured data and AI. Among other things, Google is focusing on disease detection, including projects targeting diabetes, Parkinson’s disease and heart disease, among other conditions. (The research firm notes that Google has actually started a limited commercial rollout of its diabetes management program.)

I don’t know about you, but I find this stuff fascinating. Still, the AI future remains fuzzy. Clearly, it may do some great things for healthcare, but even Google is still in the experimental stage. Don’t worry, though. If you’re following AI developments in healthcare, you’ll have something new to read every day.

AI Software Detects Diabetic Retinopathy Without Physician Involvement

Posted on April 27, 2018 | Written By Anne Zieger

The FDA has given IDx permission to market IDx-DR, the first AI technology that can independently detect diabetic retinopathy. The software can make basic recommendations without any physician involvement.

Before approving the software, the FDA reviewed data from a clinical study of 900 patients with diabetes across 10 primary care sites. IDx-DR accurately identified the presence of diabetic retinopathy 87.4% of the time and accurately identified those without the disease 89.5% of the time. In other words, it’s not perfect but it’s clearly pretty close.
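To put those two percentages in context, sensitivity and specificity alone don’t tell you how trustworthy an individual positive result is; that depends on how common retinopathy is in the screened population. Here’s a small worked example — the prevalence figure is an assumption for illustration, not a number from the IDx study.

```python
# Convert published sensitivity/specificity into predictive values under an
# assumed prevalence. Only the 87.4%/89.5% figures come from the article.
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (positive predictive value, negative predictive value)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    true_neg = specificity * (1 - prevalence)
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Assume ~30% of screened diabetic patients have retinopathy (illustrative only).
ppv, npv = predictive_values(0.874, 0.895, 0.30)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # roughly 78% and 94% under this assumption
```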

To use IDx-DR, providers upload digital images of a diabetic patient’s eyes taken with a retinal camera to the IDx cloud server. Once the images reach the server, IDx-DR uses an AI algorithm to analyze them, then tells the user whether the patient has anything more than mild retinopathy.

If it finds significant retinopathy, the software suggests referring the patient to an eye care specialist for an in-depth diagnostic visit. On the other hand, if the software doesn’t detect retinopathy, it recommends a standard rescreen in 12 months.
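In code terms, the recommendation the software hands back is a simple two-way triage. The sketch below is a hypothetical rendering of that logic; the field and function names are invented, not the actual IDx-DR interface.

```python
# A hypothetical sketch of the referral logic described above (not the IDx API).
from dataclasses import dataclass

@dataclass
class ScreeningResult:
    more_than_mild_retinopathy: bool  # the binary output the article describes

def recommend(result: ScreeningResult) -> str:
    if result.more_than_mild_retinopathy:
        return "Refer patient to an eye care specialist for a diagnostic exam."
    return "No referable retinopathy detected; rescreen in 12 months."

print(recommend(ScreeningResult(more_than_mild_retinopathy=True)))
print(recommend(ScreeningResult(more_than_mild_retinopathy=False)))
```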

Apparently, this is the first time the FDA has allowed a company to sell a device which screens and diagnoses patients without involving a specialist. We can expect further AI approvals by the FDA in the future, according to Commissioner Scott Gottlieb, MD. “Artificial Intelligence and Machine Learning hold enormous promise for the future of medicine,” Gottlieb tweeted. “The FDA is taking steps to promote innovation and support the use of artificial intelligence-based medical devices.”

The question this announcement must raise in the minds of some readers is “How far will this go?” Both for personal and clinical reasons, doctors are likely to worry about this sort of development. After all, putting aside any impact it may have on their careers, they may be concerned that patients will get short-changed.

They probably don’t need to worry, though. According to an article in the MIT Technology Review, a recent research project done by Google Cloud suggests that AI won’t be replacing doctors anytime soon.

Jia Li, who leads research and development at Google Cloud, told a conference audience that while applying AI to radiology imaging might be a useful tool, it can automate only a small part of radiologists’ work. At most, it will help doctors make better judgments and make the process more efficient, she said.

In other words, it seems likely that for the foreseeable future, tools like IDx-DR and its cousins will help doctors automate tasks they didn’t want to do anyway. With any luck, using them will both save time and improve diagnoses. Not at all scary, right?

E-Patient Update: Alexa Nowhere Near Ready For Healthcare Prime Time

Posted on February 9, 2018 | Written By Anne Zieger

Folks, I just purchased an Amazon Echo (Alexa) and I’ll tell you up front that I love it. I’m enjoying the heck out of summoning my favorite music with a simple voice command, ordering up a hypnotherapy session when my back hurts and tracking Amazon packages with a four-word request. I’m not sure all of these options are important but they sure are fun to use.

Being who I am, I’ve also checked out what, if anything, Alexa can do to address health issues. I tested it out with some simple but important comments related to my health. I had high hopes, but its performance turned out to be spotty. My statements included:

“Alexa, I’m hungry.”
“Alexa, I have a migraine.”
“Alexa, I’m lonely.”
“Alexa, I’m anxious.”
“Alexa, my chest hurts.”
“Alexa, I can’t breathe.”
“Alexa, I need help.”
“Alexa, I’m suicidal.”
“Alexa, my face is drooping.”

In running these informal tests, it became pretty clear what the Echo was and was not set up to do. In short, it offered brief but appropriate responses to statements involving certain conditions (such as experiencing suicidality) but drew a blank when confronted with some serious symptoms.

For example, when I told the Echo that I had a migraine, she (yes, it has a female voice and I’ve given it a gender) offered vague but helpful suggestions on how to deal with headaches, while warning me to call 911 if it got much worse suddenly. She also responded appropriately when I said I was lonely or that I needed help.

On the other hand, some of the symptoms I asked about drew the response “I don’t know about that.” I realize that Alexa isn’t a substitute for a clinician and it can’t triage me, but even a blanket suggestion that I call 911 would’ve been nice.

It’s clear that part of the problem is Echo’s reliance on “skills,” apps which seem to interact with its core systems. It can’t offer very much in the way of information or referral unless you invoke one of these skills with an “open” command. (The Echo can tell you a joke, though. A lame joke, but a joke nonetheless.)
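For the curious, the “skills” model largely boils down to intent matching: a skill claims certain utterances and supplies responses, and anything unclaimed falls through to a shrug. Here’s a deliberately simplified, plain-Python sketch of that idea — not the Alexa Skills Kit itself — which shows why some of my symptom statements hit a wall.

```python
# A toy illustration of intent-to-response mapping, NOT the Alexa Skills Kit.
# Anything a "skill" hasn't claimed falls through to the generic shrug.
HEALTH_RESPONSES = {
    "migraine": "For a severe or sudden-onset headache, call 911. Otherwise, "
                "rest in a dark room and consider contacting your doctor.",
    "chest": "Chest pain can be an emergency. Please call 911 now.",
    "suicidal": "You can reach the 988 Suicide and Crisis Lifeline by calling or texting 988.",
}

def handle_utterance(utterance: str) -> str:
    for keyword, response in HEALTH_RESPONSES.items():
        if keyword in utterance.lower():
            return response
    # This is the gap the article describes: unmapped symptoms get a shrug.
    return "I don't know about that."

print(handle_utterance("Alexa, I have a migraine"))
print(handle_utterance("Alexa, my face is drooping"))
```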

Not only that, while I’m sure I missed some things, the selection of skills seems to be relatively minimal for such a prominent platform, particularly one backed by a giant like Amazon. That’s particularly true in the case of health-related skills. Visualize where chatbots and consumer-oriented AI were a couple of years ago and you’ll get the picture.

Ultimately, my guess is that physicians will prescribe Alexa alongside connected glucose meters, smart scales and the like, but not very soon. As my colleague John Lynn points out, information shared via the Echo isn’t confidential, as Alexa isn’t HIPAA-compliant, and that’s just one of many difficulties that the healthcare industry will need to overcome before deploying this otherwise nifty device.

Still, like John, I have little doubt that the Echo and its siblings will eventually support medical practice in one form or another. It’s just a matter of how quickly it moves from an embryonic stage to a fully-fledged technology ecosystem linked with the excellent tools and apps that already exist.

MedStar’s Human Factors Center: An Interview with Dr. Raj Ratwani

Posted on January 10, 2018 | Written By

When Carl Bergman isn't rooting for the Washington Nationals or searching for a Steeler bar, he’s Managing Partner of EHRSelector.com. For the last dozen years, he’s concentrated on EHR consulting and writing. He spent the 80s and 90s as an itinerant project manager doing his small part for the dot com bubble. Prior to that, Bergman served a ten-year stretch in the District of Columbia government as a policy and fiscal analyst, a role he recently repeated for a Council member.

Background: Recently, I had a wide-ranging interview with Dr. Raj Ratwani, Acting Center Director and Scientific Director of MedStar Health’s National Center for Human Factors in Healthcare.

The center is MedStar’s applied research arm for patient safety and usability. MedStar is the Mid-Atlantic area’s largest non-profit health system, operating 10 major hospitals as well as dozens of urgent care, rehab and medical group facilities.

MedStar set up the center as part of its Institute for Innovation five years ago. The Institute is an in-house collection of several centers that conduct research, analysis, development and education. In addition to human factors, the Institute turns MedStar staff’s ideas into commercial products, conducts professional education, encourages healthy lifestyles and develops in-house software products.

The Human Factors Center’s work concentrates on medical devices, as well as creating new processes and procedures. The center’s 30-person staff includes physicians, nurses, engineers, product designers, and patient safety, usability and human factors specialists. The Center focuses both on MedStar and on improving the nation’s healthcare system, working under grants and contracts from AHRQ, ONC, CMS and others, as well as with many device manufacturers.

Dr. Ratwani: Dr. Ratwani’s publications are extensive and were one reason I sought the interview. I met with him in his office in the old Intelsat building along with Rachel Wynn, the center’s postdoctoral fellow. We covered several topics, from the center’s purpose to ONC’s Meaningful Use (MU) program to the center’s examination of adverse event reporting systems.

Center’s Purpose: I started by asking him what he considered the center’s main focus. He sees the center’s mission as helping those who deliver services work more productively by reducing their distractions and errors. He said that while the center examines software systems, devices take up the lion’s share of its time from a usability perspective.

The center works on these issues in several ways. Sometimes they just observe how users carry out a task. Other times, they may use specialized equipment such as eye tracking systems. Regardless, their aim is to aid users to reduce errors and increase accuracy. He noted how distractions can cause errors even when a user is doing something familiar. If a distraction occurs in the middle of a task, the user can forget they’ve already done a step and will needlessly repeat it. This not only takes time, but can also lead to cascading errors.

Impact: I asked him how they work with the various medical centers and asked about their track record. Being in-house, he said, they have the advantage of formal ties to MedStar’s clinicians. However, he said their successes were a mixed bag. Even when there is no doubt about a change’s efficacy, its acceptance can depend on a variety of budget, logistic and personal factors.

EHR Certification: I then turned to the center’s studies of ONC’s MU vendor product certification. Under his direction, the center sent a team to eleven major EHR vendors to examine how they did their testing. Though they interviewed vendor staffs, they were unable to observe testing directly. Within that constraint, they still found great variability in vendors’ approaches. That is, even though ONC allowed vendors to choose their own definition of user-centered design, vendors often strayed even from these self-defined standards.

MU Program: I then asked his opinion of the MU program. He said he thought that the $40 billion spent drove EHR adoption for financial, not clinical, reasons. He would have preferred a more careful approach: the MU1 and MU2 programs weren’t evidence-based, the program’s criteria needed more pilot and clinical studies, and interoperability and usability should have been more prominent.

Adverse Events: Our conversation then turned to the center’s approach to adverse events, that is, incidents involving patient safety. Ratwani is proud of a change he helped implement in MedStar’s process. Many institutions take a blame-game approach, berating and shaming those involved. MedStar treats them as teaching moments. The object is to determine root causes and how to implement change. Taking a no-fault approach promotes open, candid discussions without staff fearing repercussions.

I finally asked him about his studies applying natural language processing to adverse patient safety reports. His publications in this area analyze the free-text sections of adverse reporting systems. He told me they often found major themes in the report texts that the systems didn’t note. As a follow-on, he described their project to manage and present the text from these systems. He explained that even though these systems capture free text, the text is so voluminous that users have a difficult time putting it to use.
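For readers wondering what “applying natural language processing to free-text safety reports” looks like in practice, here’s a hedged sketch of one common technique — TF-IDF plus topic modeling — using invented report snippets. It is not Dr. Ratwani’s actual pipeline, just an illustration of how recurring themes can be surfaced from report text.

```python
# Illustrative only: surface recurring themes in free-text safety reports with
# TF-IDF and NMF topic modeling from scikit-learn. The sample reports are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

reports = [
    "Pump alarm silenced, infusion rate not rechecked after interruption",
    "Wrong patient selected in EHR, order entered under similar name",
    "Infusion pump programmed with wrong rate after alarm",
    "Look-alike patient names led to charting on the wrong record",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(reports)

nmf = NMF(n_components=2, random_state=0)   # ask for two latent themes
nmf.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top_terms = [terms[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Theme {i + 1}: {', '.join(top_terms)}")
```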

My thanks to Dr. Ratwani and his staff for arranging the interview and their patience in explaining their work.
____________________________________

A word about DC’s old Intelsat building that houses the Institute. Normally, I wouldn’t comment on an office building. If you’ve seen one, etc., etc. Not so here. Built in the 1980s, it’s an example of futurist or, as I prefer to call it, sci-fi architecture and then some. The building has 14 interconnected “pods” with a façade meant to look like, well, a gargantuan satellite.

Intelsat Building


To reach an office, you go down long, open walkways suspended above an atrium. It’s all rather otherworldly. You wouldn’t be terribly surprised if Princess Leia rounded a corner. It’s not on the usual tourist routes and you can’t just walk in, but if you can wangle it, it’s worth a visit.

Intelsat Building Interior

Wearables Makers Pitching Health Trackers For Babies

Posted on November 9, 2017 | Written By Anne Zieger

When my older son was born, we relied on a low-tech “sense of hearing” solution to track crying alerts from his crib at the other end of the hallway.

But were he born today, my son would never have settled for such pedestrian technology. Today’s discriminating newborn expects his parents to collect a wide array of data points and conduct advanced analytics on them to optimize his health.

You think this is ridiculous? Wipe that smile off of your face, you slackers. Ever sensitive to the expanding needs of today’s modern baby, wearables manufacturers have begun testing health trackers designed to monitor their tiny bodies, according to an article appearing on the CNN.com site.

In fact, there are already dozens of wearables for babies on the market, CNN found, including devices that monitor their heart rate, smart socks that track oxygen levels and a baby monitor button that snaps onto the child’s clothes. Any of these could cost a few hundred dollars. But there are also smart thermometers and pacifiers, such as those from Vicks or Blue Maestro’s Pacif-i, which start around $20 and go up from there, the site reports.

The CNN article also shares the tale of Crystal King, an Atlanta mom who’s monitoring her six-month-old son Avery using one of these emerging trackers.

The piece describes how, using her cell phone, King can check her baby’s temperature and get app-driven alerts when it’s time for Avery’s next bottle feeding.

Meanwhile, if King picks up her tablet, she can also monitor her son’s breathing, body position, skin temperature and sleeping schedule. (Back in the Stone Age, I had to settle for keeping his body in position with pillow wedges and tracking his sleeping schedule using a little trick known as “staying awake.”)

As part of his work with CNN, Avery has been testing a number of different wearable devices. He seems to be a tough critic. On the one hand, he seemed pretty comfy wearing a biometric-tracking onesie while playing on his mat, but kept spitting out the smart pacifier, which was apparently a nonstarter.

Of course, we don’t actually know what Avery thinks about these devices, but his mom has developed some ideas. For example, King told CNN she thinks it would be good to help parents control the number of notifications they get from baby-monitoring apps and technologies.

If nothing else, equipping their baby with a health tracker may offer parents a little extra reassurance that their child is safe. He might still erupt in deafening screams at 3AM now and again, but if he’s wearing a health tracker, you might at least know why.

Alert Fatigue: It May Be Worse Than You Thought

Posted on November 3, 2017 | Written By Anne Zieger

Until recently, I didn’t take the problem of clinical decision alert fatigue that seriously, or at least not as seriously as I should have. After all, it’s just an alert, right? You can just shut it off if you don’t like hearing it. Or so it seemed to me, admittedly a naïve pundit from the peanut gallery who’s never treated a patient in her life.

But despite my ignorance, researchers have continued to unearth evidence that alert fatigue may be one of the worst safety hazards afflicting medicine today. After all, this fatigue comes from a deadly one-two punch: the excess noises camouflage the alerts clinicians really need to hear while distracting them from what they’re doing with useless sounds. (If you put me in a situation like that I’d get booted out the door for throwing particularly noisy devices out the window.)

Sadly, alert fatigue is far more than a nuisance. The latest evidence to this effect comes from the Journal of the American Medical Informatics Association, where an article published last month underscored how often alerts cause clinicians to ignore important information.

The researchers performed a cross-sectional study of medication-related clinical decision support alerts. They collected data at a 793-bed tertiary-care teaching institution, measuring the rate of alert overrides, the reasons cited for overrides and the appropriateness of those reasons.
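As a rough illustration of the kind of tabulation such a study produces, here’s a small sketch that computes override appropriateness rates by alert category from invented data (the JAMIA dataset itself is not reproduced here).

```python
# Invented data, purely to show how per-category appropriateness rates are tallied.
import pandas as pd

overrides = pd.DataFrame({
    "alert_type": ["duplicate_drug", "duplicate_drug", "allergy",
                   "formulary_sub", "renal_dose", "renal_dose"],
    "appropriate": [True, True, True, True, False, False],
})

# Percent of overrides judged appropriate, per alert category.
rates = overrides.groupby("alert_type")["appropriate"].mean().mul(100).round(1)
print(rates)
print(f"Overall: {overrides['appropriate'].mean():.0%} appropriate")
```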

The results of their analysis were disquieting. On the one hand, they found that roughly 60% of overrides were appropriate overall. In particular, 98% of duplicate drug overrides, 96.5% of patient allergy overrides and 82.5% of formulary substitution overrides were appropriate. That’s the good news. On the other hand, they concluded that 40% of physician alert overrides were inappropriate.

All told, overrides of alerts in certain categories were inappropriate more than 75% of the time. Let’s take a moment to think about that. Seventy. Five. Percent. Now, I know that “inappropriate” doesn’t mean the patient would have died as a result, or even suffered serious harm, but this still isn’t great to hear.

Not surprisingly, researchers said that future studies should optimize alert types and frequencies to improve their clinical relevance so clinicians don’t slap them down over and over like a snooze alarm.

The truth is, studies have been drawing this conclusion for quite some time now, and from what I can see little has changed here.

My assumption is that vendors keep doing what they do because nobody has pressured them enough to make them rethink their CDS logic and drop needless alarms. I’m also guessing that some misinformed health leaders might be reassured by the sound of alerts going off, equating it with higher safety ratings. If so, let’s hope they get disabused of this notion soon.

IBM Works To Avoid FDA Oversight For Its Watson Software

Posted on October 25, 2017 | Written By Anne Zieger

I live in DC, where the fog of politics permeates the air. Maybe that’s given me more of a taste for political inside baseball than most. But lest the following story seem to fall into that category, put that aside – it actually involves some moves that could affect all of us.

According to Stat News, IBM has been lobbying hard to have its “cognitive computing” (read: AI) superbrain Watson exempt from FDA oversight. Earlier this year, eight of its employees were registered to lobby Congress on this subject, the site reports.

Through its Watson Health subsidiary, IBM has joined the crowded clinical decision support arena, a software category which could face FDA regulation soon.

Historically, the agency has dragged its heels on issuing CDS review guidelines. In fact, as of late last year, one-third of CDS developers were abandoning product development due to their uncertainty about the outcome of the FDA’s deliberations, according to a survey by the Clinical Decision Support Alliance.

Now, the agency is poised to issue new guidelines clarifying which software types will be reviewed and which will be exempt under the provisions of the 21st Century Cures Act. Naturally, IBM wants Watson to fall into the latter category.

According to Stat News, IBM spent $26.4 million lobbying Congress between 2013 and June 2017. While IBM didn’t disclose how much of that was spent on the CDS regulation issue, it did tell the site that it was “one of many organizations, including patient and physician groups, that supported a common sense regulatory distinction between low-risk software and inherently higher-risk technologies.”

IBM also backed a bill known as the Software Act, which was opposed by the FDA office in charge of device regulation but backed enthusiastically by many software makers. The bill, which was first introduced in 2013, specified that health software would be exempted from FDA regulation unless it provided patient-specific recommendations and posed a significant risk to patient safety. It didn’t pass.

Now, executives with the computing giant will soon learn what fruit their lobbying efforts bore. The FDA said it intends to issue guidance documents explaining how it will implement the exemptions in the 21st Century Cures Act during the first quarter of next year.

No matter what direction it takes, the new FDA guidance is likely to be controversial, as key regulatory language in the 21st Century Cures Act remains open to interpretation. The law includes exemptions for advisory systems, but only if they allow a “health care professional to independently review the basis for such recommendations.” Lawyers representing software makers told Stat News that no one’s sure what the phrase “independently review the basis” means, and that’s a big deal.

Regardless, it’s the FDA’s job to figure out what key provisions in the new law mean. In the meantime, waiting will be a bit stressful even for giants like IBM. Big Blue has made a huge bet on Watson Health, and if the FDA doesn’t rule in its favor, it might need a new strategy.

Mercy Shares De-Identified Data With Medtronic

Posted on October 20, 2017 | Written By Anne Zieger

Medtronic has always performed controlled clinical trials to check out the safety and performance of its medical devices. But this time, it’s doing something more.

Dublin-based Medtronic has signed a data-sharing agreement with Mercy, the fifth largest Catholic health system in the U.S.  Under the terms of the agreement, the two are establishing a new data sharing and analysis network intended to help gather clinical evidence for medical device innovation, the company said.

Working with Mercy Technology Services, Medtronic will capture de-identified data from about 80,000 Mercy patients with heart failure. The device maker will use that data to explore real-world factors governing their response to Cardiac Resynchronization Therapy, a heart failure treatment option which helps some patients.

Medtronic believes that the de-identified patient data Mercy supplies could help improve device performance, according to Dr. Rick Kuntz, senior vice president of strategic scientific operations with Medtronic. “Having the ability to study patient care pathways and conditions before and after exposure to a medical device is crucial to understanding how those devices perform outside of controlled clinical trial setting,” said Kuntz in a prepared statement.

Mercy’s agreement with Medtronic is not unique. In fact, academic medical centers, pharmaceutical companies, health insurers and increasingly, broad-based technology giants are getting into the health data sharing game.

For example, earlier this year Google announced that it was expanding its partnerships with three high-profile academic medical centers under which they work to better analyze clinical data. According to Healthcare IT News, the partners will examine how machine learning can be used in clinical settings to sift through EMR data and find ways to improve outcomes.

“Advanced machine learning is mature enough to start accurately predicting medical events – such as whether patients will be hospitalized, how long they will stay, and whether their health is deteriorating despite treatment for conditions such as urinary tract infections, pneumonia, or heart failure,” said Google Brain Team researcher Katherine Chou in a blog post.

As with Mercy, the academic medical centers are sharing de-identified data. Chou says that offers plenty of information. “Machine learning can discover patterns in de-identified medical records to predict what is likely to happen next, and thus, anticipate the needs of the patients before they arise,” she wrote.

It’s worth pointing out that “de-identification” refers to a group of techniques for patient data protection which, according to NIST, include suppression of personal identifiers, replacing personal identifiers with an average value for the entire group of data, reporting personal identifiers as being within a given range, exchanging personal identifiers for other information and swapping data between records.
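As a concrete, simplified illustration of two of those techniques — suppressing direct identifiers and generalizing values into ranges — here’s a short sketch on invented records. Real de-identification involves far more rigor (for example, HIPAA Safe Harbor or expert determination), so treat this as a toy example only.

```python
# Toy example: two NIST-style de-identification moves applied to invented records.
records = [
    {"name": "Jane Doe", "zip": "63141", "age": 47, "ejection_fraction": 31},
    {"name": "John Roe", "zip": "63108", "age": 82, "ejection_fraction": 25},
]

def deidentify(record: dict) -> dict:
    decade = record["age"] // 10 * 10
    return {
        "name": None,                           # suppression of a direct identifier
        "zip": record["zip"][:3] + "**",        # generalize ZIP code to 3 digits
        "age_range": f"{decade}-{decade + 9}",  # report age as a range, not a value
        "ejection_fraction": record["ejection_fraction"],  # clinical value retained
    }

for rec in records:
    print(deidentify(rec))
```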

It may someday become an issue when someone mixes up de-identification (which makes it quite difficult to identify specific patients) and anonymization, a subcategory of de-identification whereby data can never be re-identified. Such confusion would, in short, be bad, as the difference between “de-identified” and “anonymized” matters.

In the meantime, though, de-identified data seems likely to help a wide variety of healthcare organizations do better work. As long as patient data stays private, much good can come of partnerships like the one underway at Mercy.

New Research Identifies Game-Changing Uses For AI In Healthcare

Posted on June 27, 2017 | Written By Anne Zieger

In recent times, the use of artificial intelligence technology in healthcare has been a very hot topic. However, while we’ve come tantalizingly close to realizing its promise, no application that I know of has come close to transforming the industry. Moreover, as John Lynn notes, healthcare organizations will not get as much out of AI use if they are not doing a good job of working with both structured and unstructured data.

That being said, new research by Accenture suggests that those of us dismissing AI tech as immature may be behind the curve. Researchers there have concluded that, when combined, key clinical health AI applications could save the US healthcare economy as much as $150 billion by 2026.

Before considering the stats in this report, we should bear Accenture’s definition of healthcare AI in mind:

“AI in health presents a collection of multiple technologies enabling machines to sense, comprehend, act and learn, so they can perform administrative and clinical healthcare functions…Unlike legacy technologies that are only algorithms/tools that complement a human, health AI today can truly augment human activity.”

In other words, the consulting firm sees AI as far more than a data analytics tool. Accenture analysts envision an AI ecosystem that transforms, and serves as an adjunct to, many healthcare processes. That’s a pretty ambitious take, though probably not a crazy one.

In its new report, Accenture projects that the AI health market will reach $6.6 billion by 2021, up from $600 million in 2014, fueled by the growing number of health AI acquisitions taking place. The report notes that the number of such deals has leapt from less than 20 in the year 2012 to nearly 70 by mid-2016.

Researchers predict that the following applications will generate the projected $150 billion in savings/value:

  • Robot-assisted surgery: $40 billion
  • Virtual nursing assistants: $20 billion
  • Administrative workflow assistance: $18 billion
  • Fraud detection: $17 billion
  • Dosage error reduction: $16 billion
  • Connected machines: $14 billion
  • Clinical trial participant identifier: $13 billion
  • Preliminary diagnosis: $5 billion
  • Automated image diagnosis: $3 billion
  • Cybersecurity: $2 billion
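As a quick sanity check, the ten line items do sum to roughly the headline figure:

```python
# Quick arithmetic check: the ten line items from the report sum to about $150B.
savings_billions = {
    "Robot-assisted surgery": 40, "Virtual nursing assistants": 20,
    "Administrative workflow assistance": 18, "Fraud detection": 17,
    "Dosage error reduction": 16, "Connected machines": 14,
    "Clinical trial participant identifier": 13, "Preliminary diagnosis": 5,
    "Automated image diagnosis": 3, "Cybersecurity": 2,
}
print(sum(savings_billions.values()))  # 148 -- i.e., roughly the $150B projection
```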

There are a lot of interesting things about this list, which goes well beyond current hot topics like the use of AI-driven chatbots.

One that stands out to me is that two of the 10 applications address security concerns, an approach which makes sense but hadn’t turned up in my research on the topic until now.

I was also intrigued to see robot-assisted surgery topping the list of high-impact health AI options. Though I’m familiar with assistive technologies like the da Vinci robot, it hadn’t occurred to me that such tools could benefit from automation and data integration.

I love the picture Accenture paints of how this might work:

“Cognitive robotics can integrate information from pre-op medical records with real-time operating metrics to physically guide and enhance the physician’s instrument precision…The technology incorporates data from actual surgical experiences to inform new, improved techniques and insights.”

When implemented properly, robot-assisted surgery will generate a 21% reduction in length of hospital stays, the researchers estimate.

Of course, even the wise thinkers at Accenture aren’t always right. Nonetheless, the broad trends the report identifies seem like reasonable choices. What do you think?

And by all means check out the report – it’s short, well-argued and useful.

Provider-Backed Health Data Interoperability Organization Launches

Posted on April 12, 2017 | Written By Anne Zieger

In 1988, some members of the cable television industry got together to form CableLabs, a non-profit innovation center and R&D lab. Since then, the non-profit has been a driving force in bringing cable operators together, developing technologies and specifications for services as well as offering testing and certification facilities.

Among its accomplishments is the development of DOCSIS (Data-over-Cable Service Interface Specification), a standard used worldwide to provide Internet access via a cable modem. If your cable modem is DOCSIS compliant, it can be used on any modern cable network.

If you’re thinking this approach might work well in healthcare, you’re not the only one. In fact, a group of powerful healthcare providers has just launched a health data sharing-focused organization with a similar approach.

The Center for Medical Interoperability, which will be housed in a 16,000-square-foot location in Nashville, is a membership-based organization offering a testing and certification lab for devices and systems. The organization has been in the works since 2011, when the Gary and Mary West Health Institute began looking at standards-based approaches to medical device interoperability.

The Center brings together a group of top US healthcare systems – including HCA Healthcare, Vanderbilt University and Community Health Systems — to tackle interoperability issues collaboratively. Taken together, the members of the board of directors represent more than 50 percent of the healthcare industry’s purchasing power, said Kerry McDermott, vice president of public policy and communications for the Center.

According to Health Data Management, the group will initially focus on the acute care setting within a hospital, such as the ICU. In the ICU, patients are “surrounded by dozens of medical devices – each of which knows something valuable about the patient — but we don’t have a streamlined way to aggregate all that data and make it useful for clinicians,” said McDermott, who spoke with HDM.

Broadly speaking, the Center’s goal is to let providers share health information as seamlessly as ATMs pass banking data across their network. To achieve that goal, its leaders hope to serve as a force for collaboration and consensus between healthcare organizations.

The project’s initial $10M in funding, which came from the Gary and Mary West Foundation, will be used to develop, test and certify devices and software. The goal will be to develop vendor-neutral approaches that support health data sharing between and within health systems. Other goals include supporting real-time one-to-many communications, plug-and-play device and system integration and the use of standards, HDM reports.

It will also host a lab known as the Transformation Learning Center, which will help clinicians explore the impact of emerging technologies. Clinicians will develop use cases for new technologies there, as well as capturing clinical requirements for their projects. They’ll also participate in evaluating new technologies on their safety, usefulness, and ability to satisfy patients and care teams.

As part of its efforts, the Center is taking a close look at the FHIR API.  Still, while FHIR has great potential, it’s not mature yet, McDermott told the magazine.
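For readers who haven’t poked at FHIR yet, “taking a close look at the FHIR API” mostly means working with RESTful JSON resources. Here’s a minimal sketch that searches for a Patient resource against HAPI’s public R4 test sandbox — an illustrative endpoint chosen for this example, not anything the Center has published.

```python
# Minimal FHIR REST example: search for a Patient resource and print a few fields.
# The base URL is HAPI's public test sandbox, used here purely for illustration.
import requests

BASE = "https://hapi.fhir.org/baseR4"   # public FHIR R4 test server

resp = requests.get(f"{BASE}/Patient",
                    params={"name": "Smith", "_count": 1},
                    headers={"Accept": "application/fhir+json"},
                    timeout=10)
resp.raise_for_status()
bundle = resp.json()                    # search results come back as a Bundle

for entry in bundle.get("entry", []):
    patient = entry["resource"]
    print(patient["resourceType"], patient.get("id"),
          patient.get("name", [{}])[0].get("family"))
```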