
New Research Identifies Game-Changing Uses For AI In Healthcare

Posted on June 27, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

In recent times, the use of artificial intelligence technology in healthcare has been a very hot topic. However, while we’ve come tantalizingly close to realizing its promise, no application that I know of has come close to transforming the industry. Moreover, as John Lynn notes, healthcare organizations will not get as much out of AI use if they are not doing a good job of working with both structured and unstructured data.

That being said, new research by Accenture suggests that those of us dismissing AI tech as immature may be behind the curve. Researchers there have concluded that, when combined, key clinical health AI applications could save the US healthcare economy as much as $150 billion by 2026.

Before considering the stats in this report, we should bear Accenture’s definition of healthcare AI in mind:

“AI in health presents a collection of multiple technologies enabling machines to sense, comprehend, act and learn, so they can perform administrative and clinical healthcare functions…Unlike legacy technologies that are only algorithms/tools that complement a human, health AI today can truly augment human activity.”

In other words, the consulting firm sees AI as far more than a data analytics tool. Accenture analysts envision an AI ecosystem that transforms, and serves as an adjunct to, many healthcare processes. That’s a pretty ambitious take, though probably not a crazy one.

In its new report, Accenture projects that the AI health market will reach $6.6 billion by 2021, up from $600 million in 2014, fueled by the growing number of health AI acquisitions taking place. The report notes that the number of such deals leapt from fewer than 20 in 2012 to nearly 70 by mid-2016.

Researchers predict that the following applications will generate the projected $150 billion in savings/value:

  • Robot-assisted surgery: $40 billion
  • Virtual nursing assistants: $20 billion
  • Administrative workflow assistance: $18 billion
  • Fraud detection: $17 billion
  • Dosage error reduction: $16 billion
  • Connected machines: $14 billion
  • Clinical trial participant identifier: $13 billion
  • Preliminary diagnosis: $5 billion
  • Automated image diagnosis: $3 billion
  • Cybersecurity: $2 billion
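For readers who want to play with the numbers, here is a minimal sketch (plain Python, figures copied straight from the list above) that tallies and ranks the projected savings. Note that the line items sum to $148 billion, which Accenture evidently rounds to roughly $150 billion:

```python
# Accenture's projected savings by application, in billions of USD,
# taken verbatim from the list above.
savings = {
    "Robot-assisted surgery": 40,
    "Virtual nursing assistants": 20,
    "Administrative workflow assistance": 18,
    "Fraud detection": 17,
    "Dosage error reduction": 16,
    "Connected machines": 14,
    "Clinical trial participant identifier": 13,
    "Preliminary diagnosis": 5,
    "Automated image diagnosis": 3,
    "Cybersecurity": 2,
}

total = sum(savings.values())
print(f"Total projected annual value: ${total} billion")  # $148 billion

# Share of the total contributed by each application, largest first.
for name, value in sorted(savings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value / total:.0%}")
```

A quick tally like this makes it obvious how top-heavy the list is: robot-assisted surgery alone accounts for more than a quarter of the projected value.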

There are a lot of interesting things about this list, which goes well beyond current hot topics like the use of AI-driven chatbots.

One that stands out to me is that two of the 10 applications address security concerns, an approach which makes sense but hadn’t turned up in my research on the topic until now.

I was also intrigued to see robot-assisted surgery topping the list of high-impact health AI options. Though I’m familiar with assistive technologies like the da Vinci robot, it hadn’t occurred to me that such tools could benefit from automation and data integration.

I love the picture Accenture paints of how this might work:

“Cognitive robotics can integrate information from pre-op medical records with real-time operating metrics to physically guide and enhance the physician’s instrument precision…The technology incorporates data from actual surgical experiences to inform new, improved techniques and insights.”

When implemented properly, robot-assisted surgery will generate a 21% reduction in length of hospital stays, the researchers estimate.

Of course, even the wise thinkers at Accenture aren’t always right. Nonetheless, the broad trends the report identifies seem like reasonable choices. What do you think?

And by all means check out the report – it’s short, well-argued and useful.

Is Your Health Data Unstructured? – Enabling an AI Powered Healthcare Future

Posted on June 22, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

If you asked a hospital IT executive how much of their data is unstructured data, most of them would reasonably respond that a lot or most of their data was unstructured. If you asked a practice manager or doctor how much of health data is unstructured, they’d likely respond “What do you mean?”

The reality is that most doctors, nurses, practice managers, etc., don’t really care whether their data is structured or not. However, they should care, and more importantly they should care about how they’re going to extract value out of the structured and unstructured data in their organizations.

The reality in healthcare is that much of the data we have, and much of the data we are going to get, is unstructured. Our systems and software need to handle unstructured data in order to facilitate the AI powered healthcare future. That’s right: an AI powered healthcare future is coming, and it’s going to be built on the back of structured and unstructured healthcare data.
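To make the structured-versus-unstructured distinction concrete, here is a toy sketch. The note text, field names and regex are all illustrative inventions, not drawn from any real EHR; production systems use clinical NLP pipelines rather than one-off patterns like this:

```python
import re

# A free-text progress note: meaningful to a clinician, opaque to most software.
note = "Pt reports improved BP. Continue lisinopril 10 mg daily; recheck in 4 weeks."

# Structured data, by contrast, arrives pre-labeled in discrete fields.
structured = {"medication": "lisinopril", "dose_mg": 10, "frequency": "daily"}

# Recovering the same fields from the unstructured note takes extra work --
# here a crude regex stands in for the clinical NLP a real system would use.
match = re.search(r"(?P<med>[a-z]+)\s+(?P<dose>\d+)\s*mg\s+(?P<freq>\w+)", note)
extracted = None
if match:
    extracted = {
        "medication": match.group("med"),
        "dose_mg": int(match.group("dose")),
        "frequency": match.group("freq"),
    }
    print(extracted)
```

The point of the sketch is simply that the same clinical fact exists in both forms, but only one of them is directly usable by software without an extraction step.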

I think the reason so many healthcare providers are concerned with this AI powered future is that they know the data they currently have is not very good. That’s going to be a problem for many organizations. Bad data is going to produce bad AI powered support.

We shouldn’t expect technology to solve our problems of bad data, but technology will amplify the state of your organization. If your organization is doing an amazing job creating high quality health data, then the AI powered future will propel you in amazing ways to be an even better organization. However, the opposite is also true. If your health data is poor, then these new AI powered systems are going to highlight how poorly your organization is being run. I get why that’s scary for many people.

This should be one of the big lessons we take away from the EHR experience. Healthcare organizations with poor workflows hoped that implementation of an EHR would help them fix their workflows. Instead of EHR fixing the workflows it just highlighted the poor workflows. Technology accentuates and accelerates your current state. It doesn’t usually fix it. You have to fix your organization and workflows first and then use technology to accelerate your organization.

The next step after that is what Rasu Shrestha highlighted when he said, “How can we move from ‘doing digital’ to ‘being digital’. Let’s not replicate analog workflows. Let’s rethink!”

Selecting the Right AI Partner in Healthcare Requires a Human Network

Posted on March 1, 2017 | Written By

Healthcare as a Human Right. Physician Suicide Loss Survivor. Janae writes about Artificial Intelligence, Virtual Reality, Data Analytics, Engagement and Investing in Healthcare. twitter: @coherencemed

Artificial Intelligence, or AI for short, does not always equate to high intelligence and this can have a high cost for healthcare systems. Navigating the intersection of AI and healthcare requires more than clinical operations expertise; it requires advanced knowledge in business motivation, partnerships, legal considerations, and ethics.

Learning to Dance at HIMSS17

This year I had the pleasure of attending a meetup for people interested in and working with AI for healthcare at the Healthcare Information and Management Systems Society (HIMSS) annual meeting in Orlando, Florida. At the beginning of the meetup, Wen Dombrowski, MD, asked everyone to stand up and participate in a partner-led movement activity. Not your average trust fall, this was designed to teach about AI and machine learning, pushing most of us out of our comfort zones while sparking participants to realize AI-related lessons. One partner led and the other partner followed their actions.

Dedicated computer scientists, business professionals, and proud data geeks tested their dancing skills. My partner quit when it was my turn to lead the movement. About half of the participants avoided eye contact and reluctantly shuffled their feet while they half nursed their coffee. But however awkward, half the participants felt the activity was a creative way to get us thinking about what it takes for machines to ‘learn’. Notably Daniel Rothman of MyMee had some great dance moves.

I found both the varying feedback and the equally varying willingness to participate interesting. One of the participants said the activity was a “waste of time.” They must have come from the half of the room that didn’t follow the mirroring instructions. I wonder if I could gather data about which coding languages were the specialty of those most resistant. Were the Python coders bad at dancing? I hope not. My professional training is actually as a licensed foreign language teacher, so I immediately recognized the instructional value of starting with a movement activity.

There is evidence that physical activity preceding learning makes learners more receptive and helps them retain the experience longer. “Physical activity breaks throughout the day can improve both student behavior and learning (Trost 2007)” (Reilly, Buskist, and Gross, 2012). I had assumed that the link between movement and learning capacity was common knowledge. Many of the instructional design comments Dr. Dombrowski received, while helpful, revealed participants’ lack of knowledge about teaching and cognitive learning theory.

I could have used some help at the outset in choosing a dance partner who would have matched and anticipated my every move. The same goes for healthcare organizations and their AI solutions. A healthcare organization may be a highly respected institution employing some of the most brilliant medical minds, but it still needs to become, or find, a skilled matchmaker to bring the right AI partner (or mix of partners) to the dance floor.

AI’s Slow Rise from Publicity to Potential

Artificial Intelligence has experienced a difficult and flashy transition into the medical field. For example, AI computing has been used to establish consensus readings in radiology imaging. While these tools have helped reduce false positives for breast cancer patients, errors remain, and not every company entering AI has equal computing abilities. The battle cry that physicians would be replaced by robots seems to have quieted. While AI is gaining steam, the potential is still catching up with the publicity.
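To illustrate the “consensus” idea, here is a toy sketch of majority voting across independent readers. The readers and scan data are invented for illustration; real consensus systems are far more sophisticated, but the intuition is the same: independent errors tend to cancel, which is one way consensus reduces false positives.

```python
# Toy illustration of consensus reads: three independent readers (models or
# radiologists) each flag a scan, and a finding is reported only when a
# majority agree. All data here is made up for illustration.

def majority_vote(reads: list) -> bool:
    """Report a finding only if more than half the readers flag it."""
    return sum(reads) > len(reads) / 2

# Four scans, each read by three readers (True = "suspicious finding").
scans = [
    [True, True, True],     # all agree: reported
    [True, False, False],   # one stray flag: suppressed
    [False, True, False],   # another stray flag: suppressed
    [False, False, False],  # all clear: not reported
]

results = [majority_vote(reads) for reads in scans]
print(results)  # [True, False, False, False]
```

In this toy example the two lone false flags are suppressed by the vote, while the unanimous finding still gets reported.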

Even if an AI company has stellar computing ability, buyers should question whether it shares their desired outcomes. Is the company dedicated to protecting your patients and producing better outcomes, or simply to making as much profit as possible? Budgets for human FTEs have been replaced by AI computing costs, in some instances at the expense of patient and data security. When I asked CIOs and smaller companies about their experiences, many were reluctant to criticize a company they had a non-disclosure agreement with.

Learning From the IBM Watson and MD Anderson Breakup

During HIMSS week, the announcement that the MD Anderson and IBM Watson dance party was put on hold was called a setback for AI in medicine by Forbes columnist Matthew Herper. In addition, a scathing report detailing the procurement process, written by the University of Texas System Administration Audit System, reads more like a contest for the highest consulting fees. This suggests to me that perhaps one of the biggest threats to patient data security when it comes to AI is a corporation’s need to profit from the data.

Moving on, reports of the MD Anderson breakup also mention mismanagement, including a failure to integrate data from the hospital’s Epic migration. Epic is interoperable with Watson, but in this case integration of the new data fell within PricewaterhouseCoopers’ scope of work. If poor implementation stopped the project, should a technology partner be punished? Here is an excerpt from the IBM statement on the failed partnership:

 “The recent report regarding this relationship, published by the University of Texas System Administration (“Special Review of Procurement Procedures Related to the M.D. Anderson Cancer Center Oncology Expert Advisor Project”), assessed procurement practices. The report did not assess the value or functionality of the OEA system. As stated in the report’s executive summary, “results stated herein are based on documented procurement activities and recollections by staff, and should not be interpreted as an opinion on the scientific basis or functional capabilities of the system in its current state.”

With non-disclosure agreements and ongoing lawsuits in place, it’s unclear whether this recent example will, and should, impact future decisions about AI healthcare partners. With multiple companies and interests represented, no one wants to be the fall guy when a project fails or has ethical breaches of trust. The consulting firm PricewaterhouseCoopers owned many of the portions of the project that failed, as well as many of the questionable procurement portions.

I spoke with Christine Douglas, part of IBM Watson’s communications team, and her comments about the early adoption of AI were interesting. She said, “You have to train the system. There’s a very big difference between the Watson that’s available commercially today and what was available with MD Anderson in 2012.” Of course, that goes for any machine learning solution, large or small: the longer the models have to ‘learn,’ the more accurate the outcomes should be.
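As a back-of-the-envelope illustration of why more training data should mean more accurate outcomes (this is generic statistics, not IBM’s actual training curve): the uncertainty of an estimate learned from n independent examples typically shrinks like 1/sqrt(n), so each tenfold increase in data cuts the typical error by about two-thirds.

```python
import math

# Rough statistical intuition behind "the longer the models have to learn,
# the more accurate the outcome should be": estimation error from n
# independent examples shrinks roughly like 1 / sqrt(n).
for n in [100, 1_000, 10_000, 100_000]:
    stderr = 1 / math.sqrt(n)
    print(f"{n:>7} training examples -> typical estimation error ~ {stderr:.3f}")
```

The flip side, relevant to the 2012-era Watson comparison, is diminishing returns: going from very little data to a lot helps enormously, while each further increment helps less.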

Large project successes and potential project failures have shown that not all AI is created equal, and not every business aspect of a partnership is dedicated to publicly shared goals. I’ve seen similar proposals from big data computing companies inviting research centers to pay for use of AI computing while also allowing the computing partner to lease the patient data to other parties for things like clinical trials. How’s that for patient privacy! For the same cost, that research center could put an entire team of developers through graduate school at Stanford or MIT. By the way, I’m completely available for that team! I would love to study coding more than I do now.

Finding a Trusted Partner

So what can healthcare organizations and AI partners learn from this experience? They should ask themselves what their data is being used for. Look at the complaint in the MD Anderson report stating that procurement was questionable. While competitive bidding and outside consulting can help, in this case they appear to have crippled the project. The layers of business fees, and how they were paid, kept the project from moving forward.

Profiting from patient data is the part of AI no one seems willing to discuss. Maybe an AI system is being used to determine how high fees need to be to obtain board approval for hospital networks.

Healthcare organizations need to ask the tough questions before selecting any AI solution. Building a human network of trusted experts with no financial stake, and speaking with competitors about AI proposals, is important for CMIOs, CIOs and healthcare security professionals, as is ongoing personal learning. Competitive analysis of industry partners, and even coding classes, have become a necessary part of healthcare professionals’ work. Trust is imperative and will have a direct impact on patient outcomes and healthcare organization costs. Meetups like the networking event at HIMSS allow professionals to expand their community and add more data points, gathered through real human interaction, to their evaluation of AI solutions for healthcare. Nardo Manaloto discussed the meetup and how the group could move forward on LinkedIn, where you can join the conversation.

Not everyone in artificial intelligence and healthcare is able to evaluate the relative intelligence and effectiveness of machine learning. If your organization is struggling, find someone who can help, but be cognizant of the value of the consulting fees they’ll charge along the way.

Back to the dancing. Artificial does not equal high intelligence. Not everyone involved in our movement activity realized it was actually increasing our cognitive ability. Even those who quit, like my partner did, may have learned to dance just a little bit better.

 

Resources

California Department of Education. 2002. Physical fitness testing and SAT9. Retrieved May 20, 2003, from www.cde.ca.gov/statetests/pe/pe.html.

Carter, A. 1998. Mapping the mind, Berkeley: University of California Press.

Czerner, T. B. 2001. What makes you tick: The brain in plain English, New York: John Wiley.

Dennison, P. E. and Dennison, G. E. 1998. Brain gym, Ventura, CA: Edu-Kinesthetics.

Dienstbier, R. 1989. Periodic adrenalin arousal boosts health, coping. New Sense Bulletin 14: 9A.

Dwyer, T., Sallis, J. F., Blizzard, L., Lazarus, R. and Dean, K. 2001. Relation of academic performance to physical activity and fitness in children. Pediatric Exercise Science, 13: 225–237.

Gavin, J. 1992. The exercise habit, Champaign, IL: Human Kinetics.

Hannaford, C. 1995. Smart moves: Why learning is not all in your head, Arlington, VA: Great Ocean.

Howard, P. J. 2000. The owner’s manual for the brain, Austin, TX: Bard.

Jarvik, E. 1998. Young and sleepless. Deseret News, July 27: C1

Jensen, E. 1998. Teaching with the brain in mind, Alexandria, VA: Association for Supervision and Curriculum Development.

Jensen, E. 2000a. Brain-based learning, San Diego: The Brain Store.

Reilly, E., Buskist, C., & Gross, M. K. (2012). Movement in the Classroom: Boosting Brain Power, Fighting Obesity. Kappa Delta Pi Record, 48(2), 62-66. doi:10.1080/00228958.2012.680365.

The Healthcare AI Future, From Google’s DeepMind

Posted on February 22, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

While much of its promise is still emerging, it’s hard to argue that AI has arrived in the health IT world. As I’ve written in a previous article, AI can already be used to mine EMR data in a sophisticated way, at least if you understand its limitations. It also seems poised to help providers predict the incidence and progress of diseases like congestive heart failure. And of course, there are scores of companies working on other AI-based healthcare projects. It’s all heady stuff.

Given AI’s potential, I was excited – though not surprised – to see that world-spanning Google has a dog in this fight. Google, which acquired British AI firm DeepMind Technologies a few years ago, is working on its own AI-based healthcare solutions. And while there’s no assurance that DeepMind knows things that its competitors don’t, its status as part of the world’s biggest data collector certainly comes with some advantages.

According to New Scientist, DeepMind has begun working with the Royal Free London NHS Foundation Trust, which oversees three hospitals. DeepMind has announced a five-year agreement under which the trust will give it access to patient data. The Google-owned tech firm is using that data to develop and roll out its healthcare app, called Streams.

Streams is designed to push alerts about a patient’s condition, in the form of a news-style notification, to the cellphone of the doctor or nurse working with them. At the outset, Streams will be used to find patients at risk of kidney problems, but over the term of the five-year agreement, the developers are likely to add other functions to the app, such as patient care coordination and detection of blood poisoning.

Streams will deliver its news to iPhones via push notifications, reminders or alerts. At present, given its focus on acute kidney injury, it will focus on processing information from key metrics like blood tests, patient observations and histories, then shoot a notice about any anomalies it finds to a clinician.
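As a rough illustration of the kind of rule-based check such an app might run, here is a toy sketch loosely modeled on creatinine-ratio staging for acute kidney injury (comparing a current blood test result against the patient’s baseline). The function, thresholds and values are illustrative assumptions, not the actual Streams logic:

```python
# Toy sketch of an anomaly check a kidney-injury alerting app might apply:
# compare the current creatinine result to the patient's baseline and flag
# a rise past staged thresholds. Illustrative only -- not Streams' real logic.

def aki_alert(baseline_creatinine: float, current_creatinine: float):
    """Return an alert string when creatinine has risen enough over baseline."""
    ratio = current_creatinine / baseline_creatinine
    if ratio >= 3.0:
        return "AKI stage 3 - alert clinician urgently"
    if ratio >= 2.0:
        return "AKI stage 2 - alert clinician"
    if ratio >= 1.5:
        return "AKI stage 1 - flag for review"
    return None  # no anomaly, no notification

print(aki_alert(70.0, 80.0))   # None: within normal variation
print(aki_alert(70.0, 120.0))  # stage 1 (ratio ~1.7)
print(aki_alert(70.0, 230.0))  # stage 3 (ratio ~3.3)
```

The appeal of pushing only the non-None results to a clinician’s phone is obvious: the stream of routine lab values stays quiet, and attention goes to the anomalies.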

This is all part of an ongoing success story for DeepMind, which made quite a splash in 2016. For example, last year its AlphaGo program actually beat the world champion at Go, a 2,500-year-old strategy game invented in China which is still played today. DeepMind also achieved what it terms “the world’s most life-like speech synthesis” by creating raw waveforms. And that’s just a couple of examples of its prowess.

Oh, and did I mention – in an achievement that puts it in the “super-smart kid you love to hate” category – that DeepMind has seen three papers appear in the prestigious journal Nature in less than two years? It’s nothing you wouldn’t expect from the brilliant minds at Google, which can afford the world’s biggest talents. But it’s still a bit intimidating.

In any event, if you haven’t heard of the company yet (and I admit I hadn’t) I’m confident you will soon. While the DeepMind team isn’t the only group of geniuses working on AI in healthcare, it can’t help but benefit immensely from being part of Google, which has not only unimaginable data sources but world-beating computing power at hand. If it can be done, they’re going to do it.