
AI Making Doctors Better Is the Right Approach

Posted on September 6, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

This summer, Bertalan Meskó, MD, PhD, posted 10 ways that AI (artificial intelligence) could make him a better doctor. Here are his 10 ways:

1) Eradicate waiting time
2) Prioritize my emails
3) Find me the information I need
4) Keep me up-to-date
5) Work when I don’t
6) Help me make hard decisions rational
7) Help patients with urgent matters reach me
8) Help me improve over time
9) Help me collaborate more
10) Do administrative work

This is a great list of ways that AI can make doctors better. No doubt there are even more ways that we’ll discover as AI continues to improve. However, I love this list because it looks at AI from the appropriate point of reference. AI is going to be something that improves the doctor, not replaces the doctor.

I know that many people have talked about how technology is going to replace the doctor, most famously Vinod Khosla. However, you have to remember that Vinod is an investor, so he needs to drum up companies with ambitious visions. I believe his statement was as much about finding companies that will push the bounds of healthcare as it was a prediction for the future of healthcare. Either way, it no doubt created a lot of fear for doctors.

The reality is that some aspects of what a doctor does will be replaced by technology, but as the list above illustrates, that can be a very good thing for doctors.

AI is coming to healthcare. In some ways, it's already here. However, the AI that's coming today isn't about replacing the doctor; it's about making the doctor better. Honestly, any doctor who can't embrace this idea is a doctor who shouldn't have a medical license.

Should doctors be cautious about how quickly they adopt the technology, and should they take the time to make sure that the AI won't have adverse impacts on their patients? Absolutely. However, there's a tipping point past which not using AI will be much more damaging to patients than the risks that are likely to make headlines and scare many. Doctors need to be driven more by what's best for their patients than by fear-inducing headlines.

AI will make doctors' lives better in a wide variety of ways. It won't replace the doctor but will enhance the doctor. That's exciting!

Artificial Intelligence in Healthcare: Medical Ethics and the Machine Revolution

Posted on July 25, 2017 | Written By

Healthcare as a Human Right. Physician Suicide Loss Survivor. Janae writes about Artificial Intelligence, Virtual Reality, Data Analytics, Engagement and Investing in Healthcare. twitter: @coherencemed

Artificial Intelligence could be the death of us all. I heard that roughly a hundred times last week through shares on Twitter. While this fear may be premature, the question of teaching machines ethics and the value of human life is relevant to machine learning, especially in the realm of healthcare. Can we teach a machine ethical bounds? As Elon Musk calls for laws and boundaries, I found myself wondering what semantics and mathematics would be needed to frame that question. I had a Facebook friend who told me he knew a lot about artificial intelligence and wanted to warn me about the coming robot revolution.

He did not, in fact, know a lot about artificial intelligence. He couldn't code, nor did he have any knowledge of mathematical theory, but he was familiar with the worst-case scenario of the robots we create finding humanity superfluous and eliminating us. I was so underwhelmed that I messaged my friend who builds robots for Boston Dynamics and asked him about his latest project.

I was disappointed with that Facebook interaction. The disappointment was offset last week when the API changed and a happy Facebook message asked me if I wanted to revisit an ad I had previously liked. I purposefully like ads if I like the company. Good old Facebook predictive ads. Sometimes I also comment on a picture or tag someone in an ad to see if I can change which advertisements appear in my feed.

One of the happiest times with this feature was finding great new running socks. I commented on a friend's picture that I liked her running socks, and within an hour I saw my first ad for those very same socks. While I'm not claiming to have seen the predictive and advertising algorithms behind Facebook advertising, machine learning is behind that ad. Photo recognition through image processing can identify the socks my friend Ilana was wearing while running a half marathon. Simple keyword scans can read my positive comments about the socks, which tells the system what I like. That can be paired with photos from advertisers, and within an hour of "liking" those socks, they seamlessly show up in my feed as a buying option. Are there ethical considerations in knowing exactly what my buying power, buying patterns, and personal history are? Yes. Similarly, there will be ethical considerations when insurance companies can predict exactly which patients will and won't be able to pay for their healthcare. While I appreciate great running socks, I have mixed feelings about anyone assessing my ability to pay for the best medical care.
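
I haven't seen Facebook's algorithms, but the general shape of the pipeline is easy to sketch. In the purely hypothetical Python below, classify_image and is_positive are stand-ins for real image-recognition and sentiment models:

```python
# Hypothetical sketch of a like-to-ad pipeline; not Facebook's actual code.

POSITIVE_WORDS = {"like", "love", "great", "awesome", "want"}

def classify_image(photo):
    """Stand-in for a trained image classifier that tags objects in a photo."""
    return photo["tags"]  # e.g. {"running socks"}

def is_positive(comment):
    """Crude keyword scan standing in for a real sentiment model."""
    return any(word in POSITIVE_WORDS for word in comment.lower().split())

def match_ads(photo, comment, ad_inventory):
    """Pair products recognized in a photo with a user's positive comment."""
    if not is_positive(comment):
        return []
    products = classify_image(photo)
    return [ad for ad in ad_inventory if ad["product"] in products]

photo = {"tags": {"running socks"}}
ads = [{"product": "running socks", "brand": "ExampleSockCo"}]  # invented ad inventory
print(match_ads(photo, "I love those running socks!", ads))
```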

Can a machine be taught to value the best medical care and ethics? We hear plenty of debate about whether machines can be taught not to kill us. Teaching a machine ethics will be complicated, even as machines show how directly poor nutrition changes how long patients live. Some claim these are dangerous things to create; others say the difference will be human intuition. Can human intuition be replicated, and what application would that have for medicine? I have always considered intuition to be connections our brains recognize that we are not directly aware of, so a machine should be able to learn intuition through deep learning networks.

Creating "laws" or "rules" for ethics in artificial intelligence, as Elon Musk calls for, is difficult because ethical bounds are hard to teach machines. In a recent interview, Musk claimed that artificial intelligence is the biggest risk we face as a civilization. Ethical rules and bounds are difficult even for humanity. Once, when we were looking at data patterns, trust patterns, and disease prediction, someone turned to me and said, "But insurance companies will use this information to deny people coverage. If they could read your genes, people would die." One weakness in teaching a machine ethics, or adding outer bounds, is that trained learning systems can get very good in a narrow domain, but they don't transfer learning across domains. Reasoning by analogy, for example, is something machines are terrible at. I spoke with Adam Pease about how ontology can increase the benefits of machine learning in healthcare outcomes. An ontology creates a way to process meaning that is more robust from a semantic point of view. He shared his open-source ontology projects and suggested that we should be talking with philosophers and computer science experts about ontology and meaning.
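
To make the contrast concrete, here is my own toy illustration (not drawn from Pease's projects, which are far richer) of what an ontology buys you. A bag-of-words matcher sees no connection between "metformin" and a rule about medications, but even a tiny is-a hierarchy lets a program infer one:

```python
# Toy is-a hierarchy: my own invented illustration, far simpler than a real ontology.
IS_A = {
    "metformin": "antidiabetic_drug",
    "antidiabetic_drug": "medication",
    "lisinopril": "antihypertensive_drug",
    "antihypertensive_drug": "medication",
    "medication": "substance",
}

def ancestors(concept):
    """Walk up the is-a links to collect everything a concept 'is'."""
    result = []
    while concept in IS_A:
        concept = IS_A[concept]
        result.append(concept)
    return result

# A rule stated once at the "medication" level now applies to terms the
# rule's author never saw -- a small step toward reasoning by analogy.
def rule_applies(term, rule_domain="medication"):
    return rule_domain == term or rule_domain in ancestors(term)

print(rule_applies("metformin"))  # True: inferred through the hierarchy
print(rule_applies("running"))    # False: not a medication
```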

Can a machine learn empathy? Will a naturally learning and evolving system teach itself different bounds and hold life sacred, and how will it interpret every doctor's charge to "do no harm"? The medical community should come together to agree on ethical bounds and collaborate with computer scientists to explore both the capacity to teach those bounds and the possible outliers in motivation.

Most natural language processing today powers applications that are pretty shallow in terms of what the machine understands. Machines are good at matching patterns: if your pattern is a bag of words, it can be connected with another bag of words across a large set of documents. Many companies have done extensive work training systems that will work with patients, teaching them what words mean and what patterns are common in patient care. Health Navigator, for example, has done extensive work to form the clinical bounds for a telemedicine system. When patients ask about their symptoms, they get clinically relevant information paired with those symptoms, even if they use non-medical language to describe their chief complaint. Clinical bounds create a very important framework for an artificial intelligence system to process large amounts of inbound data and help triage patients to appropriate care.
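
To make the bag-of-words idea concrete, here's a minimal sketch of mapping a patient's lay-language complaint to clinical concepts. The lexicon and scoring below are invented for illustration; this is not how Health Navigator actually works:

```python
# Illustrative only: a tiny lay-language-to-clinical-concept matcher.
# Real clinical NLP systems use far richer models and vocabularies.

CONCEPT_LEXICON = {
    "chest_pain":     {"chest", "hurts", "pain", "pressure", "tight"},
    "dyspnea":        {"breathe", "breath", "short", "winded"},
    "abdominal_pain": {"stomach", "belly", "ache", "cramps"},
}

def bag_of_words(text):
    """Lowercase, strip basic punctuation, and split into a set of words."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def match_concepts(complaint):
    """Score each clinical concept by word overlap with the complaint."""
    words = bag_of_words(complaint)
    scores = {c: len(words & lex) for c, lex in CONCEPT_LEXICON.items()}
    return max(scores, key=scores.get), scores

concept, scores = match_concepts("My chest hurts and feels tight")
print(concept, scores)  # chest_pain scores highest for this complaint
```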

With the help of Symplur Signals, I looked at ethics and artificial intelligence in healthcare and who was having conversations online in those areas. Wen Dombrowski, MD, MPH led much of the discussion. Appropriately, part of what Catalaize Health does is artificial intelligence due diligence for healthcare organizations. Joe Babaian, who leads the #HCLDR discussion, was also a significant contributor.

Symplur Signals looked at the stakeholders for Artificial Intelligence and Ethics in Healthcare

A "smart" system needs to be able to make the same kinds of inferences a human can. Teaching inference and standards within semantics are the first steps toward teaching a machine "intuition" or "ethics." Predictive pattern recognition appears more developed than the ethical and meaning boundaries for sensitive patient information. We can teach a system to recognize an increased risk of suicidal behavior from rushed words, dramatically altered behavior, or higher-pitched speech, but is it ethical to monitor at-risk patients through their phones? I attended a mental health provider meeting about how that knowledge would affect patients with paranoia. What are the ethical lines between protection and control? Can the meaning of those lines be taught to a machine?
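
As a purely illustrative sketch of what "teaching" that recognition might look like technically, the features below mirror the ones mentioned above (speech rate and pitch relative to a patient's baseline), but the weights and threshold are invented and nothing here is clinically validated:

```python
# Invented illustration of feature-based risk flagging; not a clinical tool.
# The weights and threshold are made up for demonstration purposes only.

def risk_score(words_per_minute, pitch_hz, baseline_wpm, baseline_pitch_hz):
    """Score how far current speech deviates upward from the patient's baseline."""
    rate_shift = (words_per_minute - baseline_wpm) / baseline_wpm
    pitch_shift = (pitch_hz - baseline_pitch_hz) / baseline_pitch_hz
    return 0.6 * max(rate_shift, 0) + 0.4 * max(pitch_shift, 0)

def flag_for_review(score, threshold=0.25):
    # The ethical question isn't the math: it's who consented to this
    # monitoring and what happens after a flag is raised.
    return score >= threshold

score = risk_score(210, 260, baseline_wpm=150, baseline_pitch_hz=220)
print(score, flag_for_review(score))
```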

I'm looking forward to seeing what healthcare providers decide those bounds should be, and how they train machines on those ontologies.

Google’s DeepMind Runs Afoul Of UK Regulators Over Patient Data Access

Posted on July 20, 2017 | Written By

Anne Zieger is veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Back in February, I recounted the tale of DeepMind, a standout AI startup acquired by Google a few years ago. In the story, I noted that DeepMind had announced that it would be working with the Royal Free London NHS Foundation Trust, which oversees three hospitals, to test out its healthcare app.

DeepMind’s healthcare app, Streams, is designed to help providers kick out patient status updates to physicians and nurses working with them. Under the terms of the deal, which was to span five years, DeepMind was supposed to gain access to 1.6 million patient records managed by the hospitals.

Now, the agreement seems to have collapsed under regulatory scrutiny. The UK’s data protection watchdog has ruled that DeepMind’s deal with the Trust “failed to comply with data protection law,” according to a story in Business Insider. The watchdog, known as the Information Commissioner’s Office (ICO), has spent a year investigating the deal, BI reports.

As it turns out, the agreement empowered the Trust hospitals to share the data without patients' prior knowledge, something that presumably wouldn't fly in the U.S. either. The data, intended for use in developing the Streams app's kidney monitoring technology, includes info on whether people are HIV-positive, along with details of drug overdoses and abortions.

In its defense, DeepMind and the Royal Free Trust argued that patients had provided "implied consent" for such data sharing, given that the app was delivering "direct care" to patients using it. (Nice try. Got any other bridges you wanna sell?) Not surprisingly, that didn't satisfy the ICO, which found several other shortcomings in how the data was handled as well.

While the ICO has concluded that the DeepMind/Royal Free Trust deal was illegal, it doesn't plan to sanction either party, despite having the power to hand out fines of up to £500,000, BI said. But DeepMind, which set up its own independent review panel last year to oversee its data sharing agreements, privacy and security measures, and product roadmaps, is taking a closer look at this deal. Way to self-police, guys! (Or maybe not.)

Not to be provincial, but what worries me here is less the politics of UK patient protection laws and more the potential for Google subsidiaries to engage in other questionable data sharing activities. DeepMind has always said that it does not share patient data with its corporate parent, and while this might be true now, Google could do incalculable harm to patient privacy if it doesn't maintain this firewall.

Hey, just consider that even for an entity the size of Google, healthcare data is an incredibly valuable asset. Reportedly, even street-level data thieves pay 10x as much for healthcare data as they do for, say, credit card numbers. It's hard to even imagine what an entity the size of Google could do with such data if it were crunched in incredibly advanced ways. Let's just say I don't want to find out.

Unfortunately, as far as I know, U.S. law hasn't caught up with the idea of crime-by-analytics, which could be an issue even if an entity has legal possession of healthcare data. But I hope it does soon. The amount of harm this kind of data manipulation could do is immense.

Artificial Intelligence in Healthcare Series: Women in Technology

Posted on June 29, 2017 | Written By

Healthcare as a Human Right. Physician Suicide Loss Survivor. Janae writes about Artificial Intelligence, Virtual Reality, Data Analytics, Engagement and Investing in Healthcare. twitter: @coherencemed

Meeting with Lauren Hayes, the model behind Amelia, an AI cognitive agent.

What I Learned from Lauren Hayes: The Face of Artificial Intelligence

This month I was invited to a workforce summit with companies interested in Artificial Intelligence (AI) cognitive agents in New York City. I had the opportunity to hear from great thinkers about AI, including research about workforce transformation from the McKinsey Global Institute. I also met Lauren Hayes, the face behind Amelia, a cognitive agent from IPsoft specializing in customer experience.

Michael Chui – Partner at the McKinsey Global Institute.

One of the most impactful things for me personally was Lauren's perspective on women in technology. Lauren has worked as a partner at Jacaranda Ventures focusing on early-stage startups, served as an executive and communications expert, and modeled for Wilhelmina Models. As a veteran of the technology space, Lauren commented on male-dominated events: "One of my past jobs as a Director of Communications & PR included hosting events that typically ended up being 90% male. The audience was comprised of our investors, partners, and C-level business development folks. It's always sad when there's no line for the women's restroom." Her grace in dealing with those dynamics taught me two valuable lessons: be fiercely positive, and seek out your people.

Today Lauren works in technology as a founder at Ritual and the face of a cognitive agent that interfaces with customers across several industries (patients, in the case of a healthcare system). What does current customer experience look like? In my experience, not great. There is a definite need to improve the experience for patients online, and many companies and healthcare systems have solutions that help improve outcomes and cost. My personal strategy? Get on the phone and press as many buttons as I can while hoping a real human comes on the line, since I don't remember my insurance ID number. Or my account number with the power company.

Lauren is part of the future of healthcare as AI automates repetitive tasks. A little background on the potential and current benefits starts with the patient as a consumer. Many healthcare companies use an automated system when a patient calls with medical questions or personal patient information. Patients may want a copy of their records and need identity confirmation, or need to know whether they should make an appointment with a doctor or go to a local emergency room. These questions can be answered through digital tools and phones.

Systems range in sophistication from a series of recordings, to a chatbot, to an artificial intelligence cognitive agent, to a human with highly specialized training and clinical knowledge. Not to brag, but at one of my jobs the company asked me to be the voice of their system, so I can relate to being the face of AI. A cognitive agent can use artificial intelligence technology and interact with a clinical framework to help patients get great care. It can be paired with the clinical bounds of a program like Health Navigator and use natural language processing to help patients get appropriate support quickly, in the context of their personal history and insurance or healthcare information. Adoption and development of these technologies will have a huge positive impact on patient outcomes and security.

I interacted with Lauren on Twitter before the conference to discuss working as a woman in tech. The thing that struck me on meeting her was her grace. Some people have powerful positive energy, and I wonder how we can teach that type of interaction to a machine learning system. We can teach a system to have an asymmetrical appearance like humans have, and artificial intelligence engines are learning to identify customers by voice and appearance. The human experience in medicine is also about presence, even as we connect digitally. I asked Lauren what she thought about working with Amelia and about being a woman in technology. Mainly I wanted to understand how she has established expectations and boundaries.

Janae: What is it like working in technology as a woman?

Lauren: This is not specific to one of the roles I've held particularly, whether at IPsoft or any of my other jobs. However, I think in some of the male-dominated industries, there's a feeling as though you have to prove yourself and get over the "female hump" before a conversation with someone who expects to be talking to another man. I've had past jobs that bred a bit of a "bro" culture, where there are no women in high-level positions, and I think that really trickles down and impacts the rest of the culture. It goes without saying that I've also overheard and been part of situations where sexist comments were made, or where visitors to the company assumed the first girl they saw was an assistant/office manager, etc.

Janae: What do you wish men understood about being a woman in tech?

Lauren: “That the same way racism is still rampant in the US, the same goes for sexism. Even when there’s not overt instances or actions that are clearly offensive, there are small, every day micro instances of things that are said under the breath or actions that are hard to prove clear wrongdoing that still add up and take a toll over a period of time.”

Janae: What do you love about working with Amelia?

Lauren:  “I think Amelia can potentially have such a positive impact on the workforce and ultimately world. After all, to date, she’s the most sophisticated AI in history. Throughout history we’ve changed our jobs to leverage technology. AI is going to do that too. I heard a lot of the execs presenting at the conference talking about how they are changing the structure of their teams in order to have Amelia take on a lot of the high volume repetitive queries and let their staff evolve to take on more exception cases that help them have more interesting conversations with customers. I think most of us would prefer to spend our time on tasks we find challenging and rewarding and less on repetitive chores. That idea of freeing up our day to spend more time doing things we love really appeals to me.”

Overcoming the general fatigue of interactions that question credibility based on gender can be hard to grasp. Repetitive music and actions that are harmless in themselves have been weaponized into torture, and constant references to appearance can be similarly difficult: for women, the sum is greater than its parts. Talking to Lauren about women in technology was positive. For providers, the analogous result can be burnout or a lack of empathy for patient requests.

Artificial intelligence will restructure workforce roles and take away some of the stress of repetitive tasks and record-keeping. Building positive interactions while filtering out the repetitive actions that lead to burnout can provide better support, freeing physician time for helping and connecting on a personal level. I was grateful for the time I had discussing women in technology and the future. Establishing boundaries in workforce interactions is like structuring the bounds of a healthcare customer service system: creating purposeful positive interactions improves the system. Be fiercely positive to other women in technology.

Could AI And Healthcare Chatbots Help Clinicians Communicate With Patients?

Posted on April 25, 2017 | Written By

Anne Zieger is veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

AI-driven chatbots are becoming increasingly popular for a number of reasons, including improving technology and a need to automate some routine processes. (I’d also argue that these models are emerging because millennials and Gen Z-ers have spent their lives immersed in online-based social environments, and are far less likely to be afraid of or uncomfortable with such things.)

Given the maturation of the technology, I’m not surprised to see a number of AI-driven chatbots for healthcare emerging.  Some of these merely capture symptoms, such as the diabetes, CHF and mental health monitoring options by Sense.ly.

But other AI-based chatbots attempt to go much further. One emerging company, X2ai, is rolling out a psychology-oriented chatbot offering mental health counseling. Another, UK-based startup Babylon Health, offers a text-only mobile app which provides medical evaluations and screenings. The app is being pilot-tested with the National Health Service, where early reports say that it's diagnosing and triaging patients successfully.

One area I haven’t seen explored, though, is using a chatbot to help doctors handle routine communications with patients. Such an app could not only triage patients, as with the NHS example, but also respond to routine email messages.

Scheduling and administration

The reality is that while doctors and nurses are used to screening patients via telephone, they’re afraid of being swamped by tons of electronic patient messages. Many feel that if they agree to respond to patient email messages via a patient portal, they’ll spend too much time doing so. With most already time-starved, it’s not surprising that they’re worried about this.

But a combination of AI and healthcare chatbot technology could reduce their time required to engage patients. In fact, the right solution could address a few medical practice workflow issues at one time.

First, it could triage and route patient concerns to doctors and advanced practice nurses, something that’s done now by unqualified clerks or extremely busy nurses. For example, the patient would be able to tell the chatbot why they wanted to schedule a visit, with the chatbot teasing out some nuances in their situation. Then, the chatbot could kick the information over to the patient’s provider, who could, with a few clicks, forward a request to schedule either an urgent or standard consult.

Perhaps just as important, the AI technology could sit atop messages sent between provider and patient. If the patient message asked a routine question – such as when their test results would be ready – the system could bounce back a templated message stating, for instance, that test results typically take five business days to post on the patient portal. It could also send templated responses to requests for medical records, questions about doctor availability or types of insurance accepted and so on.
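
As a rough sketch of this idea, the routing layer might look something like the Python below. The categories, keywords, and canned replies are all invented for illustration:

```python
# Minimal sketch of the message-routing idea above. The categories,
# keywords, and templated replies are invented, not from a real product.

TEMPLATES = {
    "test_results": ("Test results typically post to the patient portal "
                     "within five business days."),
    "records":      "To request medical records, use the Records form on the portal.",
    "insurance":    "We accept most major plans; call the front desk to confirm yours.",
}

KEYWORDS = {
    "test_results": {"results", "lab", "labs"},
    "records":      {"records", "copy", "chart"},
    "insurance":    {"insurance", "coverage", "accept"},
}

def route_message(text):
    """Auto-answer routine questions; escalate everything else to a clinician."""
    words = set(text.lower().split())
    for category, keys in KEYWORDS.items():
        if words & keys:
            return ("auto_reply", TEMPLATES[category])
    return ("escalate", "Forwarded to your care team for review.")

print(route_message("When will my lab results be ready?"))   # auto_reply
print(route_message("I have chest pain when I climb stairs")) # escalate
```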

Diagnosis and triage

Meanwhile, if the AI concludes that the patient has a health concern to address, it could send back a link to the chatbot, which would ask pertinent questions and send the responses to the treating clinician. At that point, if things look questionable, the doctor might choose to intervene with their own email message or phone call.

Of course, providers will probably be worried about relying on a chatbot for patient triage, especially the legal consequences if the bot misses something important. But over time, if health chatbot pilots like the UK example offer good results, they may eventually be ready to give this approach a shot.

Also, patients may be uncertain about working with a chatbot at first. But if physicians stress that they're not trying to put them off, but rather to save time so they can take their time when patients need them, I think they'll be satisfied.

I admit that under ideal circumstances, clinicians would have more time to communicate with patients directly. But the truth is, they simply don’t, and pressuring them to take phone calls or respond to every online message from patients won’t work.

Besides, as providers work to prepare for value-based care, they’ll need not only physician extenders, but physician extender-extenders like chatbots to engage patients and keep track of their needs. So let’s give them a shot.

Artificial Intelligence Can Improve Healthcare

Posted on July 20, 2016 | Written By

Anne Zieger is veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

In recent times, there has been a lot of discussion of artificial intelligence in public forums, some generated by thought leaders like Bill Gates and Stephen Hawking. Late last year Hawking actually argued that artificial intelligence “could spell the end of the human race.”

But most scientists and researchers don’t seem to be as worried as Gates and Hawking. They contend that while machines and software may do an increasingly better job of imitating human intelligence, there’s no foreseeable way in which they could become a self-conscious threat to humanity.

In fact, it seems far more likely that AI will work to serve human needs, including healthcare improvement. Here are five examples of how AI could help bring us smarter medicine (courtesy of Fast Company):

1. Diagnosing disease

Want to improve diagnostic accuracy? Companies like Enlitic may help. Enlitic is studying massive numbers of medical images to help radiologists pick up small details like tiny fractures and tumors.

2. Medication management

Here’s a twist on traditional med management strategies. The AiCure app is leveraging a smartphone webcam, in tandem with AI technology, to learn whether patients are adhering to their prescription regimen.

3. Virtual clinicians

Though it may sound daring, a few healthcare leaders are considering giving no-humans-involved health advice a try. Some are turning to startup Sense.ly, which offers a virtual nurse, Molly. The Sense.ly interface uses machine learning to help care for chronically-ill patients between doctor’s visits.

4. Drug creation

AI may soon speed up the development of pharmaceutical drugs. Vendors in this field include Atomwise, whose technology leverages supercomputers to dig up therapies from a database of molecular structures, and Berg Health, which studies data on why some people survive diseases.

5. Precision medicine

Working as part of a broader effort seeking targeted diagnoses and treatments for individuals, startup Deep Genomics is wrangling huge data sets of genetic information in an effort to find mutations and linkages to disease.

In addition to all of these clinically-oriented efforts, which seem quite promising in and of themselves, it seems clear that there are endless ways in which computing firepower, big data and AI could come together to help healthcare business operations.

Just to name the first applications that popped into my head, consider the impact AI could have on patient scheduling, particularly in high-volume hospital environments. What about using such technology to do a better job of predicting which approaches work best for collecting patient balances, and even to execute those efforts in a sophisticated way?

And of course, there are countless other ways in which AI could help providers leverage clinical data in real time. Sure, EMR vendors are already rolling out technology attempting to help hospitals target emergent conditions (such as sepsis), but what if AI logic could go beyond condition-specific modules to proactively predicting a much broader range of problems?

The truth is, I don't claim to have specific expertise in AI, so my guesses about which applications make sense are no better than any other observer's. On the other hand, if anyone reading this has cool stories to tell about what they're doing with AI technology, I'd love to hear them.