
AI Making Doctors Better Is the Right Approach

Posted on September 6, 2017 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network, which currently consists of 10 blogs containing over 8000 articles, with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs he can be found on Twitter (@techguy and @ehrandhit) and LinkedIn.

This summer Bertalan Meskó, MD, PhD posted 10 ways that AI (Artificial Intelligence) could make him a better doctor. Here is his list:

1) Eradicate waiting time
2) Prioritize my emails
3) Find me the information I need
4) Keep me up-to-date
5) Work when I don’t
6) Help me make hard decisions rational
7) Help patients with urgent matters reach me
8) Help me improve over time
9) Help me collaborate more
10) Do administrative work

This is a great list of ways that AI can make doctors better. No doubt there are even more ways that we’ll discover as AI continues to improve. However, I love this list because it looks at AI from the appropriate point of reference. AI is going to be something that improves the doctor, not replaces the doctor.

I know that many people have talked about how technology is going to replace the doctor. The most famous of these is Vinod Khosla. However, you have to remember that Vinod is an investor, so he needs to drum up companies with ambitious visions. I believe his statement was as much about finding companies that will push the bounds of healthcare as it was a prediction for the future of healthcare. However, it no doubt created a lot of fear for doctors.

The reality is that some aspects of what a doctor does will be replaced by technology, but as the list above illustrates, that can be a very good thing for doctors.

AI is coming to healthcare. In some ways, it's already here. However, the AI that's coming today isn't about replacing the doctor; it's about making the doctor better. Honestly, any doctor who can't embrace this idea is a doctor who shouldn't have a medical license.

Should doctors be cautious about how quickly they adopt the technology, and should they take the time to make sure that AI won't have adverse impacts on their patients? Absolutely. However, there's a tipping point where not using AI will be much more damaging to patients than the risks that are likely to make headlines and scare many. Doctors need to be driven more by what's best for their patients than by fear-inducing headlines.

AI will make doctors' lives better in a wide variety of ways. It won't replace the doctor but will enhance the doctor. That's exciting!

Artificial Intelligence in Healthcare: Medical Ethics and the Machine Revolution

Posted on July 25, 2017 | Written By

Healthcare as a Human Right. Physician Suicide Loss Survivor.
Janae writes about Artificial Intelligence, Virtual Reality, Data Analytics, Engagement and Investing in Healthcare.
Twitter: @coherencemed

Artificial Intelligence could be the death of us all. I heard that roughly a hundred times last week through shares on Twitter. While this theme may be premature, whether we can teach machines ethics and the value of human life is a relevant question for machine learning, especially in healthcare. Can we teach a machine ethical bounds? As Elon Musk calls for laws and boundaries, I wondered what semantics and mathematics would be needed to frame that question. I had a Facebook friend who told me he knew a lot about artificial intelligence and wanted to warn me about the coming robot revolution.

He did not, in fact, know a lot about artificial intelligence. He could not code, nor did he have any knowledge of mathematical theory, but he was familiar with the worst-case scenario: the robots we create finding humanity superfluous and eliminating us. I was so underwhelmed that I messaged my friend who builds robots for Boston Dynamics and asked him about his latest project.

I was disappointed with that Facebook interaction. The disappointment was offset last week when the API changed and a happy Facebook message asked if I wanted to revisit an ad I had previously liked. I purposefully like ads when I like the company. Good old Facebook predictive ads. Sometimes I also comment on a picture or tag someone in an ad to see if I can change which advertisements appear in my feed.

One of the happiest times with this feature was finding great new running socks. I commented on a friend's picture that I liked her running socks, and within an hour I saw my first ad for those very same socks. While I'm not claiming to have seen the predictive and advertising algorithms behind Facebook advertising, machine learning is behind that ad. Photo recognition through image processing can identify the socks my friend Ilana was wearing while running a half marathon. Simple keyword scans can read my positive comments about the socks, which tells the system what I like. Paired with photos from advertisers, those socks seamlessly showed up in my feed as a buying option within an hour of my "liking" them. Are there ethical considerations in a company knowing exactly what my buying power, buying patterns, and personal history are? Yes. Similarly, there will be ethical considerations when insurance companies can predict exactly which patients will and won't be able to pay for their healthcare. While I appreciate great running socks, I have mixed feelings about anyone assessing my ability to pay for the best medical care.
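For the curious, here is a minimal sketch in Python of the kind of pairing described above. Everything in it (the tags, the comment, the ad inventory) is invented for illustration; Facebook's actual systems are proprietary and far more sophisticated.

```python
# A toy sketch of the ad-matching logic described above: an image tagger
# has already labeled a photo, a keyword scan reads the comment, and the
# two signals are combined to pick an ad. All data here is made up.

PHOTO_TAGS = {"running socks", "half marathon"}  # output of image recognition
COMMENT = "I love those running socks, where did you get them?"

POSITIVE_WORDS = {"love", "like", "great", "awesome"}

AD_INVENTORY = {
    "running socks": "SpeedySocks 30% off",
    "trail shoes": "RidgeRunner shoes sale",
}

def comment_is_positive(comment: str) -> bool:
    """Crude keyword-based sentiment: any positive word counts."""
    words = set(comment.lower().replace(",", "").replace("?", "").split())
    return bool(words & POSITIVE_WORDS)

def pick_ad(photo_tags, comment):
    """Pair recognized products with a positive comment to choose an ad."""
    if not comment_is_positive(comment):
        return None
    for tag in photo_tags:
        if tag in AD_INVENTORY:
            return AD_INVENTORY[tag]
    return None

print(pick_ad(PHOTO_TAGS, COMMENT))  # -> "SpeedySocks 30% off"
```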

Can a machine be taught to value the best medical care and ethics? We hear plenty of debate about whether machines can be taught not to kill us. Teaching a machine ethics will be complicated, even as these systems show how poor nutrition directly changes how long patients live. Some claim these are dangerous things to create; others say the difference will be human intuition. Can human intuition be replicated, and what application will that have for medicine? I have always considered intuition to be connections our brain recognizes without our direct awareness, so a machine should be able to learn intuition through deep learning networks.

Creating "laws" or "rules" for ethics in artificial intelligence, as Elon Musk calls for, is difficult because ethical bounds are hard to teach machines. In a recent interview Musk claimed that artificial intelligence is the biggest risk that we face as a civilization. Ethical rules and bounds are difficult even for humanity. Once, when we were looking at data patterns, trust patterns, and disease prediction, someone turned to me and said, "But insurance companies will use this information to deny people coverage. If they could read your genes, people would die." In terms of teaching a machine ethics or adding outer bounds, one weakness is that trained learning systems can get very good in a narrow domain, but they don't transfer learning across domains; they can't reason by analogy, and machines are terrible at that. I spoke with Adam Pease about how ontology can increase the benefits of machine learning in healthcare outcomes. Ontology creates a way to process meaning that is more robust in a semantic view. He shared his open source ontology projects and suggested that we should be speaking with philosophers and computer science experts about ontology and meaning.

Can a machine learn empathy? Will a naturally learning and evolving system teach itself different bounds and hold life sacred, and how will it interpret the challenge of every doctor to "do no harm"? The medical community should come together to agree on ethical bounds and collaborate with computer scientists to assess both the capacity to teach those bounds and the possible outliers in motivation.

Most natural language processing today serves applications that are fairly shallow in terms of what the machine understands. Machines are good at matching patterns: if your pattern is a bag of words, it can be connected with another bag of words across a collection of documents. Many companies have done extensive work training systems that will work with patients, teaching them what words mean and what patterns are common in patient care. Health Navigator, for example, has done extensive work to form the clinical bounds for a telemedicine system. When patients ask about their symptoms, they get clinically relevant information paired with those symptoms, even if they use non-medical language to describe their chief complaint. Clinical bounds create a very important framework that lets an artificial intelligence system process large amounts of inbound data and help triage patients to appropriate care.
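To make the idea concrete, here is a toy sketch of lay-language symptom mapping and triage. The terms and triage rules below are invented for illustration; Health Navigator's actual clinical content is far richer and is not public.

```python
# A toy illustration of mapping lay language to clinical concepts and a
# triage level. Both dictionaries are fabricated examples, not real
# clinical guidance.

LAY_TO_CLINICAL = {
    "tummy ache": "abdominal pain",
    "can't breathe": "dyspnea",
    "chest hurts": "chest pain",
    "threw up": "vomiting",
}

TRIAGE_RULES = {
    "chest pain": "emergency department",
    "dyspnea": "emergency department",
    "abdominal pain": "same-day appointment",
    "vomiting": "nurse advice line",
}

def triage(chief_complaint: str) -> tuple[str, str]:
    """Normalize a patient's own words, then apply clinical bounds."""
    text = chief_complaint.lower()
    for lay, clinical in LAY_TO_CLINICAL.items():
        if lay in text:
            return clinical, TRIAGE_RULES[clinical]
    return "unmapped complaint", "route to human triage nurse"

print(triage("My chest hurts and I feel dizzy"))
# -> ('chest pain', 'emergency department')
```

Note the fallback: anything the system cannot confidently map goes to a human, which is exactly the kind of clinical bound the paragraph above describes.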

With the help of Symplur Signals, I looked at ethics and artificial intelligence in healthcare and who was having conversations online in those areas. Wen Dombrowski, MD, MPH led much of the discussion. Appropriately, part of what Catalaize Health does is artificial intelligence due diligence for healthcare organizations. Joe Babaian, who leads the #HCLDR discussion, was also a significant contributor.

Symplur Signals looked at the stakeholders for Artificial Intelligence and Ethics in Healthcare

A "smart" system needs to be able to make the same range of inferences a human can. Teaching inferences and standards within semantics are the first steps toward teaching a machine "intuition" or "ethics." Predictive pattern recognition appears more developed than the ethical and meaning boundaries around sensitive patient information. We can teach a system to recognize an increased risk of suicidal behavior from rushed words, dramatically altered behavior, or higher-pitched speech, but is it ethical to spy on at-risk patients through their phones? I attended a mental health provider meeting about how that knowledge would affect patients with paranoia. What are the ethical lines between protection and control? Can the meaning of those lines be taught to a machine?

I'm looking forward to seeing what healthcare providers decide those bounds should be, and how they teach machines those ontologies.

Artificial Intelligence in Healthcare Series: Women in Technology

Posted on June 29, 2017 | Written By

Healthcare as a Human Right. Physician Suicide Loss Survivor.
Janae writes about Artificial Intelligence, Virtual Reality, Data Analytics, Engagement and Investing in Healthcare.
Twitter: @coherencemed

Meeting with Lauren Hayes, the model behind Amelia, an AI cognitive agent.

What I Learned from Lauren Hayes: the Face of Artificial Intelligence.

This month I was invited to a workforce summit with companies interested in Artificial Intelligence (AI) cognitive agents in New York City. I had the opportunity to hear from great thinkers about AI, including research about workforce transformation from the McKinsey Global Institute. I also met Lauren Hayes, the face behind Amelia, a cognitive agent for IPsoft specializing in customer experience.

Michael Chui – Partner at the McKinsey Global Institute.

One of the most impactful things for me personally was Lauren's perspective on women in technology. Lauren has worked as a partner at Jacaranda Ventures focusing on early-stage startups, served as an executive and communications expert, and modeled for Wilhelmina Models. As a veteran of the technology space, Lauren commented on male-dominated events: "One of my past jobs as a Director of Communications & PR included hosting events that typically ended up being 90% male. The audience was comprised of our investors, partners, and C-level business development folks. It's always sad when there's no line for the women's restroom." Her grace in dealing with those dynamics taught me two valuable lessons: be fiercely positive, and seek out your people.

Today Lauren works in technology as a founder at Ritual and the face of a cognitive agent that interfaces with customers across several industries (patients, in the case of a healthcare system). What does current customer experience look like? In my experience, not great. There is a definite need to improve the online experience for patients, and many companies and healthcare systems have solutions that help improve outcomes and cost. My personal strategy? Get on the phone and press as many buttons as I can while hoping a real human comes on the line, since I don't remember my insurance ID number. Or my account number with the power company.

Lauren is part of the future of healthcare as AI automates repetitive tasks. A little background on the potentials and current benefits can start with the patient as a consumer. Many healthcare companies use an automated system when a patient calls with medical questions or personal patient information. They may want a copy of their records and need identity confirmation or need to know if they should make an appointment with a doctor or go to a local emergency room. These questions can be answered through digital tools and phones.

Systems range in sophistication from a series of recordings, to a chatbot, to an artificial intelligence cognitive agent, to a human with highly specialized training and clinical knowledge. Not to brag, but at one of my jobs the company asked me to be the voice of its phone system, so I can relate to being the face of AI. A cognitive agent can use artificial intelligence technology and interact with a clinical framework to help patients get great care. It can be paired with the clinical bounds of a program like Health Navigator and use natural language processing to help patients get appropriate support quickly, in the context of their personal history and insurance or healthcare information. Adoption and development of these technologies will have a huge positive impact on patient outcomes and security.
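As a rough illustration of that range of sophistication, here is a hypothetical escalation ladder in Python. The tiers, confidence thresholds, and example requests are all assumptions for the sketch, not any vendor's actual design.

```python
# A sketch of the escalation ladder described above: each tier handles a
# request only if the system is confident enough, otherwise it hands off
# to the next, more capable tier. All values are invented.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    confidence_threshold: float

LADDER = [
    Tier("recorded menu", 0.95),     # only the most routine requests
    Tier("chatbot", 0.80),
    Tier("cognitive agent", 0.60),
    Tier("human specialist", 0.0),   # catches everything else
]

def route(request: str, model_confidence: float) -> str:
    """Send the request to the first tier confident enough to handle it."""
    for tier in LADDER:
        if model_confidence >= tier.confidence_threshold:
            return f"'{request}' handled by {tier.name}"
    return f"'{request}' handled by human specialist"  # fallback

print(route("Send me a copy of my records", 0.92))  # -> chatbot
print(route("I have chest pain and a rash", 0.35))  # -> human specialist
```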

I interacted with Lauren on Twitter before the conference to discuss working as a woman in tech. The thing that struck me when I met her was her grace. Some people have powerful positive energy, and I wonder how we can teach that type of interaction to a machine learning system. We can teach a system to have an asymmetrical appearance like humans. Artificial intelligence engines are learning to identify customers by voice and appearance. The human experience in medicine is also about presence, even as we connect digitally. I asked Lauren what she thought about working with Amelia and about being a woman in technology. Mainly I wanted to understand how she has established expectations and boundaries.

Janae: What is it like working in technology as a woman?

Lauren: This is not specific to one of the roles I’ve held particularly, whether at IPSoft or any of my other jobs, however, I think in some of the male dominated industries, there’s a feeling as though you have to prove yourself and get over the “female hump” before a conversation with someone who expects to be talking to another man. I’ve had past jobs that bred a bit of a “bro” culture, where there are no women in high-level positions and I think that really trickles down and impacts the rest of the culture. It goes without saying that I’ve also overheard and been part of situations where sexist comments were made, or where visitors of the company assumed the first girl they saw was an assistant/office manager, etc.

Janae: What do you wish men understood about being a woman in tech?

Lauren: “That the same way racism is still rampant in the US, the same goes for sexism. Even when there’s not overt instances or actions that are clearly offensive, there are small, every day micro instances of things that are said under the breath or actions that are hard to prove clear wrongdoing that still add up and take a toll over a period of time.”

Janae: What do you love about working with Amelia?

Lauren:  “I think Amelia can potentially have such a positive impact on the workforce and ultimately world. After all, to date, she’s the most sophisticated AI in history. Throughout history we’ve changed our jobs to leverage technology. AI is going to do that too. I heard a lot of the execs presenting at the conference talking about how they are changing the structure of their teams in order to have Amelia take on a lot of the high volume repetitive queries and let their staff evolve to take on more exception cases that help them have more interesting conversations with customers. I think most of us would prefer to spend our time on tasks we find challenging and rewarding and less on repetitive chores. That idea of freeing up our day to spend more time doing things we love really appeals to me.”

Overcoming the general fatigue of interactions that question your credibility based on gender can be hard to grasp. Repetitive music and actions that are harmless in themselves have been weaponized into torture; constant references to appearance can be similarly difficult, and for women the sum is greater than its parts. For providers, the equivalent result can be burnout or a lack of empathy for patient requests. Talking to Lauren about women in technology was positive.

Artificial intelligence will restructure workforce roles and take away some of the stress of repetitive tasks and documentation. Building positive interactions while filtering out the repetitive actions that lead to burnout can provide better support, freeing physician time for helping and connecting on a personal level. I was grateful for the time I spent discussing women in technology and the future. Establishing boundaries in workforce interactions is like structuring the bounds of a healthcare customer service system: creating purposeful positive interactions improves the system. Be fiercely positive to other women in technology.

Could AI And Healthcare Chatbots Help Clinicians Communicate With Patients?

Posted on April 25, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

AI-driven chatbots are becoming increasingly popular for a number of reasons, including improving technology and a need to automate some routine processes. (I’d also argue that these models are emerging because millennials and Gen Z-ers have spent their lives immersed in online-based social environments, and are far less likely to be afraid of or uncomfortable with such things.)

Given the maturation of the technology, I’m not surprised to see a number of AI-driven chatbots for healthcare emerging.  Some of these merely capture symptoms, such as the diabetes, CHF and mental health monitoring options by Sense.ly.

But other AI-based chatbots attempt to go much further. One emerging company, X2ai, is rolling out a psychology-oriented chatbot offering mental health counseling. Another, UK-based startup Babylon Health, offers a text-only mobile app which provides medical evaluations and screenings. The app is being pilot-tested with the National Health Service, where early reports say that it's diagnosing and triaging patients successfully.

One area I haven’t seen explored, though, is using a chatbot to help doctors handle routine communications with patients. Such an app could not only triage patients, as with the NHS example, but also respond to routine email messages.

Scheduling and administration

The reality is that while doctors and nurses are used to screening patients via telephone, they’re afraid of being swamped by tons of electronic patient messages. Many feel that if they agree to respond to patient email messages via a patient portal, they’ll spend too much time doing so. With most already time-starved, it’s not surprising that they’re worried about this.

But a combination of AI and healthcare chatbot technology could reduce the time required to engage patients. In fact, the right solution could address several medical practice workflow issues at once.

First, it could triage and route patient concerns to doctors and advanced practice nurses, something that’s done now by unqualified clerks or extremely busy nurses. For example, the patient would be able to tell the chatbot why they wanted to schedule a visit, with the chatbot teasing out some nuances in their situation. Then, the chatbot could kick the information over to the patient’s provider, who could, with a few clicks, forward a request to schedule either an urgent or standard consult.

Perhaps just as important, the AI technology could sit atop messages sent between provider and patient. If the patient message asked a routine question – such as when their test results would be ready – the system could bounce back a templated message stating, for instance, that test results typically take five business days to post on the patient portal. It could also send templated responses to requests for medical records, questions about doctor availability or types of insurance accepted and so on.
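Here is a minimal sketch of what that templated-response layer might look like. The topics and reply templates are invented for illustration; a production system would need real intent classification, audit logging, and clinical oversight.

```python
# A toy auto-responder for routine patient messages: match the message
# against routine topics and answer with a template, or flag it for the
# provider. Topics and templates below are fabricated examples.

TEMPLATES = {
    "test results": "Test results typically post to the patient portal "
                    "within five business days.",
    "medical records": "To request records, please complete the release "
                       "form on the patient portal.",
    "insurance": "We accept most major plans; please call the front desk "
                 "to verify your specific coverage.",
}

def respond(message: str) -> str:
    """Auto-answer routine questions; route everything else to a clinician."""
    text = message.lower()
    for topic, template in TEMPLATES.items():
        if topic in text:
            return template
    return "ROUTE TO PROVIDER: " + message

print(respond("When will my test results be ready?"))  # templated reply
print(respond("My incision looks red and swollen"))    # routed to provider
```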

Diagnosis and triage

Meanwhile, if the AI concludes that the patient has a health concern to address, it could send back a link to the chatbot, which would ask pertinent questions and send the responses to the treating clinician. At that point, if things look questionable, the doctor might choose to intervene with their own email message or phone call.

Of course, providers will probably be worried about relying on a chatbot for patient triage, especially the legal consequences if the bot misses something important. But over time, if health chatbot pilots like the UK example offer good results, they may eventually be ready to give this approach a shot.

Also, patients may be uncertain about working with a chatbot at first. But if physicians stress that they're not trying to put patients off, but rather to save time so they can be fully present when patients need them, I think they'll be satisfied.

I admit that under ideal circumstances, clinicians would have more time to communicate with patients directly. But the truth is, they simply don’t, and pressuring them to take phone calls or respond to every online message from patients won’t work.

Besides, as providers work to prepare for value-based care, they’ll need not only physician extenders, but physician extender-extenders like chatbots to engage patients and keep track of their needs. So let’s give them a shot.

Selecting the Right AI Partner in Healthcare Requires a Human Network

Posted on March 1, 2017 | Written By

Healthcare as a Human Right. Physician Suicide Loss Survivor.
Janae writes about Artificial Intelligence, Virtual Reality, Data Analytics, Engagement and Investing in Healthcare.
Twitter: @coherencemed

Artificial Intelligence, or AI for short, does not always equate to high intelligence, and this can have a high cost for healthcare systems. Navigating the intersection of AI and healthcare requires more than clinical operations expertise; it requires advanced knowledge in business motivation, partnerships, legal considerations, and ethics.

Learning to Dance at HIMSS17

This year I had the pleasure of attending a meetup for people interested in and working with AI for healthcare at the Healthcare Information and Management Systems Society (HIMSS) annual meeting in Orlando, Florida. At the beginning of the meetup, Wen Dombrowski, MD, asked everyone to stand up and participate in a partner-led movement activity. Not your average trust fall, this was designed to teach about AI and machine learning while pushing most of us out of our comfort zones and sparking participants to draw AI-related lessons. One partner led and the other followed their actions.

Dedicated computer scientists, business professionals, and proud data geeks tested their dancing skills. My partner quit when it was my turn to lead the movement. About half of the participants avoided eye contact and reluctantly shuffled their feet while they half-nursed their coffee. However awkward, though, half the participants felt the activity was a creative way to get us thinking about what it takes for machines to "learn." Notably, Daniel Rothman of MyMee had some great dance moves.

I found both the varying feedback and the equally varying willingness to participate interesting. One participant said the activity was a "waste of time." They must have come from the half of the room that didn't follow the mirroring instructions. I wonder if I could gather data on which coding languages were the specialty of those most resistant. Were the Python coders bad at dancing? I hope not. My professional training is actually as a licensed foreign language teacher, so I immediately recognized the instructional-design effectiveness of starting with a movement activity.

There is evidence that physical activity preceding learning makes learners more receptive and helps them retain the experience longer: "Physical activity breaks throughout the day can improve both student behavior and learning (Trost 2007)" (Reilly, Buskist, and Gross, 2012). I had assumed that the link between movement and learning capacity was common knowledge. Many of the instructional design comments Dr. Dombrowski received, while helpful, revealed participants' lack of knowledge about teaching and cognitive learning theory.

I could have used some help at the onset in choosing a dance partner who would have matched and anticipated my every move. The same goes for healthcare organizations and their AI solutions. While an organization may be a highly respected institution employing some of the most brilliant medical minds, it also needs to become, or find, a skilled matchmaker to bring the right AI partner (or mix of partners) to the dance floor.

AI’s Slow Rise from Publicity to Potential

Artificial intelligence has experienced a difficult and flashy transition into the medical field. For example, AI computing has been used to establish consensus with imaging for radiologists. While these tools have helped reduce false positives for breast cancer patients, errors remain, and not every company entering AI has equal computing abilities. The battle cry that suggested physicians be replaced with robots seems to have quieted. While AI is gaining steam, the potential is still catching up with the publicity.

Even if an AI company has stellar computing ability, buyers should question whether it also shares their design for outcomes. Is the company dedicated to protecting your patients and providing better outcomes, or simply to making as much profit as possible? Human FTE budgets have been replaced by AI computing costs, in some instances at the expense of patient and data security. When I asked CIOs and smaller companies about their experiences, many were reluctant to criticize a company they had a non-disclosure agreement with.

Learning From the IBM Watson and MD Anderson Breakup

During HIMSS week, the announcement that the MD Anderson and IBM Watson dance party was put on hold was called a setback for AI in medicine by Forbes columnist Matthew Herper. In addition, a scathing report detailing the procurement process, written by the University of Texas System Administration's auditors, reads more like a contest for the highest consulting fees. This suggests to me that perhaps one of the biggest threats to patient data security when it comes to AI is a corporation's need to profit from the data.

Reports of the MD Anderson breakup also mention mismanagement, including a failure to integrate data from the hospital's Epic migration. Epic is interoperable with Watson, but in this case integration of new data was included in PricewaterhouseCoopers' scope of work. If poor implementation stopped the project, should the technology partner be punished? Here is an excerpt from the IBM statement on the failed partnership:

 “The recent report regarding this relationship, published by the University of Texas System Administration (“Special Review of Procurement Procedures Related to the M.D. Anderson Cancer Center Oncology Expert Advisor Project”), assessed procurement practices. The report did not assess the value or functionality of the OEA system. As stated in the report’s executive summary, “results stated herein are based on documented procurement activities and recollections by staff, and should not be interpreted as an opinion on the scientific basis or functional capabilities of the system in its current state.”

With non-disclosure agreements and ongoing lawsuits in place, it's unclear whether this example will, or should, impact future decisions about AI healthcare partners. With multiple companies and interests represented, no one wants to be the fall guy when a project fails or breaches ethical trust. The consulting firm PricewaterhouseCoopers owned many of the portions of the project that failed, as well as many of the questionable procurement pieces.

I spoke with Christine Douglas, part of IBM Watson's communications team, and her comments about the early adoption of AI were interesting. She said, "You have to train the system. There's a very big difference between the Watson that's available commercially today and what was available with MD Anderson in 2012." Of course, that goes for any machine learning solution, large or small: the longer the models have to "learn," the more accurate the outcomes should be.

Large project successes and potential project failures have shown that not all AI is created equal, and not every business aspect of a partnership is dedicated to publicly shared goals. I've seen proposals from big data computing companies inviting research centers to pay for AI computing while also allowing the computing partner to lease the patient data to other parties for things like clinical trials. How's that for patient privacy! For the same cost, a research center could put an entire team of developers through graduate school at Stanford or MIT. By the way, I'm completely available for that team! I would love to study coding more than I do now.

Finding a Trusted Partner

So what can healthcare organizations and AI partners learn from this experience? They should ask themselves what their data is being used for. Look at the complaint in the MD Anderson report stating that procurement was questionable. While competitive bidding or outside consulting can help, in this case it appears to have crippled the project. The layers of business fees, and how they were paid, kept the project from moving forward.

Profiting from patient data is the part of AI no one seems willing to discuss. Maybe an AI system is being used to determine how high fees need to be to obtain board approval for hospital networks.

Healthcare organizations need to ask the tough questions before selecting any AI solution. Building a human network of trusted experts with no financial stake, and speaking with competitors about AI proposals as well as for personal learning, is important for CMIOs, CIOs, and healthcare security professionals. Competitive analysis of industry partners and coding classes have become a necessary part of healthcare professionals' work. Trust is imperative and will have a direct impact on patient outcomes and healthcare organization costs. Meetups like the networking event at HIMSS allow professionals to expand their community and add more data points, gathered through real human interaction, to their evaluation of AI solutions for healthcare. Nardo Manaloto discussed the meetup and how the group could move forward on LinkedIn, where you can join the conversation.

Not everyone in artificial intelligence and healthcare is able to evaluate the relative intelligence and effectiveness of machine learning. If your organization is struggling, find someone who can help, but be cognizant of the value of the consulting fees they’ll charge along the way.

Back to the dancing. Artificial does not equal high intelligence. Not everyone involved in our movement activity realized it was actually increasing our cognitive ability. Even those who quit, like my partner did, may have learned to dance just a little bit better.

 

Resources

California Department of Education. 2002. Physical fitness testing and SAT9. Retrieved May 20, 2003, from www.cde.ca.gov/statetests/pe/pe.html

Carter, A. 1998. Mapping the mind. Berkeley: University of California Press.

Czerner, T. B. 2001. What makes you tick: The brain in plain English. New York: John Wiley.

Dennison, P. E., and Dennison, G. E. 1998. Brain gym. Ventura, CA: Edu-Kinesthetics.

Dienstbier, R. 1989. Periodic adrenalin arousal boosts health, coping. New Sense Bulletin 14: 9A.

Dwyer, T., Sallis, J. F., Blizzard, L., Lazarus, R., and Dean, K. 2001. Relation of academic performance to physical activity and fitness in children. Pediatric Exercise Science 13: 225–237.

Gavin, J. 1992. The exercise habit. Champaign, IL: Human Kinetics.

Hannaford, C. 1995. Smart moves: Why learning is not all in your head. Arlington, VA: Great Ocean.

Howard, P. J. 2000. The owner's manual for the brain. Austin, TX: Bard.

Jarvik, E. 1998. Young and sleepless. Deseret News, July 27: C1.

Jensen, E. 1998. Teaching with the brain in mind. Alexandria, VA: Association for Supervision and Curriculum Development.

Jensen, E. 2000a. Brain-based learning. San Diego: The Brain Store.

Reilly, E., Buskist, C., and Gross, M. K. 2012. Movement in the classroom: Boosting brain power, fighting obesity. Kappa Delta Pi Record 48(2): 62–66. doi:10.1080/00228958.2012.680365.

The Value Of Pairing Machine Learning With EMRs

Posted on January 5, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

According to Leonard D'Avolio, the healthcare industry has tools at its disposal, known variously as AI, big data, machine learning, data mining and cognitive computing, which can turn the EMR into a platform that supports next-gen value-based care.

Until we drop the fuzzy rhetoric around these tools – which have offered superior predictive performance for two decades, he notes – it’s unlikely we’ll generate full value from using them. But if we take a hard, cold look at the strengths and weaknesses of such approaches, we’ll get further, says D’Avolio, who wrote on this topic recently for The Health Care Blog.

D’Avolio, a PhD who serves as assistant professor at Harvard Medical School, is also CEO and co-founder of AI vendor Cyft, and clearly has a dog in this fight. Still, my instinct is that his points on the pros and cons of machine learning/AI/whatever are reasonable and add to the discussion of EMRs’ future.

According to D’Avolio, some of the benefits of machine learning technologies include:

  • The ability to consider many more data points than traditional risk scoring or rules-based models
  • The fact that machine learning-related approaches don’t require that data be properly formatted or standardized (a big deal given how varied such data inflows are these days)
  • The fact that if you combine machine learning with natural language processing, you can mine free text created by clinicians or case managers to predict which patients may need attention (a rough sketch of this idea appears below)
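As a rough sketch of that third point, the toy example below trains a classifier on invented case-manager notes to flag patients who may need attention. It assumes scikit-learn is installed; the notes and labels are fabricated for illustration, not real patient data.

```python
# Minimal machine learning + NLP sketch: learn from free-text notes which
# patients may need outreach. All notes and labels below are made up.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient missed two appointments, reports worsening shortness of breath",
    "stable on current meds, walking daily, no new complaints",
    "ran out of insulin last week, blood sugar readings elevated",
    "follow-up visit routine, labs within normal limits",
    "confused about discharge instructions, lives alone, fall risk",
    "doing well after procedure, family support at home",
]
needs_attention = [1, 0, 1, 0, 1, 0]  # 1 = flag for outreach

# Bag-of-words features feeding a simple logistic regression classifier.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, needs_attention)

new_note = "missed appointment again and reports dizziness"
print(model.predict([new_note]))        # e.g. [1] -> flag for outreach
print(model.predict_proba([new_note]))  # class probabilities
```

Note that the unstructured notes go in as-is, with no standardized formatting required, which is exactly the advantage the second bullet describes.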

On the flip side, he notes, this family of technologies comes with a major limitation as well. To date, he points out, such platforms have only been accessible to experts, as interfaces are typically designed for use by specially trained data scientists. As a result, the output of machine learning processes has traditionally been delivered as recommendations, rather than as datasets or modules which can be shared around an organization.

While D’Avolio doesn’t say this himself, my guess is that the new world he heralds – in which machine learning, natural language processing and other cutting-edge technologies are common – won’t be arriving for quite some time.

Of course, for healthcare organizations with enough resources, the future is now, and cases like the predictive analytics efforts going on within Paris public hospitals and Geisinger Health System make the point nicely. Clearly, there's much to be gained from performing advanced, fluid analyses of EMR data and related resources. (Geisinger has already seen multiple benefits from its investments, though its data analytics rollout is relatively new.)

On the other hand, independent medical practices, smaller and rural hospitals and ancillary providers may not see much direct impact from these projects for quite a while. So while D’Avolio’s enthusiasm for marrying EMRs and machine learning makes sense, the game is just getting started.

Artificial Intelligence Can Improve Healthcare

Posted on July 20, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

In recent times, there has been a lot of discussion of artificial intelligence in public forums, some generated by thought leaders like Bill Gates and Stephen Hawking. Late last year Hawking actually argued that artificial intelligence “could spell the end of the human race.”

But most scientists and researchers don’t seem to be as worried as Gates and Hawking. They contend that while machines and software may do an increasingly better job of imitating human intelligence, there’s no foreseeable way in which they could become a self-conscious threat to humanity.

In fact, it seems far more likely that AI will work to serve human needs, including healthcare improvement. Here are five examples of how AI could help bring us smarter medicine (courtesy of Fast Company):

  1. Diagnosing disease:

Want to improve diagnostic accuracy? Companies like Enlitic may help. Enlitic is studying massive numbers of medical images to help radiologists pick up small details like tiny fractures and tumors.

  2. Medication management:

Here’s a twist on traditional med management strategies. The AiCure app is leveraging a smartphone webcam, in tandem with AI technology, to learn whether patients are adhering to their prescription regimen.

  3. Virtual clinicians:

Though it may sound daring, a few healthcare leaders are considering giving no-humans-involved health advice a try. Some are turning to startup Sense.ly, which offers a virtual nurse, Molly. The Sense.ly interface uses machine learning to help care for chronically-ill patients between doctor’s visits.

  4. Drug creation:

AI may soon speed up the development of pharmaceutical drugs. Vendors in this field include Atomwise, whose technology leverages supercomputers to dig up therapies from a database of molecular structures, and Berg Health, which studies data on why some people survive diseases.

  5. Precision medicine:

Working as part of a broader effort seeking targeted diagnoses and treatments for individuals, startup Deep Genomics is wrangling huge data sets of genetic information in an effort to find mutations and linkages to disease.

In addition to all of these clinically-oriented efforts, which seem quite promising in and of themselves, it seems clear that there are endless ways in which computing firepower, big data and AI could come together to help healthcare business operations.

Just to name the first applications that popped into my head, consider the impact AI could have on patient scheduling, particularly in high-volume hospital environments. What about using such technology to do a better job of predicting which approaches work best for collecting patient balances, and even to execute those efforts in a sophisticated way?

And of course, there are countless other ways in which AI could help providers leverage clinical data in real time. Sure, EMR vendors are already rolling out technology attempting to help hospitals target emergent conditions (such as sepsis), but what if AI logic could go beyond condition-specific modules to proactively predicting a much broader range of problems?

The truth is, I don't claim to have specific expertise in AI, so my guesses on which applications make sense are no better than any other observer's. On the other hand, if anyone reading this has cool stories to tell about what they're doing with AI technology, I'd love to hear them.