
It’s Time To Work Together On Technology Research

Posted on September 12, 2018 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Bloggers like myself see a lot of data on the uptake of emerging technologies. My biggest sources are market research firms, which typically provide the 10,000-foot view of the technology landscape and the broad changes the new toys might bring to the healthcare industry. I also get a chance to read some great academic research, primarily papers focused on niche issues within a subset of health IT.

I’m always curious to see which new technologies and applications are rising to the top, and I’m also intrigued by developments in emerging sub-disciplines such as blockchain for patient data security.

However, I’d argue that if we’re going to take the next hill, health IT players need to balance research on long-term adoption trends with a better understanding of how clinicians actually use new technologies. Currently, we veer between the micro and macro view without looking at trends in a practical manner.

Let’s consider the following information, which I gathered from a recent report from market research firm Reaction Data. According to the report, which tabulated responses from a survey of about 100 healthcare leaders, five technologies top the charts as being poised to drive change in healthcare.

The list is topped by telemedicine, which was cited by 29% of respondents, followed by artificial intelligence (20%), interoperability (15%), data analytics (13%) and mobile data (11%).

While this data may be useful to leaders of large organizations making mid- to long-range plans, it doesn’t offer a lot of direction as to how clinicians will actually use the stuff. This may not be a fatal flaw, as it is important to have some idea of where trends are headed, but it doesn’t do much to help with tactical planning.

On the flip side, consider a paper recently published by a researcher with Google Brain, the AI team within Google. The paper, by Google software engineer Peter J. Liu, describes a scheme in which providers could use AI technology to speed their patient documentation process.

Liu’s paper describes how AI might predict what a clinician will say in patient notes by digging into the content of prior notes on that patient, which would allow it to help doctors compose current notes on the fly. While Liu seems to have found a way to make this work in principle, it’s still not clear how effective his scheme would be in day-to-day use.

I’m well aware that figuring out how to solve a problem is the work of vendors more than researchers. I also know that vendors may not be suited to look at the big picture the way outside market research firms can, or to conduct the kind of small studies that fuel academic research.

However, I think we’re at a moment in health IT that demands high-level research collaboration between all of the stakeholders involved. I truly hate the word “disruptive” by this point, but I don’t know how else to describe options like blockchain and AI. It’s worth breaking down a bunch of silos to make all of these exciting new pieces fit together.

AI-Based Tech Could Speed Patient Documentation Process

Posted on August 27, 2018 | Written By


A researcher with a Google AI team, Google Brain, has published a paper describing how AI could help physicians complete patient documentation more quickly. The author, software engineer Peter J. Liu, contends that AI technology can speed up patient documentation considerably by predicting its content.

On my initial reading of the paper, it wasn’t clear to me what advantage this has over pre-filling templates or even allowing physicians to cut-and-paste text from previous patient encounters. Still, judge for yourself as I outline what author Liu has to say, and by all means, check out the write-up.

In its introduction, the paper notes that physicians spend a great deal of time and energy entering patient notes into EHRs, a process which is not only taxing but also demoralizing for many physicians. Choosing just one of countless data points underscoring this conclusion, Liu cites a 2016 study noting that physicians spend almost two hours on administrative work for every hour of patient contact.

However, it might be possible to reduce the number of hours doctors spend on this dreary task. Google Brain has been working on technologies which can speed up the process of documentation, including a new medical language modeling approach. Liu and his colleagues are also looking at how to represent an EHR’s mix of structured and unstructured text data.

The net of all of this? Google Brain has been able to create a set of systems which, by drawing on previous patient records, can predict most of the content a physician will use the next time they see that patient.

The heart of this effort is the MIMIC-III dataset, which contains the de-identified electronic health records of 39,597 patients from the ICU of a large tertiary care hospital. The dataset includes patient demographic data, medications, lab results, and notes written by providers. The system includes AI capabilities which are “trained” to predict the text physicians will use in their latest patient note.
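Liu’s system uses large neural language models trained on that dataset, but the core idea — predicting upcoming note text from a patient’s prior notes — can be shown in miniature. The following is a deliberately tiny sketch (a bigram counter over invented note text), nothing like the real model in scale or method:

```python
from collections import defaultdict, Counter

def train_bigram_model(prior_notes):
    """Count word-to-next-word transitions across a patient's prior notes."""
    transitions = defaultdict(Counter)
    for note in prior_notes:
        words = note.lower().split()
        for current, following in zip(words, words[1:]):
            transitions[current][following] += 1
    return transitions

def suggest_next_word(transitions, current_word):
    """Return the word most often seen after current_word, or None."""
    candidates = transitions.get(current_word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Invented prior notes for one hypothetical patient.
prior_notes = [
    "patient stable on metformin 500 mg twice daily",
    "continue metformin 500 mg and recheck a1c in three months",
]
model = train_bigram_model(prior_notes)
print(suggest_next_word(model, "metformin"))  # -> 500
```

A production system would condition on far more context (structured data, longer windows of notes) and use a neural model rather than raw counts, but the contract is the same: prior documentation in, predicted note text out.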

In addition to making predictions, the Google Brain AI seems to have been able to pick out some forms of errors in existing notes, including patient ages and drug names, as well as providing autocorrect options for corrupted words.

By way of caveats, the paper warns that the research used only data generated within 24 hours of the current note content. Liu points out that while this may be a wide enough range of information for ICU notes, as things happen fast there, it would be better to draw on data representing larger windows of time for non-ICU patients. In addition, Liu concedes that it won’t always be possible to predict the content of notes even if the system has absorbed all existing documentation.

However, none of these problems seems insurmountable. Liu understandably describes his results as “encouraging,” but that’s also a way of conceding that this is only an experimental conclusion; these predictive capabilities are not a done deal by any means. That being said, it seems likely that his approach could prove valuable.

I am left with at least one question, though. If the Google Brain technology can predict physician notes with great fidelity, how does that differ from having the physician cut and paste previous notes on their own? I may be missing something here, because I’m not a software engineer, but I’d still like to know how these predictions improve on existing workarounds.

Geisinger, Penn State Researchers Predict Risk Of Rehospitalization Within Three Days Of Discharge

Posted on June 15, 2018 | Written By


In recent times, healthcare organizations have focused deeply on the causes of patient readmissions to the hospital. It’s a problem that affects both physicians and health systems, particularly if the two are not in synch.

To date, providers have focused on readmissions happening within 30 days, largely in an effort to avoid financial penalties imposed by Medicare and Medicaid. However, if the following research is solid, it could push the focus of care much closer to hospital discharge dates.

In an effort which could change the process of avoiding readmissions, a group of researchers has found a way to predict a patient’s risk of needing additional medical care within three days of discharge. The new approach, developed jointly by Penn State and Geisinger Health Plan, relies on clinical, administrative and socioeconomic data drawn from patients admitted to Geisinger over two years.

The model they created is known as REDD, an acronym which stands for readmission, emergency department or death. Using this model can help physicians target interventions effectively and reduce the number of adverse events, according to Deepak Agrawal, one of the Penn State researchers.
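The researchers haven’t published a simple formula, so the following is only a schematic of what a REDD-style risk score might look like: a logistic model over discharge-time features. Every feature name and weight here is invented for illustration; this is not the actual REDD model.

```python
import math

# Hypothetical feature weights -- illustrative only, NOT the published REDD
# model, whose actual features and coefficients are not described in the post.
WEIGHTS = {
    "num_prior_admissions": 0.45,
    "ed_visit_last_90_days": 0.80,
    "lives_alone": 0.35,
    "abnormal_labs_at_discharge": 0.60,
}
INTERCEPT = -3.0

def redd_style_risk(patient):
    """Logistic score: estimated probability of readmission, an ED visit,
    or death within three days of discharge."""
    z = INTERCEPT + sum(w * patient.get(feature, 0) for feature, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

patient = {"num_prior_admissions": 2, "ed_visit_last_90_days": 1, "lives_alone": 1}
print(round(redd_style_risk(patient), 3))
```

The point of the sketch is the workflow, not the numbers: score each patient at discharge, then concentrate follow-up resources on those above a chosen risk threshold.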

You won’t be surprised to hear that readmissions after 30 days are often related to social determinants of health, such as a poor home environment, limited access to services and scant social support. Providers are certainly working to close these gaps, but to date, this has remained a major challenge.

However, the dynamics are different when finding patients who may be readmitted quickly. “Readmissions closer to discharge are more likely to be related to factors that are actually present but are not identified at the time the patient is discharged,” said research team leader Soundar Kumara, the Allen E. Pearce and Allen M. Pearce Professor of Industrial Engineering at Penn State, who was quoted in a prepared statement.

Another Penn State researcher, Cheng-Bang Chen, added another interesting observation. He noted that the more time that passes after a patient gets discharged, the less likely it is that problems will be caught in time. After all, it may be a while before treating physicians have time to review lengthy hospital records, and the patient could experience a time-sensitive event before the physician completes the review.

To test the REDD program, Geisinger ran a six-month pilot tracking high-risk patients and adding additional services designed to avoid readmissions, ED visits or death.

To treat this population effectively, physicians took a number of steps, such as:

  • Scheduling appointments with patients’ primary care doctors
  • Educating patients about their medications and post-discharge care plans
  • Having the inpatient clinical pharmacist review the provider’s recommendations
  • Filling patient prescriptions before discharge
  • Having the hospital check on patients discharged to a skilled nursing facility one day after discharge

It’s worth noting that there was one major issue which undermined the research results. Penn State reported that because of a shortage of nurses at the hospital during the pilot, they couldn’t tell whether the REDD program met its goals.

Still, researchers are convinced they’re heading in the right direction. “If the REDD model was fully implemented and aligned with clinical workflows, it has the potential to dramatically reduce hospital readmissions,” said Eric Reich, manager of health care re-engineering at Geisinger.

Let’s hope he’s right.

AI Project Could Prevent Needless Blindness

Posted on January 11, 2018 | Written By


At this point, you’re probably sick of hearing about artificial intelligence and the benefits it may offer as a diagnostic tool. Even so, there are still some AI stories worth telling, and the following is one of them.

Yes, IBM Watson Health recently had a well-publicized stumble when it attempted to use “cognitive computing” to detect cancer, but that may have more to do with the pressure Watson Health was under to produce quick results on something that could have taken a decade to complete. Other AI-based diagnostic projects seem to be making far more progress.

Consider the following, for example. According to a story in WIRED magazine, Google is embarking on a project which could help eye doctors detect diabetic retinopathy and prevent blindness, basing its efforts on technologies it already has in-house.

The tech giant reported last year that it had trained image recognition algorithms to detect tiny aneurysms suggesting that a patient is in the early stages of retinopathy. This system uses the same technology that allows Google’s image search and photo storage services to discriminate between various objects and people.

To take things to the next step, Google partnered with the Aravind Eye Care System, a network of eye hospitals based in India. Aravind apparently helped Google build its retinal screening system by contributing images it already had on hand, which Google used to refine its image-parsing algorithms.

Google and Aravind have just finished a clinical study of the technology in India. Now the two are working to bring the technology into routine use with patients, according to a Google executive who spoke at a recent conference.

The Google exec, Lily Peng, who serves as a product manager with the Google Brain AI research group, said that these tools could help doctors to do the more specialized work and leave the screening to tools like Google’s. “There is not enough expertise to go around,” she said. “We need to have a specialist working on treating people who are sick.”

Obviously, we’ll learn far more about the potential of Google’s retinal scanning tech once Aravind begins using it on patients every day. In the meantime, however, one can only hope that it emerges as a viable and safe tool for overstressed eye doctors worldwide. The AI revolution may be overhyped, but projects like this can have an enormous impact on a large group of patients, and that can’t be bad.

Supercharged Wearables Are On The Horizon

Posted on January 3, 2018 | Written By


Over the last several years, the healthcare industry has been engaged in a rollicking debate over the value of patient-generated health data. Critics say that it’s too soon to decide whether such tools can really add value to medical care, while fans suggest it’s high time to make use of this information.

That’s all fine, but to me, this discussion no longer matters. We are past the question of whether data from consumer wearables, which in their current state are under-regulated and underpowered, helps clinicians. We’re moving on to profoundly more capable devices that will make the current generation look like toys.

Today, tech giants are working on next-generation devices which will perform more sophisticated tracking and solve more targeted problems. Clinicians, take note of the following news items, which come from The New York Times:

  • Amazon recently invested in Grail, a cancer-detection start-up which raised more than $900 million
  • Apple acquired Beddit, which makes sleep-tracking technology
  • Alphabet acquired Senosis Health, which develops apps that use smartphone sensors to monitor health signals

And the action isn’t limited to acquisitions — tech giants are also getting serious about creating their own products internally. For example, Alphabet’s research unit, Verily Life Sciences, is developing new tools to collect and analyze health data.

Recently, it introduced a health research device, the Verily Study Watch, which has sensors that can collect data on heart rate, gait and skin temperature. That might not be so exciting on its own, but the associated research program is intriguing.

Verily is using the watch to conduct a study called Project Baseline. The study will follow about 10,000 volunteers, who will also be asked to use sleep sensors at night and to agree to blood, genetic and mental health tests. Verily will use data analytics and machine learning to build a more detailed picture of how cancer progresses.

I could go on, but I’m sure you get the point. We are not looking at your father’s wearables anymore — we’re looking at devices that could dramatically change how disease is detected and perhaps even treated.

Sure, the Fitbits of the world aren’t likely to go away, and some organizations will remain interested in integrating such data into their big data stores. But given what the tech giants are doing, the first generation of plain-vanilla devices will soon end up in the junk heap of medical history.

Google, Stanford Pilot “Digital Scribe” As Human Alternative

Posted on November 29, 2017 | Written By


Without a doubt, doctors benefit from the face-to-face contact with patients restored to them by scribe use; also, patients seem to like that they can talk freely without waiting for doctors to catch up with their typing. Unfortunately, though, putting scribes in place to gather EMR information can be pricey.

But what if human scribes could be replaced by digital versions, ones which interpreted the content of office visits using speech recognition and machine learning tools which automatically entered that data into an EHR system? Could this be done effectively, safely and affordably? (Side Note: John proposed something similar happening with what he called the Video EHR back in 2006.)

We don’t know the answer yet, but we may find out soon. Working with Google, a Stanford University doctor is piloting the use of digital scribes at the family medicine clinic where he works. Dr. Steven Lin is conducting a nine-month study of the concept at the clinic, which will include all nine doctors currently working there.

Patients can choose whether to participate or not. If they do opt in, researchers plan to protect their privacy by removing their protected health information from any data used in the study.

To capture the visit information, doctors will wear a microphone and record the session. Once the session is recorded, team members plan to use machine learning algorithms to detect patterns in the recordings that can be used to complete progress notes automatically.
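The pilot plans to use machine learning for that transcript-to-note step. As a purely illustrative stand-in, here is a rule-based sketch of the same input/output contract; the patterns, field names, and transcript text are all invented, and a real digital scribe would learn its mappings rather than match hand-written rules:

```python
import re

# Toy extraction rules mapping transcript phrases to draft note fields.
PATTERNS = {
    "chief_complaint": re.compile(r"(?:here for|complains of) ([^.]+)\."),
    "medication": re.compile(r"(?:taking|prescribed) ([^.]+)\."),
}

def draft_note_fields(transcript):
    """Pull candidate progress-note fields out of a visit transcript."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = pattern.search(transcript.lower())
        if match:
            fields[name] = match.group(1).strip()
    return fields

transcript = "Patient is here for a persistent cough. She is taking albuterol."
print(draft_note_fields(transcript))
```

Even this toy version hints at the quality-control problem raised below: whatever the extractor misses or mangles lands in the draft note unless a human reviews it.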

As one might imagine, the purpose of the pilot is to see what challenges doctors face in using digital scribes. Not surprisingly, Dr. Lin (and doubtless, Google as well), hope to develop a digital scribe tool that can be used widely if the test goes well.

While the information Stanford is sharing on the pilot is intriguing in and of itself, there are a few questions I’d hope to see project leaders answer in the future:

  • Will the use of digital scribes save money over the cost of human scribes? How much?
  • How much human technical involvement will be necessary to make this work? If the answer is “a lot” can this approach scale up to widespread use?
  • How will providers do quality control? After all, even the best voice recognition software isn’t perfect. Unless there’s some form of human content oversight, misrecognized words could end up in patient records indefinitely – and that could lead to major problems.

Don’t get me wrong: I think this is a super idea, and if this approach works it could conceivably change EHR information gathering for the better. I just think it’s important that we consider some of the tradeoffs that we’ll inevitably face if it takes off after the pilot has come and gone.

Mercy Shares De-Identified Data With Medtronic

Posted on October 20, 2017 | Written By


Medtronic has always performed controlled clinical trials to check out the safety and performance of its medical devices. But this time, it’s doing something more.

Dublin-based Medtronic has signed a data-sharing agreement with Mercy, the fifth largest Catholic health system in the U.S.  Under the terms of the agreement, the two are establishing a new data sharing and analysis network intended to help gather clinical evidence for medical device innovation, the company said.

Working with Mercy Technology Services, Medtronic will capture de-identified data from about 80,000 Mercy patients with heart failure. The device maker will use that data to explore real-world factors governing their response to Cardiac Resynchronization Therapy, a heart failure treatment option which helps some patients.

Medtronic believes that the de-identified patient data Mercy supplies could help improve device performance, according to Dr. Rick Kuntz, senior vice president of strategic scientific operations with Medtronic. “Having the ability to study patient care pathways and conditions before and after exposure to a medical device is crucial to understanding how those devices perform outside of controlled clinical trial setting,” said Kuntz in a prepared statement.

Mercy’s agreement with Medtronic is not unique. In fact, academic medical centers, pharmaceutical companies, health insurers and increasingly, broad-based technology giants are getting into the health data sharing game.

For example, earlier this year Google announced that it was expanding its partnerships with three high-profile academic medical centers under which they work to better analyze clinical data. According to Healthcare IT News, the partners will examine how machine learning can be used in clinical settings to sift through EMR data and find ways to improve outcomes.

“Advanced machine learning is mature enough to start accurately predicting medical events – such as whether patients will be hospitalized, how long they will stay, and whether the health is deteriorating despite treatment for conditions such as urinary tract infections, pneumonia, or heart failure,” said Google Brain Team researcher Katherine Chou in a blog post.

As with Mercy, the academic medical centers are sharing de-identified data. Chou says that offers plenty of information. “Machine learning can discover patterns in de-identified medical records to predict what is likely to happen next, and thus, anticipate the needs of the patients before they arise,” she wrote.

It’s worth pointing out that “de-identification” refers to a group of techniques for patient data protection which, according to NIST, include suppressing personal identifiers, replacing personal identifiers with an average value for the entire group of data, reporting personal identifiers as falling within a given range, exchanging personal identifiers for other information, and swapping data between records.
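To make a few of those techniques concrete, here is a toy sketch (on invented records) applying three of them: suppression of a direct identifier, reporting a value as a range, and swapping a quasi-identifier between records. Real de-identification pipelines are far more careful than this:

```python
import random

def deidentify(records):
    """Toy illustration of suppression, range reporting, and data swapping."""
    out = []
    for r in records:
        decade = (r["age"] // 10) * 10
        out.append({
            "name": "[SUPPRESSED]",                 # suppression
            "age_range": f"{decade}-{decade + 9}",  # report within a range
            "zip": r["zip"],
        })
    # Swap quasi-identifiers (here, ZIP codes) between records at random.
    zips = [r["zip"] for r in out]
    random.shuffle(zips)
    for r, z in zip(out, zips):
        r["zip"] = z
    return out

records = [{"name": "Ann", "age": 47, "zip": "63101"},
           {"name": "Bob", "age": 62, "zip": "17101"}]
for r in deidentify(records):
    print(r)
```

Note what the sketch preserves: aggregate structure (the set of ZIPs, the age distribution by decade) survives for analysis even though no row maps cleanly back to a person.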

Someday, someone may mix up de-identification (which makes it quite difficult to identify specific patients) with anonymization, a subcategory of de-identification in which data can never be re-identified. Such confusion would, in short, be bad, as the difference between “de-identified” and “anonymized” matters.

In the meantime, though, de-identified data seems likely to help a wide variety of healthcare organizations do better work. As long as patient data stays private, much good can come of partnerships like the one underway at Mercy.

Say It One More Time: EHRs Are Hard To Use

Posted on September 19, 2017 | Written By


I don’t know about you, but I was totes surprised to hear about another study pointing out that doctors have good reasons to hate their EHR. OK, not really surprised – just a bit sadder on their account – but I admit I’m awed that any single software system can be (often deservedly) hated this much and in this many ways.

This time around, the parties calling out EHR flaws were the American Medical Association and the University of Wisconsin, which just published a paper in the Annals of Family Medicine looking at how primary care physicians use their EHR.

To conduct their study, researchers focused on how 142 family physicians in southeastern Wisconsin used their Epic system. The team dug into Epic event logging records covering a three-year period, sorting out whether the activities in question involved direct patient care or administrative functions.

When they analyzed the data, the researchers found that clinicians spent 5.9 hours of an 11.4-hour workday interacting with the EHR. Clerical and administrative tasks such as documentation, order entry, billing and coding, and system security accounted for about 44% of EHR time, and inbox management for roughly another 24%.
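The arithmetic behind this kind of event-log analysis is simple once each logged event is bucketed by activity category: sum minutes per category and divide by the total. A minimal sketch, using invented event data rather than Epic’s actual log format:

```python
# Invented event-log entries, each already tagged with an activity category.
events = [
    {"category": "documentation", "minutes": 90},
    {"category": "order_entry", "minutes": 40},
    {"category": "inbox", "minutes": 85},
    {"category": "chart_review", "minutes": 60},
    {"category": "documentation", "minutes": 79},
]

def ehr_time_shares(events):
    """Total minutes per category, returned as a share of all EHR time."""
    totals = {}
    for e in events:
        totals[e["category"]] = totals.get(e["category"], 0) + e["minutes"]
    grand_total = sum(totals.values())
    return {cat: round(mins / grand_total, 2) for cat, mins in totals.items()}

print(ehr_time_shares(events))
```

The hard part of the real study wasn’t this division — it was reliably classifying raw EHR events into clinical versus clerical categories in the first place.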

As the U of W article authors see it, this analysis can help practices make better use of clinicians’ time. “EHR event logs can identify areas of EHR-related work that could be delegated,” they conclude, “thus reducing workload, improving professional satisfaction, and decreasing burnout.”

The AMA, for its part, was not as detached. In a related press release, the trade group argued that the long hours clinicians spend interacting with EHRs are due to poor system design. Honestly, I think it’s a bit of a stretch to connect the study results directly to this conclusion, but of course, the group isn’t wrong about the low levels of usability most EHRs foist on doctors.

To address EHR design flaws, the AMA says, there are eight priorities vendors should consider, including that the systems should:

  • Enhance physicians’ ability to provide high-quality care
  • Support team-based care
  • Promote care coordination
  • Offer modular, configurable products
  • Reduce cognitive workload
  • Promote data liquidity
  • Facilitate digital and mobile patient engagement
  • Integrate user input into EHR product design and post-implementation feedback

I’m not sure all of these points are as helpful as they could be. For example, there are approximately a zillion ways in which an EHR could enhance the ability to provide high-quality care, so without details, it’s a bit of a wash. I’d say the same thing about the digital/mobile patient engagement goal.

On the other hand, I like the idea of reducing cognitive workload (which, in cognitive psychology, refers to the total amount of mental effort being used in working memory). There’s certainly evidence, both within and outside medicine, which underscores the problems that can occur if professionals have too much to process. I’m confident vendors can afford design experts who can address this issue directly.

Ultimately, though, it’s not important that the AMA churns out a perfect list of usability testing criteria. In fact, they shouldn’t have to be telling vendors what they need at this point. It’s a shame EHR vendors still haven’t gotten the usability job done.

Bringing Zen To Healthcare: Transformation Through The N of 1

Posted on July 21, 2017 | Written By


The following essay wasn’t easy to understand. I had trouble taking it in at first. But the beauty of these ideas began to shine through for me when I took time to absorb them. Maybe you will struggle with them a bit yourself.

In his essay, the author argues that if providers focus on the “N of 1,” it could change healthcare permanently. I think he might be right, or at least that he makes a good case. It’s a complex argument but worth following to the end. Trust me, the journey is worth taking.

The mysterious @CancerGeek

Before I share his ideas, I’ll start with an introduction to @CancerGeek, the essay’s author. Other than providing a photo as part of his Twitter home page, he’s chosen to be invisible. Despite doing a bunch of skillful GoogleFu, I couldn’t track him down.

@CancerGeek posted a cloud of interests on his Twitter page, including a reference to being global product manager for PET-CT; says he develops hospital and cancer centers in the US and China; and describes himself as an associate editor with DesignPatient-MD.

In the essay, he says that he did clinical rotations from 1998 to 1999 while at the University of Wisconsin-Madison Carbone Comprehensive Cancer Center, working with Dr. Minesh Mehta.

He wears a bow tie.

And that’s all I’ve got. He could be anybody or nobody. All we have is his voice. John assures me he’s a real person who works at a company everyone knows. He’s just chosen to remain relatively anonymous on social media to keep his social profiles separate from his day job.

The N of 1 concept

Though we don’t know who @CancerGeek is, or why he is hiding, his ideas matter. Let’s take a closer look at the mysterious author’s N of 1, and decide for ourselves what it means. (To play along, you might want to search Twitter for the #Nof1 hashtag.)

To set the stage, @CancerGeek describes a conversation with Dr. Mehta, a radiation oncologist who served as chair of the department where @CancerGeek got his training. During this encounter, he had an insight which helped to make him who he would be — perhaps a moment of satori.

As the story goes, someone called Dr. Mehta to help set up a patient in radiation oncology; the caller needed his help but was worried about disturbing such an important doctor.

Apparently, when Dr. Mehta arrived, he calmly helped the patient, cheerfully introducing himself to their family and addressing all of their questions despite the fact that others were waiting.

When Dr. Mehta asked @CancerGeek why everyone around him was tense, our author told him that they were worried because patients were waiting, they were behind schedule and they knew that he was busy. In response, Dr. Mehta shared the following words:

No matter what else is going on, the world stops once you enter a room and are face to face with a patient and their family. You can only care for one patient at a time. That patient, in that room, at that moment is the only patient that matters. That is the secret to healthcare.

Apparently, this advice changed @CancerGeek on the spot. From that moment on, he would work to focus exclusively on the patient and tune out all distractions.

His ideas crystallized further when he read an article in the New England Journal of Medicine that gave a name to his approach to medicine. The article introduced him to the concept of N of 1, and all of the pieces began to fit together.

The NEJM article was singing his song. It said that no matter what physicians do, nothing else counts when they’re with the patient. Without the patient, it said, little else matters.

Yes, the author conceded, big projects and big processes still matter. Creating care models, developing clinical pathways and clinical service lines, building cancer centers, running hospitals, and offering outpatient imaging, radiology and pathology services are still worthwhile. But to practice well, the author said, dedicate yourself to caring for patients at the N of 1. Our author’s fate was sealed.

Why is N of 1 important to healthcare?

Having told his story, @CancerGeek shifts to the present. He begins by noting that at present, the healthcare industry is focused on delivering care at the “we” level. He describes this concept this way:

“The “We” level means that when you go to see a physician today, that the medical care they recommend to you is based on people similar to you…care based on research of populations on the 100,000+ (foot) level.”

But this approach is going to be scrapped over the next 8 to 10 years, @CancerGeek argues. (Actually, he predicts that the process will take exactly eight years.)

Over time, he sees care moving gradually from managing groups to delivering personalized care through one-to-one interactions. He believes the process will proceed as follows:

  • First, sciences like genomics, proteomics, radionomics, functional imaging and immunotherapies will push the industry into delivering care at a 10,000-foot population level.
  • Next, as ecosystems are built out that support seamless sharing of digital footprints, care will move down to the 1,000-foot level.
  • Eventually, the system will alight at the level of the individual patient. On that day, the transition will be complete. Healthcare will no longer be driven by hospitals, healthcare systems or insurance companies. Its sole focus will be on people and communities, and on what the patient will become over time.

When this era arrives, doctors will know patients far more deeply, he says.

He predicts that by leveraging all of the data available in the digital world, physicians will know the truth of patients’ experiences: the food they eat, the air they breathe, how much sleep they get, where they work, how they commute to and from work, and whether they care for a family member or friend. With that knowledge, doctors will finally be able to offer truly personalized care. They’ll focus on the N of 1, the single patient they’re encountering at that moment.

The death of what we know

But we’re still left with questions about the heart of this idea. What, truly, is the N of 1? Perhaps it is the sound of one hand clapping. Or maybe it springs from an often-cited Zen proverb: “When walking, walk. When eating, eat.” Do what you’re doing right now – focus and stay in the present moment. This is treating patients at the N of 1 level, it seems to me.

Like Zen, the N of 1 concept may sound mystical, but it’s entirely practical. As he points out, patients truly want to be treated at the N of 1 – they don’t care about the paint on the walls or Press Ganey scores, they care about being treated as individuals. And providers need to make this happen.

But to meet this challenge, healthcare as we know it must die, he says. I’ll leave you with his conclusion:

“Within the next eight years, healthcare as we know it will end. The new healthcare will begin. Healthcare delivered at the N of 1.”  And those who seek will find.

A Tool For Evaluating E-Health Applications

Posted on April 11, 2017 I Written By


In recent years, developers have released a staggering number of mobile health applications, with nearly 150,000 available as of 2015. And the demand for such apps is rising, with the mHealth services market projected to reach $26 billion globally this year, according to analyst firm Research 2 Guidance.

Unfortunately, given the sheer volume of apps available, it’s tricky to separate the good from the bad. We haven’t even agreed on common standards by which to evaluate such apps, and neither regulatory agencies nor professional associations have taken a firm position on the subject.

For example, while groups like the American Medical Association have endorsed the use of mobile health applications, their acceptance came with several caveats. The organization conceded that such apps might be acceptable, but only if the industry develops an evidence base demonstrating that they are accurate, effective, safe and secure. And beyond broad practice guidelines, the trade group didn’t get into the details of how its members could evaluate app quality.

However, at least one researcher has made an attempt at developing standards which identify the best e-Health software apps and computer programs. Assistant professor Amit Baumel, PhD, of the Feinstein Institute for Medical Research, has recently led a team that created a tool to evaluate the quality and therapeutic potential of such applications.

To do his research, a write-up of which was published in the Journal of Medical Internet Research, Baumel developed an app-rating tool named Enlight. Rather than using automated analytics, Enlight was designed as a manual scale to be filled out by trained raters.

To create the foundation for Enlight, researchers reviewed existing literature to decide which criteria were relevant to determine app quality. The team identified a total of 476 criteria from 99 sources to build the tool. Later, the researchers tested Enlight on 42 mobile apps and 42 web-based programs targeting modifiable behaviors related to medical illness or mental health.

Once tested, researchers rolled out the tool. Enlight asked raters to score 11 different aspects of app quality, including usability, visual design, therapeutic persuasiveness and privacy. When the researchers evaluated the responses, they found that Enlight raters reached substantially similar results when rating a given app. They also found that all of the eHealth apps rated “fair” or above received the same range of scores for user engagement and content – which suggests that consumer app users have more consistent expectations than we might have expected.
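The published Enlight scale is a manual instrument filled out by trained raters, but the basic aggregation step is easy to picture in code. The sketch below is illustrative only: the dimension names, the 1-to-5 scale, and the use of score spread as a rough agreement check are my assumptions, not the actual Enlight rubric.

```python
from statistics import mean, pstdev

# Hypothetical scores from three trained raters for a single app,
# on an assumed 1-5 scale, across a few Enlight-style dimensions.
ratings = {
    "usability": [4, 4, 5],
    "visual_design": [3, 4, 3],
    "therapeutic_persuasiveness": [4, 5, 4],
    "privacy": [2, 2, 3],
}

def summarize(ratings):
    """Average each dimension across raters; the spread (population
    standard deviation) is a crude proxy for rater agreement."""
    return {
        dimension: {
            "mean": round(mean(scores), 2),
            "spread": round(pstdev(scores), 2),
        }
        for dimension, scores in ratings.items()
    }

for dimension, stats in summarize(ratings).items():
    print(f"{dimension}: mean={stats['mean']}, spread={stats['spread']}")
```

A low spread across raters on every dimension is roughly what the researchers reported when they said raters "reached substantially similar results"; a formal study would use a proper inter-rater reliability statistic such as the intraclass correlation coefficient.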

That being said, Baumel’s team noted that even if raters like the content and found the design to be engaging, that didn’t necessarily mean that the app would change people’s behaviors. The researchers concluded that patients need not only a persuasive app design, but also qualities that support a therapeutic alliance.

In the future, the research team plans to investigate which aspects of app quality do a better job of predicting user behavior. They’re also testing the feasibility of rolling out an Enlight-based recommendation system for clinicians and end users. If they succeed, they’ll be addressing a real need. We can’t confidently integrate patient-generated app data until we can sort great apps from useless, inaccurate products.