
Randomized Controlled Trials and Longitudinal Analysis for Health Apps at Twine Health (Part 1 of 2)

Posted on February 17, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Walking into a restaurant or onto a bus is enough to see that any experience delivered through a mobile device is likely to enjoy enthusiastic uptake. In health care, the challenge is to find experiences that make a positive difference in people’s lives–and to prove it.

Of course, science has a time-tested method for demonstrating the truth of a proposition: the randomized controlled trial. Reproducibility is a big problem, admittedly, and science has been shaken by a string of errors and outright frauds perpetrated in scientific journals. Still, knowledge advances bit by bit through this process, and the goal of every responsible app developer in the health care space is the blessing conferred by a successful trial.

Consumer apps versus clinical apps

Most of the 165,000 health apps will probably always be labeled “consumer” apps and be sold without the expense of testing. They occupy the same place in the health care field as the thousands of untested dietary supplements and stem cell injection therapies whose promise is purely anecdotal. Consumer anger over ill-considered claims has led to lawsuits against device maker Fitbit and the Lumosity mental fitness app, raising questions about the suitability of digital fitness apps for medical care plans.

The impenetrability of consumer apps to objective judgment comes through in a recent study from the Journal of Medical Internet Research (JMIR) that asked mHealth experts to review a number of apps. The authors found very little agreement about what makes a good app, thus suggesting that quality cannot be judged reliably, a theme in another recent article of mine. One might easily anticipate that subjective measures would produce wide variations in judgment. But in fact, many subjective measures produced more agreement (although not really strong agreement) than more “objective” measures such as effectiveness. If I am reading the data right, one of the measures found to be most unreliable was one of the most “objective”: whether an app has been tested for effectiveness.

Designing studies for these apps is an uncertain art. Sometimes a failed study shows only that you didn’t know what to measure or didn’t run the study long enough. These possible explanations–gentler than the obvious concern that maybe fitness devices don’t achieve their goals–swirl about the failure of the Scripps “Wired for Health” study.

The Twine Health randomized controlled trials

I won’t talk any more about consumer apps here, though–instead I’ll concentrate on apps meant for serious clinical use. What can randomized testing do for these?

Twine Health and MIT’s Media Lab took the leap into rigorous testing with two leading Boston-area partners in the health care field: a diabetes case study with the Joslin Diabetes Center and a hypertension case study with Massachusetts General Hospital. Both studies compared a digital platform for monitoring and guiding patients against pre-existing tools such as face-to-face visits and email. Both demonstrated better results with the digital platform–but certain built-in limitations of randomized studies leave open questions.

When Dr. John Moore decided to switch fields and concentrate on the user experience, he obtained a PhD at the Media Lab and helped develop an app called CollaboRhythm. He then used it for the two studies described in the papers, while founding and becoming CEO of Twine Health. CollaboRhythm is a pretty comprehensive platform, offering:

  • The ability to store a care plan and make it clear to the user through visualizations.

  • Patient self-tracking to report taking medications and resulting changes in vital signs, such as glycemic levels.

  • Visualizations showing the patient her medication adherence.

  • Reminders when to take medication and do other aspects of treatment, such as checking blood pressure.

  • Inferences about diet and exercise patterns based on reported data, shown to the patient.

  • Support from a human coach through secure text messages and virtual visits using audio, video, and shared screen control.

  • Decision support based on reported vital statistics and behaviors. For instance, when diabetic patients reported following their regimen but their glycemic levels were getting out of control, the app could suggest medication changes to the care team.
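To make the decision-support point concrete, here is a minimal sketch of the kind of rule described above: the patient reports good adherence, yet glycemic readings keep drifting above target, so the system flags the care team to consider a medication change. The function, thresholds, and data shapes are my own illustration and are not taken from CollaboRhythm’s actual code.

```python
from statistics import mean

def flag_medication_review(adherence, a1c_readings,
                           adherence_floor=0.8, a1c_target=7.0):
    """Suggest a medication review when a patient reports taking
    medications as prescribed but glycemic control is still slipping.

    adherence    -- fraction of scheduled doses the patient reported taking
    a1c_readings -- recent hemoglobin A1c values, oldest first
    (Thresholds are illustrative, not clinical guidance.)
    """
    if len(a1c_readings) < 2:
        return False  # not enough data to see a trend
    adherent = adherence >= adherence_floor
    worsening = (a1c_readings[-1] > a1c_target
                 and a1c_readings[-1] > mean(a1c_readings[:-1]))
    return adherent and worsening

# Example: 90% reported adherence, but A1c creeping up past target
if flag_medication_review(0.9, [7.2, 7.6, 8.1]):
    print("Notify care team: consider adjusting the medication plan.")
```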

The collection of tools is not haphazard, but closely follows the modern model of digital health laid out by the head of Partners Connected Health, Joseph Kvedar, in his book The Internet of Healthy Things (which I reviewed at length). As in Kvedar’s model, the CollaboRhythm interventions rested on convenient digital technologies, put patients’ care into their own hands, and offered positive encouragement backed up by clinical staff.

As an example of this patient empowerment, the app designers deliberately chose not to send the patient an alarm if she forgets her medication. Instead, the patient is expected to learn and take on responsibility over time by seeing the results of her actions in the visualizations. In exit interviews, some patients expressed appreciation for being asked to take responsibility for their own health.

The papers talk of situated learning, a classic education philosophy that teaches behavior in the context where the person has to practice the behavior, instead of an artificial classroom or lab setting. Technology can bring learning into the home, making it stick.

The papers also discuss, in some detail, the relative costs and time commitments of the digital and traditional interventions. One important finding is that app users expressed significantly better feelings about the digital intervention. They became more conscious of their health and appreciated being able to take part in decisions such as changing insulin levels.

So how well does this treatment work? I’ll explore that tomorrow in the second part of this article, along with the strengths and weaknesses of the studies.

Doctors and Patients are Largely Missing at #HIMSS16

Posted on February 16, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I recently got an email from someone asking me if I knew of practicing doctors who would be at the HIMSS Annual Conference in Las Vegas (or, as we affectionately call it, #HIMSS16). I was sadly struggling to find an answer to that question. In fact, as I thought back over my last 6 years at HIMSS conferences, I could probably count on my hands and feet the number of practicing doctors I’d spent time with at HIMSS.

HIMSS attendance has exploded over the years, and I won’t be surprised if it passes 50,000 people this year. No doubt I only meet a small subset of the attendees, but there certainly should be more practicing doctors at the event. It’s unfortunate for our industry that they’re not there, since their voice is so crucial to the success of healthcare IT.

I’m sure HIMSS has a count of how many doctors (MD or DO) are at the event. However, those numbers are skewed, since I know a ton of MDs and DOs who attend HIMSS but aren’t actually practicing medicine anymore. They’re CMOs at vendors or startup entrepreneurs or clinical informaticists or something else. Many of them never even practiced medicine after residency. Nothing against these people. Many of them have amazing insight into what’s happening in healthcare. However, they’re not dealing with the day-to-day realities of practicing medicine.

I understand why many practicing doctors don’t attend HIMSS. It’s hard for them to get away from the office and justify traveling to a conference at their own expense. Plus, HIMSS registrations aren’t cheap. I don’t know why, at this point, HIMSS doesn’t give practicing doctors a free registration to the conference. Even if it did, I know some practicing doctors who have attended HIMSS and went away disenfranchised by the disconnect between what they heard at the show and what they experienced in their offices. It’s no surprise that they don’t return to future shows. However, keeping them away isn’t the way to change that disconnect. Having them at the conference is the way to fix it.

A similar commentary could be applied to patients at HIMSS as well. I’m always a little tentative to say that patients aren’t at HIMSS since all 50,000+ attendees are or have been patients in the health care system. So, patients are at HIMSS. However, there’s a difference between someone who’s been a patient and someone who’s at HIMSS to represent the voice of the patient.

There have been some efforts to include more patients at HIMSS, but their number is still infinitesimally small compared to the 50,000 attendees. One solution is for more of us to act as a patient voice at HIMSS. The other is to bring more patients who will be advocates for that voice.

This isn’t to say that HIMSS is a bad event. It’s a great event. It just could be better with more doctors and more patients present. If we can’t bring 50,000 people and 1300 exhibitors together and do some good, then something is really wrong. I’ve seen and written about some of the amazing announcements, initiatives and efforts that have come out of HIMSS. I’m sure we’ll see more of that progress again this year.

Plus, let’s also acknowledge that many of the 1300 HIMSS exhibitors and 50,000+ attendees spend a lot of time working with and consulting with doctors and patients when creating, evaluating and implementing healthcare IT solutions. In some ways a vendor or hospital CIO who’s talked to hundreds of patients or hundreds of doctors represents the voice of the patient and the doctor much better than 1 patient or 1 doctor sharing their own “N of 1” view of what’s happening in healthcare.

The reality of healthcare and health IT is that we’re tackling extremely difficult challenges. That’s why we need everyone in the same boat and paddling in the same direction. HIMSS is that event for healthcare IT in many ways, but it could be even more valuable if more doctors and patients were in attendance.

President’s Day – The Responsibility of Tomorrow

Posted on February 15, 2016 | Written By

John Lynn

Last year I started the tradition of posting some presidential quotes that I found interesting as a way to honor President’s Day in the US. I’ll keep that tradition alive with the quote below:
Lincoln Quote

#HIMSS16 Exhibitors with Great Workflow Stories

Posted on February 12, 2016 | Written By

John Lynn

UPDATE: You can see the recorded video of this discussion in the YouTube video embedded below:


Chuck Webster, MD, or as most people know him, @wareflo, is famous for always talking about healthcare workflow. You can talk about EHR and he’ll bring up workflow. You can talk about population health and he’ll discuss the workflow aspects of it. You can talk about buying a cheeseburger at McDonald’s and he’ll talk about workflow. You can talk about your tweeting strategy and he’ll talk about workflow. He should really consider changing his name to Mr. Workflow.

One thing I’ve always found interesting is that each year Chuck goes through the list of HIMSS exhibitors (yes, all ~1300 of them…he’s insane like that!) and identifies which ones are using workflow technology to solve the problems of healthcare. With that in mind, I thought a blab with Chuck and some #HIMSS16 vendors who include some aspect of workflow in their solutions would be a great intro to #HIMSS16. So, on Wednesday, February 17, 2016 at Noon ET (9 AM PT), I’ll be sitting down with Charles Webster and some #HIMSS16 vendors that are interested in workflow.

You can join my live conversation with Chuck Webster and even add your own comments to the discussion or ask Chuck questions. All you need to do to watch live is visit this blog post on Wednesday, February 17, 2016 at Noon ET (9 AM PT) and watch the video embed at the bottom of the post or you can subscribe to the blab directly. We’re hoping to include as many people in the conversation as possible. The discussion will be recorded as well and available on this post after the interview.

If you’re a healthcare IT vendor that has a solution that helps with healthcare workflows, we’d love to have you join us on the blab (video, chat, or just viewing). If you want to hop on video, you’ll probably want to visit the blab directly. Otherwise, if you just want to watch us chat, the video below will go live on the day of the blab.

If you’d like to see the archives of Healthcare Scene’s past interviews, you can find and subscribe to all of Healthcare Scene’s interviews on YouTube.

Are You Cheating Yourself and Your Patients When Meeting Regulations Like Meaningful Use?

Posted on February 11, 2016 | Written By

John Lynn

I spend a lot of time counseling clients that, although they may be able to check the box for attestation, they’re cheating themselves and their patients out of the improvements that systems were intended to drive.

The above quote was from a post by Dr. Jayne on HIStalk. Her comment really struck me on a number of levels. The first is that I know so many organizations that met the letter of the law when it came to meaningful use but definitely didn’t meet the spirit of the law.

Dr. Jayne is so right that many organizations that just slapped an EHR in place to get the EHR incentive money cheated themselves and their patients out of benefits that could have been achieved. I realize there were economic realities associated with waiting, but I think we’re going to suffer some long-term consequences from all the rushed implementations that jury-rigged things to meet the letter of the law instead of using the meaningful use regulations to improve the care they provide.

However, I’d take Dr. Jayne’s analysis one step further. It’s worth asking whether doctors who chased the EHR incentive money and attested to meaningful use cheated themselves and their patients. Fans of meaningful use will probably think I’m just being negative, but most doctors will likely think it’s a very good question.

The reality I’ve seen is that the happiest doctors I know chose to shun meaningful use and some of them are now laughing at their physician friends that are busy clicking meaningful use check boxes. Ok, most of them aren’t evil enough to laugh. However, in their heads they’re thinking how grateful they are that they chose not to pursue meaningful use. It’s not hard to make an argument for why not doing meaningful use is in the best interest of your patient.

My takeaway from the experience of the meaningful use “gold rush” was to slow down and think rationally. Mistakes happen when you make irrational choices. Taking the time to make a well-thought-out business decision is key when evaluating any software implementation or government program. Even a software implementation with $36 billion of government incentive money attached to it.

What is Quality in Health Care? (Part 2 of 2)

Posted on February 10, 2016 | Written By

Andy Oram

The first part of this article described different approaches to quality–and in fact to different qualities. In this part, I’ll look at the problems with quality measures, and at emerging solutions.

Difficulties of assessing quality

The Methods chapter of a book from the National Center for Biotechnology Information at NIH lays out many of the hurdles that researchers and providers face when judging the quality of clinical care. I’ll summarize a few of its points here, but the chapter is well worth a read. The review showed how hard it is to measure accurately many of the things we’d like to know about.

For instance, if variations within a hospital approach (or exceed) the variations between hospitals, there is little benefit to comparing hospitals using that measure. If the same physician gets wildly different scores from year to year, the validity of the measure is suspect. When care is given by multiple doctors and care teams, it is unjust to ascribe the outcome to the patient’s principal caretaker. If random variations outweigh everything, the measure is of no use at all. One must also keep in mind practical considerations, such as making sure the process of collecting data does not cost too much.
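A toy illustration of that first point: if scores vary about as much inside each hospital as the hospital averages vary from one another, ranking hospitals on the measure is mostly noise. The numbers below are invented purely to show the comparison.

```python
from statistics import mean, pvariance

# Hypothetical scores on one quality measure at three hospitals
scores = {
    "Hospital A": [82, 60, 95, 71],
    "Hospital B": [78, 64, 90, 70],
    "Hospital C": [85, 58, 92, 73],
}

within = mean(pvariance(vals) for vals in scores.values())
between = pvariance([mean(vals) for vals in scores.values()])

print(f"average within-hospital variance: {within:.1f}")   # large
print(f"between-hospital variance:        {between:.1f}")  # tiny
# When 'within' approaches or exceeds 'between', the measure says
# little about which hospital is actually better.
```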

Many measures apply to a narrow range of patients (for instance, those with pneumonia) and therefore may be skewed for doctors with a relatively small sample of those patients. And a severe winter could elevate mortality from pneumonia, particularly if patients have trouble getting adequate shelter and heat. In general, “For most outcomes, the impacts of random variation and patient factors beyond providers’ control often overwhelm differences attributable to provider quality.” ACMQ quality measures “most likely cannot definitively distinguish poor quality providers from high quality providers, but rather may illuminate potential quality problems for consideration of further investigation.”

The chapter helps explain why many researchers fall back on standard of care. Providers don’t trust outcome-based measures because of random variations and factors beyond their control, including poverty and other demographics. It’s hard even to know what contributed to a death, because in the final months it may not have been feasible to complete the diagnoses of a patient. Thus, doctors prefer “process measures.”

Among the criteria for evaluating quality indicators we see, “Does the indicator capture an aspect of quality that is widely regarded as important?” and, more subtly, “subject to provider or public health system control?” The latter criterion heeds physicians who say, “We don’t want to be blamed for bad habits or other reasons for noncompliance on the part of our patients, or for environmental factors such as poverty that resist quick fixes.”

The book’s authors are certainly aware of the bias created by gaming the reimbursement system: “systematic biases in documentation and coding practices introduced by awareness that risk-adjustment and reimbursement are related to the presence of particular complications.” The paper points out that diagnosis data is more trustworthy when it is informed by clinical information, not just billing information.

One of the most sensitive–and important–factors in quality assessment is risk adjustment, which means recognizing which patients have extra problems making their care more difficult and their recovery less certain. I have heard elsewhere the claim that CMS doesn’t cut physicians enough slack when they take on more risky patients. Although CMS tries to take poverty into account, hospital administrators suspect that institutions serving low-income populations–and safety-net hospitals in particular–are penalized for doing so.

Risk adjustment criteria are sometimes unpublished. But the most perverse distortion in the quality system comes when hospitals fail to distinguish iatrogenic complications (those introduced by medical intervention, such as infections incurred in the hospital) from the original diseases that the patient brought. CMS recognizes this risk in efforts such as penalties for hospital-acquired conditions. Unless these are flagged correctly, hospitals can end up being rewarded for treating sicker patients–patients that they themselves made sicker.
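A rough sketch of what correct flagging means in practice: risk adjustment should count only the conditions a patient arrived with, which is exactly what a “present on admission” flag is for. The weights and field names below are invented for illustration and are not any real risk-adjustment model.

```python
# Illustrative comorbidity weights (not a real risk-adjustment model)
WEIGHTS = {"diabetes": 1, "heart_failure": 3, "sepsis": 4}

def risk_score(diagnoses):
    """Sum weights only for conditions present on admission, so that a
    hospital-acquired infection does not make the patient look sicker
    on arrival and inflate the hospital's expected-complication baseline."""
    return sum(WEIGHTS.get(d["code"], 0)
               for d in diagnoses
               if d["present_on_admission"])

patient = [
    {"code": "heart_failure", "present_on_admission": True},
    {"code": "sepsis", "present_on_admission": False},  # acquired in the hospital
]
print(risk_score(patient))  # 3, not 7
```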

Distinguishing layers of quality

Theresa Cullen, associate director of the Regenstrief Institute’s Global Health Informatics Program, suggests that we think of quality measures as a stack, like those offered by software platforms:

  1. The bottom of the stack might simply measure whether a patient receives the proper treatment for a diagnosed condition. For instance, is the hemoglobin A1C of each diabetic patient measured regularly?

  2. The next step up measures the result of the first: how many patients’ A1C levels were under control for their stage of the disease?

  3. Next we can move to measuring outcomes: improvements in diabetic status, for instance, or prevention of complications from diabetes.

  4. Finally, we can look at the quality of the patient’s life–quality-adjusted life years.
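Read as data, this stack might look something like the sketch below, with each layer computed from richer information than the one before it. The field names, thresholds, and records are hypothetical, meant only to show how the four layers differ.

```python
# Hypothetical records for a small diabetic panel: whether A1c was measured,
# its value, whether complications occurred, and self-reported quality of life (0-1).
patients = [
    {"a1c_measured": True,  "a1c": 6.8,  "complication": False, "qol": 0.9},
    {"a1c_measured": True,  "a1c": 8.4,  "complication": True,  "qol": 0.6},
    {"a1c_measured": False, "a1c": None, "complication": False, "qol": 0.8},
]

n = len(patients)
# Layer 1 (process): was the right measurement taken at all?
process = sum(p["a1c_measured"] for p in patients) / n
# Layer 2 (intermediate result): of those measured, how many were in control?
measured = [p for p in patients if p["a1c_measured"]]
in_control = sum(p["a1c"] < 7.0 for p in measured) / len(measured)
# Layer 3 (outcome): complications avoided
outcome = sum(not p["complication"] for p in patients) / n
# Layer 4 (quality of life, a stand-in for quality-adjusted life years)
qol = sum(p["qol"] for p in patients) / n

print(process, in_control, outcome, qol)  # roughly 0.67, 0.5, 0.67, 0.77
```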

Ultimately, to judge whether a quality measure is valid, one has to compare it to some other quality measure that is supposedly trustworthy. We are still searching for measures that we can rely on to prove quality–and as I have already indicated, there may be too many different “qualities” to find ironclad measures. McCallum offers the optimistic view that the US is just beginning to collect the outcomes data that will hopefully give us robust quality measures; patient ratings serve as a proxy in the interim.

When organizations claim to use quality measures for accountable care, ratings, or other purposes, they should have their eyes open about how valid and how applicable those measures really are. Better data collection and analysis over time should allow more refined and useful quality measures. We can celebrate each advance in the choices we have for measures and their meanings.

What is Quality in Health Care? (Part 1 of 2)

Posted on February 9, 2016 | Written By

Andy Oram

Assessing the quality of medical care is one of the biggest analytical challenges in health today. Every patient expects–and deserves–treatment that meets the highest standards. Moreover, it is hard to find an aspect of health care reform that does not depend on accurate quality measurement. Without a firm basis for assessing quality, how can the government pay Accountable Care Organizations properly? How can consumer choice (the great hope of many reformers) become viable? How can hospitals and larger bodies of researchers become “learning health systems” and implement continuous improvement?

Quality measurement, of course, is crucial in a fee-for-value system to ensure that physicians don’t cut costs just by withholding necessary care. But a lot of people worry that quality-based reimbursement plans won’t work. As this article will show, determining what works and who is performing well are daunting tasks.

A recent op-ed claims that quality measures are adding unacceptable stress to doctors, that the metrics don’t make a difference to ultimate outcomes, that the variability of individual patients can’t be reflected in the measures, that the assessments don’t take external factors adequately into account, and that the essential element of quality is unmeasurable.

Precision medicine may eventually allow us to tailor treatments to individual patients with unique genetic profiles. But in the meantime, we’re guessing much of the time when we prescribe drugs.

The term quality originally just distinguished things of different kinds, like the Latin word qualis from which it is derived. So there are innumerable different qualities (as in “The quality of mercy is not strained”). It took a while for quality to be seen as a single continuum, as in an NIH book I’ll cite later, which reduces all quality measures to a single number by weighting different measures and combining them. Given the lack of precision in individual measures and the subjective definitions of quality, it may be a fool’s quest to seek a single definition of quality in health care.
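For what it’s worth, reducing many measures to one number usually comes down to a weighted average of normalized scores, roughly as in this sketch. The measures, weights, and scores are all invented; the NIH book’s actual weighting scheme is more elaborate.

```python
# Each measure is already normalized to a 0-1 scale; the weights express
# how much each one is trusted or valued (all numbers are made up).
measures = {"process": 0.92, "outcome": 0.71, "patient_rating": 0.80}
weights  = {"process": 0.5,  "outcome": 0.3,  "patient_rating": 0.2}

composite = sum(measures[m] * weights[m] for m in measures) / sum(weights.values())
print(f"composite quality score: {composite:.2f}")  # 0.83
```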

Many qualities in play
Some of the ways to measure quality and outcomes include:

  • Longitudinal research: this tracks a group of patients over many years, like the famous Framingham Heart Study that changed medical care. Modern “big data” research carries on this tradition, using data about patients in the field to supplement or validate conventional clinical research. In theory, direct measurement is the most reliable source of data about what works in public health and treatment. Obvious drawbacks include:

    • the time such studies take to produce reliable results

    • the large numbers of participants needed (although technology makes it more feasible to contact and monitor subjects)

    • the risk that unknown variations in populations will produce invalid results

    • inaccuracies introduced by the devices used to gather patient information

  • Standard of care: this is rooted in clinical research, which in turn tries to ensure rigor through double-blind randomized trials. Clinical trials, although the gold standard in research, are hampered by numerous problems of their own, which I have explored in another article. Reproducibility is currently being challenged in health care, as in many other areas of science.

  • Patient ratings: these are among the least meaningful quality indicators, as I recently explored. Patients can offer valuable insights into doctor/patient interactions and other subjective elements of their experience moving through the health care system–insights to which I paid homage in another article–but they can’t dissect the elements of quality care that went into producing their particular outcome, which in any case may require months or years to find out. Although the patient’s experience determines her perception of quality, it does not necessarily reflect the overall quality of care. The most dangerous aspect of patient ratings, as Health IT business consultant Janice McCallum points out, comes when patients’ views of quality depart from best practices. Many patients are looking for a quick fix, whether through pain-killers, antibiotics, or psychotropic medications, when other interventions are called for on the basis of both cost and outcome. So the popularity of ratings among patients just underscores how little we actually know about clinical quality.

Quality measures by organizations such as the American College of Medical Quality (ACMQ) and National Committee for Quality Assurance (NCQA) depend on a combination of the factors just listed. I’ll look more closely at these in the next part of this article.

EMR Issues That Generate Med Mal Payouts Sound Familiar

Posted on February 8, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

When any new technology is adopted, new risks arise, and EMR systems are no exception to that rule. In fact, if one medical malpractice insurer’s experience is any indication, EMR-related medical errors may be rising over time — or at least, healthcare organizations are becoming more aware of the role that EMRs are playing in some medical errors. The resulting data seems to suggest that many EMR risks haven’t changed for more than a decade.

In a recent blog item, med mal insurer The Doctors Company notes that EMR-related factors contributed to just under one percent of all claims closed from January 2007 through June 2014. Researchers there found that user factors contributed to 64% of the 97 closed claims, and system factors to 42%.

The insurer also got specific as to what kind of system and user factors had a negative impact on care, and how often.

EMR System Factors: 

  • Failure of system design – 10%
  • Electronic systems/technology failure – 9%
  • Lack of EMR alert/alarm/decision support – 7%
  • System failure–electronic data routing – 6%
  • Insufficient scope/area for documentation – 4%
  • Fragmented EMR – 3%

EMR User Factors

  • Incorrect information in the EMR – 16%
  • Hybrid health records/EMR conversion – 15%
  • Prepopulating/copy and paste – 13%
  • EMR training/education – 7%
  • EMR user error (other than data entry) – 7%
  • EMR alert issues/fatigue – 3%
  • EMR/CPOE workarounds – 1%

This is hardly a road map to changes needed in EMR user practices and system design, as a 97-case sample size is small. That being said, it’s intriguing — and to my mind a bit scary — to note that 16% of claims resulted at least in part from the EMR containing incorrect information. True, paper records weren’t perfect either, but there are considerably more vectors for contaminating an EMR with false or garbled data.

It’s also worth digging into what was behind the 10% of claims impacted by failure of EMR design. Finding out what went wrong in these cases would be instructive, to be sure, even if some of the flaws have probably been found and fixed since. (After all, some of these claims date back many years.)

But I’m leaving what I consider to be the juiciest data for last. Just what problems were created by EMR user and system failures? Here are the top candidates:

Top Allegations in EMR Claims

  • Diagnosis-related (failure, delay, wrong) – 27%
  • Medication-related – 19%
    • Ordering wrong medication – 7%
    • Ordering wrong dose – 5%
    • Improper medication management – 7%

As medical director David Troxel, MD notes in his blog piece, most of the benefits of EMRs continue to come with the same old risks. Tradeoffs include:

Improved documentation vs. complexity: EMRs improve documentation and legibility of data, but the complexity created by features like point-and-click lists, autopopulation of data from templates and canned text can make it easier to overlook important clinical information.

Medication accuracy vs. alarm fatigue: While EMRs can make med reconciliation and management easier, and warn of errors, frequent alerts can lead to “alarm fatigue,” which causes clinicians to disable them.

Easier data entry vs. creation of errors:  While templates with drop-down menus can make data entry simpler, they can also introduce serious, hard-to-catch errors when linked to other automated features of the EMR.

Unfortunately, there’s no simple way to address these issues, or we wouldn’t still be talking about them many years after they first became identified. My guess is that it will take a next-gen EMR with new data collection, integration and presentation layers to move past these issues. (Expect to see any candidates at #HIMSS16?)

In the meantime, I found it very interesting to hear how EMRs are contributing to medical errors. Let’s hope that within the next year or two, we’ll at least be talking about a new, improved set of less-lethal threats!

Fear of Saying Yes to Healthcare IT

Posted on February 5, 2016 | Written By

John Lynn

I’ve seen a theme this week in healthcare. The theme keeps coming up and so I thought I’d highlight it here for others to comment on. The following Twitter exchange illustrates the discussion:


This reply is about secure text, but “this” in Nick’s tweet could be a wide variety of tech solutions. So, fill in “this” with your favorite health IT solution.

Andrew Richards responded:

And then I replied:


Andrew is right that there are a lot of solutions out there, but the “gatekeepers,” as he calls them, are saying no. My tweet was limited to 140 characters, so I highlighted the fear associated with not saying yes. However, that definitely oversimplifies the reasons they’re not saying yes. Let’s also be clear that they’re not usually saying no either. They’re just not saying yes (salespeople sometimes call this misery).

While I think fear is a major reason the health IT gatekeepers aren’t saying yes, there are other reasons. For example, many are so overwhelmed with “bigger” projects that they just don’t have the time to say yes to one more project, even a project that has great potential to provide value to their organization. I’ve heard some people argue that this is just an excuse. In some cases it may be, but in others people really are busy with tons of projects.

Another obstacle I see is that many feel like they’ve been burned by past health IT projects. The front runner for burning people out is the EHR. No doubt some really awful EHR implementations have left a black eye on future healthcare IT projects. If you’d been through some of the awful EHR implementations that were done, you might be afraid of implementing more IT as well.

Nick Adkins finished the Twitter exchange with this tweet:


Nick has spent some time at Burning Man, as you can tell from his tweet. However, a passion for improving healthcare and going above and beyond what we’re doing today is key to saying yes to challenging but promising projects.

I’d love to hear your thoughts on this subject. Are there other good reasons people should be afraid of implementing new technology? Do we need to overcome this fear? What’s going to help these health IT “gatekeepers” to start saying yes?

Practice Fusion Cuts 25% of Staff

Posted on February 4, 2016 | Written By

John Lynn

Following up on our post a few weeks ago about the potential Practice Fusion IPO, news just came out that the EHR vendor has cut its staff by 25%. The TechCrunch report says that the cuts were across the board and affected roughly 74 people. Many are suggesting that the two reports are related, since cutting staff is a great way to improve your profit numbers before an anticipated IPO.

While I think the IPO could be part of the reasoning, I think there are likely some other trends at play too. While TechCrunch notes that it’s a down market for many IT companies, I think it’s fair to say that many EHR vendors have felt the pinch of late. I wrote a year or so ago that the golden era of government-incentivized EHR sales was over and that we’re entering a much different market. So it shouldn’t be a surprise that an EHR vendor might go through some cuts as the false market created by meaningful use disappears. I won’t be surprised to see more layoffs from other EHR vendors, especially ambulatory EHR vendors like Practice Fusion.

No doubt another factor at play is that Tom Langan replaced Ryan Howard as CEO back in August. It’s very common for a new CEO to go through a round of layoffs after taking over a business. Layoffs are hard for a previous CEO who’s deeply connected to the staff. Not that layoffs are ever easy, but it’s much easier for a new CEO to lay people off in order to make the organization more efficient. That’s particularly true when the previous CEO was the original CEO and founder of the company.

The cynical observer could also argue that Practice Fusion needed to do these layoffs in order to slow their burn rate since they aren’t in a position to raise more capital. You’d think the $150 million they’ve already raised would give them plenty of runway. However, you’d be surprised how quickly that disappears with that many staff on payroll (not to mention rents in San Francisco). I personally don’t think this is a case of Practice Fusion cutting staff because they can’t go out and raise money. However, it could be Practice Fusion cutting its burn rate so that they have some flexibility on when they go public without having to raise more money.

All of this said, 74 people lost their jobs at an EHR vendor. That’s never fun for anyone involved. At least they’ll likely have plenty of job opportunities in Silicon Valley, unless that bubble pops like some are suggesting. It will be interesting to see how many now-former Practice Fusion employees search for another job in health care IT.