
Artificial Intelligence in Healthcare: Medical Ethics and the Machine Revolution

Posted on July 25, 2017 | Written By

Healthcare as a Human Right. Physician Suicide Loss Survivor. Janae writes about Artificial Intelligence, Virtual Reality, Data Analytics, Engagement and Investing in Healthcare. Twitter: @coherencemed

Artificial Intelligence could be the death of us all. I heard that roughly a hundred times last week through shares on Twitter. While this theme may be premature, whether we can teach machines ethics and the value of human life is a relevant question for machine learning, especially in healthcare. Can we teach a machine ethical bounds? As Elon Musk calls for laws and boundaries, I found myself wondering what semantics and mathematics would be needed to frame that question. I had a Facebook friend who told me he knew a lot about artificial intelligence and wanted to warn me about the coming robot revolution.

He did not, in fact, know a lot about artificial intelligence coding. He could not write code, nor did he have any knowledge of the mathematical theory, but he was familiar with the worst-case scenario of the robots we create finding humanity superfluous and eliminating us. I was so underwhelmed that I messaged my friend who builds robots for Boston Dynamics and asked him about his latest project.

I was disappointed with that Facebook interaction. The disappointment was offset last week when the API changed and a happy Facebook message asked me if I wanted to revisit an ad that I had previously liked. I purposefully like ads if I like the company. Good old Facebook predictive ads. Sometimes I also comment on a picture or tag someone in an ad to see if I can change which advertisements I see in my feed.

One of the happiest times with this feature was finding great new running socks. I commented on a friend's picture that I liked her running socks, and within an hour I saw my first ad for those very same socks. While I'm not claiming to have seen the predictive and advertising algorithms behind Facebook advertising, machine learning is behind that ad. Photo recognition through image processing can identify the socks my friend Ilana was wearing while running a half marathon. Simple keyword scans can read my positive comments about the socks, which gives advertisers information about what I like. Pair that with photos from advertisers, and within one hour of "liking" those socks they seamlessly show up in my feed as a buying option. Are there ethical considerations in knowing exactly what my buying power, buying patterns and personal history are? Yes. Similarly, there will be ethical considerations when insurance companies can predict exactly which patients will and won't be able to pay for their healthcare. While I appreciate great running socks, I have mixed feelings about anyone assessing my ability to pay for the best medical care.
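To make the keyword half of that pipeline concrete, here is a toy sketch (not Facebook's actual system; the word lists and product names are invented) of how a positive comment can be turned into an inferred interest:

```python
# Toy sketch, not Facebook's advertising system: pair product keywords in a
# comment with positive sentiment words to infer an advertising interest.
PRODUCT_KEYWORDS = {"socks": "running socks", "shoes": "running shoes"}
POSITIVE_WORDS = {"love", "like", "great", "awesome", "want"}

def inferred_interests(comment: str) -> list:
    """Return product interests implied by a positively worded comment."""
    words = set(comment.lower().replace("!", "").split())
    if words & POSITIVE_WORDS:  # any positive signal at all?
        return [product for kw, product in PRODUCT_KEYWORDS.items() if kw in words]
    return []

print(inferred_interests("I love those socks Ilana!"))  # ['running socks']
```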

Can a machine be taught to value the best medical care and ethics? We seem to hear a lot of debate about whether machines can be taught not to kill us. Teaching a machine ethics will be complicated, even as these systems show us things like how directly poor nutrition changes how long patients live. Some claim these are dangerous things to create; others say the difference will be human intuition. Can human intuition be replicated, and what application will that have for medicine? I have always considered intuition to be connections our brain recognizes without our direct awareness, so a machine should be able to learn intuition through deep learning networks.

Creating “laws” or “rules” for ethics in artificial intelligence, as Elon Musk calls for, is hard because ethical bounds are difficult to teach machines. In a recent interview Musk claimed that artificial intelligence is the biggest risk that we face as a civilization. Ethical rules and bounds are difficult even for humanity. Once, when we were looking at data patterns, trust patterns and disease prediction, someone turned to me and said, “But insurance companies will use this information to deny people coverage. If they could read your genes, people will die.” In terms of teaching a machine ethics or adding outer bounds, one of the weaknesses is that trained learning systems can get very good in a narrow domain, but they don't transfer that learning to other domains. Reasoning by analogy, for example, is something machines are terrible at. I spoke with Adam Pease about how ontologies can increase the benefits of machine learning for healthcare outcomes. An ontology creates a way to process meaning that is more robust from a semantic point of view. He shared his open source ontology projects and suggested that philosophers and computer science experts should be part of the conversation about ontology and meaning.
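As a minimal sketch of what an ontology adds (a toy "is-a" hierarchy of my own invention, not Adam Pease's open source work), explicit semantics let a rule stated at one level apply to everything beneath it, which is one small step toward the kind of cross-domain reasoning described above:

```python
# Toy "is-a" ontology: explicit meaning lets a system generalize beyond
# exact keyword matches (e.g., a rule about "medication" covers ibuprofen).
IS_A = {
    "ibuprofen": "nsaid",
    "naproxen": "nsaid",
    "nsaid": "anti_inflammatory",
    "anti_inflammatory": "medication",
}

def falls_under(term: str, ancestor: str) -> bool:
    """Walk the is-a chain to decide whether `term` is a kind of `ancestor`."""
    while term in IS_A:
        if term == ancestor:
            return True
        term = IS_A[term]
    return term == ancestor

print(falls_under("ibuprofen", "medication"))  # True
print(falls_under("ibuprofen", "antibiotic"))  # False
```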

Can a machine learn empathy? Will a naturally learning and evolving system teach itself different bounds and hold life sacred, and how will it interpret the charge given to every doctor to "do no harm"? The medical community should come together to agree on ethical bounds and collaborate with computer scientists to see whether those bounds can be taught, and where the outliers in motivation might lie.

Most natural language processing is for applications that are pretty shallow in terms of what the machine understands. Machines are good at matching patterns: if your pattern is a bag of words, it can be connected with another bag of words across a collection of documents. Many companies have done extensive work training systems that will work with patients to learn what words mean and what patterns are common in patient care. Health Navigator, for example, has done extensive work to form the clinical bounds for a telemedicine system. When patients ask about their symptoms, they get clinically relevant information paired with those symptoms even if they use non-medical language to describe their chief complaint. Clinical bounds create a very important framework for an artificial intelligence system to process large amounts of inbound data and help triage patients to appropriate care.
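Here is a minimal bag-of-words sketch of that kind of matching (an illustration of the general technique, not Health Navigator's system; the phrases are invented): a lay-language complaint is compared with clinical phrasings purely on the words they share.

```python
# Bag-of-words matching sketch: map a patient's lay-language complaint to the
# closest clinical phrase by comparing word-count vectors.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

clinical_phrases = [
    "chest pain radiating to the left arm",
    "shortness of breath on exertion",
    "abdominal pain with nausea",
]
complaint = "my chest hurts and the pain goes down my left arm"

vectorizer = CountVectorizer()
phrase_vectors = vectorizer.fit_transform(clinical_phrases)
complaint_vector = vectorizer.transform([complaint])

scores = cosine_similarity(complaint_vector, phrase_vectors)[0]
print(clinical_phrases[scores.argmax()])  # the chest pain phrase wins on shared words
```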

With the help of Symplur Signals, I looked at ethics and artificial intelligence in healthcare and who was having conversations online in those areas. Wen Dombrowski MD, MPH led much of the discussion. Appropriately, part of what Catalaize Health does is artificial intelligence due diligence for healthcare organizations. Joe Babian, who leads the #HCLDR discussion, was also a significant contributor.

Symplur Signals looked at the stakeholders for Artificial Intelligence and Ethics in Healthcare

A “smart” system needs to be able to make the same inferences that a human can. Teaching inferences and standards within semantics is the first step toward teaching a machine “intuition” or “ethics.” Predictive pattern recognition appears more developed than the ethical and meaning boundaries for sensitive patient information. We can teach a system to recognize an increased risk of suicidal behavior from rushed words, dramatically altered behavior or higher-pitched speech, but is it ethical to spy on at-risk patients through their phones? I attended a mental health provider meeting about how that kind of monitoring would affect patients with paranoia. What are the ethical lines between protection and control? Can the meaning of those lines be taught to a machine?
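To show what the pattern-recognition half of that sentence amounts to (a purely illustrative sketch, not a validated clinical tool; the features and numbers are invented), a simple classifier over speech features would emit exactly the kind of risk-style score whose ethical use the paragraph questions:

```python
# Illustrative only, not a clinical tool: a classifier over invented speech
# features (speech rate, pitch, deviation from the patient's own baseline).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training rows: [words_per_second, mean_pitch_hz, baseline_shift]
X = np.array([
    [2.1, 180, 0.1],
    [2.3, 175, 0.2],
    [4.8, 240, 1.5],
    [5.1, 255, 1.8],
])
y = np.array([0, 0, 1, 1])  # 0 = typical pattern, 1 = flagged for follow-up

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[4.5, 230, 1.2]])[0, 1])  # a risk-style score
```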

I’m looking forward to seeing what healthcare providers decide those bounds should be, and how they train machines on those ontologies.

HHS Office of Inspector General Plans To Review $1.6 Billion In Incentive Payments – MACRA Monday

Posted on July 24, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

This post is part of the MACRA Monday series of blog posts where we dive into the details of the MACRA Quality Payment Program and related topics.

The HHS Office of Inspector General has announced plans to review the appropriateness of a walloping $14.6 billion in incentive payments made to providers over a six-year period. The upcoming report, which follows on a GAO study naming improperly issued incentive checks as the biggest threat to the Medicare EHR incentive program, addresses payments made by CMS between January 2011 and December 2016.

The OIG’s current audit plans follow on research it previously conducted, which estimated that the incentive program had wrongfully paid out $729 million in incentive payments between May 2011 and June 2014 alone.

To conduct that review, the OIG sampled incentive payment records for 100 eligible providers, then used the level of erroneous payments found among them to extrapolate the total amount paid out wrongly by CMS during those three years.

This time around, the watchdog organization plans to audit all payments made during the entire life of the incentive program, an exercise which could generate some even more dramatic numbers. If the prior research is any indication, the OIG could conclude that roughly 10% to 12% of the entire $14.6 billion in incentive payments shouldn't have been made in the first place.
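Since the projection above simply scales a sample-based error rate up to the whole program, here is that back-of-the-envelope arithmetic (the 12% rate and the $14.6 billion total come from this post; the rest is illustrative):

```python
# Back-of-the-envelope extrapolation of the kind the OIG used: apply the error
# rate found in a small sample to the program's total payments.
assumed_error_rate = 0.12          # ~12% of sampled payments found improper
total_incentive_payments = 14.6e9  # dollars paid out, Jan 2011 - Dec 2016

projected_improper = assumed_error_rate * total_incentive_payments
print(f"${projected_improper / 1e9:.2f} billion potentially improper")
# ~$1.75 billion at 12%; about $1.46 billion at a 10% rate
```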

Of course, looked at one way, this effort could be seen as closing the incentive door after the horses have left. Meaningful Use, by all accounts, is giving way to incentives under MACRA, which will apply distinctive criteria to its incentive payment formulas. Also, while I'm no numbers whiz, it seems to me that you can't really model the entire meaningful use program effectively using just 100 sample cases.

That being said, it does seem likely that the audit will find more situations in which physicians hadn't submitted the right self-attestation data or couldn't prove what they asserted, and if the federal auditor has any role to play, this research is probably a good idea.

Sure, nobody wants to be audited, particularly when your healthcare organization has jumped through many hoops to comply with meaningful use rules. Even if you can afford to pay back your incentive money, why would you want to do so? And particularly if you’ve already played by the rules, you certainly wouldn’t want to prove it again. But since the audit is going to happen anyway, perhaps it’s best to get any possible pain it may generate out of the way.

To date, I haven’t read anything suggesting that CMS has immediate plans to claw back incentive payments from providers. My assumption, though, is that they will eventually do so. Governments need money to get their job done, and audits theoretically offer the added benefit of tightening up important initiatives like this one.

As someone who has worked exclusively in the civilian world, I have often made fun of the plodding pace at which federal and state government agencies operate. In this case, though, a slow, deliberate process — such as a gradually-widening payment review — is likely to get the job done most effectively. Let's establish carefully which incentive payments may have been issued inappropriately and clear the decks for MACRA.

Bringing Zen To Healthcare:  Transformation Through The N of 1

Posted on July 21, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

The following essay wasn’t easy to understand. I had trouble taking it in at first. But the beauty of these ideas began to shine through for me when I took time to absorb them. Maybe you will struggle with them a bit yourself.

In his essay, the author argues that if providers focus on the "N of 1," it could change healthcare permanently. I think he might be right, or at least that he makes a good case. It's a complex argument but worth following to the end. Trust me, the journey is worth taking.

The mysterious @CancerGeek

Before I share his ideas, I’ll start with an introduction to @CancerGeek, the essay’s author. Other than providing a photo as part of his Twitter home page, he’s chosen to be invisible. Despite doing a bunch of skillful GoogleFu, I couldn’t track him down.

@CancerGeek posted a cloud of interests on his Twitter page, including a reference to being a global product manager for PET-CT; he says he develops hospital and cancer centers in the US and China; and he describes himself as an associate editor with DesignPatient-MD.

In the essay, he says that he did clinical rotations from 1998 to 1999 while at the University of Wisconsin-Madison Carbone Comprehensive Cancer Center, working with Dr. Minesh Mehta.

He wears a bow tie.

And that’s all I’ve got. He could be anybody or nobody. All we have is his voice. John assures me he's a real person who works at a company that everyone knows. He's simply chosen to keep his social profiles relatively anonymous to separate them from his day job.

The N of 1 concept

Though we don’t know who @CancerGeek is, or why he is hiding, his ideas matter. Let’s take a closer look at the mysterious author’s N of 1, and decide for ourselves what it means. (To play along, you might want to search Twitter for the #Nof1 hashtag.)

To set the stage, @CancerGeek describes a conversation with Dr. Mehta, a radiation oncologist who served as chair of the department where @CancerGeek got his training. During this encounter, he had an insight which helped to make him who he would be — perhaps a moment of satori.

As the story goes, someone called Dr. Mehta to help set up a patient in radiation oncology, needing help but worried about disturbing the important doctor.

Apparently, when Dr. Mehta arrived, he calmly helped the patient, cheerfully introducing himself to their family and addressing all of their questions despite the fact that others were waiting.

When Dr. Mehta asked @CancerGeek why everyone around him was tense, our author told him that they were worried because patients were waiting, they were behind schedule and they knew that he was busy. In response, Dr. Mehta shared the following words:

No matter what else is going on, the world stops once you enter a room and are face to face with a patient and their family. You can only care for one patient at a time. That patient, in that room, at that moment is the only patient that matters. That is the secret to healthcare.

Apparently, this advice changed @CancerGeek on the spot. From that moment on, he would work to focus exclusively on the patient and tune out all distractions.

His ideas crystallized further when he read an article in the New England Journal of Medicine that gave a name to his approach to medicine. The article introduced him to the concept of N of 1. All of the pieces began to fit together.

The NEJM article was singing his song. It said that no matter what physicians do, nothing else counts when they’re with the patient. Without the patient, it said, little else matters.

Yes, the author conceded, big projects and big processes still matter. Creating care models, developing clinical pathways and clinical service lines, building cancer centers, running hospitals, and offering outpatient imaging, radiology and pathology services are still worthwhile. But to practice well, the author said, dedicate yourself to caring for patients at the N of 1. Our author's fate was sealed.

Why is N of 1 important to healthcare?

Having told his story, @CancerGeek shifts to the present. He begins by noting that at present, the healthcare industry is focused on delivering care at the “we” level. He describes this concept this way:

“The “We” level means that when you go to see a physician today, that the medical care they recommend to you is based on people similar to you…care based on research of populations on the 100,000+ (foot) level.”

But this approach is going to be scrapped over the next 8 to 10 years, @CancerGeek argues. (Actually, he predicts that the process will take exactly eight years.)

Over time, he sees care moving gradually from managing groups to delivering personalized care through one-to-one interactions. He believes the process will proceed as follows:

  • First, sciences like genomics, proteomics, radionomics, functional imaging and immunotherapies will push the industry into delivering care at a 10,000-foot population level.
  • Next, as ecosystems are built out that support seamless sharing of digital footprints, care will move down to the 1,000-foot level.
  • Eventually, the system will alight at the patient level. On that day, the transition will be complete. Healthcare will no longer be driven by hospitals, healthcare systems or insurance companies. Its sole focus will be on people and communities — and what the patient will become over time.

When this era arrives, doctors will know patients far more deeply, he says.

He predicts that by leveraging all of the data available in the digital world, physicians will know the truth of their patients' experiences: the food they eat, the air they breathe, how much sleep they get, where they work, how they commute to and from work, and whether they care for a family member or friend. With that knowledge, doctors will finally be able to offer truly personalized care. They'll focus on the N of 1, the single patient they're encountering at that moment.

The death of what we know

But we’re still left with questions about the heart of this idea. What, truly, is the N of 1? Perhaps it is the sound of one hand clapping. Or maybe it springs from an often-cited Zen proverb: “When walking, walk. When eating, eat.” Do what you’re doing right now – focus and stay in the present moment. This is treating patients at the N of 1 level, it seems to me.

Like Zen, the N of 1 concept may sound mystical, but it’s entirely practical. As he points out, patients truly want to be treated at the N of 1 – they don’t care about the paint on the walls or Press Ganey scores, they care about being treated as individuals. And providers need to make this happen.

But to meet this challenge, healthcare as we know it must die, he says. I’ll leave you with his conclusion:

“Within the next eight years, healthcare as we know it will end. The new healthcare will begin. Healthcare delivered at the N of 1.”  And those who seek will find.

Google’s DeepMind Runs Afoul Of UK Regulators Over Patient Data Access

Posted on July 20, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Back in February, I recounted the tale of DeepMind, a standout AI startup acquired by Google a few years ago. In the story, I noted that DeepMind had announced that it would be working with the Royal Free London NHS Foundation Trust, which oversees three hospitals, to test out its healthcare app.

DeepMind’s healthcare app, Streams, is designed to help providers kick out patient status updates to physicians and nurses working with them. Under the terms of the deal, which was to span five years, DeepMind was supposed to gain access to 1.6 million patient records managed by the hospitals.

Now, the agreement seems to have collapsed under regulatory scrutiny. The UK’s data protection watchdog has ruled that DeepMind’s deal with the Trust “failed to comply with data protection law,” according to a story in Business Insider. The watchdog, known as the Information Commissioner’s Office (ICO), has spent a year investigating the deal, BI reports.

As it turns out, the agreement empowered the Trust hospitals to share the data without the patients' prior knowledge, something that presumably wouldn't fly in the U.S. either. The shared records, intended for use in developing the Streams app's kidney monitoring technology, include information on whether people are HIV-positive, along with details of drug overdoses and abortions.

In its defense, DeepMind and the Royal Free Trust argued that patients had provided "implied consent" for such data sharing, given that the app was delivering "direct care" to patients using it. (Nice try. Got any other bridges you wanna sell?) Not surprisingly, that didn't satisfy the ICO, which found several other shortcomings in how the data was handled as well.

While the ICO has concluded that the DeepMind/Royal Free Trust deal was illegal, it doesn't plan to sanction either party, despite having the power to hand out fines of up to £500,000, BI said. But DeepMind, which set up its own independent review panel last year to oversee its data sharing agreements, privacy and security measures and product roadmaps, is taking a closer look at this deal. Way to self-police, guys! (Or maybe not.)

Not to be provincial, but what worries me here is less the politics of UK patient protection laws and more the potential for Google subsidiaries to engage in other questionable data sharing activities. DeepMind has always said that it does not share patient data with its corporate parent, and while this might be true now, Google could do incalculable harm to patient privacy if it doesn't maintain this firewall.

Hey, just consider that even for an entity the size of Google, healthcare data is an incredibly valuable asset. Reportedly, even street-level data thieves pay 10x as much for healthcare data as they do for, say, credit card numbers. It's hard to even imagine what an entity the size of Google could do with such data if it were crunched in incredibly advanced ways. Let's just say I don't want to find out.

Unfortunately, as far as I know U.S. law hasn’t caught up with the idea of crime-by-analytics, which could be an issue even if an entity has legal possession of healthcare data. But I hope it does soon. The amount of harm this kind of data manipulation could do is immense.

eClinicalWorks Settlement Raises Question Of Customer Liability

Posted on July 19, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Not long ago, my colleague John Lynn shared the news that EMR vendor eClinicalWorks had settled a whistleblower lawsuit for $155 million. The U.S. Department of Justice found that the vendor had skirted many EMR certification requirements, which in turn had caused providers using its software to file false claims for Meaningful Use incentives.

Today, I read an interesting follow-up by Becker's Hospital Review addressing the issue of whether eCW users face any liability for the vendor's failure to meet certification standards. The Becker's writer, who reached out to CMS to find out its policy on the matter, found that while eCW's customers technically submitted false claims for MU reimbursement, the agency won't be asking any of them to return the money.

If anyone has calculated how much CMS paid them, I haven’t seen the figures, but I’m sure it’s a pretty substantial sum of money. It’s good to see that the feds aren’t putting the squeeze on these customers, who presumably weren’t aware of eCW’s apparent skullduggery.

The thing is, I find it hard to believe that eCW is the only vendor who fudged things to get certified for the MU program. In fact, I’d guess that virtually every vendor in the industry has skirted if not crossed the line when it comes to EMR certification. That’s the way it goes, realistically, when you’re dealing with federal oversight.

After all, doesn’t every company work to save as much on taxes as they can? Yes, some are very conservative and only take whatever deductions they see as clearly legal, but others push harder. A goodly number of firms are willing to adopt strategies a tax lawyer might call “aggressive” – which don’t clearly violate the law but may raise a few eyebrows – in an effort to maximize their profits.

The big question here is whether EMR customers could be on the hook for incentives paid out wrongly due to an invalid vendor certification. If vendors are coloring outside the lines, it's likely some will be caught, and if so, I'm betting that CMS will eventually get tough with their customers.

In the absence of clear evidence of customer wrongdoing, CMS might let customers keep their incentive payments. But I imagine that under some circumstances, the agency might wonder whether a customer knew what was going on and decided to take, say, a price cut in exchange for keeping its mouth shut.

Also, particularly if other vendors are hit with whistleblower suits, CMS might decide that customers should have validated that the EMR they were using actually had a legitimate certification. I don’t know how (or if) EMR customers would do this, but I can imagine a scenario under which CMS might take this tack.

Bottom line, we’d all better hope that CMS doesn’t decide to audit every vendor’s EMR certification filings. As I see it, their customers could easily be caught in the backlash.

Is The ONC Still Relevant?

Posted on July 18, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Today, I read an article in Healthcare IT News reporting on the latest word from the ONC. Apparently, during a recent press call, National Coordinator Donald Rucker, MD, gave an update on agency activities without sharing a single new idea.

Now, if I were the head of the ONC, I might do the same. I'm sure it played well with the wire services and daily newspaper reporters, most of whom don't dig into tech issues like interoperability too deeply.

But if I were a wiseacre health IT blogger (and I am, of course) I'd react a bit differently. By which I mean that I would wonder aloud, very seriously, whether the ONC is even relevant anymore. To be fair, I can't judge the agency's current efforts by what it said at a press conference, but I'm not going to ignore what was said, either.

According to HIN, the ONC sees developing a clear definition of interoperability, improving EMR usability and getting a better understanding of information blocking as key objectives.

To address some of these issues, Dr. Rucker apparently suggested that using open APIs, notably RESTful APIs exchanging JSON, would be important to future EMR interoperability efforts. Reportedly, he's also impressed with the FHIR standard, because it's a modern API and because large vendors have already done some work with it through the SMART project.
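For readers who haven't seen what a "modern, RESTful, JSON-based API" looks like in practice, here is a minimal sketch of a FHIR read (the server URL and patient ID are placeholders pointing at a public test server, not any real EMR):

```python
# Minimal FHIR read over REST: fetch a Patient resource as JSON.
# The base URL and ID are placeholders; a nonexistent ID returns an
# OperationOutcome resource instead of a Patient.
import requests

base_url = "https://hapi.fhir.org/baseR4"  # public HAPI FHIR test server
patient_id = "example"                     # hypothetical resource id

response = requests.get(
    f"{base_url}/Patient/{patient_id}",
    headers={"Accept": "application/fhir+json"},
)
resource = response.json()
print(resource.get("resourceType"), resource.get("id"))
```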

To put it kindly, I doubt any of this was news to the health IT press.

Now, I’m not saying that Dr. Rucker got anything wrong, exactly. It's hard to dispute that we're far behind when it comes to EMR usability, embarrassingly so. In fact, if we don't address that issue, many other EMR-related efforts won't be worth much. That being said, much of the rest strikes me as, well, lacking originality and/or substance.

Addressing interoperability by using open APIs? I'm pretty sure someone in the health IT business has thought that through before. If Dr. Rucker knows this, why would he present it as a novel idea (as seems to be the case)? And if he doesn't, is the agency really that far behind the curve?

Establishing full interoperability with FHIR? Maybe, someday. But at least as of a year ago, FHIR product director Grahame Grieve argued that people are “[making] wildly inflated claims about what is possible, [willfully] misunderstanding the limits of the technology and evangelizing the technology for all sorts of ill-judged applications.”  If Grieve thinks people are exaggerating FHIR’s capabilities, does ONC bring anything useful to the table by endorsing it?

Understanding information blocking? Well, perhaps, but I think we already know what's going on. At its core, this is a straightforward business issue: EMR vendors and many of their customers have an incentive to make health data sharing tough. Until they have a stronger incentive to share data, they won't play ball voluntarily. And studying a much-studied problem probably won't help things much.

To be clear, I’m relying on HIN as a source of facts here. Also, I realize that Dr. Rucker may have been simplifying things in an effort to address a general audience.

But if my overall impression is right, the news from this press conference isn't encouraging. I would have hoped that by 2017, ONC would be advancing the ball further and telling us something we truly didn't know already. If it's not sharing new ideas by this point, what good can it do? Maybe that's why the rumored HHS budget cuts could hit ONC really hard.

The Health Plans’ Role in Meeting MACRA Requirements – MACRA Monday

Posted on July 17, 2017 | Written By

The following is a guest blog post by Karen Way, Health Plan Analytics and Consulting Practice Lead at NTT DATA Services. This post is part of the MACRA Monday series of blog posts where we dive into the details of the MACRA Quality Payment Program.

When the Medicare Access and CHIP Reauthorization Act (MACRA) became federal law for the healthcare industry in 2015, the act replaced the previous Medicare reimbursement schedule with a new pay-for-performance program focused on quality, value and accountability. In short, the legislation rewards healthcare providers for quality of care, not quantity.

While many discuss the impact on providers, what is the health plans’ role in aiding health systems and physicians to meet MACRA requirements?

MACRA provides multiple opportunities for health plans to increase and improve collaboration with provider networks. Recommendations on how health plans can accomplish this include sharing information and services, creating new partnerships and bringing about financial awareness as the legislation continues to take effect.

Sharing Data

One of the requirements under MACRA is for providers to enhance clinical measures and data analytics to strengthen members' experiences. Health plans can assist by recognizing where providers lack data-related expertise and offering input and support where it's most beneficial.

For example, a provider may not have much knowledge of advanced data science, but health plans can share their predictive models and tools to strengthen analytics. Sharing advanced technical infrastructure to facilitate data exchange will enable providers to access a more complete picture of members' profiles. In turn, that fuller picture will support higher quality service for individual members, as well as opportunities for health plans to continue offering tailored consulting and data support.

At its best, sharing data to improve clinical measures is a win-win scenario. The Healthcare Effectiveness Data and Information Set (HEDIS) is a tool used by more than 90 percent of America's health plans to measure performance on important dimensions of care and service. Just as HEDIS calls for measurement, MACRA also encourages health plans to aid providers with reporting standards. Under these rules, health plans are required to record a wealth of information on members, and when that information is shared with providers, the tide lifts all boats.

Partnering to Manage Risk

Some of the changes under MACRA are reminders for providers to be highly aware of risk management. Providers will seek strong partners with the necessary skills, experience and knowledge to ensure they do not take on risk greater than they can support. To assist, health plans should enter into risk-sharing relationships, such as value-based contracts, with high-performing providers.

Health plans should actively strive to be strong partners by enabling robust data analytics that support quantitative action plans in the areas of quality and clinical care gaps, medical cost and trend analysis, population health, as well as member-risk management. As health plans partner with providers, they should also stay flexible on potential changes to provider payments as the pay-for-performance model(s) mature over time.

Financial Awareness

Health plans also need to be aware of the financial considerations that result from increased value-based contracting for small and large providers.

Under MACRA, smaller providers and individual physicians are more likely to be exposed to a potential increase in costs, which may result in additional provider considerations. As Medicare payments shrink, these providers will look to shift costs to other payers, making contract negotiations more difficult and potentially increasing unit costs for some services. Large physician groups, or those located in markets with progressive healthcare systems, will look to negotiate even higher reimbursement rates due to the potential for increased competition.

Health plans should also be aware of potential impacts beyond Medicare fee-for-service (FFS), which is the initial focus of the MACRA legislation. Pay-for-performance is likely to extend beyond Medicare FFS into other health plan lines of business, such as Medicaid or commercial plans. For example, under MACRA, the Centers for Medicare and Medicaid Services stated it would consider permitting Medicaid Medical Homes to count as an alternative payment model if participating practices would risk at least four percent of their revenue in 2019 and five percent in 2020.

Why This Matters

Overall, MACRA creates a tall order as it aims to increase pay-for-performance and decrease care based on quantity. This notion is an altruistic adjustment for the health system and each party has a specific role to play to achieve the dream. But the backbone of this goal is collaboration between health plans and providers. Collaboration will result in shared clinical measures, awareness and management of risk, lower healthcare costs and, most importantly, improved patient outcomes.

EMR Impact on Patient Care Differs, But Doctors Never Win

Posted on July 14, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Nearly all physicians agree that using EMRs isn’t great for their relationship with patients. But, hospital-based and office-based physicians seem to have different reactions to the problem. (Neither group is happy with their lot, but I’m sure you already guessed that much.)

The study, by researchers at Brown University and Healthcentric Advisors, is based on the open-ended answers provided by 744 doctors to a survey question: “How does using an EHR affect your interaction with patients?” (The question was posed by the Rhode Island Department of Health in 2014.)

In analyzing the responses, researchers found that office-based physicians and hospital-based physicians had different concerns about patient care and EMR use.

Office-based physicians, who typically bring their computer into the exam room, worry that staring at a computer screen will undermine the quality of their visit with the patient. “[It’s] like having someone at the dinner table texting rather than paying attention,” one doctor wrote.

Hospital-based physicians, for their part, usually do their record-keeping on EMRs based outside the exam room.  They said that record-keeping took up too much time, leaving little for direct contact with patients. Said one physician: “I now spend much less time [with] patients because I know I have hours of data entry to complete.”

To maintain their standards of patient care, some physicians are doing the data entry at home rather than at work, sometimes for many hours at a time. Others are taking CME classes which promise to help them integrate EMR use with patient consults in the least disruptive manner. But nobody has found any good solutions to the patient care conundrum.

Of course, we knew most of this already. This study just offers some added color to a picture we’ve already seen. Both patients and physicians are suffering under current models of EMR use, and there’s little relief on the horizon.

Yes, a few physicians said that EMRs hadn’t impacted their time with patients. This might’ve been encouraging, but this group included one physician who treated newborns and another using a scribe to handle data entry during consults.

And there were a few respondents who cited positive aspects of EMR use in patient care. For example, one hospital-based doctor noted that EMRs offered him an easy way to look at a comprehensive patient history. Some office-based physicians noted that web-based patient portals were improving their patient interactions.

But the striking thing here is that few if any physicians suggested that EMRs offered any ongoing clinical benefits. As researchers have discovered many times over, most doctors saw their EMR use as a work requirement rather than a clinical exercise. This only underscores that as they presently work, EMRs benefit administrators, not care providers.

I wish I was so smart that I’d come up with some sort of solution to this problem. I haven’t. But it doesn’t hurt to harp on the existence of the problem. We should remind ourselves over and over again that it’s time to roll out EMRs that support clinicians.

Did Meaningful Use Really Turn EMRs Into A Commodity?

Posted on July 12, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Not long ago, I had a nice email exchange with a sales manager at one of the top ambulatory EMR vendors. He had written to comment on “The EMR Vendor’s Dilemma,” a piece I wrote about the difficult choices such vendors face in staying just slightly ahead of the market.

In our correspondence, he argued that Meaningful Use (MU) had led customers to see EMRs as commodities. I think he meant that MU sucked the innovation out of EMR development.

After reflecting on his comments, I realized that I didn’t quite agree that EMRs had become a commodity item. Though the MU program obviously relied on the use of commoditized, certified EMR technology, I’d argue that the industry has simply grown around that obstacle.

If anything, I’d argue that MU has actually sparked greater innovation in EMR development. Follow me for a minute here.

Consider the early stages of the EMR market. At the outset, say, in the 50s, there were a few innovators who figured out that medical processes could be automated, and built out versions of their ideas. However, there was essentially no market for such systems, so those who developed them had no incentive to keep reinventing them.

Over time, a few select healthcare providers developed platforms which had the general outline EMRs would later have, and vendors like Epic began selling packaged EMR systems. These emerging systems began to leverage powerful databases and connect with increasingly powerful front-end systems available to clinicians. The design for overall EMR architecture was still up for grabs, but some consensus was building on what its core was.

Eventually, the feds decided that it was time for mass EMR adoption, and the Meaningful Use program came along. MU certification set some baseline standards for EMR vendors, leaving little practical debate as to what an EMR's working parts were. Sure, at least at first, these requirements bled a lot of experimentation out of the market, and certainly discouraged wide-ranging innovation to a degree. But they also set the stage for an explosion of ideas.

Because the truth is, having a dull, standardized baseline that defines a product can be liberating. Having a basic outline to work with frees up energy and resources for use in innovating at the edges. Who wants to keep figuring out what the product is? There’s far more upside in, say, creating modules that help providers tackle their unique problems.

In other words, while commoditization solves one (less interesting) set of problems, it also lets vendors focus on the high-level solutions that arguably have the most potential to help providers.

That’s certainly been the case when an industry agrees on a technology specification set such as, say, the 802.11 and 802.11x standards for wireless LANs. I doubt Wi-Fi tech would be ubiquitous today if the IEEE hadn’t codified these standards. Yes, working from technical specs is different than building complex systems to meet multi-layered requirements, but I’d argue that the principle still stands.

All told, I think the feds did EMR vendors a favor when they created Meaningful Use EMR certification standards. I doubt the vendors could have found common ground any other way.