
OIG Says HHS Needs To Play Health IT Catch-Up

Posted on December 1, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

A new analysis by the HHS Office of the Inspector General suggests that the agency still has work to do in appropriately managing health information technology and making sure it performs, according to Health Data Management. And unfortunately, the problems it highlights don’t seem likely to go away anytime soon.

The critique of HHS’s HIT capabilities came as part of an annual report from the OIG, in which the oversight body lists what it sees as the department’s top 10 management and performance issues. The OIG ranked HIT third on its list.

In that critique, auditors from the OIG pointed out that there are still major concerns over the future of health data sharing in the US, not just for HHS but for the US healthcare system at large. Specifically, the OIG notes that while HHS has spent a great deal on health IT, it hasn’t gotten very far in enabling and supporting the flow of health data between various stakeholders.

In this analysis, the OIG cites several factors that auditors see as challenges for HHS, including the lack of interoperability between health data sources, barriers imposed by federal and state privacy and security laws, the cost of health IT infrastructure and environmental issues such as information blocking by vendors. Of course, the problems it outlines are the same old pains in the patoot that we’ve been facing for several years, though it doesn’t hurt to point them out again.

In particular, the OIG’s report argues, it’s essential for HHS to improve the flow of up-to-date, accurate and complete electronic information between the agency and providers it serves. After all, it notes, having that data is important to processing Medicare and Medicaid payments, quality improvement efforts and even HHS’s internal program integrity and operations efforts. Given the importance of these activities, the report says, HHS leaders must find ways to better streamline and speed up internal data exchange as well as share that data with Medicare and Medicaid systems.

The OIG also critiqued HHS security and privacy efforts, particularly as the number of healthcare data breaches and potential cybersecurity threats like ransomware continues to expand. As things stand, HHS cybersecurity shortfalls abound, including inadequate access controls, patch management, data encryption and website security. These vulnerabilities, it noted, affect not only HHS, but also the states and other entities that do business with the agency, as well as healthcare providers.

Of course, the OIG is doing its job in drawing attention to these issues, which are stubborn and long-lasting. Unfortunately, hammering away at these issues over and over again isn’t likely to get us anywhere. I’m not sure the OIG should have wasted the pixels to remind us of challenges that seem intractable without offering some really nifty solutions, or at least new ideas.

E-Patient Update: Time To Share EMR Data With Apps

Posted on November 18, 2016 | Written By Anne Zieger

Like most Americans, I’ve used many a health-related app, in categories including vitals tracking, weight control, sleep management, medication management and exercise tracking. While I’ve continued to use a few, I’ve dropped most after a few uses because they didn’t contribute anything to my life.

Now, those of you who are reading this might assume that I lost interest in the apps because they were poorly designed. I admit that this was true in some cases. But in others, I’ve ceased to use the apps because the data they collect and display hasn’t been terribly useful, as most of it lives in a vacuum. Sure, I might be able to create a line graph of my heart rate or pulse ox level, but that’s mildly interesting at best. (I doubt physicians would find it terribly interesting either.)

That being said, I believe there is a way healthcare organizations can make the app experience more useful. I’d argue that hospitals and clinics, as well as other organizations caring for patients, need to connect with major app developers and synch their data with those platforms. If done right, the addition of outside data would enrich the patient experience dramatically, and hopefully, provide more targeted feedback that would help shape their health behaviors.

How it would work

How would this work? Here’s an example from my own life, as an e-patient who digitally manages a handful of chronic, sometimes-complex conditions.

I have tested a handful of medication management apps, whose interfaces were quite different but whose goals seem to be quite similar — the primary one being to track the date and time each medicine on my regimen was taken. In each case, I could access my med compliance history rather easily, but had no information on what results my level of compliance might have accomplished.

However, if I could have overlaid those compliance results with changes in my med regimen, my vital signs and my lab values, I’d have a better picture of how all of my health efforts fit together. Such a picture would be far more likely to prompt changes in my health behavior than uncontextualized data points based on my self-report alone.
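As a sketch of what that overlay might look like inside an app, consider the following. Every date, dose and lab value here is hypothetical, invented purely for illustration:

```python
from datetime import date

# Hypothetical self-reported doses (date -> whether the med was taken)
doses = {
    date(2016, 11, 1): True,
    date(2016, 11, 2): False,
    date(2016, 11, 3): True,
}

# Hypothetical lab values imported from a provider's or pharmacy's system
labs = {
    date(2016, 11, 1): {"a1c": 7.9},
    date(2016, 11, 3): {"a1c": 7.4},
}

def overlay(doses, labs):
    """Merge compliance history and lab results into one timeline."""
    days = sorted(set(doses) | set(labs))
    return [
        {"date": d, "dose_taken": doses.get(d), "labs": labs.get(d, {})}
        for d in days
    ]

for entry in overlay(doses, labs):
    print(entry)
```

The point isn’t the code, which is trivial; it’s that once provider data lands in the same structure as self-reported data, the app can show compliance and outcomes side by side.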

I should mention that I know of at least one medication management app developer (the inspiration for this essay) which hopes to accomplish just this result already, and is hard at work enriching its platform to make such integration possible. In other words, developers may not need much convincing to come on board.

The benefits of added data

“Yes,” I hear you saying, “but why should I share my proprietary data?” The answer is fairly simple; in the world of value-based reimbursement, you need patients to get and stay well, and helping them better manage their health fits this goal.

Admittedly, achieving this level of synchronization between apps and provider data won’t be simple. However, my guess is that it would be easier for app developers to import, say, pharmacy or EMR data than the other way around. After all, app platforms aren’t at the center of nearly as many overlapping data systems as a health organization or even a clinic. While they might not be starting from zero, they have fewer bridges to build.

And once providers have synchronized key data with app developers, they might be able to forge long-term partnerships in which each side learned from the exchange. After all, I’d submit that few app developers would turn down the chance to make their data more valuable — at least if they have bigger goals than displaying a few dots on a smartphone screen.

I realize that for many providers, doing this might be a tall order, as they can’t lose their focus on cultivating their own data. But as a patient, I’d welcome working with any provider that wanted to give this a try. I think it would be a real win-win.

Creating Healthcare Interoperability Bundles

Posted on October 25, 2016 | Written By Anne Zieger

At this point in the evolution of healthcare data, you’d think it would be easy to at least define interoperability, even if we can’t make it happen. But the truth is that despite the critical importance of the term, we still aren’t as clear as we should be on how to define it. In fact, the range of possible solutions that can be called “interoperable” is all over the map.

For example, a TechTarget site defines interoperability as “the ability of a system or a product to work with other systems or products without special effort on the part of the customer.” Boiled down to its most basic elements, this means even passive methods of pushing data from one system to another count as interoperability, even if that data doesn’t get used in clinical care.

Meanwhile, an analysis by research firm KLAS breaks interoperability down into four levels of usefulness, ranging from information being available, to providers being able to locate records, to the data being accessible in a clinical view, to the data having an impact on patient care.

According to a recent survey by the firm, 20% of respondents had access to patient information, 13% could easily locate the data, 8% could access the data via a clinical view and just 6% had interoperable data in hand that could impact patient care.

Clearly, there’s a big gap between these two definitions, and that’s a problem. Why? Because the way we define baseline interoperability will have concrete consequences on how data is organized, transmitted and stored. So I’d argue that until we have a better idea of what true, full interoperability looks like, maybe we should map out interoperability “bundles” that suit a given clinical situation.

A Variety of Interoperabilities

For example, if you’re an ACO addressing population health issues, it would make sense to define a specific level of interoperability needed to support patient self-management and behavioral change. And that would include not only sharing between EMR databases, but also remote monitoring information and even fitness tracking data. After all, there is little value in trying to, say, address chronic health concerns without addressing some data collected outside the clinic or hospital.

On the other hand, when caring for a nursing home-bound patient, coordination of care across hospitals, rehab centers, nurses, pharmacists and other caregivers is vital. So full-fledged interoperability in this setting must be effective horizontally, i.e. between institutions. Without a richly-detailed history of care, it can be quite difficult to effectively help a dependent patient with a low level of physical or mental functioning.

Then, consider the case of a healthy married couple with two healthy children. Getting together the right data on these patients may simply be a matter of seeing to it that urgent care visit data is shared with a primary care physician, and that the occasional specialist is looped in as needed. To serve this population, in other words, you don’t need too many bells and whistles interoperability-wise.

Of course, it would be great if we could throw the floodgates open and share data with everyone everywhere the way, say, cellular networks do already. But given that such an event won’t happen anytime in the near future, it probably makes sense to limit our expectations and build some data sharing models that work today.

Is Interoperability Worth Paying For?

Posted on August 18, 2016 | Written By

When Carl Bergman isn't rooting for the Washington Nationals or searching for a Steeler bar, he’s Managing Partner of EHRSelector.com, a free service for matching users and EHRs. For the last dozen years, he’s concentrated on EHR consulting and writing. He spent the 80s and 90s as an itinerant project manager doing his small part for the dot com bubble. Prior to that, Bergman served a ten-year stretch in the District of Columbia government as a policy and fiscal analyst.

A member of our extended family is a nurse practitioner. Recently, we talked about her practice providing care for several homebound, older patients. She tracks their health with her employer’s proprietary EHR, which she quickly compared to a half-dozen others she’s used. If you want a good, quick EHR eval, ask a nurse.

What concerned her most, beyond usability, etc., was piecing together their medical records. She didn’t have an interoperability problem, she had several of them. Most of her patients had moved from their old homes to Florida, leaving a mixed trail of practitioners, hospitals, clinics, etc. She has to plow through paper and electronic files to put together a working record. She worries about being blindsided by important omissions or by doctors who hold onto records for fear of losing patients.

Interop Problems: Not Just Your Doc and Hospital

She is not alone. Our remarkably decentralized healthcare system generates these glitches, omissions, ironies and hang-ups with amazing speed. However, when we talk about interoperability, we focus mainly on hospital-to-hospital or PCP-to-PCP relations. Doing so doesn’t fully cover the subject. For example, others who provide care include:

  • College Health Systems
  • Pharmacy and Lab Systems
  • Public Health Clinics
  • Travel and other Specialty Clinics
  • Urgent Care Clinics
  • Visiting Nurses
  • Walk-in Clinics, etc.

They may or may not pass their records back to a main provider, if there is one. When they do, it’s usually by fax, forcing the recipient to key in the data. None of this is a particularly new story. Indeed, the AHA did a study of interoperability that nails interoperability’s barriers:

Hospitals have tried to overcome interoperability barriers through the use of interfaces and HIEs but they are, at best, costly workarounds and, at worst, mechanisms that will never get the country to true interoperability. While standards are part of the solution, they are still not specified enough to make them truly work. Clearly, much work remains, including steps by the federal government to support advances in interoperability. Until that happens, patients across the country will be shortchanged from the benefits of truly connected care.

We’ve Tried Standards, We’ve Tried Matching, Now, Let’s Try Money

So, what do we do? Do we hope for some technical panacea that makes these problems seem like dial-up modems? Perhaps. We could also put our hopes in the industry suddenly adopting an interop standard. Again, perhaps.

I think the answer lies not in technology or standards, but in paying for interop successes. For a long time, I’ve mulled over a conversation I had with Chandresh Shah at John’s first conference. I’d lamented to him that buying a Coke at a Las Vegas CVS brought up my DC buying record. Why couldn’t we have EHR systems like that? Chandresh instantly answered that CVS had an economic incentive to follow me, but my medical records didn’t. He was right. There’s no money to follow, as it were.

That leads to this question: why not redirect some MU funds and pay for interoperability? Would providers make interop, that is, data exchange, CCDs, etc., work if they were paid? For example, what if we paid them $50 for each of their first 500 transfers and $25 for each of their first 500 receptions? This, of course, would need rules. I’m well aware of the human ability to game just about anything from soda machines to state lotteries.
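Taking those numbers at face value, the incentive arithmetic is simple to sketch. The rates and caps below are just the figures proposed above; the function name is mine:

```python
def interop_payment(transfers, receptions,
                    transfer_rate=50, reception_rate=25, cap=500):
    """Pay a provider per successful exchange, capped on each side."""
    return (min(transfers, cap) * transfer_rate
            + min(receptions, cap) * reception_rate)

# A practice that sends 600 CCDs and receives 200 would earn
# 500 * $50 + 200 * $25 = $30,000:
print(interop_payment(600, 200))
```

Any real program would, of course, need the anti-gaming rules just mentioned — for instance, counting only transfers that a recipient actually acknowledges.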

If pay incentives were tried, they’d have to start slowly and in several different settings, but start they should. Progress, such as it is, is far too slow and isn’t getting us much of anywhere. My nurse practitioner’s patients can’t wait forever.

Patients and Their Medical Data

Posted on April 4, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Sometimes they say a picture is worth a thousand words. That’s what I thought when I saw this image from Nature.com:
Patient Health Data Sharing and Big Healthcare Data

It’s great to see Nature.com talking about healthcare data. The authors are two people you likely know: Leonard Kish and Eric Topol.

This graphic shows the ideal. It’s interesting to think about what the reality would actually look like. Sadly, it would be much more complex, disconnected, and would lack the fluid sharing that this graphic shows.

It’s good to know what the ideal for data sharing and understanding data would look like. It shows the potential of what’s possible, and that’s exciting.

OpenUMA: New Privacy Tools for Health Care Data

Posted on August 10, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The health care field, becoming more computer-savvy, is starting to take advantage of conveniences and flexibilities that were developed over the past decade for the Web and mobile platforms. A couple weeks ago, a new open source project was announced to increase options for offering data over the Internet with proper controls–options with particular relevance for patient control over health data.

The User-Managed Access (UMA) standard supports privacy through a combination of encryption and network protocols that have a thirty-year history. UMA reached a stable release, 1.0, in April of this year. A number of implementations are being developed, some of them open source.

Before I try to navigate the complexities of privacy protocols and standards, let’s look at a few use cases (currently still hypothetical) for UMA:

  • A parent wants to show the child’s health records from the doctor’s office just long enough for the school nurse to verify that the child has received the necessary vaccinations.

  • A traveler taking a temporary job in a foreign city wants to grant a local clinic access to the health records stored by her primary care physician for the six months during which the job lasts.

The open source implementation I’ll highlight in this article is OpenUMA from a company named ForgeRock. ForgeRock specializes in identity management online and creates a number of open source projects that can be found on their web page. They are also a leading participant in the non-profit Kantara Initiative, where they helped develop UMA as part of the UMA Developer Resources Work Group.

The advantage of open source libraries and tools for UMA is that the standard involves many different pieces of software run by different parts of the system: anyone with data to share, and anyone who wants access to it. The technology is not aimed at any one field, but health IT experts are among its greatest enthusiasts.

The fundamental technology behind UMA is OAuth, a well-tested means of authorizing people on web sites. When you want to leave a comment on a news article and see a button that says, “Log in using Facebook” or some other popular site, OAuth is in use.

OAuth is an enabling technology, by which I mean that it opens up huge possibilities for more complex and feature-rich tools to be built on top. It provides hooks for such tools through its notion of profiles–new standards that anyone can create to work with it. UMA is one such profile.

What UMA contributes over and above OAuth was described to me by Eve Maler, a leading member of the UMA working group who wrote their work up in the specification I cited earlier, and who currently works for ForgeRock. OAuth lets you manage different services for yourself. When you run an app that posts to Twitter on your behalf, or log in to a new site through your Facebook account, OAuth lets your account on one service do something for your account on another service.

UMA, in contrast, lets you grant access to other people. It’s not your account at a doctor’s office that is accessing data, but the doctor himself.

UMA can take on some nitty-gritty real-life situations that are hard to handle with OAuth alone. OAuth provides a single yes/no decision: is a person authorized or not? It’s UMA that can handle the wide variety of conditions that affect whether you want information released. These vary from field to field, but the conditions of time and credentials mentioned earlier are important examples in health care. I covered one project using UMA in an earlier article.

With OAuth, you can grant access to an account and then revoke it later (with some technical dexterity). But UMA allows you to build a time limit into the original access. Of course, the recipient does not lose the data to which you granted access, but when the time expires he cannot return to get new data.
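A toy model of that idea might look like the following. This is not the OpenUMA or Kantara API — just a sketch of a grant with a built-in expiry, with every name in it invented:

```python
from datetime import datetime, timedelta

class TimeBoxedGrant:
    """A UMA-style access grant that expires on its own."""

    def __init__(self, grantee, resource, granted_at, lifetime):
        self.grantee = grantee
        self.resource = resource
        self.expires = granted_at + lifetime

    def allows(self, who, at):
        # Once the window closes, new requests are refused -- though,
        # as noted above, data already retrieved cannot be recalled.
        return who == self.grantee and at < self.expires

# The traveler use case: a local clinic gets six months of access.
grant = TimeBoxedGrant("local-clinic", "primary-care-record",
                       granted_at=datetime(2016, 1, 1),
                       lifetime=timedelta(days=180))
print(grant.allows("local-clinic", datetime(2016, 3, 1)))  # True
print(grant.allows("local-clinic", datetime(2016, 9, 1)))  # False
```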

UMA also allows you to define resource sets to segment data. You could put vaccinations in a resource set that you share with others, withholding other kinds of data.
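In code, a resource set can be as simple as a label on each record that the sharing decision keys off. The tags and entries below are invented for illustration:

```python
# Hypothetical records, each tagged with the resource set it belongs to
records = [
    {"set": "vaccinations", "entry": "MMR 2015-04-02"},
    {"set": "vaccinations", "entry": "Tdap 2014-09-10"},
    {"set": "visit-notes",  "entry": "annual physical 2015-06-20"},
]

# The only set this requester (say, a school nurse) may see
shared_sets = {"vaccinations"}

visible = [r["entry"] for r in records if r["set"] in shared_sets]
print(visible)  # ['MMR 2015-04-02', 'Tdap 2014-09-10']
```

The vaccination entries come through; everything else stays withheld.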

OpenUMA contains two crucial elements of a UMA implementation:

The authorization server

This server accepts a list of restrictions from the site holding the data and the credentials submitted by the person requesting access to the data. The server performs a very generic function: any UMA request can use any authorization server, and the server can run anywhere. Theoretically, you could run your own. But it would be more practical for a site that hosts data–Microsoft HealthVault, for instance, or some general cloud provider–to run an authorization server. In any case, the site publicizes a URL where it can be contacted by people with data or people requesting data.

The resource server

The resource server submits requests to the authorization server on behalf of applications and servers that hold people’s data. It handles the complex interactions with the authorization server so that application developers can focus on their core business.

Instead of the OpenUMA resource server, apps can link in libraries that provide the same functions. These libraries are being developed by the Kantara Initiative.

So before we can safely share and withhold data, what’s missing?

The UMA standard doesn’t offer any way to specify a condition, such as “Release my data only this week.” This gap is filled by policy languages, which standards groups will have to develop and code up in a compatible manner. A few exist already.
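Once such a policy language exists, a condition like “release my data only this week” would compile down to something very small. The sketch below is not any existing policy language, just an illustration of the kind of check involved:

```python
from datetime import date, timedelta

def this_week_only(policy_start, request_date):
    """Evaluate a 'release my data only this week' condition."""
    return policy_start <= request_date < policy_start + timedelta(days=7)

print(this_week_only(date(2015, 8, 10), date(2015, 8, 12)))  # True
print(this_week_only(date(2015, 8, 10), date(2015, 8, 20)))  # False
```

The hard part, as the paragraph above notes, isn’t the check itself; it’s getting everyone to express and interpret such conditions in a compatible manner.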

Maler points out that developers could also benefit from tools for editing and testing code, along with other supporting software that projects build up over time. The UMA resource working group is still at the beginning of their efforts, but we can look forward to a time when fine-grained patient control over access to data becomes as simple as using any of the other RESTful APIs that have filled the programmer’s toolbox.