
Research Shows that Problems with Health Information Exchange Resist Cures (Part 2 of 2)

Posted on March 23, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article introduced problems in HIE identified by two reports: one from the Office of the National Coordinator and another from experts at the Oregon Health & Science University. Tracing the causes of these problems is necessarily somewhat speculative, but the research helps to confirm impressions I have built up over the years.

The ONC noted that developing HIE is very resource intensive, and not yet sustainable. (p. 6) I attribute these problems to the persistence of the old-fashioned, heavyweight model of bureaucratic, geographically limited organizations hooking together clinicians. (If you go to another state, better carry your medical records with you.) Evidence of their continued drag on the field appeared in the report:

Grantees found providers did not want to login to “yet another system” to access data, for example; if information was not easily accessible, providers were not willing to divert time and attention from patients. Similarly, if the system was not user friendly and easy to navigate, or if it did not effectively integrate data into existing patient records, providers abandoned attempts to obtain data through the system. (pp. 76-77)

The Oregon researchers in the AHRQ webinar also confirmed that logging in tended to be a hassle.

Hidden costs further jacked up the burden of participation (p. 72). But even though HIEs already suck up unsustainable amounts of money for little benefit, “Informants noted that it will take many years and significantly more funding and resources to fully establish HIE.” (p. 62) “The paradox of HIE activities is that they need participants but will struggle for participants until the activities demonstrate value. More evidence and examples of HIE producing value are needed to motivate continued stakeholder commitment and investment.” (p. 65)

The adoption of the Direct protocol apparently hasn’t fixed these ongoing problems; hopefully FHIR will. The ONC hopes that, “Open standards, interfaces, and protocols may help, as well as payment structures rewarding HIE.” (p. 7) Use of Direct did increase exchange (p. 56), and directory services are also important (pp. 59-60). But “Direct is used mostly for ADT notifications and similar transitional documents.” (p. 35)

One odd complaint was, “While requirements to meet Direct standards were useful for some, those standards detracted attention from the development of query-based exchange, which would have been more useful.” (p. 77) I consider this observation to be a red herring, because Direct is simply a protocol, and the choice to use it for “push” versus “pull” exchanges is a matter of policy.

But even with better protocols, we’ll still need to fix the mismatch of the data being exchanged: “…the majority of products and provider processes do not support LOINC and SNOMED CT. Instead, providers tended to use local codes, and the process of mapping these local codes to LOINC and SNOMED CT codes was beyond the capacity of most providers and their IT departments.” (p. 77) This shows that the move to FHIR won’t necessarily improve semantic interoperability, unless FHIR requires the use of standard codes.
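To make the mapping burden concrete, here is a minimal sketch of the kind of local-to-LOINC crosswalk each provider would need to build and maintain. The local codes and table structure are invented for illustration; the LOINC codes are commonly cited ones for glucose and hemoglobin, but verify them against the official LOINC database before reuse.

```python
# Illustrative local-code-to-LOINC crosswalk. Local codes ("GLU", "HGB")
# are invented; the LOINC codes are commonly cited ones, for example only.

LOCAL_TO_LOINC = {
    "GLU": "2345-7",  # Glucose [Mass/volume] in Serum or Plasma
    "HGB": "718-7",   # Hemoglobin [Mass/volume] in Blood
}

def to_loinc(local_code: str) -> str:
    """Translate a provider's local lab code to LOINC, or fail loudly."""
    try:
        return LOCAL_TO_LOINC[local_code]
    except KeyError:
        raise ValueError(f"No LOINC mapping for local code {local_code!r}")

print(to_loinc("GLU"))  # -> 2345-7
```

Multiply this table by thousands of lab tests and a different local dictionary at every provider, and the report’s finding that the work exceeds most IT departments’ capacity becomes easy to believe.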

Trust among providers remains a problem (p. 69), as does data quality (pp. 70-71). But some informants put attitude above all else: “Grantees questioned whether HIE developers and HIE participants are truly ready for interoperability.” (p. 71)

It’s bad enough that core health care providers–hospitals and clinics–make little use of HIE. But a wide range of other institutions that desperately need HIE have even less of it. “Providers not eligible for MU incentives consistently lag in HIE connectivity. These settings include behavioral health, substance abuse, long-term care, home health, public health, school-based settings, corrections departments, and emergency medical services.” (p. 75) The AHRQ webinar found very limited use of HIE in facilities outside the Meaningful Use mandate, such as nursing homes (Long Term and Post Acute Care, or LTPAC); health information exchange was used only 10% to 40% of the time in those settings.

The ONC report includes numerous recommendations for continuing the growth of health information exchange. Most of these are tweaks to bureaucratic institutions responsible for promoting HIE. These are accompanied by the usual exhortations to pay for value and improve interoperability.

But six years into the implementation of HITECH–and after the huge success of its initial goal of installing electronic records, which should have served as the basis for HIE–one gets the impression that the current industries are not able to take to the dance floor together. First, ways of collecting and sharing data are based on a 1980s model of health care. And even by that standard, none of the players in the space–vendors, clinicians, and HIE organizations–are thinking systematically.

How Open mHealth Designed a Popular Standard (Part 3 of 3)

Posted on December 3, 2015 | Written By Andy Oram

The first section of this article introduced the common schemas for mobile health designed by Open mHealth, and the second section covered the first two design principles driving their schemas. We’ll finish off the design principles in this section.

Balancing permissiveness and constraints

Here, the ideal is to get accurate measurements to the precision needed by users and researchers. But many devices are known to give fuzzy results, or results that are internally consistent but out of line with absolute measurements.

The goal adopted by Open mHealth is to firm up the things that are simple to get right and also critical to accuracy, such as the units of measurement discussed earlier. They also require care in reporting the time interval that a measurement covers: day, week, or month. There’s no excuse if you add up the walks recorded for the day and the sum doesn’t match the total steps the device reports for that day.
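That consistency rule is simple enough to express in code. The sketch below is purely illustrative (the field names are mine, not Open mHealth’s), but it shows the kind of check a schema-aware validator could run.

```python
# Illustrative consistency check: the steps of the individual walks
# recorded for a day must sum to the device's reported daily total.
# Field names are invented for this example.

def steps_consistent(walks: list[dict], daily_total: int) -> bool:
    """Return True if per-walk step counts add up to the daily total."""
    return sum(w["steps"] for w in walks) == daily_total

walks = [
    {"steps": 4200, "start": "2015-12-03T08:00:00-05:00"},
    {"steps": 3100, "start": "2015-12-03T17:30:00-05:00"},
]
assert steps_consistent(walks, 7300)      # segments match the total
assert not steps_consistent(walks, 8000)  # device total disagrees
```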

Some participants suggested putting in checks, such as whether the BMI is wildly out of range. The problem (in terms of public health as well as technology) is that there are often outlier cases in health care, and the range of what’s a “normal” BMI can change. The concept of a maximum BMI is therefore too strict and ultimately unhelpful.

Designing for data liquidity

Provenance is the big challenge here: where does data come from, how was it collected, and what algorithm was used to manipulate it? Open mHealth expects data to go far and wide among researchers and solution providers, so the schema must keep a trail of all the things done to it from its origin.
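As a rough sketch of what such a trail can look like, here is a data point wrapped in a provenance-bearing header. The shape is loosely modeled on Open mHealth’s header conventions, but the exact field names should be read as illustrative rather than quoted from the spec.

```python
# A data point that carries its provenance with it. The layout is loosely
# modeled on Open mHealth's header idea; field names are illustrative.
data_point = {
    "header": {
        "schema_id": "omh:step-count:1.0",      # which schema the body follows
        "acquisition_provenance": {
            "source_name": "ExampleBand 2",      # hypothetical device name
            "modality": "sensed",                # sensed vs. self-reported
            "processing": ["vendor-smoothing"],  # every algorithm applied so far
        },
    },
    "body": {"step_count": 7300},
}
```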

Dr. Sim said the ecosystem is not yet ready to ensure quality. For instance, a small error introduced at each step of data collection and processing can add up to a yawning gap between the reported measure and the truth. This can make a difference not only to researchers, but to the device’s consumers. Think, for instance, of a payer basing the consumer’s insurance premium on analytics performed on data from the device over time.

Alignment with clinical data standards

Electronic health records are starting to accept medical device data. Eventually, all EHRs will need to do this so that monitoring and connected health can become mainstream. Open mHealth adopted widespread medical ontologies such as SNOMED, which may seem like an obvious choice but is not at all what the devices do. Luckily, Open mHealth’s schemas come pre-labelled with appropriate terminology codes, so device developers don’t need to get into the painful weeds of medical coding.

Modeling of time

A seemingly simple matter, time is quite challenging. The Open mHealth schema can represent both points in time and time intervals. There are still subtleties that must be handled properly, as when a measurement for one day is reported on the next day because the device was offline. These concerns feed into provenance, discussed under “Designing for data liquidity.”
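Here is a minimal sketch of the two time representations, in the spirit of Open mHealth’s time-frame design; treat the field names as approximations rather than the normative schema.

```python
# A measurement anchored to a single moment (with its time zone offset).
point_in_time = {
    "time_frame": {"date_time": "2015-12-02T07:15:00-08:00"}
}

# A measurement covering an interval: one full day here.
interval = {
    "time_frame": {
        "time_interval": {
            "start_date_time": "2015-12-01T00:00:00-08:00",
            "duration": {"value": 1, "unit": "d"},
        }
    }
}
# A reading taken offline on Dec 1 but uploaded Dec 2 keeps its original
# time frame; the upload moment belongs in the provenance trail, not here.
```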

Preliminary adoption is looking good. The schema will certainly evolve, hopefully allowing for diversity while not splintering into incompatible standards. This is the same balance that FHIR must strike under much more difficult circumstances. From a distance, it appears that Open mHealth, by keeping a clear eye on the goal and a firm hand on the development process, has avoided some of the pitfalls the FHIR team has encountered.

How Open mHealth Designed a Popular Standard (Part 2 of 3)

Posted on December 2, 2015 | Written By Andy Oram

The previous section of this article introduced the intensive research and consultation strategy used by Open mHealth to develop a common schema for exploiting health data by app developers, researchers, clinicians, individuals, and manufacturers of medical and fitness devices. Next we’ll go through the design principles with a look at specific choices and trade-offs.

Atomicity

Normally, one wants to break information down into chunks as small as possible. By doing this, you allow data holders to minimize the amount of data they need to send data users, and data users are free to scrutinize individual items or combine them any way they want. But some values in health need to be chunked together. When someone requests blood pressure, both the systolic and diastolic measures should be sent. The time zone should go with the time.
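Here is a sketch of what a properly chunked blood pressure reading might look like. The field names are in the spirit of the published Open mHealth schema but are shown for illustration only.

```python
# Values that only make sense together travel together: systolic and
# diastolic in one chunk, and the time zone offset carried with the time.
blood_pressure = {
    "systolic_blood_pressure":  {"value": 120, "unit": "mmHg"},
    "diastolic_blood_pressure": {"value": 80,  "unit": "mmHg"},
    "effective_time_frame": {"date_time": "2015-12-02T07:15:00-08:00"},
}
```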

On the other hand, mHealth doesn’t need combinations of information that are common in medical settings. For instance, a dose may be interesting to know, but you don’t need the prescribing doctor, when the prescription was written, and so on. Still, some app developers have asked for the prescription data to include the number of refills remaining, so the app can issue reminders.

Balancing parsimony and complexity

Everybody wants all the data items they find useful, but nobody wants to scroll through screenfuls of documentation covering other people’s items. So how do you give a bewildering variety of consumers and researchers what they need most without overwhelming them?

An example of the process used by Open mHealth was the measurement for blood sugar. For people with Type 1 or Type 2 diabetes, the canonical measurement is fasting blood sugar first thing in the morning (the measurement can be very different at different times of the day). This helps patients and their clinicians determine overall blood sugar control. Measurements of blood sugar in relation to meals (e.g., two hours after lunch) or to sleep (e.g., at bedtime) are also clinically useful for both patients and clinicians.

Many of these users are curious what their blood sugar level is at other times, such as after a run. But to extend the schema this way would render it mind-boggling. And Dr. Sim says these values have far less direct clinical value for people with Type 2 diabetes, who are the majority of diabetic patients. So the schema sticks with reporting blood sugar related to meals and sleep. If users and vendors work together, they are free to extend the standard–after all, it is open source.
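A sketch of how that restraint plays out in a data point; the field names echo Open mHealth’s blood-glucose schema but should be taken as illustrative.

```python
# Blood sugar with its clinically meaningful context. Only relationships
# to meals and sleep are representable; "after a run" deliberately is not.
blood_glucose = {
    "blood_glucose": {"value": 110, "unit": "mg/dL"},
    "temporal_relationship_to_meal": "fasting",  # or e.g. "2 hours postprandial"
    "effective_time_frame": {"date_time": "2015-12-02T06:45:00-08:00"},
}
```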

Another reason to avoid fine-grained options is that they lead to many values being reported inconsistently or incorrectly. This is a concern with the ICD-10 standard for diagnoses, which has been in use in Europe for a long time and became a requirement for billing in the US in early October. ICD-9 is woefully outdated, but so much was dumped into ICD-10 that its implementation has left clinicians staying up nights and ignoring real opportunities for innovation. (Because ICD is aimed mostly at billing, it is not used for coding in Open mHealth schemas.)

Thanks to the Open mHealth schema, a dialog has started between users and device manufacturers about what new items to include. For instance, the schema could come to include average blood sugar over a fixed period of time, such as one month.

In the final section of this article, we’ll cover the rest of the design principles.

How Open mHealth Designed a Popular Standard (Part 1 of 3)

Posted on December 1, 2015 | Written By Andy Oram

If standards have not been universally adopted in the health care field, and are often implemented incorrectly when adopted, the reason may simply be that good standards are hard to design. A recent study found that mobile health app developers would like to share data, but “Less progress has been made in enabling apps to connect and communicate with provider healthcare systems–a fundamental requirement for mHealth to realize its full value in healthcare management.”

Open mHealth faced this challenge when they decided to provide a schema to represent the health data that app developers, research teams, and other individuals want to plug into useful applications. This article is about how they mined the health community for good design decisions and decided what necessary trade-offs to make.

Designing a good schema involves intensive conversations with several communities that depend on each other but often have trouble communicating their needs to each other:

Consumers/users

They can tell you what they’re really interested in, and give you surprising insights about what a product should produce. In the fitness device space, for instance, Open mHealth was told that consumers would like time zones included with timing data–something that is currently supported rarely and poorly. Manufacturers find time zones hard to do, and feel little competitive pressure to offer them.

Vendors/developers

They can fill you in on the details of their measurements, which might be hard to discern from the documentation or the devices themselves. A simple example: APIs often retrieve weight values without units. If you’re collecting data across many people and devices for clinical or scientific purposes (e.g., across one million people for the new Precision Medicine Initiative), you can’t be guessing whether someone weighs 70 pounds or 70 kilograms.
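The remedy is worth spelling out: a value never travels without its unit. This sketch, with invented field names, normalizes unit-bearing weights so the pounds-versus-kilograms ambiguity cannot arise.

```python
# A unit always rides along with the value.
body_weight = {"body_weight": {"value": 70, "unit": "kg"}}

KG_PER_LB = 0.45359237

def weight_in_kg(measure: dict) -> float:
    """Normalize a unit-bearing weight measurement to kilograms."""
    value = measure["body_weight"]["value"]
    unit = measure["body_weight"]["unit"]
    if unit == "kg":
        return value
    if unit == "lb":
        return value * KG_PER_LB
    raise ValueError(f"Unknown unit {unit!r}")

assert weight_in_kg(body_weight) == 70
assert round(weight_in_kg({"body_weight": {"value": 154, "unit": "lb"}}), 1) == 69.9
```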

Clinicians/Researchers

They offer insights on long-range uses of data and subtleties that aren’t noticeable in routine use by consumers. For example, in the elderly and those on some types of medications, blood pressure can be quite different standing up or lying down. Open mHealth captures this distinction.
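In data terms, one extra field settles how the reading was taken. The body_posture field name follows Open mHealth’s published blood-pressure schema as I understand it, but verify it against the current spec.

```python
# The posture distinction clinicians asked for, captured explicitly.
reading_standing = {
    "systolic_blood_pressure":  {"value": 135, "unit": "mmHg"},
    "diastolic_blood_pressure": {"value": 85,  "unit": "mmHg"},
    "body_posture": "standing",  # vs. "sitting" or "lying_down"
}
```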

With everybody weighing in, listening well and applying good medical principles is a must; otherwise, you get (as co-founder Ida Sim repeatedly said in our phone call) “a mess.” Over the course of many interviews, one can determine the right Pareto distribution: finding the 20% of possible items that satisfy 90% of the most central uses for mobile health data.

Open mHealth apparently made good use of these connections, because the schema is increasingly being adhered to by manufacturers and adopted by researchers as well as developers throughout the medical industry. In the next section of this article I’ll take a look at some of the legwork that went into turning the design principles into a useful schema.

OpenUMA: New Privacy Tools for Health Care Data

Posted on August 10, 2015 | Written By Andy Oram

The health care field, becoming more computer-savvy, is starting to take advantage of conveniences and flexibilities that were developed over the past decade for the Web and mobile platforms. A couple weeks ago, a new open source project was announced to increase options for offering data over the Internet with proper controls–options with particular relevance for patient control over health data.

The User-Managed Access (UMA) standard supports privacy through a combination of encryption and network protocols that have a thirty-year history. UMA reached a stable 1.0 release in April of this year. A number of implementations are being developed, some of them open source.

Before I try to navigate the complexities of privacy protocols and standards, let’s look at a few use cases (currently still hypothetical) for UMA:

  • A parent wants to share the child’s records from the doctor’s office just long enough for the school nurse to verify that the child has received the necessary vaccinations.

  • A traveler taking a temporary job in a foreign city wants to grant a local clinic access to the health records stored by her primary care physician for the six months during which the job lasts.

The open source implementation I’ll highlight in this article is OpenUMA from a company named ForgeRock. ForgeRock specializes in identity management online and creates a number of open source projects that can be found on their web page. They are also a leading participant in the non-profit Kantara Initiative, where they helped develop UMA as part of the UMA Developer Resources Work Group.

The advantage of open source libraries and tools for UMA is that the standard involves many different pieces of software run by different parties: anyone with data to share, and anyone who wants access to it. The technology is not aimed at any one field, but health IT experts are among its greatest enthusiasts.

The fundamental technology behind UMA is OAuth, a well-tested means of authorizing people on web sites. When you want to leave a comment on a news article and see a button that says, “Log in using Facebook” or some other popular site, OAuth is in use.

OAuth is an enabling technology, by which I mean that it opens up huge possibilities for more complex and feature-rich tools to be built on top. It provides hooks for such tools through its notion of profiles–new standards that anyone can create to work with it. UMA is one such profile.

What UMA contributes over and above OAuth was described to me by Eve Maler, a leading member of the UMA working group who wrote their work up in the specification I cited earlier, and who currently works for ForgeRock. OAuth lets you manage different services for yourself. When you run an app that posts to Twitter on your behalf, or log in to a new site through your Facebook account, OAuth lets your account on one service do something for your account on another service.

UMA, in contrast, lets you grant access to other people. It’s not your account at a doctor’s office that is accessing data, but the doctor himself.

UMA can take on some nitty-gritty real-life situations that are hard to handle with OAuth alone. OAuth provides a single yes/no decision: is a person authorized or not? It’s UMA that can handle the wide variety of conditions that affect whether you want information released. These vary from field to field, but the conditions of time and credentials mentioned earlier are important examples in health care. I covered one project using UMA in an earlier article.

With OAuth, you can grant access to an account and then revoke it later (with some technical dexterity). But UMA allows you to build a time limit into the original access. Of course, the recipient does not lose the data to which you granted access, but when the time expires he cannot return to get new data.

UMA also allows you to define resource sets to segment data. You could put vaccinations in a resource set that you share with others, withholding other kinds of data.
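Here is a sketch of how resource sets might be carved up for the vaccination scenario above. The name-plus-scopes document shape follows UMA 1.0’s resource set registration concept; the scope names and the endpoint mentioned in the comments are invented.

```python
import json

# Two resource sets: one narrowly shareable, one kept close.
vaccination_records = {
    "name": "Vaccination records",
    "scopes": ["view"],              # a requester may only ask to view
}
full_chart = {
    "name": "Full chart",
    "scopes": ["view", "download"],
}

# A resource server would register each document with the authorization
# server, e.g. POST https://as.example.org/uma/resource_set (hypothetical).
print(json.dumps(vaccination_records, indent=2))
```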

OpenUMA contains two crucial elements of a UMA implementation:

The authorization server

This server accepts a list of restrictions from the site holding the data, along with the credentials submitted by the person requesting access to the data. The server performs a very generic function: any UMA request can use any authorization server, and the server can run anywhere. Theoretically, you could run your own. But it would be more practical for a site that hosts data–Microsoft HealthVault, for instance, or some general cloud provider–to run an authorization server. In any case, the site publicizes a URL where it can be contacted by people with data or people requesting data.

The resource server

Resource servers submit requests to the authorization server on behalf of the applications and servers that hold people’s data. The resource server handles the complex interactions with the authorization server so that application developers can focus on their core business.

Instead of the OpenUMA resource server, apps can link in libraries that provide the same functions. These libraries are being developed by the Kantara Initiative.
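To make the division of labor concrete, here is a heavily condensed, illustrative walk-through of the UMA exchange. All URLs, endpoint paths, and payload shapes are hypothetical stand-ins; real deployments follow the UMA 1.0 protocol documents and the libraries mentioned above.

```python
import requests

AS_URL = "https://as.example.org"  # authorization server (hypothetical)
RS_URL = "https://rs.example.org"  # resource server holding the data

# 1. The requester hits the protected resource with no token, so the
#    resource server registers a permission request and gets back a ticket.
ticket = requests.post(
    f"{AS_URL}/uma/permission",
    json={"resource_set_id": "vaccinations", "scopes": ["view"]},
).json()["ticket"]

# 2. The requesting party trades the ticket, plus whatever claims policy
#    demands, for a requesting party token (RPT).
rpt = requests.post(f"{AS_URL}/uma/authorize", json={"ticket": ticket}).json()["rpt"]

# 3. The retry carries the RPT; the resource server checks it with the
#    authorization server before releasing any data.
records = requests.get(
    f"{RS_URL}/patients/123/vaccinations",
    headers={"Authorization": f"Bearer {rpt}"},
)
```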

So before we can safely share and withhold data, what’s missing?

The UMA standard doesn’t offer any way to specify a condition, such as “Release my data only this week.” This gap is filled by policy languages, which standards groups will have to develop and code up in a compatible manner. A few exist already.

Maler points out that developers could also benefit from tools for editing and testing code, along with other supporting software that projects build up over time. The UMA resource working group is still at the beginning of their efforts, but we can look forward to a time when fine-grained patient control over access to data becomes as simple as using any of the other RESTful APIs that have filled the programmer’s toolbox.

Could the DoD be SMART to Choose Cerner?

Posted on August 4, 2015 | Written By Andy Oram

Even before the health IT world could react (with surprise) to the choice of a Cerner EHR (through its lead partner, Leidos Health Solutions Group) by the Department of Defense, rumors have it that Cerner beat out Epic through the perception that it is more open and committed to interoperability. The first roll-out they’ll do at the DoD is certain to be based on HL7 version 2 and more recent version 3 standards (such as the C-CDA) that are in common use today. But the bright shining gems of health exchange–SMART and FHIR–are anticipated for the DoD’s future.


Assessment Released of Health Information Exchanges (Part 2 of 2)

Posted on January 7, 2015 | Written By Andy Oram

The previous installment of this article talked about the survivability of HIEs, drawing on a report released under ONC auspices. This installment delves into some other interesting aspects of information exchange.

Data Ownership and Privacy Raise Their Heads
Whenever data is a topic, policy issues around ownership and privacy cannot be dismissed. The HIE report does not address them directly, but they peek out from behind questions of how all this stuff gets stored.

Two essential strategies allow data sharing. In the simpler strategy, the HIE vacuums up data from all the providers who join. In a more subtle and supple strategy, known as a federated system, the HIE leaves the data at the providers and just provides connectivity. For instance, the HIE report explains that some HIEs store just enough data to identify patients and list the providers who have data on them (this uses a master patient index, which solves the common problem of matching patient records across providers). Once a patient is matched, the HIE retrieves relevant data from each provider.
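A sketch of the federated pattern may help. Everything here, from the index layout to the stubbed fetch, is invented for illustration; the point is that the HIE holds only the master patient index and pulls records on demand.

```python
# The HIE's entire store: demographics sufficient for matching, plus a
# list of which providers hold records for each patient.
MPI = {
    "patient-123": {
        "name": "Jane Doe",
        "dob": "1970-04-01",
        "providers": ["clinic-a.example.org", "hospital-b.example.org"],
    }
}

def fetch_records(patient_id: str) -> list[dict]:
    """Gather a patient's records from every provider listed in the MPI.

    The HIE keeps no copy of the records, so nothing HIE-side can go stale.
    """
    results = []
    for provider in MPI[patient_id]["providers"]:
        # In a real system this is an authenticated query to the
        # provider's interface; stubbed out here.
        results.append({"provider": provider, "documents": []})
    return results

print(fetch_records("patient-123"))
```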

The advantage of the vacuum suction strategy is that, once an HIE has all the data in one place, it can efficiently run analytics across a humongous data set and deliver the highly desirable analytics and planning that make the HIE attractive to clients. But this strategy brings significant risk as well.

Programmers and administrators in the computer field have long understood the classic problem of copying data: if you keep two or more copies, they can get out of sync. The HIE report recognizes this weakness, indicating that patient data stored by HIEs can become outdated (p. 12). According to the report, “Stakeholders reported it is very damaging to the reputation of state efforts when provider queries return insufficient results, leading users to conclude the system is not useful.” (p. 17) In fact, some HIEs don’t even know when a patient has died (p. 20).

Another classic problem of copying data is that it forces the HIE to maintain a huge repository, along with enough server power and bandwidth to handle requests. This in turn raises costs and drives away potential clients. Success in such cases can be self-defeating: if you really do offer convenient query facilities and strong analytic power, demands will increase dramatically. Larger facilities, which (as I’ve said) are more attractive to HIEs, will also use data in more highly developed and sophisticated ways, which will lead to more requests banging on the HIE’s door. It’s no whim that Amazon Web Services, the leading cloud offering in the computer field, imposes limits on data transferred, as well as other uses of the system.

Thus the appeal of federated systems. However, they are technically more complex. More significantly, their success or failure rests on standardization more than a vacuum suction strategy does. If you have a hundred different providers using a couple dozen different and incompatible EHRs, it’s easier to provide one-way channels that vacuum up EHR data than to upgrade all the EHRs to engage in fine-grained communication. Indeed, incomplete standards were identified as a burden on HIEs (p. 19). Furthermore, the data isn’t clean: it’s entered inconsistently by different providers, or in different fields (p. 20). This could be solved by translation facilities.

What intrigues me about the federated approach is that the very possibility of its use puts providers on the defensive over their control of patient data. If an HIE gets a federated system to work, there is little reason to leave data at the provider instead of putting it under the control of the patient. Now that Apple’s HealthKit and similar initiatives put patient health records back on the health care agenda, patient advocates can start pushing for a form of HIE that gives patients back their data.

What Direction for Direct Project?
The Direct project was one of the proudest achievements of the health IT reforms unleashed by the HITECH act. It was open source software developed in a transparent manner, available to all, and designed to use email so even the least technically able health care provider could participate in the program. But Direct may soon become obsolete.

It’s still best for providers without consistent Internet access, but almost anyone with an always-on Internet connection could do better. The HIE report says that in some places, “Direct use is low because providers must access the secure messaging system through a web portal instead of through their EHRs.” (p. 11)

A recent article uncovered the impediments put up by EHR vendors to prevent Direct from working. The HIE report bolstered this assessment (pp. 19-20). As for DirectTrust (also covered by the article’s reporter), even though it was meant to solve connectivity problems, it could turn into yet another silo, because it requires providers to sign up and not all do so.

Ideally, health information exchange would disappear quietly into a learning health care system. The ONC-sponsored report shows how far we are from this vision. At the same time, it points to a few ways forward: more engagement with providers (pp. 14, 25), more services that add value to patient care, tighter standards. With some of these advances, the health care field may find the proper architecture and funding model for data exchange.

Open Source Electronic Health Records: Will They Support the Clinical Data Needs of the Future? (Part 1 of 2)

Posted on November 10, 2014 | Written By Andy Oram

Open source software missed out on making a major advance into health care when it was bypassed during hospitals’ recent stampede toward electronic health records, triggered over the past few years by Meaningful Use incentives. Some people blame the neglect of open source alternatives on a lack of marketing (few open source projects are set up to woo non-technical adopters), some on conservative thinking among clinicians and their administrators, and some on the readiness of the software. I decided to put aside the past and look toward the next stage of EHRs. As Meaningful Use ramps down and clinicians have to look for value in EHRs, can the open source options provide what they need?

The oncoming end of Meaningful Use payments (which never came close to covering the costs of proprietary EHRs, but nudged many hospitals and doctors to buy them) may open a new avenue to open source. Deanne Clark of DSS, which markets a VistA-based product called vxVistA, believes open source EHRs are already being discovered by institutions with tight budgets, and that as Meaningful Use reimbursements go away, open source will be even more appealing.

My question in this article, though, is whether open source EHRs will meet the sophisticated information needs of emerging medical institutions, such as Accountable Care Organizations (ACOs). Shahid Shah has suggested some of the EHR requirements of ACOs. To survive in an environment of shrinking reimbursement and pay-for-value, more hospitals and clinics will have to beef up their uses of patient data, leading to some very non-traditional uses for EHRs.

EHRs will be asked to identify high-risk patients, alert physicians to recommended treatments (the core of evidence-based medicine), support more efficient use of clinical resources, contribute to population health measures, support coordinated care, and generally facilitate new relationships among caretakers and with the patient. A host of tools can be demanded by users as part of the EHR role, but I find that they reduce to two basic requirements:

  • The ability to interchange data seamlessly, a requirement for coordinated care and therefore accountable care. Developers could also hook into the data to create mobile apps that enhance the value of the EHR.

  • Support for analytics, which will support all the data-rich applications modern institutions need.

Eventually, I would also hope that EHRs accept patient-generated data, which may be stored in types and formats not recognized by existing EHRs. But the clinical application of patient-generated data is far off. Fred Trotter, a big advocate for open source software, says, “I’m dubious at best about the notion that Quantified Self data (which can be very valuable to the patients themselves) is valuable to a doctor. The data doctors want will not come from popular commercial QS devices, but from FDA-approved medical devices, which are more expensive and cumbersome.”

Some health reformers also cast doubt on the value of analytics. One developer on an open source EHR labeled the whole use of analytics to drive ACO decisions as “bull” (he actually used a stronger version of the word). He aired an opinion many clinicians hold, that good medicine comes from the old-fashioned doctor/patient relationship and giving the patient plenty of attention. In this philosophy, the doctor doesn’t need analytics to tell him or her how many patients have diabetes with complications. He or she needs the time to help the diabetic with complications keep to a treatment plan.

I find this attitude short-sighted. Analytics are proving their value now that clinicians are getting serious about using them–most notably since Medicare began penalizing hospital readmissions within 30 days of discharge. Open source EHRs should be best of breed in this area so they can compete with the better-funded but clumsy proprietary offerings, and so that they can make a lasting contribution to better health care.

The next installment of this article looks at current support for interoperability and analytics in open-source EHRs.

Ten-year Vision from ONC for Health IT Brings in Data Gradually

Posted on August 25, 2014 | Written By Andy Oram

This is the summer of reformulation for national U.S. health efforts. In June, the Office of the National Coordinator (ONC) released its 10-year vision for achieving interoperability. The S&I Framework, a cooperative body set up by ONC, recently announced work on the vision’s goals and set up a comment forum. A phone call by the Health IT Standards Committee (HITSC) on August 20, 2014 also took up the vision statement.

It’s no news to readers of this blog that interoperability is central to delivering better health care, both for individual patients who move from one facility to another and for institutions trying to accumulate the data that can reduce costs and improve treatment. But the state of data exchange among providers, as reported at these meetings, is pretty abysmal. Despite notable advances such as Blue Button and the Direct Project, only a minority of transitions are accompanied by electronic documents.

One can’t entirely blame the technology, because many providers report having data exchange available but using it on only a fraction of their patients. But an intensive study of representative documents generated by EHRs shows that they turn an uphill climb into a struggle up Everest. A Congressional request for ideas to improve health care has turned up similar complaints about inadequate databases and data exchange.

This is also a critical turning point for government efforts at health reform. The money appropriated by Congress for Meaningful Use is time-limited, and it’s hard to tell how the ONC and CMS can keep up their reform efforts without that considerable bribe to providers. (On the HITSC call, Beth Israel CIO John Halamka advised the callers to think about moving beyond Meaningful Use.) The ONC also has a new National Coordinator, who has announced a major reorganization and “streamlining” of its offices.
