How Open mHealth Designed a Popular Standard (Part 3 of 3)

Posted on December 3, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first section of this article introduced the common schemas for mobile health designed by Open mHealth, and the second section covered the first two design principles driving their schemas. We’ll finish off the design principles in this section.

Balancing permissiveness and constraints

Here, the ideal is to get accurate measurements to the precision needed by users and researchers. But many devices are known to give fuzzy results, or results that are internally consistent but out of line with absolute measurements.

The goal adopted by Open mHealth is to firm up the things that are simple to get right and critical to accuracy, such as the units of measurement discussed earlier. They also require care in reporting the time interval that a measurement covers: day, week, or month. There’s no excuse if you add up the walks recorded for the day and the sum doesn’t match the total steps that the device reports for that day.
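The walk-total rule above can be sketched as a simple consistency check. The field names here are illustrative, not the official Open mHealth schema:

```python
# Sketch of the consistency rule: the per-walk step counts recorded for a
# day should add up to the device's reported daily total.

def daily_total_is_consistent(walks, reported_daily_total):
    """Return True if the sum of individual walk segments matches the total."""
    return sum(w["step_count"] for w in walks) == reported_daily_total

walks = [
    {"step_count": 3200, "interval": "morning"},
    {"step_count": 5400, "interval": "afternoon"},
    {"step_count": 1400, "interval": "evening"},
]

assert daily_total_is_consistent(walks, 10000)
```

A schema can't force a device to report accurately, but a check like this lets data consumers reject feeds that contradict themselves.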

Some participants suggested putting in checks, such as whether the BMI is wildly out of range. The problem (in terms of public health as well as technology) is that there are often outlier cases in health care, and the range of what’s a “normal” BMI can change. The concept of a maximum BMI is therefore too strict and ultimately unhelpful.

Designing for data liquidity

Provenance is the big challenge here: where does data come from, how was it collected, and what algorithm was used to manipulate it? Open mHealth expects data to go far and wide among researchers and solution providers, so the schema must keep a trail of all the things done to it from its origin.
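A data point that keeps its own provenance trail might look roughly like the following. The header/body split loosely follows Open mHealth's conventions, but the exact field names here are simplified and illustrative:

```python
# Simplified sketch of a data point that records where it came from and
# what has been done to it since.

data_point = {
    "header": {
        "acquisition_provenance": {
            "source_name": "example-pedometer-api",  # hypothetical source
            "modality": "sensed",                    # sensed vs. self-reported
        },
    },
    "body": {"step_count": 7350},
}

def add_processing_step(point, step_description):
    """Append a record of a transformation, so downstream consumers can audit the chain."""
    trail = point["header"].setdefault("processing_trail", [])
    trail.append(step_description)
    return point

add_processing_step(data_point, "smoothed with 5-minute rolling average")
```

The point is that each manipulation leaves a mark, so a researcher three hops downstream can still judge how much to trust the value.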

Dr. Sim said the ecosystem is not yet ready to ensure quality. For instance, a small error introduced at each step of data collection and processing can add up to a yawning gap between the reported measure and the truth. This can make a difference not only to researchers, but to the device’s consumers. Think, for instance, of a payer basing the consumer’s insurance premium on analytics performed on data from the device over time.

Alignment with clinical data standards

Electronic health records are starting to accept medical device data. Eventually, all EHRs will need to do this so that monitoring and connected health can become mainstream. Open mHealth adopted widespread medical ontologies such as SNOMED, which may seem like an obvious choice but is far from what the devices themselves use. Luckily, Open mHealth’s schemas come pre-labelled with appropriate terminology codes, so device developers don’t need to get into the painful weeds of medical coding.

Modeling of Time

A seemingly simple matter, time is quite challenging. The Open mHealth schema can represent both points in time and time intervals. There are still subtleties that must be handled properly, as when a measurement for one day is reported on the next day because the device was offline. These concerns feed into provenance, discussed under “Designing for data liquidity.”
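The point-versus-interval distinction can be sketched as follows. The structure follows the general shape of an Open mHealth time frame, but is simplified and the names are illustrative:

```python
# A measurement's time frame can be either a point in time or an interval.

point_in_time = {"time_frame": {"date_time": "2015-12-02T08:00:00-05:00"}}

interval = {
    "time_frame": {
        "time_interval": {
            "start_date_time": "2015-12-01T00:00:00-05:00",
            "end_date_time": "2015-12-02T00:00:00-05:00",
        }
    }
}

def covers_interval(measurement):
    """Distinguish interval measurements (e.g., steps per day) from point readings."""
    return "time_interval" in measurement["time_frame"]

assert covers_interval(interval)
assert not covers_interval(point_in_time)
```

Note that the timestamps carry their time zone offsets; a measurement taken at an unknown offset can't be reliably placed on a clinical timeline.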

Preliminary adoption is looking good. The schema will certainly evolve, hopefully allowing for diversity while not splintering into incompatible standards. This is the same balance that FHIR must strike under much more difficult circumstances. From a distance, it appears that Open mHealth, by keeping a clear eye on the goal and a firm hand on the development process, has avoided some of the pitfalls that the FHIR team has encountered.

How Open mHealth Designed a Popular Standard (Part 2 of 3)

Posted on December 2, 2015 | Written By


The previous section of this article introduced the intensive research and consultation strategy used by Open mHealth to develop a common schema for exploiting health data by app developers, researchers, clinicians, individuals, and manufacturers of medical and fitness devices. Next we’ll go through the design principles with a look at specific choices and trade-offs.

Atomicity

Normally, one wants to break information down into the smallest possible chunks. Doing so lets data holders minimize the amount of data they need to send to data users, and data users are free to scrutinize individual items or combine them any way they want. But some values in health need to be chunked together. When someone requests blood pressure, both the systolic and diastolic measures should be sent. The time zone should go with the time.
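The blood pressure example can be sketched as a unit that refuses to be split. The shape loosely follows Open mHealth's blood pressure schema, though the validation helper is my own illustration:

```python
# Values that must travel together: blood pressure only makes sense as a
# systolic/diastolic pair, each with an explicit unit.

blood_pressure = {
    "systolic_blood_pressure": {"value": 120, "unit": "mmHg"},
    "diastolic_blood_pressure": {"value": 80, "unit": "mmHg"},
    "effective_time_frame": {"date_time": "2015-12-02T08:00:00-05:00"},
}

def is_complete_reading(bp):
    """Reject readings that chunk the pair apart."""
    return ("systolic_blood_pressure" in bp
            and "diastolic_blood_pressure" in bp)

assert is_complete_reading(blood_pressure)
```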

On the other hand, mHealth doesn’t need combinations of information that are common in medical settings. For instance, a dose may be interesting to know, but you don’t need the prescribing doctor, when the prescription was written, etc. Still, some app developers have asked for the prescription to include the number of refills remaining, so the app can issue reminders.

Balancing parsimony and complexity

Everybody wants all the data items they find useful, but nobody wants to scroll through screenfuls of documentation covering other people’s items. So how do you give a bewildering variety of consumers and researchers what they need most without overwhelming them?

An example of the process used by Open mHealth was the measurement for blood sugar. For people with Type 1 or Type 2 diabetes, the canonical measurement is fasting blood sugar first thing in the morning (the measurement can be very different at different times of the day). This helps the patients and their clinicians determine their overall blood sugar control. Measurements of blood sugar in relation to meals (e.g., two hours after lunch) or to sleep (e.g., at bedtime) are also clinically useful for both patients and clinicians.

Many of these users are curious what their blood sugar level is at other times, such as after a run. But to extend the schema this way would render it mind-boggling. And Dr. Sim says these values have far less direct clinical value for people with Type 2 diabetes, who are the majority of diabetic patients. So the schema sticks with reporting blood sugar related to meals and sleep. If users and vendors work together, they are free to extend the standard–after all, it is open source.
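The parsimony decision above amounts to constraining the contexts in which blood sugar is reported rather than allowing arbitrary ones. A minimal sketch, where the allowed values are modeled on the meal/sleep contexts described in the text and are not the official vocabulary:

```python
# Restrict blood sugar readings to a small set of clinically meaningful contexts.

ALLOWED_CONTEXTS = {
    "fasting", "before meal", "after meal", "before sleep", "on waking",
}

def validate_blood_sugar(reading):
    """Accept only readings tied to a recognized meal/sleep context."""
    return reading.get("temporal_relationship_to_meal_or_sleep") in ALLOWED_CONTEXTS

good = {"value": 95, "unit": "mg/dL",
        "temporal_relationship_to_meal_or_sleep": "fasting"}
odd = {"value": 140, "unit": "mg/dL",
       "temporal_relationship_to_meal_or_sleep": "after a run"}

assert validate_blood_sugar(good)
assert not validate_blood_sugar(odd)
```

A closed vocabulary like this is exactly what keeps values from being "reported inconsistently or incorrectly," at the cost of excluding the after-a-run curiosity cases.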

Another reason to avoid fine-grained options is that they lead to many values being reported inconsistently or incorrectly. This is a concern with the ICD-10 standard for diagnoses, which has been in use in Europe for a long time and became a requirement for billing in the US in early October. ICD-9 is woefully outdated, but so much was dumped into ICD-10 that its implementation has left clinicians staying up nights and ignoring real opportunities for innovation. (Because ICD is aimed mostly at billing, it is not used for coding in Open mHealth schemas.)

Thanks to the Open mHealth schema, a dialog has started between users and device manufacturers about what new items to include. For instance, it could include average blood sugar over a fixed period of time, such as one month.

In the final section of this article, we’ll cover the rest of the design principles.

How Open mHealth Designed a Popular Standard (Part 1 of 3)

Posted on December 1, 2015 | Written By


If standards have not been universally adopted in the health care field, and are often implemented incorrectly when adopted, the reason may simply be that good standards are hard to design. A recent study found that mobile health app developers would like to share data, but “Less progress has been made in enabling apps to connect and communicate with provider healthcare systems–a fundamental requirement for mHealth to realize its full value in healthcare management.”

Open mHealth faced this challenge when they decided to provide a schema to represent the health data that app developers, research teams, and other individuals want to plug into useful applications. This article is about how they mined the health community for good design decisions and decided what necessary trade-offs to make.

Designing a good schema involves intensive conversations with several communities that depend on each other but often have trouble communicating their needs to each other:

Consumers/users

They can tell you what they’re really interested in, and give you surprising insights about what a product should produce. In the fitness device space, for instance, Open mHealth was told that consumers would like time zones included with timing data–something that currently is supported rarely and poorly. Manufacturers find time zones hard to do, and feel little competitive pressure to offer them.

Vendors/developers

They can fill you in on the details of their measurements, which might be hard to discern from the documentation or the devices themselves. A simple example: APIs often retrieve weight values without units. If you’re collecting data across many people and devices for clinical or scientific purposes (e.g., across one million people for the new Precision Medicine Initiative), you can’t be guessing whether someone weighs 70 pounds or 70 kilograms.
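The weight example above can be sketched as a small normalization step: pair every value with its unit, and convert explicitly before aggregating across devices. The conversion constant is standard; the field names are illustrative:

```python
# A bare "70" is ambiguous; a unit-tagged value is not.

KG_PER_LB = 0.45359237  # exact international definition of the pound

def to_kilograms(measurement):
    """Normalize a unit-tagged weight for cross-device aggregation."""
    value, unit = measurement["value"], measurement["unit"]
    if unit == "kg":
        return value
    if unit == "lb":
        return value * KG_PER_LB
    raise ValueError(f"unknown unit: {unit}")

assert to_kilograms({"value": 70, "unit": "kg"}) == 70
```

At the scale of a million Precision Medicine participants, silently guessing the unit would corrupt the dataset in ways no downstream analysis could undo.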

Clinicians/Researchers

They offer insights on long-range uses of data and subtleties that aren’t noticeable in routine use by consumers. For example, in the elderly and those on some types of medications, blood pressure can be quite different standing up or lying down. Open mHealth captures this distinction.

With everybody weighing in, listening well and applying good medical principles is a must; otherwise, you get (as co-founder Ida Sim repeatedly said in our phone call) “a mess.” Over the course of many interviews, one can determine the right Pareto distribution: finding the 20% of possible items that satisfy 90% of the most central uses for mobile health data.

Open mHealth apparently made good use of these connections, because the schema is increasingly being adhered to by manufacturers and adopted by researchers as well as developers throughout the medical industry. In the next section of this article I’ll take a look at some of the legwork that went into turning the design principles into a useful schema.

Usability Principles for Health IT Tools

Posted on October 22, 2015 | Written By


A recent article of mine celebrated a clever educational service offered on the Web by the US Department of Health and Human Services. I ended with a list of three lessons for the health care field regarding usability of health IT tools, which deserve further explanation.

Respecting contemporary Web practices

Communications can be improved by using the advanced features provided by the Web and mobile devices. In the HHS case, developers went to great lengths to provide a comfortable, pleasant experience to anyone who viewed their content, even if the viewers were visiting a different web site and the HHS content was merely embedded there.

This commitment to modern expectations is rare in the health care field. Web sites and electronic records are famously stuck in the 1990s. Doctors have been warned that they can’t use unencrypted email or text messages to communicate sensitive information to patients, so they use patient portals that are self-contained and hard to access. The tools on my family practice’s portal, provided by eClinicalWorks, don’t even come up to the standards of email systems developed in the 1980s. They lack such fundamental features as viewing messages by sender or viewing threads of multiple messages.

Things get worse when a clinician needs to perform complex tasks in an EHR or work in multiple windows at once. Whole areas of health IT (such as the notions of health information exchanges and patient portals) reflect its primitiveness. Access to data should be fundamental to health IT products.

Why? EHR vendors are focused on HL7 standards, clinical decision support, and other nuts and bolts of data-crunching. They don’t possess the most advanced design and Web coding teams. Given their small market size–compared to social networks or e-commerce sites–one shouldn’t be surprised that health sites and EHRs don’t invest in cutting-edge Web technology. It’s no surprise, for instance, that when athenahealth (the most forward-looking proprietary EHR vendor, in my personal view) decided to reach out to the mobile world, they purchased an existing mobile app development company.

Another barrier may be the old software and hardware used at many health care sites, as described in item 6 of an Open mHealth round-up.

The problem is that health care applications and web sites need to make things easy for the user–at least as easy as retailers do. Both clinicians and patients tend to visit such sites when they are feeling pressured, tense, and depressed about what they’re dealing with. Mistakes have serious negative consequences. So interfaces should be as usable as possible. It also helps if their interactive elements behave like others that the users have encountered in other apps and web sites; hence the value of keeping up with current user interface practices.

Consider the people at the other end

I’ve already explained how the mood and mindset of the app user or web visitor has a critical effect on user interface design. Designers never know in advance–even when they think they do–what the users are asking for. And users vary widely as well. Therefore, sites must be prepared to evolve continuously with input and feedback from users. This requirement leads directly to the next suggestion.

Open source meets more needs

Most health care developers (and app buyers) assume that software must be kept closed to establish viable businesses. In other industries, large institutions are thriving on Linux, open source Java technologies, free databases such as MySQL and various NoSQL options, and endless free libraries for software development. Yet proprietary software still rules in electronic health records, medical devices, consumer products, and mobile apps.

Releasing source code in the open seems counter-intuitive, but it can lead to greater business success by promoting a richer ecosystem of tools. The vendors of health apps and software still haven’t realized–or at least, haven’t really pursued to its logical conclusion–the truth that health prospers only when many different parts of the health care system work together. Under the stewardship of the Department of Health and Human Services, doctors are groping their way toward working with other doctors, with nursing homes and rehab facilities, with behavioral health clinics, and with patients themselves. Technology has to keep up, and that means eliminating barriers to interoperability.

APIs are a fine way to allow data sharing, but they don’t open up the tools behind the APIs. Creating a computing environment for health that ties together different systems requires free and open source software. It enables deep hooks between different parts of the system. Open source EHRs, open source device software, and open source research tools can be integrated to make larger systems that offer opportunities to all players.

Platforms for innovation

Instead of picking off bits of the existing health care infrastructure to serve, developers and vendors should be making platforms with vast new horizons and new opportunities for business. Platforms that encourage outsiders to build new functions are the concept that ties together the three observations in this article. These platforms can be presented to users in different ways by leading Web developers, can incorporate enhancements suggested by users, and can rely on open source to make adaptation easy.

Two platforms I have discussed in previous articles are SMART and Shimmer. SMART is an API that provides a standard to app developers working with patient data. Shimmer is a new tool for processing data from fitness devices. Each is starting to make a mark in the health care field, and each illustrates what the field can achieve when parties work together and share results.

A Mature API for an Electronic Health Record: the OpenMRS Process

Posted on August 14, 2015 | Written By


By some measures, OpenMRS may be the most successful of the open source EHRs, widely deployed around the world. It also has a long experience with its API, which has been developed and refined over the last several years. I talked to OpenMRS developer Wyclif Luyima recently and looked at OpenMRS’s REST API documentation to see what the API offers.

WearDuino Shows That Open Source Devices Are a Key Plank in Personal Health

Posted on August 13, 2015 | Written By


New devices are democratizing health. We see it not only in the array of wearable fitness gear that an estimated 21 percent of Americans own (and that some actually wear), but also in innovative uses for mobile phones (such as testing vision in regions that lack doctors or checking athletes for concussions) and now in low-cost devices that are often open source hardware and software. Recent examples of the latter include the eyeSelfie, which lets a non-professional take an image of his retina, and the WearDuino, a general-purpose personal device that is the focus of this article.

WearDuino is the brainchild of Mark Leavitt, a medical internist who turned to technology (as have so many doctors pursuing visions of radical reform in health care). I ran into Leavitt at the 2015 Open Source convention, where he also described his work briefly in a video interview.

Leavitt’s goal is to produce a useful platform that satisfies two key criteria for innovation: low-cost and open. Although some of the functions of the WearDuino resemble those of devices on the market, you can take apart the WearDuino, muck with it, and enhance it in ways those closed platforms don’t allow.

Traits and Uses of WearDuino
Technically, the device has simple components found everywhere, but is primed for expansion. A small Bluetooth radio module provides the processing, and as the device’s name indicates, it supports the Arduino programming language. To keep power consumption low there’s no WiFi, and the device can run on a cheap coin cell battery for several months under normal use.

Out of the box, the WearDuino could be an excellent fitness device. Whereas most commercial fitness wearables collect their data through an accelerometer, the WearDuino has an accelerometer (which can measure motion), a gyroscope (which is useful for more complex measurements as people twist and turn), and a magnetometer (which acts as a compass). This kind of three-part device is often called a “9-degree of freedom sensor,” because each of those three measurements is taken in three dimensions.
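The "9 degrees of freedom" arithmetic can be made concrete: each of the three sensors reports along three axes, giving nine values per sample. The names below are illustrative, not the WearDuino API:

```python
# Each sample from a 9-DOF sensor combines three three-axis readings.

from dataclasses import dataclass

@dataclass
class NineDofSample:
    accel: tuple  # (x, y, z) in g          -- motion
    gyro: tuple   # (x, y, z) in deg/s      -- rotation (twisting and turning)
    mag: tuple    # (x, y, z) in microtesla -- heading, like a compass

    def axis_count(self):
        return len(self.accel) + len(self.gyro) + len(self.mag)

sample = NineDofSample(accel=(0.0, 0.0, 1.0),
                       gyro=(0.1, -0.2, 0.0),
                       mag=(22.5, 5.1, -41.0))
assert sample.axis_count() == 9
```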

When you want more from the device, such as measuring heartbeat, muscle activity, joint flexing, or eye motion, a board can be added to one of the Arduino’s 7 digital I/O pins. Leavitt said that one user experimented with a device that lets a parent know when to change a baby’s diaper, through an added moisture detector.

Benefits of an Open Architecture
Proprietary device manufacturers often cite safety reasons for keeping their devices closed. But Leavitt believes that openness is quite safe through most phases of data use in health. Throughout the stages of collecting data, visualizing the relationships, and drawing insights, Leavitt believes people should be trusted with any technologies they want. (I am not sure these activities are so benign–an incorrect insight could lead to dangerous behavior.) It is only when you get to giving drugs or other medical treatments that the normal restriction to professional clinicians makes sense.

Whatever safety may be gained by keeping devices closed, there can be no justification on the side of the user for keeping the data closed. And yet proprietary device manufacturers play games with the user’s data (and not just games for health). Leavitt, for instance, who wears a fitness monitor, says he can programmatically download a daily summary of his steps, but not the exact counts taken at different times of the day.

The game is that device manufacturers cannot recoup the costs of making and selling the devices through the price of the device alone. Therefore, they keep hold of users’ data and monetize it through marketing, special services, and other uses.

Leavitt doesn’t have a business plan yet. Instead, in classic open source practice, he is building community. In Portland, Oregon, where he lives, a number of programmers and medical personnel have shown interest. The key to the WearDuino project is not the features of the device, but whether it succeeds in encouraging an ecosystem of useful personal monitors around it.

OpenUMA: New Privacy Tools for Health Care Data

Posted on August 10, 2015 | Written By


The health care field, becoming more computer-savvy, is starting to take advantage of conveniences and flexibilities that were developed over the past decade for the Web and mobile platforms. A couple weeks ago, a new open source project was announced to increase options for offering data over the Internet with proper controls–options with particular relevance for patient control over health data.

The User-Managed Access (UMA) standard supports privacy through a combination of encryption and network protocols that have a thirty-year history. UMA reached a stable 1.0 release in April of this year. A number of implementations are being developed, some of them open source.

Before I try to navigate the complexities of privacy protocols and standards, let’s look at a few use cases (currently still hypothetical) for UMA:

  • A parent wants to share a child’s records from the doctor’s office just long enough for the school nurse to verify that the child has received the necessary vaccinations.

  • A traveler taking a temporary job in a foreign city wants to grant a local clinic access to the health records stored by her primary care physician for the six months during which the job lasts.

The open source implementation I’ll highlight in this article is OpenUMA from a company named ForgeRock. ForgeRock specializes in identity management online and creates a number of open source projects that can be found on their web page. They are also a leading participant in the non-profit Kantara Initiative, where they helped develop UMA as part of the UMA Developer Resources Work Group.

The advantage of open source libraries and tools for UMA is that the standard involves many different pieces of software run by different parts of the system: anyone with data to share, and anyone who wants access to it. The technology is not aimed at any one field, but health IT experts are among its greatest enthusiasts.

The fundamental technology behind UMA is OAuth, a well-tested means of authorizing people on web sites. When you want to leave a comment on a news article and see a button that says, “Log in using Facebook” or some other popular site, OAuth is in use.

OAuth is an enabling technology, by which I mean that it opens up huge possibilities for more complex and feature-rich tools to be built on top. It provides hooks for such tools through its notion of profiles–new standards that anyone can create to work with it. UMA is one such profile.

What UMA contributes over and above OAuth was described to me by Eve Maler, a leading member of the UMA working group who wrote their work up in the specification I cited earlier, and who currently works for ForgeRock. OAuth lets you manage different services for yourself. When you run an app that posts to Twitter on your behalf, or log in to a new site through your Facebook account, OAuth lets your account on one service do something for your account on another service.

UMA, in contrast, lets you grant access to other people. It’s not your account at a doctor’s office that is accessing data, but the doctor himself.

UMA can take on some nitty-gritty real-life situations that are hard to handle with OAuth alone. OAuth provides a single yes/no decision: is a person authorized or not? It’s UMA that can handle the wide variety of conditions that affect whether you want information released. These vary from field to field, but the conditions of time and credentials mentioned earlier are important examples in health care. I covered one project using UMA in an earlier article.

With OAuth, you can grant access to an account and then revoke it later (with some technical dexterity). But UMA allows you to build a time limit into the original access. Of course, the recipient does not lose the data to which you granted access, but when the time expires he cannot return to get new data.

UMA also allows you to define resource sets to segment data. You could put vaccinations in a resource set that you share with others, withholding other kinds of data.
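The two UMA capabilities just described–a time limit built into the grant, and resource sets that segment which data is shared–can be sketched together. The policy shape here is purely illustrative; the UMA spec itself leaves such conditions to policy languages layered on top, as discussed below:

```python
# A toy authorization policy combining a resource set with an expiry time.

from datetime import datetime, timezone

grant = {
    "resource_set": "vaccinations",  # share only this segment of the record
    "expires": datetime(2015, 12, 31, tzinfo=timezone.utc),
}

def authorize(requested_resource, now, policy=grant):
    """Allow access only inside the time window and only to the shared set."""
    return (requested_resource == policy["resource_set"]
            and now < policy["expires"])

now = datetime(2015, 12, 1, tzinfo=timezone.utc)
later = datetime(2016, 6, 1, tzinfo=timezone.utc)

assert authorize("vaccinations", now)
assert not authorize("vaccinations", later)  # time limit expired
assert not authorize("medications", now)     # outside the shared resource set
```

This mirrors the school-nurse and traveling-worker scenarios from the start of the article: access is scoped to a slice of the record and evaporates on schedule.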

OpenUMA contains two crucial elements of a UMA implementation:

The authorization server

This server accepts a list of restrictions from the site holding the data and the credentials submitted by the person requesting access to the data. The server performs a very generic function: any UMA request can use any authorization server, and the server can run anywhere. Theoretically, you could run your own. But it would be more practical for a site that hosts data–Microsoft HealthVault, for instance, or some general cloud provider–to run an authorization server. In any case, the site publicizes a URL where it can be contacted by people with data or people requesting data.

The resource server

The resource server submits requests to the authorization server on behalf of the applications and servers that hold people’s data. It handles the complex interactions with the authorization server so that application developers can focus on their core business.


Instead of the OpenUMA resource server, apps can link in libraries that provide the same functions. These libraries are being developed by the Kantara Initiative.

So before we can safely share and withhold data, what’s missing?

The UMA standard doesn’t offer any way to specify a condition, such as “Release my data only this week.” This gap is filled by policy languages, which standards groups will have to develop and code up in a compatible manner. A few exist already.
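A condition like "release my data only this week" could be evaluated along the lines of the following sketch. Real deployments would express this in whatever standard policy language emerges; the field names here are my own:

```python
from datetime import date, timedelta

# A toy evaluator for a policy in the spirit of "release my data only
# this week": access is granted only while today falls inside the
# window the patient originally authorized.

def within_window(policy, today):
    """Check whether today falls inside the policy's validity window."""
    start = policy["not_before"]
    return start <= today < start + timedelta(days=policy["valid_days"])

policy = {"not_before": date(2015, 12, 1), "valid_days": 7}

print(within_window(policy, date(2015, 12, 3)))   # True: inside the week
print(within_window(policy, date(2015, 12, 10)))  # False: window expired
```

Note how this captures the time-limit behavior described earlier: the recipient keeps whatever data was already released, but once the window closes, new requests fail.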

Maler points out that developers could also benefit from tools for editing and testing code, along with other supporting software that projects build up over time. The UMA resource working group is still at the beginning of their efforts, but we can look forward to a time when fine-grained patient control over access to data becomes as simple as using any of the other RESTful APIs that have filled the programmer’s toolbox.

Could the DoD be SMART to Choose Cerner?

Posted on August 4, 2015 | Written By Andy Oram

Even before the health IT world could react (with surprise) to the choice of a Cerner EHR (through its lead partner, Leidos Health Solutions Group) by the Department of Defense, rumor had it that Cerner beat out Epic through the perception that it is more open and committed to interoperability. The first roll-out at the DoD is certain to be based on HL7 version 2 and more recent version 3 standards (such as the C-CDA) that are in common use today. But the bright shining gems of health exchange–SMART and FHIR–are anticipated for the DoD’s future.

Read more..

Gathering a Health Care Industry Around an Open Source Solution: the Success of tranSMART

Posted on May 18, 2015 | Written By Andy Oram

The critical software driving web sites, cloud computing, and other parts of our infrastructure is open source: free to download, change, and share. The role of open source software in healthcare is relatively hidden and uncelebrated, but organizations such as the tranSMART Foundation prove that it is making headway behind the scenes. tranSMART won three awards at the recent Bio‐IT World conference, including Best in Show.

The tranSMART Foundation is a non‐profit organization that develops software for translational research, performing tasks such as searching for patterns in genomes and linking them to clinical outcomes. Like most sustainable, highly successful open source projects, tranSMART avoids hiring programmers to do the work itself, instead fostering a sense of community by coordinating more than 100 developers from the companies that benefit from the software.

I talked recently to Keith Elliston, CEO of the tranSMART Foundation. He explained that the tranSMART platform was first developed by Johnson & Johnson. When they realized they had a big project that other pharmaceutical companies could both benefit from and contribute to, they reached out to Pfizer, Millennium (now the Takeda Oncology Company), and Sanofi to collaborate on the development of the platform.

In 2012, Johnson & Johnson decided to release the platform as open source under the GPLv3 open source license. In early 2013, a group of scientists from the University of Michigan, Imperial College and the Pistoia Alliance gathered together to form the tranSMART Foundation, in order to steward the growing community development efforts on the platform. The growing foundation has garnered support from other major companies, including Oracle (whose database contained the data they were operating on), Deloitte, and PerkinElmer.

Two major challenges faced by open source projects are funding and community management. Elliston demonstrated to me that tranSMART is quite successful at both. The foundation currently receives 95% of its funding from members. As it develops, it would like to tap into grants and philanthropies, reducing member funding to about 20%. In pursuit of that goal, the foundation recently gained 501(c)(3) non‐profit status in the US. It currently sits comfortably, with 12 months of secured funding, and is in the process of raising money for both the foundation’s Fellows program and its version 1.3 development program.

Like the Linux Foundation, a model for much of what tranSMART does, it offers two levels of membership. Gold membership costs $100,000 per year and earns the company a seat on the board of directors. Silver membership costs $5,000 to $20,000 per year based on several factors (size of the company, whether it is for‐profit or non‐profit, etc.) and allows one to participate in electing Silver member board directors.

Development is spread among a large number of programmers across the community. The most recent version of the software (1.2) was created mostly by developers paid by member companies. The foundation has set up working groups on which developers and community members volunteer to solve key tasks such as project management and defining the architecture. Developer coordination is spread across three main committees representing code, community, and content. These committees are composed of member representatives, and coordinate the working groups that carry out much of the foundation’s mission.

Consequently, tranSMART evolves briskly and is seen by its users as meeting their needs. A major achievement of the 1.2 release was to add support for the open source relational database PostgreSQL (in addition to the original database, Oracle). Because many of the large, semi‐structured data sets tranSMART deals with may be better suited to a NoSQL database, the foundation is exploring a move to one of those options as part of its research program. The architecture group is beginning to define version 2.0, and coding may start toward the end of this year.

Although headquartered in the US, tranSMART is finding 60% to 70% of its activity taking place in Europe, and is forming a sister organization there. Elliston claims that the current funding environment for genetic and pharma research is far more favorable in Europe, even though they are taking longer than the US to recover from the recent recession. As an example, he compared the 215 million dollars offered by the White House for its Precision Medicine Initiative (considered a major research advance here in the US) with the 3 billion Euros recently announced by Europe’s Innovative Medicines Initiative (IMI).

I asked Elliston how the tranSMART Foundation achieved such success in a field known to be difficult for open source projects. He said that they approached the non‐profit space with an entrepreneurial strategy learned in for‐profit environments, and focused on staying lean. For instance, they built out marketing and communications departments like a venture‐backed start‐up, using contractors drawn from their startup experience. Furthermore, they have assembled a team of highly experienced part‐time employees who spend most of their time in other organizations. Elliston himself devotes only 10 hours a week to his job as tranSMART CEO.

We could pause here to imagine what could be achieved if other parts of the health care industry adopted this open source model. For instance, the US contains 5,686 hospitals, of which half have installed what the government calls a “basic EHR system” (see page 12 of an ONC report to Congress). What if the 2,700‐odd hospitals saved the hundreds of millions each had spent on a proprietary API, and combined the money to spend a few billion developing a core open source EHR that each could adapt to its needs? What sort of EHR could you get for a couple billion dollars of developer time?

Naturally, technical and organizational obstacles would stand in the way of such an effort. Choosing an effective governing board, designing an open architecture that would meet the needs of 2,700 very diverse organizations, and keeping the scope reasonable are all challenges that go beyond the scope of this article. On the other hand, many existing projects could serve as a basis. VistA is used not only throughout VA hospitals but in many US hospitals and foreign national health care systems. OpenMRS is also widely ensconced in several African countries and elsewhere.

I believe tranSMART’s success is hard to reproduce in the clinical environment for other reasons. tranSMART deals with pharmaceutical companies first and foremost, and biomedical academic researchers as well. Although neither environment focuses on computer technology, they work in highly technical fields and know how heavily they depend on computing. They are comfortable with trends in modern computing, and are therefore probably more comfortable conforming to the open source development model that is so widespread in computing.

In contrast, hospitals and clinics are run by people whose orientation is to other people. You can walk through one of these settings and find plenty of advanced technology, all driven by embedded computers, but the computing tends to be hidden. Anything that brings the clinician into close contact with the computer (notably the EHR) is usually cumbersome and disliked by its users.

The main users and managers in these environments are therefore not comfortable discussing computing topics and don’t have any understanding of the open source model. Furthermore, because the EHRs are mostly insular and based on legacy technologies, the IT staff and programmers employed by the hospitals or clinics tend to be outside the mainstream of computing. Still, open source software is making inroads as support tools and glue for other systems. Open source EHRs also have seen some adoption, as I mentioned. The tranSMART Foundation persists as a model that others can aspire to.

Clinical Decision Support Should Be Open Source

Posted on January 26, 2015 | Written By Andy Oram

Clinical decision support is a long-standing occupant of the medical setting. It got in the door with electronic medical records, and has recently received a facelift under the term “evidence based medicine.” We are told that CDS or EBM is becoming fine-tuned and energized through powerful analytics that pick up the increasing number of patient and public health data sets out in the field. But how does the clinician know that the advice given for a treatment or test is well-founded?

Most experts reaffirm that the final word lies with the physician–that each patient is unique, and thus no canned set of rules can substitute for the care that the physician must give to a patient’s particular conditions (such as a compromised heart or a history of suicidal ideation) and the sustained attention that the physician must give to the effects of treatment. Still, when the industry gives a platform to futurists such as Vinod Khosla who suggest that CDS can become more reliable than a physician’s judgment, we have to start demanding a lot more reliability from the computer.

It’s worth stopping a moment to consider the various inputs to CDS. Traditionally, it was based on the results of randomized, double-blind clinical trials. But these have come under scrutiny in recent years for numerous failings: the questionable validity of extending the results found on selected test subjects to a broader population, problems reproducing results for as many as three quarters of the studies, and of course the bias among pharma companies and journals alike for studies showing positive impacts.

More recently, treatment recommendations are being generated from “big data,” which trawls through real-life patient experiences instead of trying to isolate a phenomenon in the lab. These can turn up excellent nuggets of unexpected impacts–such as Vioxx’s famous fatalities–but also suffer from the biases of the researchers designing the algorithms, difficulties collecting accurate data, the risk of making invalid correlations, and the risk of inappropriately attributing causation.

A third kind of computerized intervention has recently been heralded: IBM’s Watson. However, Watson does not constitute CDS (at least not in the demo I saw at HIMSS a couple years ago). Rather, Watson just does the work every clinician would ideally do but doesn’t have time for: it consults thousands of clinical studies to find potential diagnoses relevant to the symptoms and history being reported, and ranks these diagnoses by probability. Both of those activities hijack a bit of the clinician’s human judgment, but they do not actually offer recommendations.

So there are clear and present justifications for demanding that CDS vendors demonstrate its reliability. We don’t really know what goes into CDS and how it works. Meanwhile, doctors are getting sick and tired of bearing the liability for all the tools they use, and the burden of their malpractice insurance is becoming a factor in doctors leaving the field. The doctors deserve some transparency and auditing, and so do the patients who ultimately incorporate the benefits and risks of CDS into their bodies.

CDS, like other aspects of the electronic health records into which it is embedded, has never been regulated or subjected to public safety tests and audits. The argument trotted out by EHR vendors–like every industry–when opposing regulation is that it will slow down innovation. But economic arguments have fuzzy boundaries–one can always find another consideration that can reverse the argument. In an industry that people can’t trust, regulation can provide a firm floor on which a new market can be built, and the assurance that CDS is working properly can open up the space for companies to do more of it and charge for it.

Still, there seems to be a pendulum swing away from regulation at present. The FDA has never regulated electronic health records as it has other medical software, and has been carving out classes of medical devices that require little oversight. When it took up EHR safety last year, the FDA asked merely for vendors to participate voluntarily in a “safety center.”

The prerequisite for gauging CDS’s reliability is transparency. Specifically, two aspects should be open:

  • The vendor must specify which studies, or analytics and data sets, went into the recommendation process.

  • The code carrying out the recommendation process must be openly published.
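To make the first requirement concrete, a CDS recommendation could ship with a machine-readable provenance manifest listing the evidence and the code version behind it. This sketch is purely illustrative; the field names and identifiers are hypothetical, not any existing standard:

```python
import json

# Hypothetical provenance manifest accompanying a CDS recommendation,
# disclosing the studies or data sets and the code version behind it
# so independent reviewers can audit both.
manifest = {
    "recommendation": "start low-dose aspirin",
    "evidence": [
        {"type": "trial", "id": "example-trial-id"},
        {"type": "dataset", "id": "example-claims-cohort"},
    ],
    "code": {"repository": "example-cds-engine", "version": "1.4.2"},
}

def audit_fields_present(m):
    """A reviewer's first check: both evidence sources and code
    provenance must be disclosed before evaluation can begin."""
    return bool(m.get("evidence")) and "version" in m.get("code", {})

print(audit_fields_present(manifest))  # True
print(json.dumps(manifest, indent=2))
```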

These fundamentals are just the start of the medical industry’s responsibilities. Independent researchers must evaluate the sources revealed in the first step and determine whether they are the best available choices. Programmers must check the code in the second step for accuracy. These grueling activities should be funded by the clinical institutions that ultimately use the CDS, so that they are on a firm financial basis and free from bias.

The requirement for transparent studies raises the question of open access to medical journals, which is still rare. But that is a complex issue in the fields of research and publishing that I can’t cover here.

Finally, an independent service has to collect reports of CDS failures and make them public, like the FDA Adverse Event Reporting System (FAERS) for drugs, and the FDA’s Manufacturer and User Facility Device Experience (MAUDE) for medical devices.

These requirements are reasonably lightweight, although instituting them will seem like a major upheaval to industries accustomed to working in the dark. What the requirements can do, though, is put CDS on the scientific basis it has never had, and push the industry forward more than any “big data” can.