
Shimmer Addresses Interoperability Headaches in Fitness and Medical Devices

Posted on October 19, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The promise of device data pervades the health care field. It’s intrinsic to patient-centered medical homes, it beckons clinicians who are enamored with hopes for patient engagement, and it causes data analysts in health care to salivate. This promise also drives the data aggregation services offered by Validic and just recently, the Shimmer integration tool from Open mHealth. But according to David Haddad, Executive Director and Co-Founder of Open mHealth, devices resist attempts to yield up their data to programmers and automated tools.

Every device manufacturer has its own idiosyncratic way of handling data, focused on the particular uses for its own device. According to Haddad, for instance, different manufacturers provide completely different representations for the same data, leave out information like time zones and units, and provide information as granular as once per second or as vague as once per day. Even something as basic as secure connectivity is unstandardized. Although most vendors use the OAuth protocol that is widespread on the Web, many alter it in arbitrary ways. This puts barriers in the way of connecting to their APIs.
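To make the problem concrete, here is a small sketch (the vendor payloads and field names are invented for illustration, not taken from any real device API) of normalizing step counts from two imaginary vendors that differ in granularity and time-zone handling:

```python
from datetime import datetime, timezone

# Hypothetical vendor A: epoch seconds, units implied, second-level granularity.
vendor_a = {"ts": 1445212800, "value": 9500}
# Hypothetical vendor B: bare local date, explicit units, day-level granularity.
vendor_b = {"date": "2015-10-19", "steps": 9500, "unit": "steps"}

def normalize_a(record):
    # Epoch seconds are unambiguous, so we can mark the time as UTC.
    when = datetime.fromtimestamp(record["ts"], tz=timezone.utc)
    return {"time": when.isoformat(), "granularity": "second",
            "value": record["value"], "unit": "steps", "timezone_known": True}

def normalize_b(record):
    # A bare date gives only day-level granularity, and the time zone
    # is simply unknown -- downstream consumers must be told that.
    return {"time": record["date"], "granularity": "day",
            "value": record["steps"], "unit": record["unit"],
            "timezone_known": False}

print(normalize_a(vendor_a)["time"])            # 2015-10-19T00:00:00+00:00
print(normalize_b(vendor_b)["timezone_known"])  # False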

Validic and Shimmer have to overcome these hurdles one by one, vendor by vendor. The situation is just like the burdens facing applications that work with electronic health records. Haddad reports that the cacophony of standards among device vendors seems to come from lack of attention to the API side of their product, not deliberate obstructionism. With all the things device manufacturers have to worry about–the weight, feel, and attractiveness of the object itself, deals with payers and retailers offering the product, user interface issues, etc.–the API always seems to be an afterthought. (Apple may be an exception.)

So when Shimmer contacts the tool makers at these vendors, most respond and take suggestions in a positive manner. But they may have just one or two programmers working on the API, so progress is slow. It comes down to the old problem in health care: even with government emphasis on data sharing, there is still no strong advocate for interoperability in the field.

Why did Open mHealth take on this snake’s nest and develop Shimmer? Haddad says they figured that the advantages of open source–low cost of adoption and the ease of adding extensions–will open up new possibilities for app developers, clinical settings, and researchers. Most sites are unsure what to do with device data and are just starting to experiment with it. Being able to develop a prototype they can throw away later will foster innovation. Open mHealth has produced a detailed cost analysis in an appeal to researchers and clinicians to give Shimmer a try.

Shimmer, like the rest of the Open mHealth tools, rests on their own schemas for health data. The schemas in themselves can’t revolutionize health care. Every programmer maintains a healthy cynicism about schemas, harking back to xkcd’s cartoon about “one universal standard that covers everyone’s use cases.” But this schema took a broader view than most programs in health care, based on design principles that try to balance simplicity against usefulness and specificity. Of course, every attempt to maintain a balance comes up against complaints that the choices were too complex for some users, too simple for others. The true effects of Open mHealth appear as it is put to use–and that’s where open source tools and community efforts really can make a difference in health care. The schemas are showing value through their community adoption: they are already used by many sites, including some major commercial users, prestigious research sites, and start-ups.

A Pulse app translates between HL7 and the Open mHealth schema. This brings Open mHealth tools within easy reach of EHR vendors trying to support extensions, or users of the EHRs who consume their HL7-formatted data.
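To give a feel for what such a translation involves (the OBX segment and the schema identifier below are invented examples, not output of the actual Pulse app), here is a sketch that parses one pipe-delimited HL7v2 observation and emits a dict loosely modeled on the Open mHealth header/body split:

```python
import json

def obx_to_datapoint(obx_segment):
    """Parse a pipe-delimited HL7v2 OBX segment into a JSON-friendly
    dict loosely modeled on the Open mHealth header/body layout."""
    fields = obx_segment.split("|")
    # OBX-3 is the observation identifier (code^text^coding system),
    # OBX-5 the observation value, OBX-6 the units.
    code = fields[3].split("^")
    return {
        "header": {"schema_id": "example:observation:1.0"},  # invented id
        "body": {
            "observation": code[1] if len(code) > 1 else code[0],
            "value": float(fields[5]),
            "unit": fields[6],
        },
    }

# An invented OBX segment carrying a body-weight observation.
segment = "OBX|1|NM|3141-9^Body weight^LN||72.5|kg|||||F"
print(json.dumps(obx_to_datapoint(segment), indent=2))
```

A real translator has to cope with repeating fields, escape sequences, and local vendor quirks, which is why even this "easy reach" still takes an app to deliver.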

The Granola library translates between Apple’s HealthKit and JSON. Built on this library, the hipbone app takes data from an iPhone and puts it in JSON format into a Dropbox file. This makes it easier for researchers to play with HealthKit data.

In short, the walls separating medical systems must be beaten down app by app, project by project. As researchers and clinicians release open source tools that tie different systems together, a bridge between products will emerge. Haddad hopes that more widespread adoption of the Open mHealth schema and Shimmer will increase pressure on device vendors to produce standardized data accessible to all.

Understanding Personal Health Data: Not All Bits Are the Same (Part 4 of 4, Personal Health Data)

Posted on October 1, 2015 | Written By Andy Oram

Previous segments of this article explained what makes data sharing difficult in four major areas of Internet data: money, personal data, media content, and government information. Now it’s time to draw some lessons for the health care field.

Personal Health Data

So let’s look now at our health data. It’s clearly sensitive, because disclosure can lead to discrimination by employers and insurers, as well as ostracism by the general public. For instance, an interesting article highlights the prejudice faced by recovering opiate addicts among friends and co-workers if they dare to reveal that they are successfully dealing with their addiction.

The value of personal health data is caught up with our very lives. We cannot change our diagnoses or genetic predispositions to disease the way we can change our bank accounts or credit cards. At the same time, whatever information we can provide about ourselves is of immense value to researchers who are trying to solve the health conditions we suffer from.

So we can assume that health data has an enhanced value and requires more protection than other types of personal data.

Currently, we rarely control our data. Anything we tell doctors is owned by them. HIPAA strictly controls the sharing of such data (especially as it was clarified a couple years ago in the handling of third parties known as “business associates”). But doctors have many ways to deny us access to our own data. One of my family members goes to a doctor who committed the sin of changing practices. We had to pay the old practice to transfer records to the new practice. (I have written about problems with interoperability and data exchange in many other contexts, including blog posts about the Health Datapalooza and the HIT Standards Committee. Data exchange problems hinder research, big data inquiries, and clinical interventions.)

A doctor might well claim, “Why shouldn’t I own that data? Didn’t I do the exam? Didn’t I order the test whose data is now in the record?” Using that logic, the doctor should grant the lab ownership of the test. Now that patients can order their own medical tests (at least in Arizona), how does this dynamic around ownership change? And as more and more patients collect data on themselves using things such as the Apple Watch, network-connected scales, and fitness devices — data that may contain inaccuracies but is still useful for understanding people’s behavior and health status — how does this affect the power balance between a patient and the healthcare provider, or a researcher pursuing a clinical trial?

It’s also interesting to note that although HIPAA covers data collected by people who treat us and insurers who pay for the treatment, it has no impact on data in other settings. In particular, anything we choose to share online joins the enormous stream of public data without restrictions on use.

And it’s disturbing how freely data can be shared with marketers. For instance, when Vermont tried to restrain pharmacies from selling data about prescriptions to marketers, it was overruled by the U.S. Supreme Court. The court took it for granted that pharmacies would adequately de-identify patients, but this is by no means assured.

What are the competing priorities, then, about protection of health data? On the research side — where data can really help patients by finding cures or palliative measures — pressures are increasing to loosen our personal control over data. Laws and regulations are being amended to override the usual restrictions placed on researchers for the reuse of patient data.

The argument for reform is that researchers often find new uses for old data, and that the effort of contacting patients and getting permission to reuse the data imposes prohibitive expenses on researchers.

Certainly, I would get annoyed to be asked every week to approve the particular reuse of my personal data. But I’d rather be asked than have my preferences overridden. In the Internet age, I find it ridiculous to argue that researchers would be overly burdened to request access to data for new uses.

A number of efforts have been launched to give researchers a general, transferable consent to patient data. Supposedly, the patient would grant a general release of data for some class of research at the beginning of data collection. But these efforts have all come to naught. Remember that a patient is often asked for consent to release data at a very tense moment — just after being diagnosed with a serious disease or while on the verge of starting a difficult treatment regimen. Furthermore, the task of designing a general class of research is a major semantic issue. It would require formalizing in software what the patient does and does not allow — and no one has solved that problem.
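A toy example (entirely hypothetical) shows why formalizing a general class of research in software is so hard:

```python
# A deliberately naive consent check. Even this toy version raises
# questions the patient never answered: is a university lab funded by
# a drug company "non-profit"? Does a genetics study on stored blood
# fall under the "cardiology research" consented to years earlier?
ALLOWED_CLASSES = {"non-profit research", "cardiology research"}

def may_use(data_request):
    purpose = data_request.get("purpose", "")
    return purpose in ALLOWED_CLASSES  # everything else is denied

print(may_use({"purpose": "cardiology research"}))                  # True
print(may_use({"purpose": "industry-funded cardiology research"}))  # False -- but should it be?
```

Exact string matching is obviously too brittle, but every richer alternative reopens the same question: someone must decide, in advance, exactly where the category boundaries lie.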

How, then, do I suggest resolving the question of how we should handle patient data? First, patients need to control all data about themselves. All clinicians, pharmacies, labs, and other institutions exist to serve patients and support their health. They can certainly validate data — for instance, by providing digital signatures indicating the diagnoses, test results, and other information are accurate — but they do not own the data.

A look at how we’re protecting money on the Internet may help us understand the urgency of protecting health data: storing it securely, encrypting it, and making outside organizations jump through hoops to access it.

Ownership of patient data is currently as murky as personal data of other types, HIPAA notwithstanding. We can use many of the same arguments and concepts for health data that we’ve seen for other personal data. As with government data, we can hold interesting discussions about how much difference anonymization makes to ownership — do you have no right to restrict the use of your health or government data once it is supposedly anonymized?

Dr. Adrian Gropper, CTO of Patient Privacy Rights, says that the concept of “ownership” is not helpful for patient data. It is better in terms of both law and computer science to speak of authorization: who can look at the data and who grants the right to look at it. Gropper works on the open source HEART WG project, which is creating an OAuth-based system to support patient control, and which he and I have written about on the Radar site.

The corollary of this principle is that patients need repositories for their data that are easy to manage. HEART WG can tie together data in different repositories — the patient’s, the clinicians’, and others — and control the flow from one repository to another.

Finally, researchers must contact patients to explain how their data will be used and to request permission. With Internet tools, this should not be onerous for the researcher or the patient. Hey, everybody in medicine nowadays touts “patient engagement.” One is likely to get better data if one engages. So, let’s do it. And that way we can avoid the uncertain protection of anonymization or de-identification, which degrades patient data in order to render it harder to track back to an individual.

Researchers worry about request fatigue if individuals have to respond to every request manually, although I see this as a great opportunity for research projects to explain their goals and drum up public support. A number of organizations are trying to design systems to let individuals approve use of their data in advance, and I wish them the best, but all such attempts have shipwrecked on two unforgiving shoals. First is the impossibility of anticipating new research and the radically different directions it can take. Second is the trap of ontologies: who can define a useful concept such as “non-profit research” in terms strict enough to be written into computer programs? And how will the health care world agree on representations of the ontologies and produce perfectly interoperable computer programs to automate consent?

Value, ownership, and protection are difficult questions on an Internet that was designed in the 1960s and 1970s as a loose, open platform. We can fill the gaps through policy measures and technical protections based on well-grounded principles. Patients care about their data and its privacy. We can give them the control they crave and deserve.

Understanding Personal Health Data: Not All Bits Are the Same (Part 3 of 4, Government Information)

Posted on September 30, 2015 | Written By Andy Oram

Previous segments of this article (parts 1 and 2) have explored the special characteristics of various types of data shared on the Internet. This one will look at one more type of data before we turn to the health care field.

Government Information

Governments generate data during their routine activities, often in wild and unstructured ways. They have exploited this data for a long time, as some friends of mine found out a good 35 years ago when they started receiving promotions for wedding registries from local companies. They decided that the only way those companies could know they were getting married is from the town where they obtained their marriage license.

Government data offers many less exploitative uses, however; it forms a whole discipline of its own explored by such groups as the Governance Lab and the Personal Democracy Forum. Governments open data on transportation, bills and regulations, and crime and enforcement, among other things, to promote civic engagement and new businesses.

The value of such data comes from its reliability. Therefore, data that is inconsistently collected, poorly coded, outdated, or inordinately redacted reduces public confidence. Such lapses are all too common, even on the U.S. government’s celebrated data.gov site.

Joel Gurin, president and founder of the Center for Open Data Enterprise, told me that some of the most advanced federal agencies in the open data area — the Departments of Health and Human Services, Energy, Transportation, and Commerce — provide better access to their records on their own sites than on data.gov. The latter is not set up as well for finding data or getting information about its provenance, meaning, and use.

Some government data requires protection because it contains sensitive personal information. Legal battles often arise regarding whether data should be released on elected officials and employees — for instance, on police officers who were arrested for drunk driving — because the privacy rights of the official clash with the public’s right to know. De-identification is not always done properly, or succumbs to later re-identification efforts. And data can be misleading in the cases where analysts and journalists don’t understand the constraints around data collection. In addition, protection is currently decided on a rather arbitrary basis, and varies wildly from jurisdiction to jurisdiction.

For a long-range perspective on government data quality, I talked to Stefaan G. Verhulst, co-founder and Chief Research and Development Officer of the Governance Laboratory at NYU. He said, “The question is whether a government should only share data that is of high value and high quality, or whether we can benefit from a hybrid approach where the market addresses some of the current weaknesses of data. A site such as data.gov represents a long tail: some data may be of value only to a tiny set of people, but they may be willing to invest money in extracting the data from formats and repositories that are less than optimal. And hopefully, weaknesses will be rectified at the source by governments over time.”

Gurin, in his book Open Data Now (which I reviewed), calls for government outreach and partnerships with stakeholders, such as businesses that can capitalize on open data. Such partnerships would help decide what data to release and where to put resources to improve data.

One gets interesting results when asking who owns government data. The obvious answer is that it belongs to the taxpayers who paid for its collection, and by extension (because restricting it to taxpayers is unfeasible) to the public as a whole.

Nonetheless, many foreign countries and local U.S. governments copyright data. Access to such data is prohibitively expensive. Even when information is supposedly in the public domain, obscure data formats make it hard to retrieve online, and government agencies throughout the U.S. often charge exorbitant fees to people who obtain data, even when requests are granted under the Freedom of Information Act. Recent low points include resistance in Massachusetts to reforming the worst public record policies in the country, and the bizarre persecution of open government advocates by Georgia and Oregon. Unfortunately, the idea that government data should be open to all is intuitive, but far from universally accepted.

Now we have looked at four types of data in a series of articles; the next one will bring the focus back to health care.

Understanding Personal Health Data: Not All Bits Are the Same (Part 1 of 4)

Posted on September 28, 2015 | Written By Andy Oram

When people run out of new things to say in the field of health IT, they utter the canard, “Why can’t exchanging patient data be as easy as downloading a file on the Internet?” For a long time, I was equally smitten by the notion of seamless exchange, which underlies the goals of accountable care, patient-centered medical homes, big data research, and the Precision Medicine Initiative so dear to the White House. Then I began to notice that patient information differs in deep ways from arbitrary data on the Internet.

Personal health data isn’t alone in having special characteristics that make handling it fraught with dangers and complications. In this article, I’ll look at several other types of online data laden with complexity — money, personal data, media content, and government information — and draw some conclusions for how we might handle health data.

Money

I am not an early adopter by habit, even though I work in high tech. When someone announces, “Now you can pay your bills using your phone!” it sounds to me like “Now you can mow your lawn using your violin!” Certain things just don’t go together naturally. Money is not like other bits; you can’t copy it the way people casually share their photos or email messages.

Of course I endorse the idea of online payment systems. They have transformed the economies of rural communities in underdeveloped parts of the world like sub-Saharan Africa. They can be useful in the U.S. for people who can’t get credit cards or even checking accounts.

Perhaps that’s why there are at least 235 (as of the time of publication) online payment systems. But money isn’t a casual commodity. It requires coordination and control. Even the ballyhooed Bitcoin system needs checks and balances. Famously described as decentralized because many uncoordinated systems create the coins and individuals store their own, Bitcoin-like systems are actually heavily centralized around the blockchain they hold in common.

Furthermore, most people don’t feel safe storing large quantities of bitcoins on personal servers, so they end up using centralized exchanges, which in turn suffer serious security breaches, as happened to Mt. Gox and Bitstamp.

So let’s look at some special aspects of money as data.

First, money has value. Ultimately — as we have seen in the crisis of the Euro and the narrowly averted default by Greece — money’s value comes from guarantees by banks, including countries’ central banks. Money’s value is increased by the importance placed on it by the people that want to steal it from us or cheat us out of it.

Second, money has an owner. In fact, I can’t imagine money without an owner. It would be like gold bullion buried on a desert island, contributing nothing to the world economy. So, the Internet culture of sharing has no meaning for money.

Third, money must be protected. Most of us — who can — use credit cards, because they are backed by complex systems for detecting theft and fraud run by multinational corporations who can indemnify us and handle our mishaps. If we store our money outside the banking system, we lack these protections.

These three traits — value, ownership, and protection — will turn up again in each of the types of Internet content I’ll look at in upcoming installments of this article. Does a review of money on the Internet help us assess health data? Comparisons are shaky, because they are very different. But because health data is so sensitive, we might learn a lot about its protection by paying attention to how money is handled.

Bringing the Obvious to the Surface Through Analytics

Posted on May 26, 2015 | Written By Andy Oram

Analytics can play many roles, big and small, in streamlining health care. Data crunching may uncover headline-making revelations such as the role smoking plays in cancer. Or it may save a small clinic a few thousand dollars. In either case, it’s the hidden weapon of modern science.

The experience of Dr. Jordan Shlain (@drshlain) is a success story in health care analytics, one that he is taking big time with a company called HealthLoop. The new venture dazzles customers with fancy tools for tracking and measuring their customer interactions–but it all started with an orthopedic clinic and a simple question Shlain asked the staff: how many phone calls do you get each week?

Asking the right question is usually the start to a positive experience with analytics. In the clinic’s case, it wasn’t hard to find the right question because Shlain could hear the phones ringing off the hook all day. The staff told him they got some 200 calls each week, and that the load was weighing them down.

OK, the next step was to write down who called and the purpose of every call. The staff kept journals for two weeks. Shlain and his colleagues then reviewed the data and found out what was generating the bulk of the calls.

Sometimes, analytics turns up an answer so simple, you feel you should have known it all along. That’s what happened in this case.

The clinic found that most calls came from post-operative patients who were encountering routine symptoms during recovery. After certain surgeries, for instance, certain things tend to happen 6 to 9 days afterward. As if they had received instructions to do so, patients were calling during that 6-to-9-day period to ask whether their symptoms were OK and what they should do. Another set of conditions might turn up 11 to 14 days after the surgery.
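The data crunching behind a finding like this is deliberately modest; a tally along these lines (the journal entries here are invented) is essentially all it takes:

```python
from collections import Counter

# Invented call-journal entries: (days since surgery, reason for call).
calls = [
    (7, "swelling"), (8, "swelling"), (6, "swelling"),
    (9, "stiffness"), (12, "numbness"), (13, "numbness"),
    (7, "swelling"), (2, "billing"),
]

# Bucket the post-operative days to expose the clusters the clinic saw.
def bucket(day):
    if 6 <= day <= 9:
        return "days 6-9"
    if 11 <= day <= 14:
        return "days 11-14"
    return "other"

tally = Counter((bucket(day), reason) for day, reason in calls)
for (window, reason), count in tally.most_common():
    print(f"{window:>10}  {reason:<10} {count}")
```

Two weeks of journals and a frequency count: no machine learning required, just the discipline of writing the calls down and looking.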

Armed with this information, the clinic proceeded to eliminate most of their phone calls and free up their time for better work. Shlain calls the clinic’s response to patient needs “health loops,” a play on the idea of feedback loops. Around day 5 after a surgery, staff would contact the patient to warn her to look for certain symptoms during the 6-to-9-day period. They did this for every condition that tended to generate phone calls.

HealthLoop builds on this insight and attaches modern digital tools for tracking and communications. Patients are contacted through secure messaging on the device of their choice. They are provided with checklists of procedures to perform at home. There’s even a simple rating system, like the surveys you receive after taking your car in to be fixed or flying on an airline.

Patient engagement–probably the most popular application of health IT right now–is also part of HealthLoop. A dashboard warns the clinician which patients to follow up with each day, surfacing the results of risk stratification at a glance. There’s also an activity feed for each patient that summarizes what a doctor needs to know.

Analytics doesn’t have to be rocket science. But you have to know what you’re looking for, collect the data that tells you the answer, and embody the resulting insights into workflow changes and supporting technologies. With his first experiment in phone call tracking, Shlain just took the time to look. So look around your own environment and ask what obvious efficiencies analytics could turn up for you.

Open Source Electronic Health Records: Will They Support the Clinical Data Needs of the Future? (Part 1 of 2)

Posted on November 10, 2014 | Written By Andy Oram

Open source software missed out on making a major advance into health care when it was bypassed during hospitals’ recent stampede toward electronic health records, triggered over the past few years by Meaningful Use incentives. Some people blame the neglect of open source alternatives on a lack of marketing (few open source projects are set up to woo non-technical adopters), some on conservative thinking among clinicians and their administrators, and some on the readiness of the software. I decided to put aside the past and look toward the next stage of EHRs. As Meaningful Use ramps down and clinicians have to look for value in EHRs, can the open source options provide what they need?

The oncoming end of Meaningful Use payments (which never came close to covering the costs of proprietary EHRs, but nudged many hospitals and doctors to buy them) may open a new avenue to open source. Deanne Clark of DSS, which markets a VistA-based product called vxVistA, believes open source EHRs are already being discovered by institutions with tight budgets, and that as Meaningful Use reimbursements go away, open source will be even more appealing.

My question in this article, though, is whether open source EHRs will meet the sophisticated information needs of emerging medical institutions, such as Accountable Care Organizations (ACOs). Shahid Shah has suggested some of the EHR requirements of ACOs. To survive in an environment of shrinking reimbursement and pay-for-value, more hospitals and clinics will have to beef up their uses of patient data, leading to some very non-traditional uses for EHRs.

EHRs will be asked to identify high-risk patients, alert physicians to recommended treatments (the core of evidence-based medicine), support more efficient use of clinical resources, contribute to population health measures, support coordinated care, and generally facilitate new relationships among caretakers and with the patient. Users may demand a host of tools as part of the EHR's role, but I find that they reduce to two basic requirements:

  • The ability to interchange data seamlessly, a requirement for coordinated care and therefore accountable care. Developers could also hook into the data to create mobile apps that enhance the value of the EHR.

  • Support for analytics, which will support all the data-rich applications modern institutions need.
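The first requirement is easiest to see in code. As a minimal sketch, assume an EHR exposes patient data as FHIR-style JSON bundles (a common pattern for data exchange, though not one every open source EHR supports); the bundle below and its contents are hypothetical sample data. A few lines suffice to flatten the observations into tuples that a mobile app or another system could reuse:

```python
import json

# A FHIR-style bundle, as an EHR's API might return it (hypothetical sample).
bundle = json.loads("""
{
  "resourceType": "Bundle",
  "entry": [
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Systolic blood pressure"},
                  "valueQuantity": {"value": 142, "unit": "mmHg"}}},
    {"resource": {"resourceType": "Observation",
                  "code": {"text": "Hemoglobin A1c"},
                  "valueQuantity": {"value": 8.1, "unit": "%"}}}
  ]
}
""")

def extract_observations(bundle):
    """Flatten a bundle into (name, value, unit) tuples for downstream use."""
    results = []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        if resource.get("resourceType") != "Observation":
            continue  # skip non-observation resources
        quantity = resource.get("valueQuantity", {})
        results.append((resource["code"]["text"],
                        quantity.get("value"),
                        quantity.get("unit")))
    return results

for name, value, unit in extract_observations(bundle):
    print(f"{name}: {value} {unit}")
```

The point of the sketch is that seamless interchange depends on the EHR emitting structured, self-describing records in the first place; once it does, building on top of them is trivial.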

Eventually, I would also hope that EHRs accept patient-generated data, which may be stored in types and formats not recognized by existing EHRs. But the clinical application of patient-generated data is far off. Fred Trotter, a big advocate for open source software, says, “I’m dubious at best about the notion that Quantified Self data (which can be very valuable to the patients themselves) is valuable to a doctor. The data doctors want will not come from popular commercial QS devices, but from FDA-approved medical devices, which are more expensive and cumbersome.”

Some health reformers also cast doubt on the value of analytics. One developer on an open source EHR labeled the whole use of analytics to drive ACO decisions as “bull” (he actually used a stronger version of the word). He aired an opinion many clinicians hold, that good medicine comes from the old-fashioned doctor/patient relationship and giving the patient plenty of attention. In this philosophy, the doctor doesn’t need analytics to tell him or her how many patients have diabetes with complications. He or she needs the time to help the diabetic with complications keep to a treatment plan.

I find this attitude short-sighted. Analytics are proving their value now that clinicians are getting serious about using them, most notably since Medicare began penalizing hospital readmissions within 30 days of discharge. Open source EHRs should be the best of breed in this area so they can compete with the better-funded but clumsy proprietary offerings, and so that they can make a lasting contribution to better health care.
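To make the readmission measure concrete, here is a simplified sketch (with hypothetical records) of the analytics involved: flagging any admission that begins within 30 days of the same patient's previous discharge. The real Medicare measure is risk-adjusted and condition-specific, but the core query is this small once the data is structured:

```python
from datetime import date

# Hypothetical admission records: (patient_id, admit_date, discharge_date),
# sorted by admission date.
admissions = [
    ("pt-1", date(2014, 1, 3), date(2014, 1, 7)),
    ("pt-1", date(2014, 1, 25), date(2014, 1, 30)),  # 18 days after discharge
    ("pt-2", date(2014, 2, 1), date(2014, 2, 4)),
    ("pt-2", date(2014, 5, 1), date(2014, 5, 6)),    # well past 30 days
]

def thirty_day_readmissions(admissions):
    """Return (patient, admit_date) pairs for admissions that began
    within 30 days of the same patient's previous discharge."""
    last_discharge = {}
    flagged = []
    for patient, admit, discharge in admissions:
        prev = last_discharge.get(patient)
        if prev is not None and (admit - prev).days <= 30:
            flagged.append((patient, admit))
        last_discharge[patient] = discharge
    return flagged

print(thirty_day_readmissions(admissions))  # flags only pt-1's second stay
```

An EHR that stores admissions as structured records makes this a trivial report; one that buries the dates in free text makes it a research project.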

The next installment of this article looks at current support for interoperability and analytics in open-source EHRs.

Ten-year Vision from ONC for Health IT Brings in Data Gradually

Posted on August 25, 2014 | Written By Andy Oram

This is the summer of reformulation for national U.S. health efforts. In June, the Office of the National Coordinator (ONC) released its 10-year vision for achieving interoperability. The S&I Framework, a cooperative body set up by ONC, recently announced work on the vision’s goals and set up a comment forum. A conference call of the Health IT Standards Committee (HITSC) on August 20, 2014 also took up the vision statement.

It’s no news to readers of this blog that interoperability is central to delivering better health care, both for individual patients who move from one facility to another and for institutions trying to accumulate the data that can reduce costs and improve treatment. But the state of data exchange among providers, as reported at these meetings, is pretty abysmal. Despite notable advances such as Blue Button and the Direct Project, only a minority of transitions are accompanied by electronic documents.

One can’t entirely blame the technology, because many providers report having data exchange available but using it on only a fraction of their patients. But an intensive study of representative documents generated by EHRs shows that they turn what should be an uphill climb into a struggle for Everest. A Congressional request for ideas to improve health care has turned up similar complaints about inadequate databases and data exchange.

This is also a critical turning point for government efforts at health reform. The money appropriated by Congress for Meaningful Use is time-limited, and it’s hard to tell how the ONC and CMS can keep up their reform efforts without that considerable bribe to providers. (On the HITSC call, Beth Israel CIO John Halamka advised the callers to think about moving beyond Meaningful Use.) The ONC also has a new National Coordinator, who has announced a major reorganization and “streamlining” of its offices.
