
Physician Calls For Widespread Patient Data Ownership

Posted on October 26, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

At present, patients anywhere in the United States are entitled to access their medical records, but the records themselves are typically controlled by providers. New Hampshire is the only state that grants citizens legal ownership of their health information, notes Eric Topol, MD.

“That’s completely wrong. That has to get fixed,” said Topol, who spoke at the MedCity ENGAGE show last week. “It should be your data.”  In fact, he calls patient data ownership “a civil right that’s yet to be granted.”

Patient data ownership rules vary across the U.S. In many states, including Washington, Idaho, North Dakota, Minnesota, Wisconsin, Michigan, New York, Maine, Pennsylvania, and Nevada, no law was in place as of mid-2015 specifying whether patients or providers owned, or held property rights to, medical records. But in a large number of other states, including Oregon, California, Texas, Georgia and New Mexico, state law specifically provides that the hospital or physician owns the medical record.

Long before EMRs went into wide use, ownership of medical records would occasionally come into dispute, such as when a practice went out of business or a hospital was acquired. The historic lack of clear case law governing such transactions sometimes led to major legal controversies during these transitions.

Today, the stakes are even higher, contends Topol, who serves as director of the Scripps Translational Science Institute at San Diego-based Scripps Health. To realize the benefits of “individualized medicine” – Topol’s term for “precision medicine” — patients will have to control their health data, he said.

“We are going to be leaving population medicine – where it’s one size fits all — in favor of individualized medicine,” Topol told the audience. With individualized medicine, patients drive their own care, he said.

The current centralized model of health data ownership actually poses a risk to patients, Topol argues, given the ripe, financially-attractive lure that big databases pose. “We need to decentralize this data because the more it’s amassed, the more it’s going to be hacked,” he contends.

So what of Topol’s vision for “individualized medicine”? Well, here’s how I see it. Topol’s comments are interesting, but it seems to me that there’s an inherent contradiction between one half of his arguments and the other.

If by individualized medicine he means what is otherwise known as precision medicine, I'm not sure how we can pull it off without building big databases. After all, you don't gain a broad understanding of how, say, a cancer drug works without crunching numbers on thousands or millions of cases. So while giving consumers more power over their medical records makes sense, I don't see how we could avoid aggregating them to at least some degree.

On the other hand, however, it does seem absurd to me that patients should ever lack the right to retrieve all of the records from the custody of a provider, particularly if the patient alleges malpractice or some form of malfeasance. If we want patients to engage with their health, owning the documentation on the care they received strikes me as an absolutely necessary first step.

Integrating With EMR Vendors Remains Difficult, But This Must Change

Posted on October 4, 2016 | Written By

Anne Zieger

Eventually, big EMR vendors will be forced to provide a robust API that makes it easy to attach services to their core platform. While they may see it as a dilution of their value right now, in time it will become clear that they can't provide everything to everyone.

For example, it is pretty unlikely that companies like Epic and Cerner will build genomics applications, so they're going to need to connect to that functionality through an API for their users. (Check out this video with John Lynn, Chris Bradley of Mana Health and Josh Siegel of CareCloud for more background on building a usable healthcare API.)
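To make that concrete, here is a minimal sketch of what a third-party application pulling data through a vendor's API might look like, assuming a FHIR-style REST endpoint; the base URL, token, and patient ID are placeholders, not any particular vendor's actual service.

```python
import requests

# Hypothetical FHIR-style endpoint exposed through an EMR vendor's API program.
# The base URL and token are placeholders, not a real service or credential.
BASE_URL = "https://emr.example.com/fhir"
TOKEN = "example-oauth-token"

def fetch_lab_observations(patient_id: str) -> list:
    """Pull laboratory observations for one patient via the vendor API."""
    response = requests.get(
        f"{BASE_URL}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()
    # FHIR searches return a Bundle; each entry wraps one Observation resource.
    return [entry["resource"] for entry in bundle.get("entry", [])]

if __name__ == "__main__":
    for obs in fetch_lab_observations("12345"):
        print(obs.get("code", {}).get("text"), obs.get("valueQuantity"))
```

A genomics or analytics add-on would layer its own logic on top of calls like this; the point is that the heavy lifting happens behind a documented, stable interface rather than through one-off custom integration work.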

But as recent research points out, some of the vendors may be dragged kicking and screaming in that direction before they make it easy to connect to their systems. In fact, a new study by Health 2.0 concludes that smaller health IT vendors still face significant difficulties integrating with EMRs created by larger vendors.

“The complaint is true: it’s hard for smaller health tech companies to integrate their solutions with big EMR vendors,” wrote Health 2.0’s Matthew Holt on The Health Care Blog. “Most EMR vendors don’t make it easy.”

The study, which was supported by the California Health Care Foundation, surveyed more than 100 small health technology firms. The researchers found that only two EMR vendors (athenahealth and Allscripts) were viewed by smaller vendors as having a well-advertised, easy to access partner program. When it came to other large vendors, about half were happy with Epic, Cerner and GE’s efforts, while NextGen and eClinicalWorks got low marks for ease of integration, Holt reported.

To get the big vendors on board, customer pressure still seems critical at present, Holt says. Vendors reported that it helped a great deal if they had a customer who was seeking the integration. The degree to which this mattered varied, but it seemed to be most important in the case of Epic, with 70% of small vendors saying that they needed to have a client recommend them before Epic would get involved in an integration project.

But that doesn't mean it's smooth sailing from there on out. Even when the big EMR vendors did get involved with an integration project, smaller tech vendors weren't fond of many of their APIs.

More than a quarter of those using Epic and Cerner APIs rated them poorly, followed by 30% for NextGen, GE and MEDITECH and a whopping 50% for eClinicalWorks. The smaller vendors’ favorite APIs seemed to be the ones offered by athenahealth, Allscripts and McKesson. According to Holt, athenahealth’s API got the best ratings overall.

All that being said, some of the smaller vendors weren't that enthusiastic about pushing for integration with big EMR vendors at present. Of the roughly 30% who hadn't integrated with such vendors, half said it wasn't worth the effort to try, for reasons including that the technical or financial cost would be too great. Also, some of the vendors surveyed by Health 2.0 reported they were more focused on other data-gathering efforts, such as accessing wearables data.

Still, EMR vendors large and small need to change their attitude about opening up their platforms, and smaller vendors need to support them when they do. Otherwise, the industry will remain trapped by a self-fulfilling prophecy that true integration can never happen.

Healthcare Consent and its Discontents (Part 3 of 3)

Posted on May 18, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article rated the pros and cons of new approaches to patient consent and control over data. Here we’ll look at emerging risks.

Privacy solidarity

Genetics presents new ethical challenges–not just in the opportunity to change genes, but even simply in sequencing them. These risks affect not only the individual: other members of her family and ethnic group can face discrimination thanks to the genetic weaknesses revealed. Isaac Kohane said that the average person has 40 genetic markers indicating susceptibility to some disease or other. Furthermore, we sometimes disagree on what we consider a diseased condition.

Big data, particularly with genomic input, can lead to group harms, so Brent Mittelstadt called for moving beyond an individual view of privacy. Groups also have privacy needs (a topic I explored back in 1998). It's not enough for an individual to consider the effect of releasing data on his own future; he must also consider the effects on family members, members of his racial group, and others. Similarly, Barbara Evans said we have to move from self-consciousness to social consciousness. But US and European laws consider privacy and data protection only on the basis of the individual.

The re-identification bogey man

A good many references were made at the conference to the increased risk of re-identifying patients from supposedly de-identified data. Headlines are made when some researcher manages to uncover a person who thought himself anonymous (and who database curators thought was anonymous when they released their data sets). In a study conducted by a team that included speaker Catherine M. Hammack, experts admitted that there is eventually a near 100% probability of re-identifying each person's health data. The culprit in all this is the burgeoning set of data collected from people as they purchase items and services, post seemingly benign news about themselves on social media, and otherwise participate in modern life.

I think the casual predictions of the end of anonymity we hear so often are unnecessarily alarmist. The field of anonymity has progressed a great deal since Latanya Sweeney famously re-identified a patient record for Governor William Weld of Massachusetts. Re-identifications carried out since then, by Sweeney and others, have taken advantage of data that was not anonymized (people just released it with an intuitive assumption that they could not be re-identified) or that was improperly anonymized, not using recommended methods.

Unfortunately, the “safe harbor” in HIPAA (designed precisely for medical sites lacking the skills to de-identify data properly) enshrines bad practices. Still, in a HIPAA challenge cited by Ameet Sarpatwari, only two of 15,000 individuals were re-identified. The mosaic effect is still more of a theoretical weakness than an immediate threat.

I may be biased, because I edited a book on anonymization, but I would offer two challenges to people who cavalierly dismiss anonymization as a useful protection. First, if we threw up our hands and gave up on anonymization, we couldn’t even carry out a census, which is mandated in the U.S. Constitution.

Second, anonymization is comparable to encryption. We all know that computer speeds are increasing, just as are the sophistication of re-identification attacks. The first provides a near-guarantee that, eventually, our current encrypted conversations will be decrypted. The second, similarly, guarantees that anonymized data will eventually be re-identified. But we all still visit encrypted web sites and use encryption for communications. Why can’t we similarly use the best in anonymization?
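To make the parallel concrete, here is a minimal sketch (my own illustration, not drawn from any particular standard or from the book mentioned above) of one basic check an anonymization pipeline runs before releasing a data set: k-anonymity, which requires every combination of quasi-identifiers to appear at least k times.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k=5):
    """Return True if every quasi-identifier combination occurs at least k times.

    records: list of dicts, e.g. {"zip3": "021", "age_band": "40-49", ...}
    quasi_identifiers: fields that could be cross-referenced with outside data.
    """
    combos = Counter(
        tuple(record[field] for field in quasi_identifiers) for record in records
    )
    return all(count >= k for count in combos.values())

# Toy example: generalized records (3-digit ZIP, 10-year age band) pass at k=2.
sample = [
    {"zip3": "021", "age_band": "40-49", "sex": "F", "diagnosis": "asthma"},
    {"zip3": "021", "age_band": "40-49", "sex": "F", "diagnosis": "diabetes"},
    {"zip3": "945", "age_band": "30-39", "sex": "M", "diagnosis": "flu"},
    {"zip3": "945", "age_band": "30-39", "sex": "M", "diagnosis": "asthma"},
]
print(is_k_anonymous(sample, ["zip3", "age_band", "sex"], k=2))  # True
```

Real de-identification layers generalization, suppression, and formal risk measurement on top of checks like this, but even the toy version shows why properly prepared data is far harder to re-identify than the headline cases suggest.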

A new article in the Journal of the American Medical Association exposes a gap between what doctors consider adequate consent and what’s meaningful for patients, blaming “professional indifference” and “organizational inertia” for the problem. In research, the “reasonable-patient standard” is even harder to define and achieve.

Patient consent doesn’t have to go away. But it’s getting harder and harder for patients to anticipate the uses of their data, or even to understand what data is being used to match and measure them. However, precisely because we don’t know how data will be used or how patients can tolerate it, I believe that incremental steps would be most useful in teasing out what will work for future research projects.

Healthcare Consent and its Discontents (Part 2 of 3)

Posted on May 17, 2016 | Written By

Andy Oram

The previous section of this article laid out what is wrong with informed consent today. We’ll continue now to look at possible remedies.

Could we benefit from more opportunities for consent?

Donna Gitter said that the Common Rule governing research might be updated to cover de-identified data as well as personally identifiable information. The impact of this on research, of course, would be incalculable. But it might lead to more participation in research, because 72% of patients say they would like to be asked for permission before their data is shared even in de-identified form. Many researchers, such as conference speaker Liza Dawson, would rather give researchers the right to share de-identified data without consent, but put protections in place.

To link multiple data sets, according to speaker Barbara Evans, we need an iron-clad method of ensuring that the data for a single individual is accurately linked. This requirement butts up against the American reluctance to assign a single ID to a patient. The reluctance is well-founded, because tracking individuals throughout their lives can lead to all kinds of seamy abuses.

One solution would be to give each individual control over a repository where all of her data would go. That solution implies that the individual would also control each release of the data. A lot of data sets could easily vanish from the world of research, as individuals die and successors lose interest in their data. We must also remember that public health requires the collection of certain types of data even if consent is not given.

Another popular reform envisioned by health care technologists, mentioned by Evans, is a market for health information. This scenario is part of a larger movement known as Vendor Relationship Management, which I covered several years ago. There is no doubt that individuals generate thousands of dollars worth of information, in health care records and elsewhere. Speaker Margaret Foster Riley claimed that the data collected from your loyalty card by the grocer is worth more than the money you spend there.

So researchers could offer incentives to share information instead of informed consent. Individuals would probably hire brokers to check that the requested uses conform to the individuals’ ethics, and that the price offered is fair.

Giving individuals control and haggling over data makes it harder, unfortunately, for researchers to assemble useful databases. First of all, modern statistical techniques (which fish for correlations) need huge data sets. Even more troubling is that partial data sets are likely to be skewed demographically. Perhaps only people who need some extra cash will contribute their data. Or perhaps only highly-educated people. Someone can get left out.

These problems exist even today, because our clinical trials and insurance records are skewed by income, race, age, and gender. Theoretically, it could get even worse if we eliminate the waiver that lets researchers release de-identified data without patient consent. Disparities in data sets and research were heavily covered at the Petrie-Flom conference, as I discuss in a companion article.

Privacy, discrimination, and other legal regimes

Several speakers pointed out that informed consent loses much of its significance when multiple data sets can be combined. The mosaic effect adds another layer of uncertainty about what will happen to data and what people are consenting to when they release it.

Nicolas Terry pointed out that American law tends to address privacy on a sector-by-sector basis, having one law for health records, another for student records, and so forth. He seemed to indicate that the European data protection regime, which is comprehensive, would be more appropriate nowadays where the boundary between health data and other forms of data is getting blurred. Sharona Hoffman said that employers and insurers can judge applicants’ health on the basis of such unexpected data sources as purchases at bicycle stores, voting records (healthy people have more energy to get involved in politics), and credit scores.

Mobile apps notoriously open new leaks of personal data. Mobile operating systems fastidiously divide up access rights and require apps to request these rights during installation, but most of us just click Accept for everything, including things the apps have no legitimate need for, such as our contacts and calendar. After all, there's no way to deny an app one specific access right while still installing it.

And lots of these apps abuse their access to data. So we remain in a contradictory situation where certain types of data (such as data entered by doctors into records) are strongly protected, and other types that are at least as sensitive lack minimal protections. Although the app developers are free to collect and sell our information, they often promise to aggregate and de-identify it, putting them at the same level as traditional researchers. But no one requires the app developers to be complete and accurate.

To make employers and insurers pause before seeking out personal information, Hoffman suggested requiring data brokers, and those who purchase their data, to publish the rules and techniques they employ to make use of the data. She pointed to the precedent of medical tests for employment and insurance coverage, where such disclosure is necessary. But I'm sure this proposal would be fought so heavily, by those who currently carry out their data spelunking under cover of darkness, that we'd never get it into law unless some overwhelming scandal prompted extreme action. Adrian Gropper called for regulations requiring transparency in every use of health data, and for the use of open source algorithms.

Several speakers pointed out that privacy laws, which tend to cover the distribution of data, can be supplemented by laws regarding the use of data, such as anti-discrimination and consumer protection laws. For instance, Hoffman suggested extending the Americans with Disabilities Act to cover people with heightened risk of suffering from a disability in the future. The Genetic Information Nondiscrimination Act (GINA) of 2008 offers a precedent. Universal health insurance coverage won’t solve the problem, Hoffman said, because businesses may still fear the lost work time and need for workplace accommodations that spring from health problems.

Many researchers are not sure whether their use of big data–such as “data exhaust” generated by people in everyday activities–would be permitted under the Common Rule. In a particularly wonky presentation (even for this conference) Laura Odwazny suggested that the Common Rule could permit the use of data exhaust because the risks it presents are no greater than “daily life risks,” which are the keystone for applying the Common Rule.

The final section of this article will look toward emerging risks that we are just beginning to understand.

Healthcare Consent and its Discontents (Part 1 of 3)

Posted on May 16, 2016 | Written By

Andy Oram

Not only is informed consent a joke flippantly perpetrated on patients; I expect that it has inspired numerous other institutions to shield themselves from the legal consequences of misbehavior by offering similar click-through “terms of service.” We now have a society where powerful forces can wring from the rest of us the few rights we have with a click. So it’s great to see informed consent reconsidered from the ground up at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

Petrie-Flom annual 2016 conference

By no means did the speakers and audience at this conference agree on what should be done to fix informed consent (only that it needs fixing). The question of informed consent opens up a rich dialog about the goals of medical research, the relationship between researchers and patients, and what doctors have a right to do. It also raises questions for developers and users of electronic health records, such as:

  • Is it ethical to save all available data on a person?

  • If consent practices get more complex, how are the person’s wishes represented in the record?

  • If preferences for the data released get more complex, can we segment and isolate different types of data?

  • Can we find and notify patients of research results that might affect them, if they choose to be notified?

  • Can we make patient matching and identification more robust?

  • Can we make anonymization more robust?

A few of these topics came up at the conference. The rest of this article summarizes the legal and ethical topics discussed there.

The end of an era: IRBs under attack

The annoying and opaque informed consent forms we all have to sign go back to the 1970s and the creation of Institutional Review Boards (IRBs). Before that lay the wild-west era of patient relationships documented in Rebecca Skloot's famous book The Immortal Life of Henrietta Lacks.

IRBs were launched in a very different age, based on assumptions that are already being frayed and will probably no longer hold at all a few years from now:

  • Assumption: Research and treatment are two different activities. Challenge: Now they are being combined in many institutions, and the ideal of a “learning health system” will make them inextricable.

  • Assumption: Each research project takes place within the walls of a single institution, governed by its IRB. Challenge: Modern research increasingly involves multiple institutions with different governance, as I have reported before.

  • Assumption: A research project is a time-limited activity, lasting generally only about a year. Challenge: Modern research can be longitudinal and combine data sets that go back decades.

  • Assumption: The purpose for which data is collected can be specified by the research project. Challenge: Big data generally runs off of data collected for other purposes, and often has unclear goals.

  • Assumption: Inclusion criteria for each project are narrow. Challenge: Big data ranges over widely different sets of people, often included arbitrarily in data sets.

  • Assumption: Rules are based on phenotypic data: diagnoses, behavior, etc. Challenge: Genetics introduces a whole new set of risks and requirements, including the “right not to know” if testing turns up an individual’s predisposition to disease.

  • Assumption: The risks of research are limited to the individuals who participate. Challenge: As we shall see, big data affects groups as well as individuals.

  • Assumption: Properly de-identified data has an acceptably low risk of being re-identified. Challenge: Privacy researchers are increasingly discovering new risks from combining multiple data sources, a trend called the “mosaic effect.” I will dissect the immediacy of this risk later in the article.

Now that we have a cornucopia of problems, let’s look at possible ways forward.

Chinese menu consent

In the Internet age, many hope, we can provide individuals with a wider range of ethical decisions than the binary, thumbs-up-thumbs-down choice thrust before them by an informed consent form.

What if you could let your specimens or test results be used only for cancer research, or stipulate that they not be used for stem cell research, or even ask for your contributions to be withdrawn from experiments that could lead to discrimination on the basis of race? The appeal of such fine-grained consent springs from our growing realization that (as in the Henrietta Lacks case) our specimens and data may travel far. What if a future government decides to genetically erase certain racial or gender traits? Eugenics is not a theoretical risk; it has been pursued before, and not just by Nazis.

As Catherine M. Hammack said, we cannot anticipate future uses for medical research–especially in the fast-evolving area of genetics, whose possibilities alternate between exciting and terrifying–so a lot of individuals would like to draw their own lines in the sand.

I don’t personally believe we could implement such personalized ethical statements. It’s a problem of ontology. Someone has to list all the potential restrictions individuals may want to impose–and the list has to be updated globally at all research sites when someone adds a new restriction. Then we need to explain the list and how to use it to patients signing up for research. Researchers must finally be trained in the ontology so they can gauge whether a particular use meets the requirements laid down by the patient, possibly decades earlier. This is not a technological problem and isn’t amenable to a technological solution.

More options for consent and control over data will appear in the next part of this article.

When Providing a Health Service, the Infrastructure Behind the API is Equally Important

Posted on May 2, 2016 | Written By

Andy Oram

In my ongoing review of application programming interfaces (APIs) as a technical solution for offering rich and flexible services in health care, I recently ran into two companies who showed as much enthusiasm for their internal technologies behind the APIs as for the APIs themselves. APIs are no longer a novelty in health services, as they were just five years ago. As the field gets crowded, maintenance and performance take on more critical roles in offering a successful business–so let’s see how Orion Health and Mana Health back up their very different offerings.

Orion Health

This is a large analytics firm that has staked a major claim in the White House’s Precision Medicine Initiative. Orion Health’s data platform, Amadeus, addresses population health management as well as “considering how they can better tailor care for each chronically ill individual,” as put by Dave Bennett, executive vice president for Product & Strategy. “We like to say that population health is the who and precision medicine is the how.” Thus, Amadeus can harmonize a huge variety of inputs, such as how many steps a patient takes each day at home, to prevent readmissions.

Orion Health has a cloud service, a capacity for handling huge data sets such as genomes, and a selection of tools for handling such varied sources as clinical, claims, pharmacy, genetic, and consumer device or other patient-generated data. Environmental and social data are currently being added. It has more than 90 million patient records in its systems worldwide.

Patient matching links up data sets from different providers. All this data is ingested, normalized, and made accessible through APIs to authorized parties. Customers can write their own applications, visualizations, and SQL queries. Amadeus is used by the Centers for Disease Control, and many hospitals join the chorus to submit data to the CDC.

So far, Orion Health resembles some other big initiatives that major companies in the health care space are offering. I covered services from Philips in a recent article, and another site talks about GE. Bennett says that Orion Health really distinguishes itself through the computing infrastructure that drives the analytics and data access.

Many companies use a conventional relational database as their canonical data store. Relational databases are 1980s-era technology, unmatched in their robustness and sophistication in querying (through the SQL language), but becoming a bottleneck for the data sizes that health analytics deals with.

Over the past decade, every industry that needs to handle enormous, streaming sets of data has turned to a variety of data stores known collectively as NoSQL. Ironically, these are often conceptually simpler than SQL databases and have roots going much farther back in computing history (such as key/value stores). But these data stores let organizations run a critical subset of queries in real time over huge data sets. In addition, analytics are carried out by newer MapReduce algorithms and in-memory services such as Spark. As an added impetus for development, these new technologies are usually free and open source software.

Amadeus itself stores data in Cassandra, one of the most mature NoSQL data stores, and uses Spark for processing. According to Bennett, “Spark enables Amadeus to future proof healthcare organizations for long term innovation. Bringing data and analytics together in the cloud allows our customers to generate deeper insights efficiently and with increased relevancy, due to the rapidity of the analytics engine and the streaming of current data in Amadeus. All this can be done at a lower cost than traditional healthcare analytics that move the data from various data warehouses that are still siloed.” Elasticsearch is also used. In short, the third-party tools used within Orion Health are ordinary and commonly found. It is simply modern in the same way as computing facilities in other industries–così fan tutte.
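For readers who have not worked with this stack, here is a minimal sketch of the common Spark-on-Cassandra pattern, using the open source Spark Cassandra connector; the keyspace, table, column names, and threshold are hypothetical, and this is not Orion Health's actual code.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Generic Spark-on-Cassandra pattern; connection details and schema are placeholders.
spark = (
    SparkSession.builder
    .appName("readmission-risk-rollup")
    .config("spark.cassandra.connection.host", "cassandra.example.internal")
    .getOrCreate()
)

# Load device observations (e.g., daily step counts streamed from patients at home).
observations = (
    spark.read.format("org.apache.spark.sql.cassandra")
    .options(keyspace="demo_amadeus", table="device_observations")
    .load()
)

# Flag patients whose average daily steps over the last 30 days fall below a threshold,
# the kind of signal a readmission-prevention program might watch.
low_activity = (
    observations
    .filter(F.col("observation_date") >= F.date_sub(F.current_date(), 30))
    .groupBy("patient_id")
    .agg(F.avg("step_count").alias("avg_steps"))
    .filter(F.col("avg_steps") < 1000)
)

low_activity.show()
```

Running the same rollup against a single overloaded relational database would compete with transactional traffic; distributing the data in Cassandra and the computation in Spark is what lets queries like this scale to very large record sets.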

Mana Health

This company integrates device data into EHRs and other data stores. It achieved fame when it was chosen for the New York State patient portal. According to Raj Amin, co-founder and Executive Chairman, the company won over the judges with the convenient and slick tile concept in their user interface. Each tile could be clicked to reveal a deeper level of detail in the data. The company tries to serve clinicians, patients, and data analysts alike. Clients include HIEs, health systems, medical device manufacturers, insurers, and app developers.

Like Orion Health, Mana Health is very conscious of staying on the leading edge of technology. They are mobile-friendly and architect their solutions using microservices, a popular form of modular development that attempts to maximize flexibility in coding and deploying new services. On a lark, they developed a VR engine compatible with the Oculus Rift to showcase what can creatively be built on their API. Although this Rift project has no current uses, the development effort helps them stay flexible so that they can adapt to whatever new technologies come down the pike.

Because Mana Health developed their API some eighteen months ago, they pre-dated some newer approaches and standards. They plan to offer compatibility with emerging standards such as FHIR that see industry adoption. The company recently was announced as a partner in the Commonwell Alliance, a project formed by a wide selection of major EHR vendors to pursue interoperability.

To support machine learning, Mana Health stores data in an open source database called Neo4j. This is a very unusual technology called a graph database, whose history and purposes I described two years ago.

Graphs are familiar to anyone who has seen airline maps showing the flights between cities. Graphs are also common for showing social connections, such as your friends-of-friends on Facebook. In health care, as well, graphs are very useful tools. They show relationships, but in a very different way from relational databases. Graphs are better than relational databases at tracing connections between people or other entities. For instance, a team led by health IT expert Fred Trotter used Neo4J to store and query the data in DocGraph, linking primary care physicians to the specialists to which they refer patients.
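As a flavor of how such referral relationships are queried, here is a minimal sketch using the open source Neo4j Python driver; the node labels, relationship type, property names, and connection details are hypothetical, not the actual DocGraph or Mana Health schema.

```python
from neo4j import GraphDatabase

# Connection details and graph schema below are illustrative placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

FIND_TOP_SPECIALISTS = """
MATCH (pcp:Provider {npi: $npi})-[r:REFERRED_TO]->(spec:Provider)
RETURN spec.name AS specialist, spec.specialty AS specialty, r.patient_count AS patients
ORDER BY patients DESC
LIMIT 5
"""

def top_referral_partners(npi: str):
    """Return the specialists a primary care physician refers to most often."""
    with driver.session() as session:
        result = session.run(FIND_TOP_SPECIALISTS, npi=npi)
        return [record.data() for record in result]

if __name__ == "__main__":
    for row in top_referral_partners("1234567890"):
        print(row)
    driver.close()
```

The equivalent SQL would join a referral table against a provider table twice; in a graph database the relationship is a first-class object, which is what makes traversals like this cheap to express and to run.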

In their unique ways, Mana Health and Orion Health follow trends in the computing industry and judiciously choose tools that offer new forms of access to data, while being proven in the field. Although commenters in health IT emphasize the importance of good user interfaces, infrastructure matters too.

Another Quality Initiative Ahead of Its Time, From California

Posted on March 21, 2016 | Written By

Andy Oram

When people get health care–as with any other activity–they evaluate it for both cost and quality. But health care regulators have to recognize when the ingredients for quality assessment are missing. Otherwise, assessing quality becomes like the drunk who famously looked for his key under the lamplight instead of where the key actually lay. And sadly, as I read a March 4 draft of a California initiative to rate health care insurance, I find that once again the foundations for assessing quality are not in place, and we are chasing lamplights rather than the keys that will unlock better care.

The initiative I’ll discuss in this article comes out of Covered California, one of the United States’ 13 state-based marketplaces for health insurance mandated by the ACA. (All the other states use a federal marketplace or some hybrid solution.) As the country’s biggest state–and one known for progressive experiments–California is worth following to see how adept it is at promoting the universally acknowledged Triple Aim of health care.

An overview of health care quality

There’s no dearth of quality measurement efforts in health care–I gave a partial overview in another article. The Covered California draft cites many of these efforts and advises insurers to hook up with them.

Alas–there are problems with all the quality control efforts:

  • Problems with gathering accurate data (and as we’ll see in California’s case, problems with the overhead and bureaucracy created by this gathering)

  • Problems finding measures that reflect actual improvements in outcomes

  • Problems separating things doctors can control (such as follow-up phone calls) from things they can’t (lack of social supports or means of getting treatment)

  • Problems turning insights into programs that improve care.

But the biggest problem in health care quality, I believe, is the intractable variety of patients. How can you say that a particular patient with a particular combination of congestive heart failure, high blood pressure, and diabetes should improve by a certain amount over a certain period of time? How can you guess how many office visits it will take to achieve a change, how many pills, how many hospitalizations? How much should an insurer pay for this treatment?

The more sophisticated payers stratify patients, classifying them by the seriousness of their conditions. And of course, doctors have learned how to game that system. A cleverly designed study by the prestigious National Bureau of Economic Research has uncovered upcoding in the U.S.’s largest quality-based reimbursement program, Medicare Advantage. They demonstrate that doctors are gaming the system in two ways. First, as the use of Medicare Advantage goes up, so do the diagnosed risk levels of patients. Second, patients who transition from private insurance into Medicare Advantage show higher risk not seen in fee-for-service Medicare.

I don’t see any fixes in the Covered California draft to the problem of upcoding. Probably, like most government reimbursement programs, California will slap on some weighting factor that rewards hospitals with higher numbers of poor and underprivileged patients. But this is a crude measure and is often suspected of underestimating the extra costs these patients bring.

A look at the Covered California draft

Covered California certainly understands what the health care field needs, and one has to be impressed with the sheer reach and comprehensiveness of their quality plan. Among other things, they take on:

  • Patient involvement and access to records (how the providers hated that in the federal Meaningful Use requirements!)

  • Racial, ethnic, and gender disparities

  • Electronic record interoperability

  • Preventive health and wellness services

  • Mental and behavioral health

  • Pharmaceutical costs

  • Telemedicine

If there are any pet initiatives of healthcare reformers that didn’t make it into the Covered California plan, I certainly am having trouble finding them.

Being so extensive, the plan suffers from two more burdens. First, the reporting requirements are enormous–I would imagine that insurers and providers would balk simply at that. The requirements are burdensome partly because Covered California doesn’t seem to trust that the major thrust of health reform–paying for outcomes instead of for individual services–will provide an incentive for providers to do other good things. They haven’t forgotten value-based reimbursement (it’s in section 8.02, page 33), but they also insist on detailed reporting about patient engagement, identifying high-risk patients, and reducing overuse through choosing treatments wisely. All those things should happen on their own if insurers and clinicians adopt payments for outcomes.

Second, many of the mandates are vague. It’s not always clear what Covered California is looking for–let alone how the reporting requirements will contribute to positive change. For instance, how will insurers be evaluated in their use of behavioral health, and how will that use be mapped to meeting the goals of the Triple Aim?

Is rescue on the horizon?

According to a news report, the Covered California plan is “drawing heavy fire from medical providers and insurers.” I’m not surprised, given all the weaknesses I found, but I’m disappointed that their objections (as stated in the article) come from the worst possible motivation: they don’t like its call for transparent pricing. Hiding the padding of costs by major hospitals, the cozy payer/provider deals, and the widespread disparities unrelated to quality doesn’t put providers and insurers on the moral high ground.

To me, the true problem is that the health care field has not learned yet how to measure quality and cost effectiveness. There’s hope, though, with the Precision Medicine initiative that recently celebrated its first anniversary. Although analytical firms seem to be focusing on processing genomic information from patients–a high-tech and lucrative undertaking, but one that offers small gains–the real benefit would come if we “correlate activity, physiological measures and environmental exposures with health outcomes.” Those sources of patient variation account for most of the variability in care and in outcomes. Capture that, and quality will be measurable.

Streamlining Pharmaceutical and Biomedical Research in Software Agile Fashion

Posted on January 18, 2016 | Written By

Andy Oram

Medical research should not be in a crisis. More people than ever before want its products, and have the money to pay for them. More people than ever want to work in the field as well, and they’re uncannily brilliant and creative. It should be a golden era. So the myriad of problems faced by this industry–sources of revenue slipping away from pharma companies, a shift of investment away from cutting-edge biomedical firms, prices of new drugs going through the roof–must lie with the development processes used in the industry.

Like many other industries, biomedicine is contrasted with the highly successful computer industry. Although the financial prospects of this field have sagged recently (with hints of an upcoming dot-com bust similar to the early 2000s), there’s no doubt that computer people have mastered a process for churning out new, appealing products and services. Many observers dismiss the comparison between biomedicine and software, pointing out that the former has to deal much more with the prevalence of regulations, the dominance of old-fashioned institutions, and the critical role of intellectual property (patents).

Still, I find a lot of intriguing parallels between how software is developed and how biomedical research becomes products. Coding up a software idea is so simple now that it’s done by lots of amateurs, and Web services can try out and throw away new features on a daily basis. What’s expensive is getting the software ready for production, a task that requires strict processes designed and carried out by experienced professionals. Similarly, in biology, promising new compounds pop up all the time–the hard part is creating a delivery mechanism that is safe and reliable.

Generating Ideas: An Ever-Improving Environment

Software development has benefited in the past decade from an incredible degree of evolving support:

  • Programming languages that encapsulate complex processes in concise statements, embody best practices, and facilitate maintenance through modularization and support for testing

  • Easier development environments, especially in the cloud, which offer sophisticated test tools (such as ways to generate “mock” data for testing and rerun tests automatically upon each change to the code), easy deployment, and performance monitoring (a simple mock-data test appears after this list)

  • An endless succession of open source libraries to meet current needs, so that any problem faced by programmers in different settings is solved by the first wave of talented programmers that encounter it

  • Tools for sharing and commenting on code, allowing massively distributed teams to collaborate
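As a small, generic example of the mock-data testing style noted in the list above, here is a sketch using pytest; the dose-calculation function is hypothetical and purely illustrative.

```python
import pytest

def weight_based_dose(weight_kg: float, mg_per_kg: float, max_mg: float) -> float:
    """Hypothetical function under test: weight-based dose capped at a maximum."""
    return min(weight_kg * mg_per_kg, max_mg)

# "Mock" input data generated in code, so tests never rely on real patient records.
@pytest.fixture
def mock_cases():
    return [
        {"weight_kg": 20.0, "mg_per_kg": 15.0, "max_mg": 500.0, "expected": 300.0},
        {"weight_kg": 80.0, "mg_per_kg": 15.0, "max_mg": 500.0, "expected": 500.0},
    ]

def test_weight_based_dose(mock_cases):
    for case in mock_cases:
        result = weight_based_dose(case["weight_kg"], case["mg_per_kg"], case["max_mg"])
        assert result == case["expected"]
```

Tests like this rerun automatically on every change to the code in a continuous integration pipeline, giving exactly the kind of fast feedback loop that the biomedical development process largely lacks.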

Programmers have a big advantage over most fields, in that they are experts in the very skills that produce the tools they use. They have exploited this advantage over the years to make software development cheaper, faster, and more fun. Treated by most of the industry as a treasure of intellectual property, software is actually becoming a commodity.

Good software still takes skill and experience, no doubt about that. Some research has discovered that a top programmer is one hundred times as productive as a mediocre one. And in this way, the programming field also resembles biology. In both cases, it takes a lot of effort and native talent to cross the boundary from amateur to professional–and yet more than enough people have done so to provoke unprecedented innovation. The only thing holding back medical research is lack of funding–and that in turn is linked to costs. If we lowered the costs of drug development and other treatments, we’d free up billions of dollars to employ the thousands of biologists, chemists, and others striving to enter the field.

Furthermore, there are encouraging signs that biologists in research labs and pharma companies are using open source techniques as software programmers do to cut down waste and help each other find solutions faster, as described in another recent article and my series on Sage Bionetworks. If we can expand the range of what companies call “pre-competitive research” and sign up more of the companies to join the commons, innovation in biotech will increase.

On the whole, most programming teams practice agile development, which is creative, circles around a lot, and requires a lot of collaboration. Some forms of development still call for a more bureaucratic process of developing requirements, approving project plans, and so forth–you can’t take an airplane back to the hangar for a software upgrade if a bug causes it to crash into a mountain. And all those steps exist in agile development too, but within a more chaotic process. The descriptions I’ve read of drug development hint at similar serendipity and unanticipated twists.

The Chasm Between Innovation and Application

The reason salaries for well-educated software developers are skyrocketing is that going from idea to implementation is an entirely different job from idea generation.

Software that works in a test environment often wilts when exposed to real-life operating conditions. It has to deal with large numbers of requests, with ill-formed or unanticipated requests from legions of new users, with physical and operational interruptions that may result from a network glitch halfway around the world, with malicious banging from attackers, and with cost considerations associated with scaling up.

In recent years, the same developers who created great languages and development tools have put a good deal of ingenuity into tools to solve these problems as well. Foremost, as I mentioned before, are cloud offerings–Infrastructure as a Service or Platform as a Service–that take hardware headaches out of consideration. At the cost of increased complexity, cloud solutions let people experiment more freely.

In addition, a bewildering plethora of tools address every task an operations person must face: creating new instances of programs, scheduling them, apportioning resources among instances, handling failures, monitoring them for uptime and performance, and so on. You can’t count the tools built just to help operations people collect statistics and create visualizations so they can respond quickly to problems.

In medicine, what happens to a promising compound? It suddenly runs into a maze of complicated and costly requirements:

  • It must be tested on people, animals, or (best of all) mock environments to demonstrate safety.

  • Researchers must determine what dose, delivered in what medium, can withstand shipping and storage, get into the patient, and reach its target.

  • Further testing must reassure regulators and the public that the drug does its work safely and effectively, a process that involves enormous documentation.

As when deploying software, developing and testing a treatment involves much more risk and many more people than the original idea took. But software developers are making progress on their deployment problem. Perhaps better tools and more agile practices can cut down the toll taken by the various phases of pharma development. Experiments being run now include:

  • Sharing data about patients more widely (with their consent) and using big data to vastly increase the pool of potential test subjects. This is crucial because a large number of tests fail for lack of subjects.

  • Using big data also to track patients better and more quickly find side effects and other show-stoppers, as well as potential off-label uses.

  • Tapping into patient communities to determine better what products they need, run tests more efficiently, and reduce the number of subjects who drop out.

There’s hope for pharma and biomedicine. The old methods are reaching the limits of their effectiveness, as we demand ever more proof of safety and effectiveness. The medical field can’t replicate what software developers have done for themselves, but it can learn a lot from them nevertheless.

Significant Articles in the Health IT Community in 2015

Posted on December 15, 2015 | Written By

Andy Oram

Have you kept current with changes in device connectivity, Meaningful Use, analytics in healthcare, and other health IT topics during 2015? Here are some of the articles I find significant that came out over the past year.

The year kicked off with an ominous poll about Stage 2 Meaningful Use, with implications that came to a head later with the release of Stage 3 requirements. Out of 1800 physicians polled around the beginning of the year, more than half were throwing in the towel–they were not even going to try to qualify for Stage 2 payments. Negotiations over Stage 3 of Meaningful Use were intense and fierce. A January 2015 letter from medical associations to ONC asked for more certainty around testing and certification, and mentioned the need for better data exchange (which the health field likes to call interoperability) in the C-CDA, the most popular document exchange format.

A number of expert panels asked ONC to cut back on some requirements, including public health measures and patient view-download-transmit. One major industry group asked for a delay of Stage 3 till 2019, essentially tolerating a lack of communication among EHRs. The final rules, absurdly described as a simplification, backed down on nothing from patient data access to quality measure reporting. Beth Israel CIO John Halamka–who has shuttled back and forth between his Massachusetts home and Washington, DC to advise ONC on how to achieve health IT reform–took aim at Meaningful Use and several other federal initiatives.

Another harbinger of emerging issues in health IT came in January with a speech about privacy risks in connected devices by the head of the Federal Trade Commission (not an organization we hear from often in the health IT space). The FTC is concerned about the security of recent trends in what industry analysts like to call the Internet of Things, and medical devices rank high in these risks. The speech was a lead-up to a major report issued by the FTC on protecting devices in the Internet of Things. Articles in WIRED and Bloomberg described serious security flaws. In August, John Halamka wrote his own warning about medical devices, which have not yet started taking security really seriously. Smart watches are just as vulnerable as other devices.

Because so much medical innovation is happening in fast-moving software, and low-budget developers are hankering for quick and cheap ways to release their applications, in February the FDA started to chip away at its bureaucratic gauntlet by releasing guidelines that exempt from FDA regulation medical apps that have no impact on treatment, as well as apps used just to transfer data or do similarly non-transformative operations. They also released a rule for unique IDs on medical devices, a long-overdue measure that helps hospitals and researchers integrate devices into monitoring systems. Without clear and unambiguous IDs, one cannot trace which safety problems are associated with which devices. Other forms of automation may also now become possible. In September, the FDA announced a public advisory committee on devices.

Another FDA decision with a potential long-range impact was allowing 23andMe to market its genetic testing to consumers.

The Department of Health and Human Services has taken on exceedingly ambitious goals during 2015. In addition to the daunting Stage 3 of Meaningful Use, they announced a substantial increase in the use of fee-for-value, although they would still leave half of providers on the old system of doling out individual payments for individual procedures. In December, National Coordinator Karen DeSalvo announced that Health Information Exchanges (which limit themselves only to a small geographic area, or sometimes one state) would be able to exchange data throughout the country within one year. Observers immediately pointed out that the state of interoperability is not ready for this transition (and they could well have added the need for better analytics as well). HHS’s five-year plan includes the use of patient-generated and non-clinical data.

The poor state of interoperability was highlighted in an article about fees charged by EHR vendors just for setting up a connection and for each data transfer.

In the perennial search for why doctors are not exchanging patient information, attention has turned to rumors of deliberate information blocking. It’s a difficult accusation to pin down. Is information blocked by health care providers or by vendors? Does charging a fee, refusing to support a particular form of information exchange, or using a unique data format constitute information blocking? On the positive side, unnecessary imaging procedures can be reduced through information exchange.

Accountable Care Organizations are also having trouble, both because they are information-poor and because the CMS version of fee-for-value is too timid, along with other financial blows and perhaps an inability to retain patients. An August article analyzed the positives and negatives in a CMS announcement. On a large scale, fee-for-value may work. But a key component of improvement in chronic conditions is behavioral health, which EHRs are also unsuited to handle.

Pricing and consumer choice have become a major battleground in the current health insurance business. The steep rise in health insurance deductibles and copays has been justified (somewhat retroactively) by claiming that patients should take more responsibility for controlling health care costs. But the reality of health care shopping points in the other direction. A report card on state price transparency laws found the situation “bleak.” Another article shows that efforts to list prices are hampered by interoperability and other problems. One personal account of a billing disaster shows the state of price transparency today, and may be dangerous to read because it could trigger traumatic memories of your own interactions with health providers and insurers. Narrow and confusing insurance networks, as well as fragmented delivery of services, hamper doctor shopping. You may go to a doctor whom your insurance plan assures you is in its network, only to be charged outrageous out-of-network costs. Tools are often out of date or overly simplistic.

In regard to the quality ratings that are supposed to let patients make intelligent choices, a study found that four hospital rating sites give very different ratings to the same hospitals; the criteria used to rate them are inconsistent. Quality measures provided by government databases are marred by incorrect data. The American Medical Association, always disturbed by public ratings of doctors for obvious reasons, recently complained of incorrect numbers from the Centers for Medicare & Medicaid Services. In July, the ProPublica site offered a search service called the Surgeon Scorecard. One article summarized the many positive and negative reactions. The New England Journal of Medicine has called ratings of surgeons unreliable.

2015 was the year of the intensely watched Department of Defense upgrade to its health care system. One long article offered an in-depth examination of DoD options and their implications for the evolution of health care. Another article promoted the advantages of open-source VistA, an argument that was not persuasive enough for the DoD. Still, openness was one of the criteria sought by the DoD.

The remote delivery of information, monitoring, and treatment (which goes by the quaint term “telemedicine”) has been the subject of much discussion. Those concerned with this development can follow the links in a summary article to see the various positions of major industry players. One advocate of patient empowerment interviewed doctors to find that, contrary to common fears, they can offer email access to patients without becoming overwhelmed. In fact, they think it leads to better outcomes. (However, it still isn’t reimbursed.)

Laws permitting reimbursement for telemedicine continued to spread among the states. But a major battle shaped up around a ruling in Texas requiring doctors to have a pre-existing face-to-face meeting with any patient whom they want to treat remotely. The spread of telemedicine also depends on reform of state licensing laws to permit practice across state lines.

Much wailing and many tears welled up over the required transition from ICD-9 to ICD-10. The AMA, with some good arguments, suggested just waiting for ICD-11. But the transition cost much less than anticipated, making ICD-10 much less of a hot button, although it may be harmful to diagnosis.

Formal studies of EHR strengths and weaknesses are rare, so I'll mention this survey finding that EHRs aid with public health but are ungainly for the sophisticated uses required for long-term, accountable patient care. Meanwhile, half of hospitals surveyed are unhappy with their EHRs' usability and functionality, and doctors are increasingly frustrated with EHRs. Nurses complained about the technology's time demands and the eternal lack of interoperability. A HIMSS survey turned up somewhat more positive feelings.

EHRs are also expensive enough to hurt hospital balance sheets and force them to forgo other important expenditures.

Electronic health records also took a hit from the Joint Commission's Sentinel Event program. To err, it seems, is not only human but now computer-aided. A Sentinel Event Alert indicated that more errors in health IT products should be reported, claiming that many go unreported because patient harm was avoided. The FDA started checking self-reported problems on PatientsLikeMe for adverse drug events.

The ONC reported gains in patients' ability to view, download, and transmit their health information online, but found patient portals still limited. Although one article praised patient portals by Epic, Allscripts, and NextGen, an overview of studies found that patient portals are disappointing, partly because elderly patients have trouble with them. A literature review highlighted where patient portals fall short. In contrast, giving patients full access to doctors' notes increases compliance and reduces errors. HHS's Office for Civil Rights released rules underlining patients' rights to access their data.

While we’re wallowing in downers, review a study questioning the value of patient-centered medical homes.

Reuters published a warning about employee wellness programs, which are nowhere near as fair or accurate as they claim to be. They are turning into just another expression of unequal power between employer and employee, with tendencies to punish sick people.

An interesting article questioned the industry narrative about the medical device tax in the Affordable Care Act, saying that the industry is expanding robustly in the face of the tax. However, this tax is still a hot political issue.

Does anyone remember that Republican congressmen published an alternative health care reform plan to replace the ACA? An analysis finds both good and bad points in its approach to mandates, malpractice, and insurance coverage.

Early reports on use of Apple’s open ResearchKit suggested problems with selection bias and diversity.

An in-depth look at the use of devices to enhance mental activity examined where they might be useful or harmful.

A major genetic data mining effort by pharma companies and Britain’s National Health Service was announced. The FDA announced a site called precisionFDA for sharing resources related to genetic testing. A recent site invites people to upload health and fitness data to support research.

As data becomes more liquid and is collected by more entities, patient privacy suffers. An analysis of web sites turned up shocking practices, even at supposedly reputable sites like WebMD. Lax security in health care networks was addressed in a Forbes article.

Of minor interest to health IT workers, but eagerly awaited by doctors, was Congress's “doc fix” to Medicare's sustainable growth rate formula. The bill did contain additional clauses that were called significant by a number of observers, including former National Coordinator Farzad Mostashari, no less, for opening up new initiatives in interoperability, telehealth, patient monitoring, and especially fee-for-value.

Connected health took a step forward when CMS issued reimbursement guidelines for patient monitoring in the community.

A wonky but important dispute concerned whether self-insured employers should be required to report public health measures, because public health by definition needs to draw information from as wide a population as possible.

Data breaches always make lurid news, sometimes under surprising circumstances, and not always caused by health care providers. The 2015 security news was dominated by a massive breach at the Anthem health insurer.

Along with great fanfare in Scientific American for “precision medicine,” another Scientific American article covered its privacy risks.

A blog posting promoted early and intensive interactions with end users during app design.

A study found that HIT implementations hamper clinicians, but could not identify the reasons.

Natural language processing was praised for its potential to simplify data entry and to discover useful side effects and treatment issues.

CVS’s refusal to stock tobacco products was called “a major sea-change for public health” and part of a general trend of pharmacies toward whole care of the patient.

A long interview with FHIR leader Grahame Grieve described the progress of the project and the need for clinicians to take data exchange seriously. A quiet milestone was reached in October with a production version from Cerner.

Given the frequent invocation of Uber (even more than the Cheesecake Factory) as a model for health IT innovation, it’s worth seeing the reasons that model is inapplicable.

A number of hot new sensors and devices were announced, including a tiny sensor from Intel, a device from Google to measure blood sugar and another for multiple vital signs, enhancements to Microsoft products, a temperature monitor for babies, a headset for detecting epilepsy, cheap cameras from New Zealand and MIT for doing retinal scans, a smart phone app for recognizing respiratory illnesses, a smart-phone connected device for detecting brain injuries and one for detecting cancer, a sleep-tracking ring, bed sensors, ultrasound-guided needle placement, a device for detecting pneumonia, and a pill that can track heartbeats.

The medical field isn’t making extensive use yet of data collection and analysis–or uses analytics for financial gain rather than patient care–the potential is demonstrated by many isolated success stories, including one from Johns Hopkins study using 25 patient measures to study sepsis and another from an Ontario hospital. In an intriguing peek at our possible future, IBM Watson has started to integrate patient data with its base of clinical research studies.

Frustrated enough with 2015? To end on an upbeat note, envision a future made bright by predictive analytics.

Following the Spread of APIs in Health: BaseHealth’s Genomic Health Analysis

Posted on June 3, 2015 I Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Because health care has come late to the party, companies in that field have had plenty of time to see the advantages that Application Programming Interfaces (APIs) have brought to other areas of computing and commerce. BaseHealth Enterprise, which has been offering comprehensive health assessments based on a patient's genetic information and other health factors for five years through a Software as a Service (SaaS) platform, is now joining the race to APIs. The particular pressures that led to the development of their APIs make an interesting case study.

Although the concept of an API is somewhat technical and its details call for a bit of programming background, the idea driving API use is simple. We all use web sites and mobile apps to conduct business and interact, but an API allows two applications to talk to each other directly, serving as a pipe for information transfer. Crucial tasks can thus be automated and run on a routine basis. BaseHealth modestly suggests in their press release that their API “marks the first time in human history that genomic data is on-call for developers across the globe.”
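To make that abstraction concrete, here is a minimal sketch of one application pulling data from another over an API on a routine schedule. The endpoint URL, response fields, and polling interval are assumptions for illustration only, not BaseHealth's actual interface.

```python
# A minimal sketch of routine, automated use of an API: one application polls
# another for data on a schedule. The URL and response fields are hypothetical.
import json
import time
import urllib.request

API_URL = "https://api.example-health.org/v1/assessments/latest"  # hypothetical endpoint

def fetch_latest_assessment() -> dict:
    """Request the latest assessment from the hypothetical API and parse the JSON reply."""
    with urllib.request.urlopen(API_URL) as response:
        return json.loads(response.read().decode("utf-8"))

def poll_forever(interval_seconds: int = 3600) -> None:
    """Automate the task: fetch once an hour and hand the result to local code."""
    while True:
        assessment = fetch_latest_assessment()
        print("Received assessment:", assessment.get("summary", "<no summary field>"))
        time.sleep(interval_seconds)

if __name__ == "__main__":
    poll_forever()
```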

Example of a request for sleep apnea information

I talked last week to BaseHealth’s CEO Prakash Menon and to Hossein Fakhrai-Rad, founder and Chief Scientific Officer. They offer five basic services, all based on evaluating the genomic and phenomic (observed) data from a patient. A developer can call for such information as:

- The patient's risk for a particular common complex disease, along with risk factors that make it more likely and recommended lifestyle changes (a rough sketch of what such an answer might look like follows this list)
- The likely effectiveness of a particular drug for a condition, given the patient's genetic makeup
- Likely patient responses to various nutrients
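To give a feel for the first of these services, here is a rough sketch of the kind of answer a disease-risk call might return: a score plus the contributing factors and recommended lifestyle changes mentioned above. The field names and structure are my own illustration, not BaseHealth's documented schema.

```python
# Illustrative data shape for a disease-risk answer, based on the services
# described above. Field names are hypothetical, not BaseHealth's actual schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskAssessment:
    disease: str                                                 # e.g. "sleep apnea"
    risk_score: float                                            # estimated risk, 0.0 to 1.0
    risk_factors: List[str] = field(default_factory=list)        # factors raising the risk
    lifestyle_changes: List[str] = field(default_factory=list)   # recommended mitigations

def summarize(assessment: RiskAssessment) -> str:
    """Produce the kind of one-line summary a clinician-facing app might display."""
    factors = ", ".join(assessment.risk_factors) or "none reported"
    return f"{assessment.disease}: {assessment.risk_score:.0%} risk; top factors: {factors}"

example = RiskAssessment(
    disease="sleep apnea",
    risk_score=0.18,
    risk_factors=["family history", "elevated BMI"],
    lifestyle_changes=["weight management", "sleep study referral"],
)
print(summarize(example))
```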

Genomic testing is done by companies such as Illumina. Different testing services make very different judgments about the significance of various genes, but there are now evaluation sites (which perform a kind of crowdsourcing to accumulate information validating these judgments) to offer more confidence in the tests. BaseHealth accepts this data along with information about family history, lifestyle, and the patient’s environment to make useful recommendations about handling diabetes, cancer, stroke, gout, sleep apnea, and many other common conditions.

Previously, many health plans and hospitals were interested in the BaseHealth SaaS platform, but did not want to adopt a new application and UI into their existing systems because of the cost of implementation and the time it would take to train healthcare professionals on a new system. The BaseHealth API allows developers at these organizations to use specific features of BaseHealth’s comprehensive health assessment without having to overhaul their existing systems.

Furthermore, large genetic sequencing results are time-consuming and expensive to transmit, and it was wasteful to store them twice (at the provider and at BaseHealth). Some countries also prohibit the transfer of genetic data outside the country’s border for privacy reasons.

BaseHealth’s APIs therefore allow a totally different interaction model. Data can be stored by health care providers and patients, then combined by an application (usually run at the provider’s site) and submitted as a JSON data structure to the API. Only the specific information required by the API needs to be transferred. It is conceivable that apps could be developed for patient use as well. However, because BaseHealth does not offer direct-to-consumer genetic testing, they have none of the problems that 23andMe suffered.
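Here is a minimal sketch of that interaction model under stated assumptions: provider-held genomic markers and phenotypic data are combined into a single JSON structure and posted to an assessment endpoint. The endpoint path, header names, and payload fields are hypothetical illustrations, not BaseHealth's published API.

```python
# Sketch of the submission model described above: the provider combines locally
# stored genomic markers and phenotypic data into one JSON payload and posts only
# what the assessment needs. Endpoint, headers, and field names are hypothetical.
import json
import urllib.request

API_URL = "https://api.example-basehealth.com/v1/assessments/sleep-apnea"  # hypothetical

def request_assessment(genomic_markers: dict, phenotype: dict, api_key: str) -> dict:
    """POST the combined data; the full genome never leaves the provider's systems."""
    payload = json.dumps({
        "genomic_markers": genomic_markers,   # relevant variants only, not the whole sequence
        "phenotype": phenotype,               # family history, lifestyle, environment
    }).encode("utf-8")
    request = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json", "X-Api-Key": api_key},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    result = request_assessment(
        genomic_markers={"rs12345": "A/G"},   # placeholder variant identifier
        phenotype={"age": 52, "bmi": 31.2, "family_history": ["sleep apnea"]},
        api_key="YOUR_KEY_HERE",
    )
    print(result)
```

The point is the shape of the exchange rather than the exact fields: only the data the assessment requires crosses the wire, which also addresses the storage-duplication and cross-border concerns noted above.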

In a field where many vendors scrutinize and limit access to APIs, it's important to note that BaseHealth's API is available for all to use: there is no gateway to get through, only a short registration process in which BaseHealth collects a developer's email address. One can submit 1,000 requests each month for free (making participation easy for small providers) and then pay a small fee for further requests.

APIs hold the promise to streamline health care just as they have reduced information friction in other industries. The BaseHealth experiment illustrates why an API is useful and how it can alter the business of health care.