
When Providing a Health Service, the Infrastructure Behind the API is Equally Important

Posted on May 2, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

In my ongoing review of application programming interfaces (APIs) as a technical solution for offering rich and flexible services in health care, I recently ran into two companies that showed as much enthusiasm for their internal technologies behind the APIs as for the APIs themselves. APIs are no longer a novelty in health services, as they were just five years ago. As the field gets crowded, maintenance and performance take on more critical roles in building a successful business–so let’s see how Orion Health and Mana Health back up their very different offerings.

Orion Health

This is a large analytics firm that has staked a major claim in the White House’s Precision Medicine Initiative. Orion Health’s data platform, Amadeus, addresses population health management as well as “considering how they can better tailor care for each chronically ill individual,” as put by Dave Bennett, executive vice president for Product & Strategy. “We like to say that population health is the who and precision medicine is the how.” Thus, Amadeus can harmonize a huge variety of inputs, such as how many steps a patient takes each day at home, to prevent readmissions.

Orion Health has a cloud service, a capacity for handling huge data sets such as genomes, and a selection of tools for handling such varied sources as clinical, claims, pharmacy, genetic, and consumer device or other patient-generated data. Environmental and social data are currently being added. It has more than 90 million patient records in its systems worldwide.

Patient matching links up data sets from different providers. All this data is ingested, normalized, and made accessible through APIs to authorized parties. Customers can write their own applications, visualizations, and SQL queries. Amadeus is used by the Centers for Disease Control, and many hospitals submit their data to the CDC through it.

So far, Orion Health resembles some other big initiatives that major companies in the health care space are offering. I covered services from Philips in a recent article, and another site talks about GE. Bennett says that Orion Health really distinguishes itself through the computing infrastructure that drives the analytics and data access.

Many companies use a conventional relational database as their canonical data store. Relational databases are 1980s-era technology, unmatched in their robustness and sophistication in querying (through the SQL language), but they are becoming a bottleneck for the data sizes that health analytics deals with.

Over the past decade, every industry that needs to handle enormous, streaming sets of data has turned to a variety of data stores known collectively as NoSQL. Ironically, these are often conceptually simpler than SQL databases and have roots going much farther back in computing history (such as key/value stores). But these data stores let organizations run a critical subset of queries in real time over huge data sets. In addition, analytics are carried out by newer MapReduce algorithms and in-memory services such as Spark. As an added impetus for development, these new technologies are usually free and open source software.
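The map/reduce pattern mentioned above can be sketched in a few lines of plain Python. This is a toy illustration of the concept, not Orion Health’s actual pipeline: a map step emits key/value pairs from raw records, and a reduce step aggregates per key–here totaling hypothetical daily step counts per patient.

```python
from collections import defaultdict

# Hypothetical raw device readings: (patient_id, date, steps).
readings = [
    ("patient-1", "2016-05-01", 4200),
    ("patient-2", "2016-05-01", 8100),
    ("patient-1", "2016-05-02", 5300),
]

def map_step(records):
    """Map: emit a (key, value) pair for each raw record."""
    for patient_id, _date, steps in records:
        yield patient_id, steps

def reduce_step(pairs):
    """Reduce: aggregate all values that share a key."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

totals = reduce_step(map_step(readings))
# totals == {"patient-1": 9500, "patient-2": 8100}
```

In a real cluster the map and reduce steps run in parallel across many machines, which is what lets this simple pattern scale to genomic-sized data sets.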

Amadeus itself stores data in Cassandra, one of the most mature NoSQL data stores, and uses Spark for processing. According to Bennett, “Spark enables Amadeus to future proof healthcare organizations for long term innovation. Bringing data and analytics together in the cloud allows our customers to generate deeper insights efficiently and with increased relevancy, due to the rapidity of the analytics engine and the streaming of current data in Amadeus. All this can be done at a lower cost than traditional healthcare analytics that move the data from various data warehouses that are still siloed.” Elasticsearch is also used. In short, the third-party tools used within Orion Health are ordinary and commonly found. It is simply modern in the same way as computing facilities in other industries–everyone does it this way.

Mana Health

This company integrates device data into EHRs and other data stores. It achieved fame when it was chosen for the New York State patient portal. According to Raj Amin, co-founder and Executive Chairman, the company won over the judges with the convenient and slick tile concept in their user interface. Each tile could be clicked to reveal a deeper level of detail in the data. The company tries to serve clinicians, patients, and data analysts alike. Clients include HIEs, health systems, medical device manufacturers, insurers, and app developers.

Like Orion Health, Mana Health is very conscious of staying on the leading edge of technology. They are mobile-friendly and architect their solutions using microservices, a popular form of modular development that attempts to maximize flexibility in coding and deploying new services. On a lark, they developed a VR engine compatible with the Oculus Rift to showcase what can creatively be built on their API. Although this Rift project has no current uses, the development effort helps them stay flexible so that they can adapt to whatever new technologies come down the pike.

Because Mana Health developed their API some eighteen months ago, they predated some newer approaches and standards. They plan to offer compatibility with emerging standards, such as FHIR, as those standards see industry adoption. The company was recently announced as a partner in the CommonWell Health Alliance, a project formed by a wide selection of major EHR vendors to pursue interoperability.
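FHIR exchanges data as “resources” serialized to JSON (or XML) over plain HTTP, which is much of why it appeals to developers. A minimal sketch of what a FHIR Patient resource looks like–the identifiers and values here are hypothetical, and a real server would serve this from a REST endpoint such as `/Patient/123`:

```python
import json

# A minimal FHIR Patient resource with made-up demographic data.
patient = {
    "resourceType": "Patient",
    "id": "123",
    "name": [{"family": "Rivera", "given": ["Ana"]}],
    "gender": "female",
    "birthDate": "1972-04-09",
}

# Serialize as a client would send it, then parse as a server would.
body = json.dumps(patient)
parsed = json.loads(body)
# parsed["resourceType"] == "Patient"
```

Because every FHIR resource declares its `resourceType` and follows a published schema, two systems that have never seen each other’s code can still agree on what a Patient is–exactly the interoperability problem the standard targets.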

To support machine learning, Mana Health stores data in an open source database called Neo4j. This is a very unusual technology called a graph database, whose history and purposes I described two years ago.

Graphs are familiar to anyone who has seen airline maps showing the flights between cities. Graphs are also common for showing social connections, such as your friends-of-friends on Facebook. In health care, as well, graphs are very useful tools. They show relationships, but in a very different way from relational databases; graphs are better than relational databases at tracing connections between people or other entities. For instance, a team led by health IT expert Fred Trotter used Neo4j to store and query the data in DocGraph, linking primary care physicians to the specialists to whom they refer patients.
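Referral relationships like those in DocGraph are a natural fit for graph traversal. A toy sketch in plain Python, with an adjacency map standing in for a real graph database such as Neo4j (the physician names are invented):

```python
# Referral edges: each physician maps to the specialists they refer to.
referrals = {
    "dr_patel": ["dr_wong", "dr_ellis"],
    "dr_wong": ["dr_ellis", "dr_novak"],
    "dr_ellis": [],
    "dr_novak": [],
}

def reachable_specialists(pcp, graph):
    """Follow referral edges transitively from one physician."""
    seen, stack = set(), list(graph.get(pcp, []))
    while stack:
        doc = stack.pop()
        if doc not in seen:
            seen.add(doc)
            stack.extend(graph.get(doc, []))
    return seen

# Every specialist reachable from dr_patel via one or more referrals:
# reachable_specialists("dr_patel", referrals)
# == {"dr_wong", "dr_ellis", "dr_novak"}
```

The same multi-hop question in a relational database would require a chain of self-joins (or a recursive query) whose cost grows with each hop, which is precisely where graph stores earn their keep.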

In their unique ways, Mana Health and Orion Health follow trends in the computing industry and judiciously choose tools that offer new forms of access to data, while being proven in the field. Although commenters in health IT emphasize the importance of good user interfaces, infrastructure matters too.

Our Uncontrolled Health Care Costs Can Be Traced to Data and Communication Failures (Part 2 of 2)

Posted on April 13, 2016 | Written by Andy Oram


The previous section of this article provided whatever detail I could find on the costs of poor communications and data exchange among health care providers. But in truth, it’s hard to imagine the toll taken by communications failures beyond certain obvious consequences, such as repeated tests and avoidable medical errors. One has to think about how the field operates and what we would be capable of with proper use of data.

As patients move from PCP to specialist, from hospital to rehab facility, and from district to district, their providers need not only discharge summaries but intensive coordination to prevent relapses. Our doctors are great at fixing a diabetic episode or heart-related event. Where we fall down is on getting the patient the continued care she needs, ensuring she obtains and ingests her medication, and encouraging her to make the substantial life-style changes that can prevent recurrences. Modern health really is all about collaboration–but doctors are decades behind the times.

Clinicians were largely unprepared to handle the new patients brought to them by the Affordable Care Act. Examining the impact of new enrollees, who “have higher rates of disease and received significantly more medical care,” an industry spokesperson said, “The findings underscore the need for all of us in the health care system, and newly insured consumers, to work together to make sure that people get the right health care service in the right care setting and at the right time…Better communication and coordination is needed so that everyone understands how to avoid unnecessary emergency room visits, make full use of primary care and preventive services and learn how to properly adhere to their medications.” Just where the health providers fall short.

All these failures to communicate may explain the disappointing performance of patient-centered medical homes and Accountable Care Organizations. While many factors go into the success or failure of such complex practices, a high rate of failure suggests that they’re not really carrying out the coordinated care they were meant to deliver. Naturally, problems persist in getting data from one vendor’s electronic health record to another.

Urgent care clinics, and other alternative treatment facilities offered in places such as pharmacies, can potentially lower costs, but not if the regular health system fails to integrate them.

Successes in coordinated care show how powerful it can be. Even so simple a practice as showing medical records to patients can improve care, but most clinicians still deny patients access to their data.

One care practice drastically lowered ER admissions through a notably low-tech policy–referring their patients to a clinic for follow-up care. This is only the beginning of what we could achieve. If modern communications were in place, hospitals would be linked so that a CDC warning could go to all of them instantly. And if clinicians and their record systems were set up to handle patient-generated data, they could discover a lot more about the patients and monitor behavior change.

How are the hospitals and clinics responding to this crisis and the public pressure to shape up? They push back as if it were not their problem. They claim they are moving toward better information sharing and teamwork, but never get there.

One of their favorite gambits is to ask the government to reward them for achieving interoperability 90 days out of the year. They make this request with no groveling, no tears of shame, no admission that they have failed in their responsibility to meet reasonable goals set seven years ago. If I delivered my projects only 25% of the time, I’d have trouble justifying myself to my employer, especially if I received my compensation plan seven years ago. Could the medical industry imagine that it owes us a modicum of effort?

Robert Schultz, a writer and entrepreneur in health care, says, “Underlying the broken communications model is a lack of empathy for the ultimate person affected–the patient. Health care is one of the few industries where the user is not necessarily the party paying for the product or service. Electronic health records and health information exchanges are designed around the insurance companies, accountable care organizations, or providers, instead of around understanding the challenges and obstacles that patients face on a daily basis. (There are so many!) The innovators who understand the role of the patient in this new accountable care climate will be winners. Those who suffer from the burden of legacy will continue to see the same problems and will become eclipsed by other organizations who can sustain patient engagement and prove value within accountable care contracts.”

Alternative factors

Of course, after such a provocative accusation, I should consider the other contributors that are often blamed for increasing health care costs.

An aging population

Older people have more chronic diseases, a trend that is straining health care systems from Cuba to Japan. This demographic reality makes intelligent data use even more important: remote monitoring for chronic conditions, graceful care transitions, and patient coordination.

The rising cost of drugs

Dramatically increasing drug prices are certainly straining our payment systems. Doctors who took research seriously could be pushing back against patient requests for drugs that work more often in TV ads than in real life. Doctors could look at holistic pain treatments such as yoga and biofeedback, instead of launching the worst opiate addiction crisis America has ever had.

Government bureaucracy

This seems to be a condition of life we need to deal with, like death and taxes. True, the Centers for Medicare & Medicaid Services (CMS) keeps adding requirements for data to report. But much of it could be automated if clinical settings adopted modern programming practices. Furthermore, this data appears to be a burden only because it isn’t exploited. Most of it is quite useful, and it just takes agile organizations to query it.

Intermediaries

Reflecting the Byzantine complexity of our payment systems, a huge number of middlemen–pharmacy benefits managers, medical billing clearinghouses, even the insurers themselves–enter the system, each taking its cut of the profits. Single-payer insurance has long been touted as a solution, but I’d rather push for better and cheaper treatments than attack the politically entrenched payment system.

Under-funded public health

Poverty, pollution, stress, and other external factors have huge impacts on health. This problem isn’t about clinicians, of course; it’s about all of us. But clinicians could be doing more to document these factors and intervene to improve them.

Clinicians like to point to barriers in their way of adopting information-based reforms, and tell us to tolerate the pace of change. But like the rising seas of climate change, the bite of health care costs will not tolerate complacency. The hard part is that merely wagging fingers and imposing goals–the ONC’s primary interventions–will not produce change. I think that reform will happen in pockets throughout the industry–such as the self-insured employers covered in a recent article–and eventually force incumbents to evolve or die.

The precision medicine initiative, and numerous databases being built up around the country with public health data, may contribute to a breakthrough by showing us the true quality of different types of care, and helping us reward clinicians fairly for treating patients of varying needs and risk. The FHIR standard may bring electronic health records in line. Analytics, currently a luxury available only to major health conglomerates, will become more commoditized and reach other providers.

But clinicians also have to do their part, and start acting like the future is here now. Those who make a priority of data sharing and communication will set themselves up for success long-term.

Our Uncontrolled Health Care Costs Can Be Traced to Data and Communication Failures (Part 1 of 2)

Posted on April 12, 2016 | Written by Andy Oram


A host of scapegoats, ranging from the Affordable Care Act to unscrupulous pharmaceutical companies, have been blamed for the rise in health care costs that are destroying our financial well-being, our social fabric, and our political balance. In this article I suggest a more appropriate target: the inability of health care providers to collaborate and share information. To some extent, our health care crisis is an IT problem–but with organizational and cultural roots.

It’s well known that large numbers of patients have difficulty with costs, and that employees’ share of the burden is rising. We’re going to have to update the famous Rodney Dangerfield joke:

My doctor said, “You’re going to be sick.” I said I wanted a second opinion. He answered, “OK, you’re going to be poor too.”

Most of us know about the insidious role of health care costs in holding down wages, in the fight by Wisconsin Governor Scott Walker over pensions that tore the country apart, in crippling small businesses, and in narrowing our choice of health care providers. Not all realize, though, that the crisis is leaching through the health care industry as well, causing hospitals to fail, insurers to push costs onto subscribers and abandon the exchanges where low-income people get their insurance, co-ops to close, and governments to throw people off of subsidized care, threatening the very universal coverage that the ACA aimed to achieve.

Lessons from a ground-breaking book by T.R. Reid, The Healing of America, suggest that we’re undergoing a painful transition that every country has traversed to achieve a rational health care system. Like us, other countries started by committing themselves to universal health care access. This then creates the pressure to control costs, as well as the opportunities for coordination and economies of scale that eventually institute those controls. Solutions will take time, but we need to be smart about where to focus our efforts.

Before even the ACA, the 2009 HITECH act established goals of data exchange and coordinated patient care. But seven years later, doctors still lag in:

  • Coordinating with other providers treating the patients.

  • Sending information that providers need to adequately treat the patients.

  • Basing treatment decisions on evidence from research.

  • Providing patients with their own health care data.

We’ll look next at the reports behind these claims, and at the effects of the problems.

Why doctors don’t work together effectively

A recent report released by the ONC, and covered by me in a recent article, revealed the poor state of data sharing, after decades of Health Information Exchanges and four years of Meaningful Use. Health IT observers expect interoperability to continue being a challenge, even as changes in technology, regulations, and consumer action push providers to do it.

If merely exchanging documents is so hard–and often unachieved–patient-focused, coordinated care is clearly impossible. Integrating behavioral care to address chronic conditions will remain a fantasy.

Evidence-based medicine is also more of an aspiration than a reality. Research is not always trustworthy, but we must have more respect for the science than hospitals were found to have in a recent GAO report. They fail to collect data either on the problems leading to errors or on the efficacy of solutions. There are incentive programs from payers, but no one knows whether they help. Doctors are still ordering far too many unnecessary tests.

Many companies in the health analytics space offer services that can bring more certainty to the practice of medicine, and I often cover them in these postings. Although increasingly cited as a priority, analytical services are still adopted by only a fraction of health care providers.

Patients across the country are suffering from disrupted care as insurers narrow their networks. It may be fair to force patients to seek less expensive providers–but not when all their records get lost during the transition. This is all too likely in the current non-interoperable environment. Of course, redundant testing and treatment errors caused by ignorance could erase the gains of going to low-cost providers.

Some have bravely tallied up the costs of waste and lack of care coordination in health care. Some causes, such as fraud and price manipulation, are not attributable to the health IT failures I describe. But an enormous chunk of costs directly implicate communications and data handling problems, including administrative overhead. The next section of this article will explore what this means in day-to-day health care.

Randomized Clinical Trial Validates BaseHealth’s Predictive Analytics

Posted on March 11, 2016 | Written by Andy Oram


One of the pressing concerns in health care is the validity of medical and health apps. Because health is a 24-hour-a-day, 365-day-a-year concern, people can theoretically overcome many of their health problems by employing apps that track, measure, report, and encourage them in good behavior. But which ones work? Doctors are understandably reluctant to recommend apps–and insurers to cover them–without validation.

So I’ve been looking at the scattered app developers who have managed to find the time and money for randomized clinical studies. One recent article covered two studies showing the value of a platform that provided the basis for Twine Health. Today I’ll look at BaseHealth, whose service and API I covered last year.

BaseHealth’s risk assessment platform is used by doctors and health coaches to create customized patient health plans. According to CEO Prakash Menon, “Five to seven people out of 1,000, for instance, will develop Type II diabetes each year. Our service allows a provider to focus on those five to seven.” The study that forms the basis for my article describes BaseHealth’s service as “based on an individual’s comprehensive information, including lifestyle, personal information, and family history; genetic information (genotyping or full genome sequencing data), if provided, is included for cumulative assessment.” (p. 1) BaseHealth has trouble integrating EHR data, because transport protocols have been standardized but semantics (what field is used to record each bit of information) have not.

BaseHealth analytics are based on clinical studies whose validity seems secure: they check, for instance, whether the studies are reproducible, whether their sample sizes are adequate, whether the proper statistical techniques were used, etc. To determine each patient’s risk, BaseHealth takes into account factors that the patient can’t control (such as family history) as well as factors that he can. These are all familiar: cholesterol, BMI, smoking, physical activity, etc.

Let’s turn to the study that I read for this article. The basic question the study tries to answer is, “How well does BaseHealth predict that a particular patient might develop a particular health condition?” This is not really feasible for a study, however, because the risk factors leading to diabetes or lung cancer can take decades to develop. So instead, the study’s authors took a shortcut: they asked interviewers to take family histories and other data that the authors called “life information” without telling the interviewers what conditions the patients had. Then they ran the BaseHealth analytics and compared results to the patients’ actual, current conditions based on their medical histories. They examined the success of risk assignment for three conditions: coronary artery disease (CAD), Type 2 diabetes (T2D), and hypertension (HTN).

The patients chosen for the study had high degrees of illness: “43% of the patients had an established diagnosis of CAD, 22% with a diagnosis of T2D and 70% with a diagnosis of HTN.” BaseHealth identified even more patients as being at risk: 74.6% for CAD, 66.7% for T2D, and 77% for HTN. It makes sense that the BaseHealth predictions were greater than actual incidence of the diseases, because BaseHealth is warning of potential future disease as well.

BaseHealth assigned each patient to a percentile chance of getting the disease. For instance, some patients were considered 50-75% likely to develop CAD.

The study used 99 patients, 12 of whom had to be dropped. Although a larger sample would be better, the results were still impressive.

The study found a “robust correlation” between BaseHealth’s predictions and the patients’ medical histories. The higher the risk, the more BaseHealth was likely to match the actual medical history. Most important, BaseHealth had no false negatives. If it said a patient’s risk of developing a disease was less than 5%, the patient didn’t have the disease. This is important because you don’t want a filter to leave out any at-risk patients.
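The no-false-negatives property is easy to state precisely: among patients assigned a risk below the 5% threshold, none actually had the disease. A sketch of that check on hypothetical predictions (these numbers are invented for illustration, not taken from the study):

```python
# Hypothetical (risk_percent, has_disease) pairs for one condition.
predictions = [
    (2.0, False),
    (4.5, False),
    (30.0, True),
    (62.0, True),
    (55.0, False),   # a false positive: high risk, no disease
]

THRESHOLD = 5.0  # the low-risk cutoff discussed in the study

# A false negative is a patient flagged low-risk who has the disease.
false_negatives = [
    (risk, sick) for risk, sick in predictions
    if risk < THRESHOLD and sick
]
# An empty list reproduces the study's headline result.
# false_negatives == []
```

Note the asymmetry: a screening tool can tolerate some false positives (flagged patients merely get more attention), but false negatives mean at-risk patients are silently dropped from the care plan.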

I have a number of questions about the article: how patients break down by age, race, and other demographics, for instance. There was also an intervention phase in the study: some patients took successful measures to reduce their risk factors. The relationship of this intervention to BaseHealth, however, was not explored in the study.

Although not as good as a longitudinal study with a large patient base, the BaseHealth study should be useful to doctors and insurers. It shows that clinical research of apps is feasible. Menon says that a second study is underway with a larger group of subjects, looking at risk of stroke, breast cancer, colorectal cancer, and gout, in addition to the three diseases from the first study. A comparison of the two studies will be interesting.

Uncovering Hidden Hospital Resources: Hospital IQ Shows that the First Resource is Data

Posted on January 4, 2016 | Written by Andy Oram


We’ve become accustomed to the results that data mining has produced in fields ranging from retail to archeology to climate change. According to Rich Krueger, CEO of Hospital IQ, the health care industry is also ripe with data that could be doing a lot more for strategic operations and care.

Many companies (perhaps even too many) already offer analytics for grouping patients into risk pools and predicting the most unwanted events, such as readmissions within 30 days of a hospital discharge. According to Krueger, Hospital IQ is the first company offering analytics to improve operations. From a policy standpoint, the service has three benefits:

  • Improving operations can significantly lower the cost of providing health care, and the savings can be passed on to payers and consumers.
  • Better operational efficiency means better outcomes. For instance, if the ER is staffed properly and the hospital has enough staff resources to meet each patient’s needs, they will get the right treatment faster. This means patients will recover sooner and spend less time in the hospital.
  • Hospital IQ’s service is an intriguing example of the creative use of clinical data, inspiring further exploration of data analysis in health care.

One of the essential problems addressed by Hospital IQ is unexpected peaks in demand, which strain emergency rooms, ICUs, and inpatient units, and ultimately can contribute to poor patient outcomes. The mirror image of this problem is unexpected troughs in usage, where staff are underutilized, at a cost to the health care system.

“There’s no reason for surprise,” says Krueger. While it is difficult to predict demand for any given day, it is possible to forecast upper and lower boundaries for demand in a given week. Hospital staff often have an intuitive sense of this. Krueger says with his software, they can mine clinical data to make these forecasts based on actual live data.
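The idea of forecasting a band rather than a single number can be sketched with simple percentiles over historical daily census counts. This is a toy illustration with invented numbers, not Hospital IQ’s actual method, which presumably uses richer statistical models:

```python
# Hypothetical daily inpatient census for recent comparable weeks.
daily_census = [212, 198, 225, 240, 205, 190, 231, 218, 244, 201,
                227, 236, 195, 222, 209, 233, 248, 215, 228, 207]

def percentile(data, p):
    """Nearest-rank percentile: tiny and dependency-free."""
    ordered = sorted(data)
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

# Plan staffing to cover the middle of the distribution rather than
# a single point forecast that will usually be wrong.
low, high = percentile(daily_census, 10), percentile(daily_census, 90)
```

Staffing to the band `[low, high]` accepts that any given day is unpredictable while exploiting the fact that the week’s range is not–which is exactly the intuition Krueger describes hospital staff already having.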

There are firms that make sensors that can tell you when a bed is occupied, when someone has come to clean the room or visit a patient, and so forth. The interesting thing about Hospital IQ is that it does its work without needing to install any special equipment or IT builds. Its inputs are the everyday bits of data that every hospital already collects and stores in its electronic record systems: when someone enters the ED, when they are seen, where they are transferred, when surgery started and ended. Telemetry beds for more at-risk patients generate information that is also used for the analytics.

Hospital IQ uses mature operations techniques such as queuing theory, popularized over the decades since the age of W. Edwards Deming. Even though EHRs differ, Krueger has discovered that it’s not hard to retrieve the necessary data from them. “You tell us where the data is and we’ll get it.”
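Queuing theory itself is compact enough to show. The sketch below implements the classic Erlang C formula for an M/M/c queue, which estimates the probability that an arrival has to wait for a server (here, a staffed bed). The rates and bed count are invented for illustration; this is not Hospital IQ's algorithm.

```python
from math import factorial

def erlang_c(arrival_rate, service_rate, servers):
    """Erlang C: probability that an arrival must queue in an M/M/c system."""
    a = arrival_rate / service_rate          # offered load in Erlangs
    rho = a / servers                        # utilization
    if rho >= 1:
        return 1.0                           # overloaded: everyone waits
    top = (a ** servers) / factorial(servers) * (1 / (1 - rho))
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    return top / bottom

# E.g. 4 admissions/hour, each bed turning over 0.5 patients/hour, 10 beds
p_wait = erlang_c(4, 0.5, 10)
```

Adding beds drives the wait probability down sharply near full utilization, which is why small staffing changes at peak times can have outsized effects.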

The service also provides dashboards designed for different types of staff, and tools for data analysis. For instance, a manager can quickly see how often patients are sent to the ICU not because they need ICU care but because all other beds are full. While this doesn’t compromise patient safety, it’s an unnecessary cost and also reduces patient satisfaction because ICUs generally have a more restricted visiting policy. This situation becomes more problematic when a patient arrives through the ED with a real need for an ICU bed and the unit is full (Figure 1). Hospitals can look at trends over time to optimize staffing and even out the load by scheduling elective interventions at non-peak times. There’s a potentially tremendous impact on patient safety, length of stay, and mortality.

Figure 1: Chart showing placements into the ICU. “Misplaced here” are patients that don’t belong in the ICU, whereas “Misplaced elsewhere” are patients that should have gone to the ICU but were sent somewhere else such as the PACU.

Taken to another level of sophistication, analytics can be used for long-term planning. For instance, if a hospital is increasing access to a service, it can forecast the additional number of beds, operating rooms, and staff the service will need based on historic demand and projected growth. Hospitals can set a policy such as a maximum wait of three hours for a bed and see the resources necessary to meet that goal. Figure 2 shows an example of a graph being tweaked to look at different possible futures.

Figure 2: Simulation using historical demand data showing the relationship between bed counts and average wait time for a patient to get a bed.
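A simulation of this kind can be surprisingly small. The sketch below is a toy bed-occupancy model, not Hospital IQ's software: each arriving patient takes the earliest-free bed, and we measure how average wait changes with the bed count. The arrival pattern and lengths of stay are invented.

```python
import heapq

def average_wait(arrivals, stays, beds):
    """Tiny bed-occupancy simulation: each patient takes the earliest
    free bed; returns the mean time spent waiting for a bed."""
    free_at = [0.0] * beds            # when each bed next becomes free
    heapq.heapify(free_at)
    total_wait = 0.0
    for arrive, stay in zip(arrivals, stays):
        soonest = heapq.heappop(free_at)
        start = max(arrive, soonest)  # wait if no bed is free yet
        total_wait += start - arrive
        heapq.heappush(free_at, start + stay)
    return total_wait / len(arrivals)

arrivals = [i * 0.5 for i in range(40)]   # one arrival every 30 minutes
stays = [6.0] * 40                         # each patient stays 6 hours
```

With 40 beds every patient is placed immediately; squeezing down to 10 beds produces measurable waits, which is the trade-off a chart like Figure 2 visualizes.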

Hospital IQ is a fairly young company whose customers cover a wide range of hospitals: from safety net hospitals to academic institutions, large and small. It also works across large systems with multiple institutions, aiding in such tasks as consolidating multiple hospitals’ services into one hospital.

My hope for Hospital IQ is that it will open up its system with an API that allows hospitals and third parties to design new services. It seems to offer new insights that were hidden before, and we can only guess where other contributors can take it as well.

Is Claims Data Really So Bad For Health Care Analytics?

Posted on June 12, 2015 | Written By Andy Oram

Two commonplaces heard in the health IT field are that the data in EHRs is aimed at billing, and that billing data is unreliable input to clinical decision support or other clinically related analytics. These statements form two premises to a syllogism for which you can fill in the conclusion. But at two conferences last week–the Health Datapalooza and the Health Privacy Summit–speakers indicated that smart analysis can derive a lot of value from claims data.

The Healthcare Cost and Utilization Project (HCUP), run by the government’s Agency for Healthcare Research and Quality (AHRQ), is based on hospital release data. Major elements include the payer, diagnoses, procedures, charges, length of stay, etc. along with potentially richer information such as patients’ ages, genders, and income levels. A separate Clinical Content Enhancement Toolkit does allow states to add clinical data, while American Hospital Association Linkage Files let hospitals upload data about their facilities.

But basically, HCUP data revolves around the claims from all-payer databases. It is currently collected from 47 states, and varies state by state depending on what data each state allows to be released. HCUP goes back to 2006 and powers a lot of research, notably efforts to improve outreach to underserved racial and ethnic groups.

During an interview at the Health Privacy Summit, Lucia Savage, Chief Privacy Officer at ONC, mentioned that one can use claims data to determine what treatments doctors offer for various conditions (such as mammograms, which tend to be underused, and antibiotics, which tend to be overused). Thus, analysts can target providers who fail to adhere to standards of care and theoretically improve outcomes.
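Deriving such a measure from flat claims rows is straightforward. The sketch below is purely illustrative: the claim records are invented, `procedure_rate` is a hypothetical helper, and CPT 77067 is used here as the screening mammography code.

```python
from collections import defaultdict

def procedure_rate(claims, code):
    """From flat claim rows, compute the share of each provider's
    claims that carry a given procedure code."""
    totals = defaultdict(int)
    matches = defaultdict(int)
    for claim in claims:
        totals[claim["provider"]] += 1
        if code in claim["procedures"]:
            matches[claim["provider"]] += 1
    return {p: matches[p] / totals[p] for p in totals}

# Hand-made claim rows for illustration
claims = [
    {"provider": "A", "procedures": ["77067"]},
    {"provider": "A", "procedures": []},
    {"provider": "B", "procedures": ["77067"]},
]
rates = procedure_rate(claims, "77067")
```

Comparing each provider's rate against a guideline benchmark is what lets analysts flag under- or over-use of a treatment.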

M1, a large data analytics company serving a number of industries, bases a number of products in the health care space on claims data. For instance, medical device companies contract with M1 to find out which devices doctors are ordering. Insurance companies use it to sniff out fraud.

M1’s business model, incidentally, is a bit different from that pursued by most analytics organizations in the health care arena. Most firms contract with some institution–an insurer, for instance–to analyze its data and provide it with unique findings. But M1 goes around buying up data from multiple institutions and combining it for deeper insights. It then sells results back to these institutions, often both paying and collecting payment from the same company.

In short, smart organizations are shelling out money for data about billing and claims. It looks like, if you have a lot of this data, you can reliably lower costs, improve marketing, and–most important of all–improve care. But we mustn’t lose sight of the serious limitations and weaknesses of this data.

  • A scandalous amount of it is just clinically wrong. Doctors “upcode” to extract the largest possible reimbursement for what they treat. A number of them go further and assign codes that have no justification whatsoever. And that doesn’t even count outright fraud, which reaches into the billions of dollars each year and therefore must leave a lot of bad data in the system.

  • Data is atomized, each claim standing on its own. A researcher will find it difficult or impossible (if patient identifiers are totally stripped out) to trace a sequence of visits that tells you about the progress of treatment.

  • Data is relatively impoverished. Clinical records flesh out the diagnosis with related conditions, demographic information, and other things that make the difference between correct and incorrect treatments.

But on the other hand, to go beyond billing data and reach the data utopia that reformers dream about, we’d have to slurp up a lot of complex and sensitive patient data. This has pitfalls of its own. Little clinical data is structured, and the doctors who do take the effort to enter it into structured fields do so inconsistently. Privacy concerns also raise their threatening heads when you get deep into patient conditions and demographics. So perhaps we should see how far we can get with claims data.

Bringing the Obvious to the Surface Through Analytics

Posted on May 26, 2015 | Written By Andy Oram

Analytics can play many roles, big and small, in streamlining health care. Data crunching may uncover headline-making revelations such as the role smoking plays in cancer. Or it may save a small clinic a few thousand dollars. In either case, it’s the hidden weapon of modern science.

The experience of Dr. Jordan Shlain (@drshlain) is a success story in health care analytics, one that he is now taking big time with a company called HealthLoop. The new venture dazzles customers with fancy tools for tracking and measuring their patient interactions–but it all started with an orthopedic clinic and a simple question Shlain asked the staff: how many phone calls do you get each week?

Asking the right question is usually the start to a positive experience with analytics. In the clinic’s case, it wasn’t hard to find the right question because Shlain could hear the phones ringing off the hook all day. The staff told him they get some 200 calls each week and it was weighing them down.

OK, the next step was to write down who called and the purpose of every call. The staff kept journals for two weeks. Shlain and his colleagues then reviewed the data and found out what was generating the bulk of the calls.

Sometimes, analytics turns up an answer so simple, you feel you should have known it all along. That’s what happened in this case.

The clinic found that most calls came from post-operative patients who were encountering routine symptoms during recovery. After certain surgeries, for instance, certain things tend to happen 6 to 9 days afterward. As if they had received instructions to do so, patients were calling during that 6-to-9-day period to ask whether their symptoms were OK and what they should do. Another set of conditions might turn up 11 to 14 days after the surgery.

Armed with this information, the clinic proceeded to eliminate most of their phone calls and free up their time for better work. Shlain calls the clinic’s response to patient needs “health loops,” a play on the idea of feedback loops. Around day 5 after a surgery, staff would contact the patient to warn her to look for certain symptoms during the 6-to-9-day period. They did this for every condition that tended to generate phone calls.
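The “health loop” schedule itself is just date arithmetic: contact the patient one day before each expected symptom window opens. As a hedged sketch (the windows and messages below are invented for illustration):

```python
from datetime import date, timedelta

def outreach_schedule(surgery_date, windows):
    """For each expected symptom window (start_day, end_day, message),
    schedule an outreach contact one day before the window opens."""
    return [
        (surgery_date + timedelta(days=start - 1), message)
        for start, end, message in windows
    ]

# Hypothetical windows for one procedure type
windows = [(6, 9, "swelling and low-grade fever are common now"),
           (11, 14, "stiffness may peak; keep up the exercises")]
plan = outreach_schedule(date(2015, 5, 1), windows)
```

For a May 1 surgery, the first contact lands on day 5 (May 6), just ahead of the 6-to-9-day window the article describes.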

HealthLoop builds on this insight and attaches modern digital tools for tracking and communications. Patients are contacted through secure messaging on the device of their choice. They are provided with checklists of procedures to perform at home. There’s even a simple rating system, like the surveys you receive after taking your car in to be fixed or flying on an airline.

Patient engagement–probably the most popular application of health IT right now–is also part of HealthLoop. A dashboard tells the clinician which patients to follow up with each day, surfacing the results of risk stratification at a glance. There’s also an activity feed for each patient that summarizes what a doctor needs to know.

Analytics doesn’t have to be rocket science. But you have to know what you’re looking for, collect the data that tells you the answer, and embody the resulting insights into workflow changes and supporting technologies. With his first experiment in phone call tracking, Shlain just took the time to look. So look around your own environment and ask what obvious efficiencies analytics could turn up for you.

Early Warnings Demonstrate an Early Advance in the Use of Analytics to Improve Health Care

Posted on May 4, 2015 | Written By Andy Oram

Early warning systems–such as the popular Modified Early Warning System (MEWS) used in many hospitals–form one of the first waves in the ocean of analytics we need to wash over our health care system. Eventually, health care will elegantly integrate medical device output, electronic patient records, research findings, software algorithms, and–yes, let us not forget–the clinician’s expertise in a timely intervention into patient care. Because early warning systems are more mature than many of the analytics that researchers are currently trying out, it’s useful to look at advances in early warning to see trends that can benefit the rest of health care as well.

I talked this week to Susan Niemeier, Chief Nursing Officer at CapsuleTech, a provider of medical device integration solutions. They sell (among other things) a bedside mobile clinical computer called the Neuron that collects, displays, and sends to the electronic medical record vital signs from medical devices: temperature, pulse, respiration, pulse oximetry, and so on. A recent enhancement called the Early Warning Scoring System (EWSS) adds an extra level of analytics that, according to Niemeier, can identify subtle signs of patient deterioration well before a critical event. It’s part of Capsule’s overarching aim to enable hospitals to do more with the massive amount of data generated by devices.

For more than 18 years, CapsuleTech provided bedside medical device connectivity products and services that captured patient vital signs and communicated that data to the hospital EMR. Rudimentary as this functionality may appear to people using automated systems in other industries, it was a welcome advance for nurses and doctors in hospitals. Formerly, according to Niemeier, nurses would scribble down on a scrap of paper or a napkin the vital signs they saw on the monitors. It might be a few hours before they could enter these into the record–and lots could go wrong in that time. Furthermore, the record was a simple repository, with no software observing trends or drawing conclusions.

Neuron 2 running Early Warning Scoring System

So in addition to relieving the nurse of clerical work (along with likely errors that it entails), and enhancing workflow, the Neuron could make sure the record immediately reflected vital signs. Now the Neuron performs an even more important function: it can run a kind of clinical support to warn of patients whose conditions are deteriorating.

The Neuron EWSS application assigns a numerical score to each vital sign parameter. The total early warning score is then calculated on the basis of the algorithm implemented. The higher the score, the greater the likelihood of deterioration. The score is displayed on the Neuron along with actionable steps for immediate intervention. These might include more monitoring, or even calling the rapid response team right away.
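Scoring systems of this family, such as the published MEWS, assign points per vital-sign band and sum them. The sketch below uses simplified, illustrative bands for two parameters; it is not Capsule's actual algorithm, and a real deployment would cover the full set of vitals.

```python
def band_score(value, bands):
    """bands: list of (low, high, points); return the points for the
    band containing value (inclusive bounds)."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3  # outside every listed band: most abnormal

# Illustrative MEWS-style bands (not any vendor's actual parameters)
RESP_BANDS = [(9, 14, 0), (15, 20, 1), (21, 29, 2)]
PULSE_BANDS = [(51, 100, 0), (101, 110, 1), (41, 50, 1), (111, 129, 2)]

def early_warning_score(resp_rate, pulse):
    """Higher total score suggests greater likelihood of deterioration."""
    return band_score(resp_rate, RESP_BANDS) + band_score(pulse, PULSE_BANDS)
```

A normal patient scores 0; as vitals drift out of their bands the total climbs, and a threshold on that total is what triggers the escalation steps described above.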

The software algorithm is configured in a secure management tool accessible through a web browser and sent wirelessly to the Neuron at a scheduled time. The management tool is password protected and administered by a trained designee at the hospital, allowing for greater flexibility and complete ownership of the solution.

Naturally, the key to making this simple system effective is to choose the right algorithm for combining vital signs. The United Kingdom is out in front in this area. They developed a variety of algorithms in the late 1990s, whereas US hospitals started doing so only 5 years ago. The US cannot simply adopt the UK algorithms, though, because our care delivery and nursing model is different. Furthermore, each hospital has different patient demographics, priorities, and practices.

On the other hand, according to Niemeier, assigning different algorithms to different patients (young gunshot victims versus elderly cardiac patients, for instance) would be impractical, because mobile Neuron computers are used across the entire hospital facility. If you tune an algorithm for one patient demographic, a nurse might inadvertently use it on a different kind of patient as the computer moves from unit to unit. Better, then, to create a single algorithm that does its best to reflect the average patient. The algorithm should use vital signs and observations that are consistently collected, not vitals that are intermittently measured and documented.

Furthermore, algorithms can be tuned over time. Not only do patient populations evolve, but hospitals can learn from the data they collect. CapsuleTech advises a retrospective chart review of rapid response events prior to selecting an algorithm. What vital signs did the patient have during the eight hours before the urgent event? Retrospectively apply the EWSS to the vital signs to determine the right algorithm and trends in that data to recognize deterioration earlier.

Without help such as the Early Warning Scoring System, rapid response teams have to be called when a clear crisis emerges or when a nurse’s intuition suggests they are needed. Now the nurse can check his intuition against the number generated by the system.

I think clinicians are open to the value of analytics in early warning systems because they dramatically heighten chances for avoiding disaster (and the resulting expense). The successes in early warning systems give us a glimpse of what data can do for more mundane aspects of health care as well. Naturally, effective use of data takes a lot more research: we need to know the best ways to collect the data, what standards allow us to aggregate it, and ultimately what the data can tell us. Advances in this research, along with rich new data sources, can put information at the center of medicine.

Annual Evaluation of Health IT: Are We Stuck in a Holding Pattern? (Part 2 of 3)

Posted on April 14, 2015 | Written By Andy Oram

The previous installment of this article was devoted to the various controversies whirling around Meaningful Use. But there are lots of other areas of technology and regulation affecting the progress (or stasis) of health IT.

FHIR: Great Promise, But So Far Just a Promise

After a decade or so of trying to make incompatible formats based on obsolete technology serve modern needs such as seamless data exchange, the health IT industry made a sudden turn to a standard invented by a few entrepreneurial developers. With FHIR they pulled on a thread that will unravel the whole garment of current practices and standards, while forming the basis for a beautiful new tapestry. FHIR will support modern data exchange (through a RESTful API), modern security, modern health practices such as using patient-generated data, and common standards that can be extended in a structured manner by different disciplines and communities.

When it’s done, that is. FHIR is still at version 0.82. Any version number less than 1, in the computer field, signals that all sorts of unanticipated changes may still be made and that anyone coding around the standard risks having to rip out their work and starting over. Furthermore, FHIR is a garment deliberately designed with big holes to be filled by others:

  • Many fields are defined precisely, but elements of the contents are left open, such as the units in which medicine is measured. This is obviously a pretty important detail to tie down.

  • Security relies on standards in the OpenID/OAuth area, which are dependable and well known to developers through their popularity on the Web. Still, somebody has to build this security into health IT products.

  • Because countries and medical disciplines vary so greatly, the final word on FHIR data is left to “profiles” to be worked out by those communities.

One health data expert I talked to expressed the hope that market forces would compel the vendors to cooperate and make sure these various patches are interoperable as they are pieced into the garment. I would rather design a standard with firm support for these things.

Some of the missing pieces can be supplied relatively painlessly through SMART, an open API that predates FHIR but has been ported to it. An impressive set of major vendors and provider organizations have formed the Argonaut project to carry out some tasks with quick pay-offs, like making security work and implementing some high-value profiles. Let’s hope that FHIR and its accompanying projects start to have an impact soon.
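To make the RESTful side of FHIR concrete, here is a hedged sketch: building a FHIR search URL and pulling values out of the Bundle a server would return. The base URL is a placeholder, the bundle is hand-made, and no real server is contacted; LOINC 8867-4 (heart rate) is used as the example code.

```python
import json
from urllib.parse import urlencode

def fhir_search_url(base, resource, **params):
    """Build a FHIR RESTful search URL: GET [base]/[resource]?[params]."""
    return f"{base}/{resource}?{urlencode(params)}"

url = fhir_search_url("https://example.org/fhir", "Observation",
                      patient="123", code="8867-4")

# A search returns a Bundle resource; pull the values out of each entry
bundle = json.loads("""{
  "resourceType": "Bundle",
  "entry": [{"resource": {"resourceType": "Observation",
             "valueQuantity": {"value": 72, "unit": "beats/minute"}}}]
}""")
values = [e["resource"]["valueQuantity"]["value"]
          for e in bundle.get("entry", [])]
```

The appeal of this design is that the transport layer is ordinary HTTP and JSON; the hard, still-unsettled part is the profiles that pin down what goes inside each resource.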

The ONC has repeatedly expressed enthusiasm for FHIR, and CMS has alerted vendors that they need to start working on implementations. Interestingly, the Meaningful Use Stage 3 recommendation from CMS states that health care providers shouldn’t charge their patients for access to their data through an API. An end to this scandalous exploitation of patients by both vendors and health care providers might have an impact on providers’ income.

Accountable Care Organizations: Walls Still Up

CMS created ACOs as a regulatory package delivering the gifts of coordinated care and pay-for-value. This was risky. ACOs require data exchange to effect smooth transfers of care, but data exchange was still rare as late as 2013, and the technical conditions have changed so little since then that I can’t imagine it’s much better now.

Pay-for-value also calls for analytics so providers can stratify populations and make rational choices. Finally, the degree of risk that CMS has asked ACOs to take on is pretty low, so they are not being pushed too hard to make the necessary culture changes to benefit from pay-for-value.

All that said, ACOs aren’t doing too badly. New ones are coming on board, albeit slowly, and cost savings have been demonstrated. An article titled “Poor interoperability, exchange hinders ACOs” actually reports much more positive results than the title suggests. There may be good grounds for ONC’s pronouncement that they will push more providers to form ACOs.

Still, ACOs are making a slow tack toward interoperability and coordinated care. The walls between health care settings are gradually lowering, but providers still huddle behind the larger walls of incompatible software that has trouble handling analytics.

I’ll wrap up this look at progress and its adversaries in the next installment of this article.

Open Source Electronic Health Records: Will They Support Clinical Data Needs of the Future? (Part 2 of 2)

Posted on November 18, 2014 | Written By Andy Oram

The first part of this article provided a view of the current data needs in health care and asked whether open source electronic health records could solve those needs. I’ll pick up here with a look at how some open source products deal with the two main requirements I identified: interoperability and analytics.

Interoperability, in health care as in other areas of software, is supported better by open source products than by proprietary ones. The problem with interoperability is that it takes two to tango, and as long as standards remain in a fuzzy state, no one can promise in isolation to be interoperable.

The established standard for exchanging data is the C-CDA, but a careful examination of real-life C-CDA documents showed numerous incompatibilities, some left open by the ambiguous definition of the standard and others introduced by flawed implementations. Blue Button, invented by the Department of Veterans Affairs, is a simpler standard with much promise, but is also imperfectly specified.
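The incompatibility problem is easier to appreciate with a concrete C-CDA fragment. The sketch below parses a minimal, hand-made problem observation (not a real document) with Python's standard XML tools. Note how everything hangs on the HL7 v3 namespace and the code-system OID being exactly right, which is precisely where real-world documents go astray.

```python
import xml.etree.ElementTree as ET

NS = {"v3": "urn:hl7-org:v3"}  # the HL7 v3 namespace used by C-CDA

# A minimal, hand-made fragment in the C-CDA problem-observation shape;
# 2.16.840.1.113883.6.96 is the SNOMED CT code system OID
fragment = """
<observation xmlns="urn:hl7-org:v3">
  <code code="55607006" codeSystem="2.16.840.1.113883.6.96"
        displayName="Problem"/>
  <value code="44054006" codeSystem="2.16.840.1.113883.6.96"
         displayName="Diabetes mellitus type 2"/>
</observation>
"""

root = ET.fromstring(fragment)
value = root.find("v3:value", NS)
problem = (value.get("code"), value.get("displayName"))
```

A receiving system that expects the code in a slightly different element, or a different code system, will silently miss the diagnosis, which is how two "conformant" C-CDA implementations fail to interoperate.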

Deanne Clark, vxVistA Program Manager at DSS, Inc., told me that VistA supports the C-CDA. The open source Mirth HIE software, which I have covered before, is used by vxVistA, OpenVista (the MedSphere VistA offering), and Tolven. Proprietary health exchange products are also used by many VistA customers.

Things may get better if vendors adopt an emerging HL7 standard called FHIR, as I suggested in an earlier article, which may also enable the incorporation of patient-generated data into EHRs. OpenMRS is one open source EHR that has started work on FHIR support.

Tolven illustrates how open source enables interoperability. According to lead developer Tom Jones, Tolven was always designed around care coordination, which is not the focus of proprietary EHRs. He sees no distinction between electronic health records and health information exchange (HIE), which most of the health IT field views as separate functions and products.

From its very start in 2006, Tolven was designed around helping to form a caring community. This proved useful four years later with the release of the Meaningful Use requirements, which featured interoperability. APIs allow the easy development of third-party applications. Tolven was also designed with the patient’s right to control information flow in mind, although not all implementations respect this design by putting data directly in the hands of the patient.

In addition to formats that other EHRs can recognize, data exchange is necessary for interoperability. One solution is an API such as FHIR. Another is a protocol for sending and receiving documents. Direct is the leading standard, and has been embraced by open source projects such as OpenEMR.

The second requirement I looked at, support for analytics, is best met by opening a platform to third parties. This assumes interoperability. To combine analytics from different organizations, a program must be able to access data through application programming interfaces (APIs). The open API is the natural complement of open source, handing power over data to outsiders who write programs accessing that data. (Normal access precautions can still be preserved through security keys.)

VistA appears to be the EHR with the most support for analytics, at least in the open source space. Edmund Billings, MD, CMO of MedSphere, pointed out that VistA’s internal interfaces (known as remote procedure calls, a slightly old-fashioned but common computer term for distributed programming) are totally exposed to other developers because the code is open source. VistA’s remote procedure calls are the basis for numerous current projects to create APIs for various languages. Some are RESTful, which supports the most popular current form of distributed programming, while others support older standards widely known as service-oriented architectures (SOA).

An example of the innovation provided by this software evolution is the mobile apps being built by Agilex on VistA. Seong K. Mun, President and CEO of OSEHRA, says that it now supports hundreds of mobile apps.

MedSphere builds commercial applications that plug into its version of VistA. These include multidisciplinary treatment planning tools, flow sheets, and mobile rounding tools so doctors can access information on the floor. MedSphere is also working with analytics groups to access both structured and unstructured information from the EHR.

DSS also adds value to VistA. Clark said that VistA’s native tools are useful for basic statistics, such as how many progress notes have not been signed in a timely fashion. An SQL interface has been in VistA for a long time; DSS’s enhancements include a graphical interface, a hook for Jaspersoft (an open source business intelligence tool), and a real-time search tool that spiders through text data throughout all elements of a patient’s chart and surfaces conditions that might otherwise be overlooked.

MedSphere and DSS have also joined the historic OSEHRA effort to unify the code base across all VistA offerings, from both Veterans Affairs and commercial vendors. MedSphere has made major contributions to Fileman, a central part of VistA. DSS has contributed all its VistA changes to OSEHRA, including the search tool mentioned earlier.

OpenMRS contributor Suranga Kasthurirathne told me that an OpenMRS module exposes its data to DHIS 2, an open source analytics tool supporting visualizations and other powerful features.

I would suggest to the developers of open source health tools that they increase their emphasis on the information tools that industry observers predict are going to be central to healthcare. An open architecture can make it easy to solicit community contributions, and the advances made in these areas can be selling points along with the low cost and easy customizability of the software.