
Correlations and Research Results: Do They Match Up? (Part 2 of 2)

Posted on May 27, 2016 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article described the benefits of big data analysis, along with some of the formal, inherent risks of using it. Now we’ll look more closely at the problems that arise in real-life use.

More hidden bias

Jeffrey Skopek pointed out that correlations can perpetuate bias as much as they undermine it. Everything in data analysis is affected by bias, ranging from what we choose to examine and what data we collect to who participates, what tests we run, and how we interpret results.

The potential for seemingly objective data analysis to create (or at least perpetuate) discrimination on the basis of race and other criteria was highlighted recently by a Bloomberg article on Amazon Prime deliveries. Nobody thinks that any Amazon.com manager anywhere said, “Let’s not deliver Amazon Prime packages to black neighborhoods.” But that was the natural outcome of depending on data about purchases, incomes, or whatever other data was crunched by the company to produce decisions about deliveries. (Amazon.com quickly promised to eliminate the disparity.)

At the conference, Sarah Malanga went over the comparable disparities and harms that big data can cause in health care. Think of all the ways modern researchers interact with potential subjects over mobile devices, and how much data is collected from such devices for data analytics. Such data is used to recruit subjects, to design studies, to check compliance with treatment, and for epidemiology and the new Precision Medicine movement.

In all the same ways that the old, the young, the poor, the rural, ethnic minorities, and women can be left out of commerce, they can be left out of health data as well–with even worse impacts on their lives. Malanga reeled off some statistics:

  • 20% of Americans don’t go on the Internet at all.

  • 57% of African-Americans don’t have Internet connections at home.

  • 70% of Americans over 65 don’t have a smart phone.

Those are just examples of ways that collecting data may miss important populations. Often, those populations are sicker than the people we reach with big data, so they need more help while receiving less.

The use of electronic health records, too, is still limited to certain populations in certain regions. Thus, some patients may take a lot of medications but not have “medication histories” available to research. Ameet Sarpatwari said that the exclusion of some populations from research makes post-approval research even more important; there we can find correlations that were missed during trials.

A crucial source of well-balanced health data is the All Payer Claims Databases that 18 states have set up to collect data across the board. But a glitch in employment law, highlighted by Carmel Shachar, exempts self-funded employers from sending their health data to the databases. This will most likely take a fix from Congress. Until that happens, researchers and public health officials will lack the comprehensive data they need to improve health outcomes, and the 12 states that have started their own APCD projects may abandon them.

Other rectifications cited by Malanga include an NIH requirement for studies funded by it to include women and minorities–a requirement Malanga would like other funders to adopt–and the FCC’s Lifeline program, which helps more low-income people get phone and Internet connections.

A recent article at the popular TechCrunch technology site suggests that the inscrutability of big data analytics is intrinsic to artificial intelligence. We must learn to recognize where computers outstrip our intuitive ability to understand correlations.

Correlations and Research Results: Do They Match Up? (Part 1 of 2)

Posted on May 26, 2016 | Written By

Eight years ago, a widely discussed issue of WIRED Magazine proclaimed cockily that current methods of scientific inquiry, dating back to Galileo, were becoming obsolete in the age of big data. Running controlled experiments on limited samples has too many limitations and takes too long. Instead, we will take any data we have conveniently at hand–purchasing habits for consumers, cell phone records for everybody, Internet-of-Things data generated in the natural world–and run statistical methods over them to find correlations.

Correlations were spotlighted at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Although the speakers expressed a healthy respect for big data techniques, they pinpointed their limitations and affirmed the need for human intelligence in choosing what to research, as well as how to use the results.

Petrie-Flom annual 2016 conference

A word from our administration

A new White House report also warns that “it is a mistake to assume [that big data techniques] are objective simply because they are data-driven.” The report highlights the risks of inherent discrimination in the use of big data, including:

  • Incomplete and incorrect data (particularly common in credit rating scores)

  • “Unintentional perpetuation and promotion of historical biases,”

  • Poorly designed algorithmic matches

  • “Personalization and recommendation services that narrow instead of expand user options”

  • Assuming that correlation means causation

The report recommends “bias mitigation” (page 10) and “algorithmic systems accountability” (page 23) to overcome some of these distortions, and refers to a larger FTC report that lays out the legal terrain.

Like the WIRED issue mentioned earlier, this gives us some background for discussions of big data in health care.

Putting the promise of analytical research under the microscope

Conference speaker Tal Zarsky offered both fulsome praise and specific cautions regarding correlations. As the WIRED Magazine issue suggested, modern big data analysis can find new correlations between genetics, disease, cures, and side effects. The analysis can find them far more cheaply and quickly than randomized clinical trials. This can lead to more cures, and has the other salutary effect of opening a way for small, minimally funded start-up companies to enter health care. Jeffrey Senger even suggested that, if analytics such as those used by IBM Watson are good enough, doing diagnoses without them may constitute malpractice.

W. Nicholson Price, II focused on the danger of the FDA placing too many strict limits on the use of big data for developing drugs and other treatments. Instead of making data analysts back up everything with expensive, time-consuming clinical trials, he suggested that the FDA could set up models for the proper use of analytics and check that tools and practices meet requirements.

One of the exciting impacts of correlations is that they bypass our assumptions and can uncover associations we never would have expected. The poster child for this effect is the notorious beer-and-diapers connection found by one retailer. This story has many nuances that tend to get lost in the retelling, but perhaps the most important point to note is that a retailer can depend on a correlation without having to ascertain the cause. In health, we feel much more comfortable knowing the cause of the correlation. Price called this aspect of big data “black-box medicine.” Saying that something works, without knowing why, raises a whole list of ethical concerns.

A correlation between stomach pain and a disease can’t tell us whether the stomach pain led to the disease, the disease caused the stomach pain, or both are symptoms of a third underlying condition. Causation can make a big difference in health care. It can warn us to avoid a treatment that works 90% of the time (we’d like to know who the other 10% of patients are before they get a treatment that fails). It can help uncover side effects and other long-term effects–and perhaps valuable off-label uses as well.
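
The “third underlying condition” scenario is easy to simulate. The sketch below uses invented probabilities purely for illustration: a hidden condition raises the rates of both stomach pain and the disease, neither symptom causes the other, and yet the disease shows up roughly twice as often among patients with pain.

```python
import random

random.seed(0)

def patient():
    # Hidden condition C (invented 30% prevalence) independently raises
    # the chance of stomach pain and of the disease. Pain never causes
    # the disease, nor vice versa.
    c = random.random() < 0.3
    pain = random.random() < (0.8 if c else 0.1)
    disease = random.random() < (0.7 if c else 0.1)
    return pain, disease

cohort = [patient() for _ in range(100_000)]

p_disease = sum(d for _, d in cohort) / len(cohort)
with_pain = [d for p, d in cohort if p]
p_disease_given_pain = sum(with_pain) / len(with_pain)

# The gap between the two rates is pure confounding
print(f"P(disease) = {p_disease:.2f}")                    # about 0.28
print(f"P(disease | pain) = {p_disease_given_pain:.2f}")  # about 0.56
```

Treating the stomach pain in this model would do nothing to the disease rate, which is exactly why causal knowledge matters before choosing a treatment.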

Zarsky laid out several reasons why a correlation might be wrong.

  • It may reflect errors in the collected data. Good statisticians control for error through techniques such as discarding outliers, but if the original data contains enough bad apples, the barrel will go rotten.

  • Even if the correlation is accurate for the collected data, it may not be accurate in the larger population. The correlation could be a fluke, or the statistical sample could be unrepresentative of the larger world.
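
The first failure mode, errors in the collected data, is simple to demonstrate. In the sketch below (entirely synthetic data), two measurements are genuinely unrelated; three corrupted records, such as a duplicated unit-conversion error, are enough to manufacture a near-perfect correlation.

```python
import random

def pearson(xs, ys):
    # Pearson correlation coefficient, computed from first principles
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)

# Two unrelated measurements for 100 patients
xs = [random.gauss(0, 1) for _ in range(100)]
ys = [random.gauss(0, 1) for _ in range(100)]
clean_r = pearson(xs, ys)          # hovers near zero

# Append three corrupted records that are huge in both columns
xs_bad = xs + [50, 60, 70]
ys_bad = ys + [50, 60, 70]
dirty_r = pearson(xs_bad, ys_bad)  # jumps close to 1.0

print(f"clean r = {clean_r:.2f}, with 3 bad records r = {dirty_r:.2f}")
```

Discarding outliers would catch these three records; the harder case Zarsky warns about is when the errors are numerous or subtle enough to survive such filters.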

Zarsky suggests using correlations as a starting point for research, but backing them up by further randomized trials or by mathematical proofs that the correlation is correct.

Isaac Kohane described, from the clinical side, some of the pros and cons of using big data. For instance, data collection helps us see that choosing a gender for intersex patients right after birth produces a huge amount of misery, because the doctor guesses wrong half the time. However, he also cited times when data collection can be confusing for the reasons listed by Zarsky and others.

Senger pointed out that after drugs and medical devices are released into the field, data collected on patients can teach developers more about risks and benefits. But this also runs into the classic risks of big data. For instance, if a patient dies, did the drug or device contribute to death? Or did he just succumb to other causes?

We already have enough to make us puzzle over whether we can use big data at all–but there’s still more, as the next part of this article will describe.

Healthcare Consent and its Discontents (Part 3 of 3)

Posted on May 18, 2016 | Written By

The previous section of this article rated the pros and cons of new approaches to patient consent and control over data. Here we’ll look at emerging risks.

Privacy solidarity

Genetics presents new ethical challenges–not just the opportunity to change genes, but the mere act of sequencing them. These risks affect not only the individual: other members of her family and ethnic group can face discrimination thanks to genetic weaknesses revealed. Isaac Kohane said that the average person has 40 genetic markers indicating susceptibility to some disease or other. Furthermore, we sometimes disagree on what we consider a diseased condition.

Big data, particularly with genomic input, can lead to group harms, so Brent Mittelstadt called for moving beyond an individual view of privacy. Groups also have privacy needs (a topic I explored back in 1998). An individual must consider the effect of releasing data not only on his own future, but on the futures of family members, members of his racial group, and others. Similarly, Barbara Evans said we have to move from self-consciousness to social consciousness. But US and European laws consider privacy and data protection only on the basis of the individual.

The re-identification bogey man

A good many references were made at the conference to the increased risk of re-identifying patients from supposedly de-identified data. Headlines are made when some researcher manages to uncover a person who thought himself anonymous (and who database curators thought was anonymous when they released their data sets). In a study conducted by a team that included speaker Catherine M. Hammack, experts admitted that there is eventually a near 100% probability of re-identifying each person’s health data. The culprit in all this is the burgeoning set of data collected from people as they purchase items and services, post seemingly benign news about themselves on social media, and otherwise participate in modern life.

I think the casual predictions of the end of anonymity we hear so often are unnecessarily alarmist. The field of anonymity has progressed a great deal since Latanya Sweeney famously re-identified a patient record for Governor William Weld of Massachusetts. Re-identifications carried out since then, by Sweeney and others, have taken advantage of data that was not anonymized (people just released it with an intuitive assumption that they could not be re-identified) or that was improperly anonymized, not using recommended methods.

Unfortunately, the “safe harbor” in HIPAA (designed precisely for medical sites lacking the skills to de-identify data properly) enshrines bad practices. Still, in a HIPAA challenge cited by Ameet Sarpatwari, only two of 15,000 individuals were re-identified. The mosaic effect is still more of a theoretical weakness, not an immediate threat.

I may be biased, because I edited a book on anonymization, but I would offer two challenges to people who cavalierly dismiss anonymization as a useful protection. First, if we threw up our hands and gave up on anonymization, we couldn’t even carry out a census, which is mandated in the U.S. Constitution.

Second, anonymization is comparable to encryption. We all know that computer speeds are increasing, just as are the sophistication of re-identification attacks. The first provides a near-guarantee that, eventually, our current encrypted conversations will be decrypted. The second, similarly, guarantees that anonymized data will eventually be re-identified. But we all still visit encrypted web sites and use encryption for communications. Why can’t we similarly use the best in anonymization?
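
Using “the best in anonymization” starts with making the protection measurable. One standard yardstick is k-anonymity: every combination of quasi-identifiers (ZIP code, birth year, and so on) must be shared by at least k records, so that no individual stands out. A minimal sketch, using invented, already-generalized records:

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the smallest group size over the quasi-identifier columns.

    The data set is k-anonymous for the returned k: every combination
    of quasi-identifier values covers at least k records.
    """
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Invented records with a truncated ZIP code and a binned birth year
records = [
    {"zip": "021**", "birth": "1940s", "dx": "flu"},
    {"zip": "021**", "birth": "1940s", "dx": "asthma"},
    {"zip": "021**", "birth": "1950s", "dx": "flu"},
    {"zip": "021**", "birth": "1950s", "dx": "diabetes"},
]

print(k_anonymity(records, ["zip", "birth"]))  # 2: no (zip, birth) pair is unique
```

The Weld re-identification worked precisely because full ZIP code, birth date, and sex left most records unique (k = 1); generalizing those fields is much of what proper anonymization buys.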

A new article in the Journal of the American Medical Association exposes a gap between what doctors consider adequate consent and what’s meaningful for patients, blaming “professional indifference” and “organizational inertia” for the problem. In research, the “reasonable-patient standard” is even harder to define and achieve.

Patient consent doesn’t have to go away. But it’s getting harder and harder for patients to anticipate the uses of their data, or even to understand what data is being used to match and measure them. However, precisely because we don’t know how data will be used or how patients can tolerate it, I believe that incremental steps would be most useful in teasing out what will work for future research projects.

Healthcare Consent and its Discontents (Part 2 of 3)

Posted on May 17, 2016 | Written By

The previous section of this article laid out what is wrong with informed consent today. We’ll continue now to look at possible remedies.

Could we benefit from more opportunities for consent?

Donna Gitter said that the Common Rule governing research might be updated to cover de-identified data as well as personally identifiable information. The impact of this on research, of course, would be incalculable. But it might lead to more participation in research, because 72% of patients say they would like to be asked for permission before their data is shared even in de-identified form. Many researchers, such as conference speaker Liza Dawson, would rather give researchers the right to share de-identified data without consent, but put protections in place.

To link multiple data sets, according to speaker Barbara Evans, we need an iron-clad method of ensuring that the data for a single individual is accurately linked. This requirement butts up against the American reluctance to assign a single ID to a patient. The reluctance is well-founded, because tracking individuals throughout their lives can lead to all kinds of seamy abuses.
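
One hypothetical middle ground, short of a universal patient ID, is a one-way linkage token derived from demographics: institutions can match records for the same person without exchanging raw identifiers. The sketch below (the names, salt, and normalization rules are all invented for illustration) also shows the approach’s weakness, which is why Evans’s “iron-clad” standard is so hard to meet:

```python
import hashlib

STUDY_SALT = "study-wide-secret"  # hypothetical secret shared by participating sites

def linkage_token(name, dob):
    """One-way pseudonym: the same person yields the same token across
    data sets, but the token cannot be reversed into the identifiers."""
    normalized = " ".join(name.split()).lower() + "|" + dob + "|" + STUDY_SALT
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

# The same patient recorded slightly differently at two institutions
# still links, thanks to normalization...
a = linkage_token("John Smith", "1970-01-01")
b = linkage_token("john  SMITH", "1970-01-01")
print(a == b)  # True

# ...but a one-character transcription error silently breaks the link
c = linkage_token("Jon Smith", "1970-01-01")
print(a == c)  # False
```

Real linkage systems layer fuzzy matching on top of exact tokens, and that is exactly where both missed links and false merges creep in.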

One solution would be to give each individual control over a repository where all of her data would go. That solution implies that the individual would also control each release of the data. A lot of data sets could easily vanish from the world of research, as individuals die and successors lose interest in their data. We must also remember that public health requires the collection of certain types of data even if consent is not given.

Another popular reform envisioned by health care technologists, mentioned by Evans, is a market for health information. This scenario is part of a larger movement known as Vendor Relationship Management, which I covered several years ago. There is no doubt that individuals generate thousands of dollars worth of information, in health care records and elsewhere. Speaker Margaret Foster Riley claimed that the data collected from your loyalty card by the grocer is worth more than the money you spend there.

So researchers could offer incentives to share information instead of informed consent. Individuals would probably hire brokers to check that the requested uses conform to the individuals’ ethics, and that the price offered is fair.

Giving individuals control and haggling over data makes it harder, unfortunately, for researchers to assemble useful databases. First of all, modern statistical techniques (which fish for correlations) need huge data sets. Even more troubling is that partial data sets are likely to be skewed demographically. Perhaps only people who need some extra cash will contribute their data. Or perhaps only highly educated people will. Either way, someone gets left out.

These problems exist even today, because our clinical trials and insurance records are skewed by income, race, age, and gender. Theoretically, it could get even worse if we eliminate the waiver that lets researchers release de-identified data without patient consent. Disparities in data sets and research were heavily covered at the Petrie-Flom conference, as I discuss in a companion article.

Privacy, discrimination, and other legal regimes

Several speakers pointed out that informed consent loses much of its significance when multiple data sets can be combined. The mosaic effect adds another layer of uncertainty about what will happen to data and what people are consenting to when they release it.

Nicolas Terry pointed out that American law tends to address privacy on a sector-by-sector basis, having one law for health records, another for student records, and so forth. He seemed to indicate that the European data protection regime, which is comprehensive, would be more appropriate nowadays where the boundary between health data and other forms of data is getting blurred. Sharona Hoffman said that employers and insurers can judge applicants’ health on the basis of such unexpected data sources as purchases at bicycle stores, voting records (healthy people have more energy to get involved in politics), and credit scores.

Mobile apps notoriously open new leaks in personal data. Mobile operating systems fastidiously divide up access rights and require apps to request these rights during installation, but most of us just click Accept for everything, including data the apps have no legitimate need for, such as our contacts and calendar. After all, there’s no way to deny an app one specific access right while still installing it.

And lots of these apps abuse their access to data. So we remain in a contradictory situation where certain types of data (such as data entered by doctors into records) are strongly protected, and other types that are at least as sensitive lack minimal protections. Although the app developers are free to collect and sell our information, they often promise to aggregate and de-identify it, putting them at the same level as traditional researchers. But no one requires the app developers to be complete and accurate.

To make employers and insurers pause before seeking out personal information, Hoffman suggested requiring data brokers, and those who purchase their data, to publish the rules and techniques they employ to make use of the data. She pointed to the precedent of medical tests for employment and insurance coverage, where such disclosure is necessary. But I’m sure this proposal would be fought so heavily, by those who currently carry out their data spelunking under cover of darkness, that we’d never get it into law unless some overwhelming scandal prompted extreme action. Adrian Gropper called for regulations requiring transparency in every use of health data, and for the use of open source algorithms.

Several speakers pointed out that privacy laws, which tend to cover the distribution of data, can be supplemented by laws regarding the use of data, such as anti-discrimination and consumer protection laws. For instance, Hoffman suggested extending the Americans with Disabilities Act to cover people with heightened risk of suffering from a disability in the future. The Genetic Information Nondiscrimination Act (GINA) of 2008 offers a precedent. Universal health insurance coverage won’t solve the problem, Hoffman said, because businesses may still fear the lost work time and need for workplace accommodations that spring from health problems.

Many researchers are not sure whether their use of big data–such as “data exhaust” generated by people in everyday activities–would be permitted under the Common Rule. In a particularly wonky presentation (even for this conference) Laura Odwazny suggested that the Common Rule could permit the use of data exhaust because the risks it presents are no greater than “daily life risks,” which are the keystone for applying the Common Rule.

The final section of this article will look toward emerging risks that we are just beginning to understand.

Healthcare Consent and its Discontents (Part 1 of 3)

Posted on May 16, 2016 | Written By

Not only is informed consent a joke flippantly perpetrated on patients; I expect that it has inspired numerous other institutions to shield themselves from the legal consequences of misbehavior by offering similar click-through “terms of service.” We now have a society where powerful forces can wring from the rest of us the few rights we have with a click. So it’s great to see informed consent reconsidered from the ground up at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

Petrie-Flom annual 2016 conference

By no means did the speakers and audience at this conference agree on what should be done to fix informed consent (only that it needs fixing). The question of informed consent opens up a rich dialog about the goals of medical research, the relationship between researchers and patients, and what doctors have a right to do. It also raises questions for developers and users of electronic health records, such as:

  • Is it ethical to save all available data on a person?

  • If consent practices get more complex, how are the person’s wishes represented in the record?

  • If preferences for the data released get more complex, can we segment and isolate different types of data?

  • Can we find and notify patients of research results that might affect them, if they choose to be notified?

  • Can we make patient matching and identification more robust?

  • Can we make anonymization more robust?

A few of these topics came up at the conference. The rest of this article summarizes the legal and ethical topics discussed there.

The end of an era: IRBs under attack

The annoying and opaque informed consent forms we all have to sign go back to the 1970s and the creation of Institutional Review Boards (IRBs). Before that lay the wild-west era of patient relationships documented in Rebecca Skloot’s famous The Immortal Life of Henrietta Lacks.

IRBs were launched in a very different age, based on assumptions that are already being frayed and will probably no longer hold at all a few years from now:

  • Assumption: Research and treatment are two different activities. Challenge: Now they are being combined in many institutions, and the ideal of a “learning health system” will make them inextricable.

  • Assumption: Each research project takes place within the walls of a single institution, governed by its IRB. Challenge: Modern research increasingly involves multiple institutions with different governance, as I have reported before.

  • Assumption: A research project is a time-limited activity, lasting generally only about a year. Challenge: Modern research can be longitudinal and combine data sets that go back decades.

  • Assumption: The purpose for which data is collected can be specified by the research project. Challenge: Big data generally runs off of data collected for other purposes, and often has unclear goals.

  • Assumption: Inclusion criteria for each project are narrow. Challenge: Big data ranges over widely different sets of people, often included arbitrarily in data sets.

  • Assumption: Rules are based on phenotypic data: diagnoses, behavior, etc. Challenge: Genetics introduces a whole new set of risks and requirements, including the “right not to know” if testing turns up an individual’s predisposition to disease.

  • Assumption: The risks of research are limited to the individuals who participate. Challenge: As we shall see, big data affects groups as well as individuals.

  • Assumption: Properly de-identified data has an acceptably low risk of being re-identified. Challenge: Privacy researchers are increasingly discovering new risks from combining multiple data sources, a trend called the “mosaic effect.” I will dissect the immediacy of this risk later in the article.

Now that we have a cornucopia of problems, let’s look at possible ways forward.

Chinese menu consent

In the Internet age, many hope, we can provide individuals with a wider range of ethical decisions than the binary, thumbs-up-thumbs-down choice thrust before them by an informed consent form.

What if you could let your specimens or test results be used only for cancer research, or stipulate that they not be used for stem cell research, or even ask for your contributions to be withdrawn from experiments that could lead to discrimination on the basis of race? The appeal of such fine-grained consent springs from our growing realization that (as in the Henrietta Lacks case) our specimens and data may travel far. What if a future government decides to genetically erase certain racial or gender traits? Eugenics is not a theoretical risk; it has been pursued before, and not just by Nazis.

As Catherine M. Hammack said, we cannot anticipate future uses for medical research–especially in the fast-evolving area of genetics, whose possibilities alternate between exciting and terrifying–so a lot of individuals would like to draw their own lines in the sand.

I don’t personally believe we could implement such personalized ethical statements. It’s a problem of ontology. Someone has to list all the potential restrictions individuals may want to impose–and the list has to be updated globally at all research sites when someone adds a new restriction. Then we need to explain the list and how to use it to patients signing up for research. Researchers must finally be trained in the ontology so they can gauge whether a particular use meets the requirements laid down by the patient, possibly decades earlier. This is not a technological problem and isn’t amenable to a technological solution.

More options for consent and control over data will appear in the next part of this article.

ZibdyHealth Adapts to Sub-Optimal Data Exchange Standards for a Personal Health Record

Posted on May 10, 2016 | Written by Andy Oram

Reformers in the health care field, quite properly, emphasize new payment models and culture changes to drive improvements in outcomes. But we can’t ignore the barriers that current technology puts in the way of well-meaning reformers. This article discusses one of the many companies offering a personal health record (PHR) and the ways they’ve adapted to a very flawed model for data storage and exchange.

I had the honor to be contacted by Dr. Hirdey Bhathal, CEO/Founder of ZibdyHealth. Like many companies angling to develop a market for PHRs, ZibdyHealth offers a wide range of services to patients. Unlike, say, Google Health (of blessed memory) or Microsoft HealthVault, ZibdyHealth doesn’t just aspire to store your data, but to offer additional services that make it intensely valuable to you. Charts and visualizations, for instance, will let you see your progress with laboratory and device data over time. They call this a “Smart HIE.” I’ll look a bit at what they offer, and then at the broken model for data exchange that they had to overcome in the health care industry.

The ZibdyHealth application

Setting up an account with ZibdyHealth is as easy as joining Facebook. Once you’re there, you can create health information manually. The company is working with fitness device makers to allow automatic uploads of device data, which can then be saved as a standard Continuity of Care Document (CCD) and offered to doctors.

You can also upload information from your physician via their health care portal–with a degree of ease or difficulty depending on your provider–and share it with other clinicians or family members (Figure 1). You have fine-grained control over which medications, diagnoses, and other information to share, a form of control called segmentation in health care.

Figure 1. Summary of a visit in Zibdy, displayed on a mobile device
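To make the idea of segmentation concrete, here is a minimal sketch of how such fine-grained sharing could work. The record structure and category names are my own invention for illustration, not ZibdyHealth’s actual data model:

```python
# Hypothetical sketch of segmentation: the patient picks which categories
# of a health record a particular recipient may see.

def segment_record(record, allowed_categories):
    """Return a copy of the record containing only the allowed categories."""
    return {category: entries for category, entries in record.items()
            if category in allowed_categories}

record = {
    "medications": ["lisinopril 10 mg"],
    "diagnoses": ["hypertension"],
    "mental_health_notes": ["(withheld from most recipients)"],
}

# Share medications and diagnoses with a school nurse; hold back the rest.
shared = segment_record(record, {"medications", "diagnoses"})
```

Real segmentation is harder than this sketch suggests, because clinical data is interlinked: a medication can reveal the very diagnosis the patient chose to withhold.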

Dr. Bhathal would like his application to serve whole families and care teams, not just individuals. Whether you are caring for your infant or your aging grandmother, ZibdyHealth wants its platform to meet your needs. In fact, the company is planning to deploy the application in some developing nations as an electronic medical record for rural settings, where one healthcare provider will be able to manage the health data for an entire village.

Currently, ZibdyHealth allows specialty clinics to share information with the patient’s regular doctor, helps identify interactions between drugs prescribed by different doctors, and allows parents to share their children’s health information with schools. This consolidation and quick sharing of medical information will work well with minute clinics or virtual MD visits.

ZibdyHealth is HIPAA-compliant and supports 256-bit AES encryption for data exchange. Like health care providers, the company may share data with partners for operational purposes, but it promises never to sell your data–unlike many popular patient networks. Although ZibdyHealth sometimes aggregates anonymized data, it does so to offer you better services, not to sell the data on the market or to sell you other services.

In some ways, ZibdyHealth is like a health information exchange (HIE), and as we shall see, they face some of the same problems. But current HIEs connect only health care providers, and are generally limited to large health care systems with ample resources. PHR applications such as ZibdyHealth aim to connect physicians and patients with others, such as family members, therapists, nursing homes, assisted care facilities, and independent living facilities. In addition, most HIEs only work within small states or regions, whereas ZibdyHealth is global. They plan to follow a business model where they provide the application for free to individuals, without advertisements, but charge enterprises who choose the application in order to reach and serve their patients.

Tackling the data dilemma

We’d see a lot more services like ZibdyHealth (and they’d be more popular with patients, providers, and payers) if data exchange worked like it does in the travel industry or other savvy market sectors. Interoperability will enable the “HIE of one” I introduced in an earlier article. In the meantime, ZibdyHealth has carried out a Herculean effort to do the best they can in today’s health exchange setting.

What do they use to get data from patient portals and clinicians’ EHRs? In a phrase, every recourse possible.

  • Many organizations now offer portals that allow patients to download their records in CCD format. ZibdyHealth works with a number of prominent institutions to make uploading easy (Figure 2). Of course, the solution is always a contingent one, because the provider still owns your data: after your next visit, you have to download it again. ZibdyHealth is working on automating this updating process so that providers can feed this information to the patient routinely and, by uploading the discharge CCD as part of a patient’s discharge process, ensure an easy and accurate transition of care.

    Figure 2. List of electronic records uploaded to Zibdy through their CCD output

  • If providers aren’t on ZibdyHealth’s list of partners, but still offer a CCD, you can download it yourself using whatever mechanism your provider offers, then upload it to ZibdyHealth. ZibdyHealth has invested an enormous amount to parse the various fields of different EHRs and figure out where information is, because the CCD is a very imperfect standard and EHRs differ greatly. I tried the download/upload technique with my own primary care provider and found that ZibdyHealth handled it gracefully.

  • ZibdyHealth also supports Blue Button, the widely adopted download format that originated at the VA as a plain text file and later gained an XML incarnation.
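To give a taste of why parsing CCDs takes real work, here is a sketch that extracts a single field from a CCD-like document. The HL7 CDA namespace is genuine, but this toy document is vastly simpler than a real CCD, whose structure also varies from one EHR vendor to the next:

```python
# Minimal illustration of pulling the patient's family name out of a
# CCD-style XML document. Real CCDs nest sections, templates, and coded
# entries far more deeply than this.
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}  # the HL7 CDA default namespace

ccd = """<ClinicalDocument xmlns="urn:hl7-org:v3">
  <recordTarget><patientRole><patient>
    <name><given>Jane</given><family>Doe</family></name>
  </patient></patientRole></recordTarget>
</ClinicalDocument>"""

root = ET.fromstring(ccd)
# Navigate to the patient's family name using namespace-qualified paths.
family = root.find(".//hl7:patient/hl7:name/hl7:family", NS).text
```

Multiply this by every demographic field, medication, allergy, and lab result, across every vendor’s idiosyncratic rendering of the standard, and the scale of ZibdyHealth’s parsing investment becomes clear.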

I see ZibdyHealth as one of the early explorers who have to hew a path through the forest to reach their goal. As more individuals come to appreciate the benefits of such services, roads will be paved. Each patient who demands that their doctor make it easy to connect with an application like ZibdyHealth will bring closer the day when we won’t have to contort ourselves to share data.

When Providing a Health Service, the Infrastructure Behind the API is Equally Important

Posted on May 2, 2016 | Written by Andy Oram

In my ongoing review of application programming interfaces (APIs) as a technical solution for offering rich and flexible services in health care, I recently ran into two companies that showed as much enthusiasm for the internal technologies behind their APIs as for the APIs themselves. APIs are no longer a novelty in health services, as they were just five years ago. As the field gets crowded, maintenance and performance take on more critical roles in building a successful business–so let’s see how Orion Health and Mana Health back up their very different offerings.

Orion Health

This is a large analytics firm that has staked a major claim in the White House’s Precision Medicine Initiative. Orion Health’s data platform, Amadeus, addresses population health management as well as “considering how they can better tailor care for each chronically ill individual,” as put by Dave Bennett, executive vice president for Product & Strategy. “We like to say that population health is the who and precision medicine is the how.” Thus, Amadeus can harmonize a huge variety of inputs, such as how many steps a patient takes each day at home, to prevent readmissions.

Orion Health has a cloud service, a capacity for handling huge data sets such as genomes, and a selection of tools for handling such varied sources as clinical, claims, pharmacy, genetic, and consumer device or other patient-generated data. Environmental and social data are currently being added. It has more than 90 million patient records in its systems worldwide.

Patient matching links up data sets from different providers. All this data is ingested, normalized, and made accessible through APIs to authorized parties. Customers can write their own applications, visualizations, and SQL queries. Amadeus is used by the Centers for Disease Control and Prevention, and many hospitals likewise submit their data to the CDC.
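As an illustration of what patient matching means at its simplest, here is a hedged sketch of deterministic matching: linking records from two providers on a normalized last name plus date of birth. Production systems, Orion Health’s presumably included, use far more sophisticated probabilistic techniques, and the records below are invented:

```python
# Naive deterministic patient matching across two providers' record sets.

def match_key(rec):
    """Normalize the fields used for linkage."""
    return (rec["last"].strip().lower(), rec["dob"])

provider_a = [{"first": "Robert", "last": "Smith", "dob": "1950-02-01", "a1c": 7.2}]
provider_b = [{"first": "Bob", "last": "Smith ", "dob": "1950-02-01", "ldl": 130}]

# Index one provider's records by key, then probe with the other's.
index = {match_key(r): r for r in provider_a}
linked = [(index[match_key(r)], r)
          for r in provider_b if match_key(r) in index]
```

Even this toy example shows why matching is delicate: the same person appears as “Robert” at one provider and “Bob” at the other, so a key that included the first name would have missed the link.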

So far, Orion Health resembles some other big initiatives that major companies in the health care space are offering. I covered services from Philips in a recent article, and another site talks about GE. Bennett says that Orion Health really distinguishes itself through the computing infrastructure that drives the analytics and data access.

Many companies use a conventional relational database as their canonical data store. Relational databases are 1980s-era technology, unmatched in their robustness and in the sophistication of their querying (through the SQL language), but they become a bottleneck at the data sizes that health analytics deals with.

Over the past decade, every industry that needs to handle enormous, streaming sets of data has turned to a variety of data stores known collectively as NoSQL. Ironically, these are often conceptually simpler than SQL databases and have roots going much farther back in computing history (key/value stores, for instance). But these data stores let organizations run a critical subset of queries in real time over huge data sets. In addition, analytics are carried out by MapReduce algorithms and newer in-memory services such as Spark. As an added impetus for development, these new technologies are usually free and open source software.
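To make the MapReduce idea concrete, here is a toy version in plain Python that counts emergency-room visits per patient across shards of event records. A framework such as Spark runs the same map and reduce steps in parallel across a cluster; the data and field names here are invented:

```python
# Toy MapReduce: count ER visits per patient over sharded event data.
from collections import defaultdict

shards = [
    [("patient-1", "er_visit"), ("patient-2", "office_visit")],
    [("patient-1", "er_visit"), ("patient-1", "office_visit")],
]

# Map: emit a (patient, 1) pair for every ER visit in every shard.
mapped = [(pid, 1) for shard in shards
          for pid, kind in shard if kind == "er_visit"]

# Shuffle and reduce: sum the counts for each patient.
counts = defaultdict(int)
for pid, n in mapped:
    counts[pid] += n
```

The map step needs no knowledge of other shards, which is exactly what lets a cluster process each shard on a different machine before the reduce step combines the results.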

Amadeus itself stores data in Cassandra, one of the most mature NoSQL data stores, and uses Spark for processing. According to Bennett, “Spark enables Amadeus to future proof healthcare organizations for long term innovation. Bringing data and analytics together in the cloud allows our customers to generate deeper insights efficiently and with increased relevancy, due to the rapidity of the analytics engine and the streaming of current data in Amadeus. All this can be done at a lower cost than traditional healthcare analytics that move the data from various data warehouses that are still siloed.” Elasticsearch is also used. In short, the third-party tools used within Orion Health are ordinary and commonly found; the platform is simply modern in the same way as computing facilities in other industries: everyone does it this way.

Mana Health

This company integrates device data into EHRs and other data stores. It achieved fame when it was chosen for the New York State patient portal. According to Raj Amin, co-founder and Executive Chairman, the company won over the judges with the convenient and slick tile concept in their user interface. Each tile could be clicked to reveal a deeper level of detail in the data. The company tries to serve clinicians, patients, and data analysts alike. Clients include HIEs, health systems, medical device manufacturers, insurers, and app developers.

Like Orion Health, Mana Health is very conscious of staying on the leading edge of technology. They are mobile-friendly and architect their solutions using microservices, a popular form of modular development that attempts to maximize flexibility in coding and deploying new services. On a lark, they developed a VR engine compatible with the Oculus Rift to showcase what can creatively be built on their API. Although this Rift project has no current uses, the development effort helps them stay flexible so that they can adapt to whatever new technologies come down the pike.

Because Mana Health developed their API some eighteen months ago, they predated some newer approaches and standards. They plan to offer compatibility with emerging standards, such as FHIR, as those see industry adoption. The company was recently announced as a partner in the CommonWell Health Alliance, a project formed by a wide selection of major EHR vendors to pursue interoperability.

To support machine learning, Mana Health stores data in an open source database called Neo4j. This is a very unusual technology called a graph database, whose history and purposes I described two years ago.

Graphs are familiar to anyone who has seen airline maps showing the flights between cities. Graphs are also common for showing social connections, such as your friends-of-friends on Facebook. In health care as well, graphs are useful tools: they show relationships, and they are much better than relational databases at tracing connections between people or other entities. For instance, a team led by health IT expert Fred Trotter used Neo4j to store and query the data in DocGraph, linking primary care physicians to the specialists to whom they refer patients.
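A DocGraph-style referral question (“which specialists can a patient reach, directly or indirectly, from this primary care physician?”) reduces to following edges in a graph. Here is a small illustrative sketch using a plain adjacency dictionary with made-up providers; a graph database such as Neo4j answers the same question without hand-written traversal code:

```python
# Directed referral graph: each provider maps to those they refer to.
referrals = {
    "dr_pcp": ["dr_cardiologist", "dr_endocrinologist"],
    "dr_cardiologist": ["dr_surgeon"],
}

def reachable(graph, start):
    """All providers reachable from `start` by following referral edges."""
    seen, stack = set(), list(graph.get(start, []))
    while stack:
        doc = stack.pop()
        if doc not in seen:
            seen.add(doc)
            stack.extend(graph.get(doc, []))
    return seen
```

In a relational database, each hop in this traversal becomes another self-join, which is why multi-hop relationship queries are the sweet spot for graph stores.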

In their unique ways, Mana Health and Orion Health follow trends in the computing industry and judiciously choose tools that offer new forms of access to data, while being proven in the field. Although commenters in health IT emphasize the importance of good user interfaces, infrastructure matters too.

Our Uncontrolled Health Care Costs Can Be Traced to Data and Communication Failures (Part 2 of 2)

Posted on April 13, 2016 | Written by Andy Oram

The previous section of this article provided whatever detail I could find on the costs of poor communications and data exchange among health care providers. But in truth, it’s hard to imagine the toll taken by communications failures beyond certain obvious consequences, such as repeated tests and avoidable medical errors. One has to think about how the field operates and what we would be capable of with proper use of data.

As patients move from PCP to specialist, from hospital to rehab facility, and from district to district, their providers need not only discharge summaries but intensive coordination to prevent relapses. Our doctors are great at fixing a diabetic episode or heart-related event. Where we fall down is on getting the patient the continued care she needs, ensuring she obtains and takes her medication, and encouraging her to make the substantial lifestyle changes that can prevent recurrences. Modern health care really is all about collaboration–but doctors are decades behind the times.

Clinicians were largely unprepared to handle the new patients brought to them by the Affordable Care Act. Examining the impact of new enrollees, who “have higher rates of disease and received significantly more medical care,” an industry spokesperson said, “The findings underscore the need for all of us in the health care system, and newly insured consumers, to work together to make sure that people get the right health care service in the right care setting and at the right time…Better communication and coordination is needed so that everyone understands how to avoid unnecessary emergency room visits, make full use of primary care and preventive services and learn how to properly adhere to their medications.” Just where the health providers fall short.

All these failures to communicate may explain the disappointing performance of patient centered medical homes and Accountable Care Organizations. While many factors go into the success or failure of such complex practices, a high rate of failure suggests that they’re not really carrying out the coordinated care they were meant to deliver. Naturally, problems persist in getting data from one vendor’s electronic health record to another.

Urgent care clinics, and other alternative treatment facilities offered in places such as pharmacies, can potentially lower costs, but not if the regular health system fails to integrate them.

Successes in coordinated care show how powerful it can be. Even so simple a practice as showing medical records to patients can improve care, but most clinicians still deny patients access to their data.

One care practice drastically lowered ER admissions through a notably low-tech policy–referring their patients to a clinic for follow-up care. This is only the beginning of what we could achieve. If modern communications were in place, hospitals would be linked so that a CDC warning could go to all of them instantly. And if clinicians and their record systems were set up to handle patient-generated data, they could discover a lot more about their patients and monitor behavior change.

How are the hospitals and clinics responding to this crisis and the public pressure to shape up? They push back as if it were not their problem. They claim they are moving toward better information sharing and teamwork, but never get there.

One of their favorite gambits is to ask the government to reward them for achieving interoperability 90 days out of the year. They make this request with no groveling, no tears of shame, no admission that they have failed in their responsibility to meet reasonable goals set seven years ago. If I delivered my projects only 25% of the time, I’d have trouble justifying myself to my employer, especially if I received my compensation plan seven years ago. Could the medical industry imagine that it owes us a modicum of effort?

Robert Schultz, a writer and entrepreneur in health care, says, “Underlying the broken communications model is a lack of empathy for the ultimate person affected–the patient. Health care is one of the few industries where the user is not necessarily the party paying for the product or service. Electronic health records and health information exchanges are designed around the insurance companies, accountable care organizations, or providers, instead of around understanding the challenges and obstacles that patients face on a daily basis. (There are so many!) The innovators who understand the role of the patient in this new accountable care climate will be winners. Those who suffer from the burden of legacy will continue to see the same problems and will become eclipsed by other organizations who can sustain patient engagement and prove value within accountable care contracts.”

Alternative factors

Of course, after such a provocative accusation, I should consider the other contributors that are often blamed for increasing health care costs.

An aging population

Older people have more chronic diseases, a trend that is straining health care systems from Cuba to Japan. This demographic reality makes intelligent data use even more important: remote monitoring for chronic conditions, graceful care transitions, and patient coordination.

The rising cost of drugs

Dramatically increasing drug prices are certainly straining our payment systems. Doctors who took research seriously could be pushing back against patient requests for drugs that work more often in TV ads than in real life. Doctors could look at holistic pain treatments such as yoga and biofeedback, instead of launching the worst opiate addiction crisis America has ever had.

Government bureaucracy

This seems to be a condition of life we need to deal with, like death and taxes. True, the Centers for Medicare & Medicaid Services (CMS) keeps adding requirements for data to report. But much of it could be automated if clinical settings adopted modern programming practices. Furthermore, this data appears to be a burden only because it isn’t exploited. Most of it is quite useful, and it just takes agile organizations to query it.

Intermediaries

Reflecting the Byzantine complexity of our payment systems, a huge number of middlemen–pharmacy benefits managers, medical billing clearinghouses, even the insurers themselves–enter the system, each taking its cut of the profits. Single-payer insurance has long been touted as a solution, but I’d rather push for better and cheaper treatments than attack the politically entrenched payment system.

Under-funded public health

Poverty, pollution, stress, and other external factors have huge impacts on health. This problem isn’t about clinicians, of course; it’s about all of us. But clinicians could be doing more to document these factors and intervene to improve them.

Clinicians like to point to barriers in their way of adopting information-based reforms, and tell us to tolerate the pace of change. But like the rising seas of climate change, the bite of health care costs will not tolerate complacency. The hard part is that merely wagging fingers and imposing goals–the ONC’s primary interventions–will not produce change. I think that reform will happen in pockets throughout the industry–such as the self-insured employers covered in a recent article–and eventually force incumbents to evolve or die.

The precision medicine initiative, and numerous databases being built up around the country with public health data, may contribute to a breakthrough by showing us the true quality of different types of care, and helping us reward clinicians fairly for treating patients of varying needs and risk. The FHIR standard may bring electronic health records in line. Analytics, currently a luxury available only to major health conglomerates, will become more commoditized and reach other providers.

But clinicians also have to do their part, and start acting like the future is here now. Those who make a priority of data sharing and communication will set themselves up for success long-term.

Our Uncontrolled Health Care Costs Can Be Traced to Data and Communication Failures (Part 1 of 2)

Posted on April 12, 2016 | Written by Andy Oram

A host of scapegoats, ranging from the Affordable Care Act to unscrupulous pharmaceutical companies, have been blamed for the rise in health care costs that are destroying our financial well-being, our social fabric, and our political balance. In this article I suggest a more appropriate target: the inability of health care providers to collaborate and share information. To some extent, our health care crisis is an IT problem–but with organizational and cultural roots.

It’s well known that large numbers of patients have difficulty with costs, and that employees’ share of the burden is rising. We’re going to have to update the famous Rodney Dangerfield joke:

My doctor said, “You’re going to be sick.” I said I wanted a second opinion. He answered, “OK, you’re going to be poor too.”

Most of us know about the insidious role of health care costs in holding down wages, in the fight by Wisconsin Governor Scott Walker over pensions that tore the country apart, in crippling small businesses, and in narrowing our choice of health care providers. Not all realize, though, that the crisis is leaching through the health care industry as well, causing hospitals to fail, insurers to push costs onto subscribers and abandon the exchanges where low-income people get their insurance, co-ops to close, and governments to throw people off of subsidized care, threatening the very universal coverage that the ACA aimed to achieve.

Lessons from a ground-breaking book by T.R. Reid, The Healing of America, suggest that we’re undergoing a painful transition that every country has traversed to achieve a rational health care system. Like us, other countries started by committing themselves to universal health care access. That commitment then creates pressure to control costs, along with the opportunities for coordination and economies of scale that eventually institute those controls. Solutions will take time, but we need to be smart about where to focus our efforts.

Before even the ACA, the 2009 HITECH act established goals of data exchange and coordinated patient care. But seven years later, doctors still lag in:

  • Coordinating with other providers treating the patients.

  • Sending information that providers need to adequately treat the patients.

  • Basing treatment decisions on evidence from research.

  • Providing patients with their own health care data.

We’ll look next at the reports behind these claims, and at the effects of the problems.

Why doctors don’t work together effectively

A recent report released by the ONC, which I covered in an earlier article, revealed the poor state of data sharing after decades of Health Information Exchanges and four years of Meaningful Use. Health IT observers expect interoperability to remain a challenge, even as changes in technology, regulations, and consumer action push providers to achieve it.

If merely exchanging documents is so hard–and often unachieved–patient-focused, coordinated care is clearly impossible. Integrating behavioral care to address chronic conditions will remain a fantasy.

Evidence-based medicine is also more of an aspiration than a reality. Research is not always trustworthy, but we must have more respect for the science than hospitals were found to have in a recent GAO report. They fail to collect data either on the problems leading to errors or on the efficacy of solutions. There are incentive programs from payers, but no one knows whether they help. Doctors are still ordering far too many unnecessary tests.

Many companies in the health analytics space offer services that can bring more certainty to the practice of medicine, and I often cover them in these postings. Although increasingly cited as a priority, analytical services are still adopted by only a fraction of health care providers.

Patients across the country are suffering from disrupted care as insurers narrow their networks. It may be fair to force patients to seek less expensive providers–but not when all their records get lost during the transition. This is all too likely in the current non-interoperable environment. Of course, redundant testing and treatment errors caused by ignorance could erase the gains of going to low-cost providers.

Some have bravely tallied up the costs of waste and lack of care coordination in health care. Some causes, such as fraud and price manipulation, are not attributable to the health IT failures I describe. But an enormous chunk of costs directly implicate communications and data handling problems, including administrative overhead. The next section of this article will explore what this means in day-to-day health care.

How Twine Health Found a Successful Niche for a Software Service in Health Care

Posted on April 1, 2016 | Written by Andy Oram

Apps and software services for health care are proliferating–challenges and hackathons come up with great ideas week after week, and the app store contains hundreds of thousands of apps. The hard thing is creating a business model that sustains a good idea. To this end, health care incubators bring in clinicians to advise software developers. Numerous schemes of questionable ethics abound among apps (such as collecting data on users and their contacts). In this article, I’ll track how Twine Health tried different business models and settled on the one that is producing impressive growth for them today.

Twine Health is a comprehensive software platform where patients and their clinicians can collaborate efficiently between visits to achieve agreed-upon goals. Patients receive support in a timely manner, including motivation for lifestyle changes and expertise for medication adjustments. I covered the company in a recent article that showed how the founders ran some clinical studies demonstrating the effectiveness of their service. Validation is perhaps the first step for any developer with a service or app they think could be useful. Randomized controlled trials may not be necessary, but you need to find out from potential users what they want to see before they feel secure prescribing, paying for, and using your service. Validation will differentiate you from the hordes of look-alike competitors with whom you’ll share your market.

Dr. John Moore, co-founder of Twine Health, felt in 2013 that it was a good time to start a company, because the long-awaited switch in US medicine from fee-for-service to value-based care was starting to take root. Blue Cross and Blue Shield were encouraging providers to switch to Alternative Quality Contracts. The Affordable Care Act of 2010 created the Medicare Shared Savings Program, which led to Accountable Care Organizations.

The critical role played by value-based care has been explained frequently in the health care space. Current fee-for-service programs pay only for face-to-face visits and occasionally for telehealth visits. The routine daily interventions of connected health, such as text messages, checks of vital signs, and supportive prompts, receive no remuneration. The long-term improvements of connected health receive no support in the fee-for-service model, as much as individual clinicians may wish to promote positive behavior among their patients.

Thus, Twine Health launched in 2014 with a service for clinicians. What they found, unfortunately, is that the hype about value-based care had gotten way ahead of its actual progress. The risk sharing undertaken by Accountable Care Organizations, such as those in the Medicare Shared Savings Program, wasn't a full commitment to delivering value, as it would be if clinicians received full capitation for a population and were required to deliver minimum health outcomes. Instead, the organizations were still billing fee-for-service. Medicare compared their spending to a budget at the end of the year, and, if an organization accrued less fee-for-service billing than Medicare expected, it got back 50-60% of the savings. In the lowest track of the program, the organization wasn't even penalized for exceeding costs; it was simply rewarded for beating the estimates.
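The settlement arithmetic behind this incentive structure can be sketched in a few lines. This is a simplified illustration, not the actual CMS methodology: the function name, the flat sharing rate, and the round dollar figures are all hypothetical, and real MSSP settlements involve minimum savings rates and benchmark adjustments omitted here.

```python
def shared_savings(benchmark, actual_spending, sharing_rate=0.5, two_sided=False):
    """Return a simplified ACO settlement: positive = bonus, negative = penalty."""
    savings = benchmark - actual_spending
    if savings >= 0:
        # The ACO keeps a fraction (50-60% in the article's figures) of what it saved.
        return savings * sharing_rate
    # In the lowest (one-sided) track, overspending carries no penalty.
    return savings * sharing_rate if two_sided else 0.0

# An ACO that beats a $10M benchmark by $800K keeps $400K at a 50% sharing rate.
print(shared_savings(10_000_000, 9_200_000))   # 400000.0
# Overspending in the one-sided track costs the ACO nothing.
print(shared_savings(10_000_000, 10_500_000))  # 0.0
```

The asymmetry in the one-sided case is the point: the organization can only gain, which helps explain why clinicians in these ACOs kept practicing under the familiar fee-for-service habits.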

In short, Twine Health found that clinicians in ACOs in 2014 were following the old fee-for-service model and that Twine Health’s service was not optimal for their everyday practices. A recent survey from the NEJM Catalyst Insights Council showed that risk sharing and quality improvement are employed in a minority of health care settings, and are especially low in hospitals.

Collaborative care requires a complete rededication of time and resources. One must be willing to track one’s entire patient panel on a regular basis, guiding them toward ongoing behavior modification in the context of their everyday lives, with periodic office visits every few months. One also needs to go beyond treating symptoms and learn several skills of a very different type that traditional clinicians haven’t been taught: motivational interviewing, shared decision making, and patient self-management.

Over a period of months, a new model for Twine's role in healthcare delivery started to become apparent: progressive, self-insured employers were turning their attention to value-based care and taking matters into their own hands because of escalating healthcare costs. They were moving much more quickly than ACOs and taking on much greater risk.

The employers were contracting with innovative healthcare delivery organizations, which were building on-site primary care clinics (exclusive to that employer and located right at the place of work), near-site primary care clinics (shared across multiple employers), wellness and chronic disease coaching programs, etc. Unlike traditional healthcare providers, the organizations providing services to self-insured employers were taking fully capitated payments and, therefore, full risk for their set of services. Ironically, some of the self-insured employers were actually large health systems whose own business models still involved mostly fee-for-service payments.

With on-site clinics, wellness and chronic disease coaching organizations, and self-insured employers, Twine Health has found a firm and growing customer base. Dr. Moore is confident that the healthcare industry is on the cusp of broadly adopting value-based care. Twine Health and other connected health providers will be able to increase their markets vastly by working with traditional providers and insurers. But the Twine Health story is a lesson in how each software developer must find the right angle, the right time, and the right foothold to succeed.