How Twine Health Found a Successful Niche for a Software Service in Health Care

Posted on April 1, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Apps and software services for health care are proliferating–challenges and hackathons come up with great ideas week after week, and the app store contains hundreds of thousands of apps. The hard thing is creating a business model that sustains a good idea. To this end, health care incubators bring in clinicians to advise software developers. Numerous schemes of questionable ethics abound among apps (such as collecting data on users and their contacts). In this article, I’ll track how Twine Health tried different business models and settled on the one that is producing impressive growth for them today.

Twine Health is a comprehensive software platform where patients and their clinicians can collaborate efficiently between visits to achieve agreed-upon goals. Patients receive support in a timely manner, including motivation for lifestyle changes and expertise for medication adjustments. I covered the company in a recent article that showed how the founders ran some clinical studies demonstrating the effectiveness of their service. Validation is perhaps the first step for any developer with a service or app they think could be useful. Randomized controlled trials may not be necessary, but you need to find out from potential users what they want to see before they feel secure prescribing, paying for, and using your service. Validation will differentiate you from the hordes of look-alike competitors with whom you’ll share your market.

Dr. John Moore, co-founder of Twine Health, felt in 2013 that it was a good time to start a company, because the long-awaited switch in US medicine from fee-for-service to value-based care was starting to take root. Blue Cross and Blue Shield were encouraging providers to switch to Alternative Quality Contracts. The Affordable Care Act of 2010 created the Medicare Shared Savings Program, which led to Accountable Care Organizations.

The critical role played by value-based care has been explained frequently in the health care space. Current fee-for-service programs pay only for face-to-face visits and occasionally for telehealth visits. The routine daily interventions of connected health, such as text messages, checks of vital signs, and supportive prompts, receive no remuneration. The long-term improvements of connected health receive no support in the fee-for-service model, as much as individual clinicians may wish to promote positive behavior among their patients.

Thus, Twine Health launched in 2014 with a service for clinicians. What they found, unfortunately, is that the hype about value-based care had gotten way ahead of its actual progress. The risk sharing taken on by Accountable Care Organizations, such as under the Medicare Shared Savings Program, wasn’t a full commitment to delivering value, as when clinicians receive full capitation for a population and are required to deliver minimum health outcomes. Instead, the organizations were still billing fee-for-service. Medicare compared their spending to a budget at the end of the year, and, if the organization accrued less fee-for-service billing than Medicare expected, the organization got back 50-60% of the savings. In the lowest track of the program, the organization didn’t even get penalized for exceeding costs–it was just rewarded for beating the estimates.

In short, Twine Health found that clinicians in ACOs in 2014 were following the old fee-for-service model and that Twine Health’s service was not optimal for their everyday practices. A recent survey from the NEJM Catalyst Insights Council showed that risk sharing and quality improvement are employed in a minority of health care settings, with adoption especially low in hospitals.

Collaborative care requires a complete rededication of time and resources. One must be willing to track one’s entire patient panel on a regular basis, guiding them toward ongoing behavior modification in the context of their everyday lives, with periodic office visits every few months. One also needs to go beyond treating symptoms and learn several skills of a very different type that traditional clinicians haven’t been taught: motivational interviewing, shared decision making, and patient self-management.

Over a period of months, a new model for Twine’s role in healthcare delivery started to become apparent: progressive, self-insured employers were turning their attention to value-based care and taking matters into their own hands because of escalating healthcare costs. They were moving much quicker than ACOs and taking on much greater risk.

The employers were contracting with innovative healthcare delivery organizations, which were building on-site primary care clinics (exclusive to that employer and located right at the place of work), near-site primary care clinics (shared across multiple employers), wellness and chronic disease coaching programs, etc. Unlike traditional healthcare providers, the organizations providing services to self-insured employers were taking fully capitated payments and, therefore, full risk for their set of services. Ironically, some of the self-insured employers were actually large health systems whose own business models still involved mostly fee-for-service payments.

With on-site clinics, wellness and chronic disease coaching organizations, and self-insured employers, Twine Health has found a firm and growing customer base. Dr. Moore is confident that the healthcare industry is on the cusp of broadly adopting value-based care. Twine Health and other connected health providers will be able to increase their markets vastly by working with traditional providers and insurers. But the Twine Health story is a lesson in how each software developer must find the right angle, the right time, and the right foothold to succeed.

Harvard Law Conference Surveys Troubles With Health Care

Posted on March 30, 2016 | Written By Andy Oram

It is salubrious to stretch oneself and regularly attend a conference in a related field. At the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, one can bask in the wisdom of experts who are truly interdisciplinary (as opposed to people like me, who are simply undisciplined). Their Tenth Anniversary Conference drew about 120 participants. The many topics–which included effects of the Supreme Court rulings on the Affordable Care Act and other cases, reasons that accountable care and other efforts haven’t lowered costs, stresses on the pharmaceutical industry, and directions in FDA regulation–contained several insights for health IT professionals.

From my perspective, the center of the conference was the panel titled “Health Innovation Policy and Regulating Novel Technology.” A better title might have been “How to Make Pharma Profitable Again,” because most of the panelists specialized in pharmaceuticals or patents. They spun out long answers to questions about how well patents can protect innovation (recognizing a controversy); the good, the bad, and the ugly of pricing; and how to streamline clinical trials, possibly adding risk. Their pulses really rose when they were asked a question about off-label drug use. But they touched on health IT and suggested many observations that could apply to it as well.

It is well known that drug development and regulatory approval take years–perhaps up to 20 years–and that high-tech companies developing fitness devices or software apps have a radically different product cycle. As one panelist pointed out, it would kill innovation to require renewed regulatory approval for each software upgrade. He suggested that the FDA define different tiers of changes, and that minor ones with little risk of disrupting care be allowed automatically.

I would go even further. It is also well known that disruptive inventions displace established technologies. Just as people with mobile devices get along without desktop computers and even TV sets, medicines have displaced many surgical procedures. Now the medicines themselves (particularly, controversial mental health medicines) can sometimes be replaced by interactive apps and online services. Although rigorous testing is still lacking for most of these alternatives, the biggest barrier to their adoption is lack of reimbursement in our antiquated health payment system.

Instead of trying to individually fix each distortion in payment, value-based care is the reformer’s solution to the field’s inefficient use of treatment options. Value-based care requires more accurate information on quality and effectiveness, as I recently pointed out. And this in turn may lead to the more flexible regulations suggested by the panelist, with a risk that is either unchanged or raised by an amount we can tolerate.

Comparisons between information and other medical materials can be revealing. For instance, as the public found out in the Henrietta Lacks controversy, biospecimens are treated as freely tradable information (so long as the specimen is de-identified) with no patient consent required. It’s assumed that we should treat de-identified patient information the same way, but in fact there’s a crucial difference. No one would expect the average patient to share and copy his own biospecimens, but doing so with information is trivially easy. Therefore, patients should have more of a say about how their information is used, even if biospecimens are owned by the clinician.

Some other insights I picked up from this conference were:

  • Regulations and policies by payers drive research more than we usually think. Companies definitely respond to what payers are interested in, not just to the needs of the patients. One panelist pointed out that the launch of Medicare Part D, covering drugs for the first time, led to big new investments in pharma.

  • Hotels and other service-oriented industries can provide a positive experience efficiently because they tightly control the activities of all the people they employ. Accountable Care Organizations, in contrast, contain loose affiliations and do not force their staff to coordinate care (even though that was the ideal behind their formation), and therefore cannot control costs.

  • Patents, which the pharma companies consider so important to their business model, are not normally available for diagnostic tests. (The attempt by Myriad Genetics to patent the BRCA1 gene in order to maintain a monopoly over testing proves this point: the Supreme Court overturned the patent.) However, as tests get more complex, the FDA has started regulating them. This has the side effect of boosting the value of tests that receive approval, an advantage over competitors.

Thanks to Petrie-Flom for generously letting the public in on events with such heft. Perhaps IT can make its way deeper into next year’s conference.

Research Shows that Problems with Health Information Exchange Resist Cures (Part 2 of 2)

Posted on March 23, 2016 | Written By Andy Oram

The previous part of this article introduced problems with health information exchange (HIE) identified by two reports: one from the Office of the National Coordinator and another from experts at the Oregon Health & Science University. Tracing the causes of these problems is necessarily somewhat speculative, but the research helps to confirm impressions I have built up over the years.

The ONC noted that developing HIE is very resource intensive, and not yet sustainable. (p. 6) I attribute these problems to the persistence of the old-fashioned, heavyweight model of bureaucratic, geographically limited organizations hooking together clinicians. (If you go to another state, better carry your medical records with you.) Evidence of their continued drag on the field appeared in the report:

Grantees found providers did not want to login to “yet another system” to access data, for example; if information was not easily accessible, providers were not willing to divert time and attention from patients. Similarly, if the system was not user friendly and easy to navigate, or if it did not effectively integrate data into existing patient records, providers abandoned attempts to obtain data through the system. (pp. 76-77)

The Oregon researchers in the AHRQ webinar also confirmed that logging in tended to be a hassle.

Hidden costs further jacked up the burden of participation (p. 72). But even though HIEs already suck up unsustainable amounts of money for little benefit, “Informants noted that it will take many years and significantly more funding and resources to fully establish HIE.” (p. 62) “The paradox of HIE activities is that they need participants but will struggle for participants until the activities demonstrate value. More evidence and examples of HIE producing value are needed to motivate continued stakeholder commitment and investment.” (p. 65)

The adoption of the Direct protocol apparently hasn’t fixed these ongoing problems; hopefully FHIR will. The ONC hopes that, “Open standards, interfaces, and protocols may help, as well as payment structures rewarding HIE.” (p. 7) Use of Direct did increase exchange (p. 56), and directory services are also important (pp. 59-60). But “Direct is used mostly for ADT notifications and similar transitional documents.” (p. 35)

One odd complaint was, “While requirements to meet Direct standards were useful for some, those standards detracted attention from the development of query-based exchange, which would have been more useful.” (p. 77) I consider this observation to be a red herring, because Direct is simply a protocol, and the choice to use it for “push” versus “pull” exchanges is a matter of policy.
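
To make the push/pull distinction concrete, here is a minimal sketch of query-based (“pull”) exchange over FHIR’s REST API, contrasted with Direct’s push model. The endpoint URL and patient ID are hypothetical, and real servers require authentication (typically OAuth2 via SMART on FHIR), which is omitted here for brevity.

```python
import requests

FHIR_BASE = "https://hie.example.org/fhir"  # hypothetical HIE endpoint

def pull_recent_labs(patient_id: str) -> list:
    """Query ("pull") a patient's lab results from a FHIR server on demand."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory", "_sort": "-date"},
        headers={"Accept": "application/fhir+json"},
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Direct, by contrast, is essentially secure email (SMTP plus S/MIME):
# the sender decides when to push a document such as an ADT notification,
# and the recipient has no way to query for exactly what it needs.
```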

But even with better protocols, we’ll still need to fix the mismatch of the data being exchanged: “…the majority of products and provider processes do not support LOINC and SNOMED CT. Instead, providers tended to use local codes, and the process of mapping these local codes to LOINC and SNOMED CT codes was beyond the capacity of most providers and their IT departments.” (p. 77) This shows that the move to FHIR won’t necessarily improve semantic interoperability, unless FHIR requires the use of standard codes.
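
For readers who haven’t wrestled with terminology mapping, here is a toy illustration of the work the report describes. The local codes are invented; the LOINC codes shown are real examples, but a production map covers thousands of entries and needs clinical review.

```python
# Toy terminology map: site-specific lab codes -> LOINC.
LOCAL_TO_LOINC = {
    "LAB_HGBA1C": "4548-4",    # Hemoglobin A1c/Hemoglobin.total in Blood
    "LAB_GLU_FAST": "1558-6",  # Fasting glucose [Mass/volume] in Serum or Plasma
}

def to_loinc(local_code: str) -> str:
    """Translate a local lab code to LOINC, or flag it as unmapped."""
    try:
        return LOCAL_TO_LOINC[local_code]
    except KeyError:
        # Unmapped codes are the normal case the report complains about:
        # each one needs manual curation by someone who knows both vocabularies.
        raise ValueError(f"No LOINC mapping for local code {local_code!r}")
```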

Trust among providers remains a problem (p. 69), as does data quality (pp. 70-71). But some informants put attitude above all: “Grantees questioned whether HIE developers and HIE participants are truly ready for interoperability.” (p. 71)

It’s bad enough that core health care providers–hospitals and clinics–make little use of HIE. But a wide range of other institutions who desperately need HIE have even less of it. “Providers not eligible for MU incentives consistently lag in HIE connectivity. These settings include behavioral health, substance abuse, long-term care, home health, public health, school-based settings, corrections departments, and emergency medical services.” (p. 75) The AHRQ webinar found very limited use of HIE for facilities outside the Meaningful Use mandate, such as nursing homes (Long Term and Post Acute Care, or LTPAC). Health information exchange was used 10% to 40% of the time in those settings.

The ONC report includes numerous recommendations for continuing the growth of health information exchange. Most of these are tweaks to bureaucratic institutions responsible for promoting HIE. These are accompanied by the usual exhortations to pay for value and improve interoperability.

But six years into the implementation of HITECH–and after the huge success of its initial goal of installing electronic records, which should have served as the basis for HIE–one gets the impression that the current industries are not able to take to the dance floor together. First, ways of collecting and sharing data are based on a 1980s model of health care. Second, even by that standard, none of the players in the space–vendors, clinicians, and HIE organizations–is thinking systematically.

Research Shows that Problems with Health Information Exchange Resist Cures (Part 1 of 2)

Posted on March 22, 2016 | Written By Andy Oram

Given that the Office of the National Coordinator for Health Information Technology (ONC) received 564 million dollars in the 2009 HITECH Act to promote health information exchange, one has to give the agency credit for carrying out a thorough evaluation of progress in that area. The results? You don’t want to know.

There are certainly glass-full as well as glass-empty indications in the 98-page report that the ONC just released. But I feel that failure dominated. Basically, there has been a lot of relative growth in the use of HIE, but the starting point was so low that huge swaths of the industry remain untouched by HIE.

Furthermore, usage is enormously skewed:

In Q2 2012, for example, three states (Indiana, Colorado, and New York) accounted for over 85 percent of total directed transactions; in Q4 2013, five states (Michigan, Colorado, Indiana, New York, and Vermont) accounted for over 85 percent of the total. Similarly, in Q2 2012 a single state (Indiana) accounted for over 65 percent of total directed transactions; in Q4 2013, four states (California, Indiana, Texas, and New York) accounted for over 65 percent of the total. (p. 42)

This is a pretty empty glass, with the glass-full aspect being that if some states managed to achieve high levels of participation, we should be able to do so everywhere. But we haven’t done it yet.

Why health information exchange is crucial

As readers know, health costs are eating up more and more of our income (in the US as well as elsewhere, thanks to aging populations and increasing chronic disease). Furthermore, any attempt to stem the problem requires coordinated care and long-term thinking. But the news in these areas has been disappointing as well. For instance:

  • Patient centered medical homes (PCMH) are not leading to better outcomes. One reason may be the limited use of health information exchange, because the success of treating a person in his own habitat depends on careful coordination.

  • Accountable Care Organizations are losing money and failing to attract new participants. A cynical series of articles explores their disappointing results. I suspect that two problems account for this: first, they have not made good use of health information exchange, and second, risk sharing is minimal and not extensive enough to cause a thoroughgoing change to long-term care.

  • Insurers are suffering too, because they have signed up enormous numbers of sick patients under the Affordable Care Act. The superficial adoption of fee-for-value and the failure of clinicians to achieve improvements in long-term outcomes are bankrupting the payers and pushing costs more and more onto ordinary consumers.

With these dire thoughts in mind, let’s turn to HIE.

HIE challenges and results

The rest of this article summarizes the information I find most salient in the ONC report, along with some research presented in a recent webinar by the Agency for Healthcare Research and Quality (AHRQ) on this timely topic. (The webinar itself hasn’t been put online yet.)

The ONC report covers the years 2011-2014, so possibly something momentous has happened over the past year to change the pattern. But I suspect that substantial progress will have to wait for widespread implementation of FHIR, which is too new to appear in the report.

You can read the report and parse the statistics until you get a headache, but I will cite just one more passage about the rate of HIE adoption in order to draw a broad conclusion.

As of 2015, the desire for actionable data, focus on MU 2 priorities, and exchange related to delivery system reform is in evidence. Care summary exchange rates facilitated through HIOs are high—for example, care record summaries (89%); discharge summaries (78%); and ambulatory clinical summaries (67%). Exchange rates are also high for test results (89%), ADT alerts (69%), and inpatient medication lists (68%). (p. 34)

What I find notable in the previous quote is that all the things where HIE use improved were things that clinicians have always done anyway. There is nothing new about sending out discharge summaries or reporting test results. (Nobody would take a test if the results weren’t reported–although I found it amusing to receive an email message recently from my PCP telling me to log into their portal to see results, and to find nothing on the portal but “See notes.” The notes, you might have guessed, were not on the portal.)

One hopes that using HIE instead of faxes and phone calls will lower costs and lead to faster action on urgent conditions. But a true leap in care will happen only when HIE is used for close team coordination and patient reporting–things that don’t happen routinely now. One sentence in the report hints at this: “Providers exchanged information, but they did not necessarily use it to support clinical decision-making.” (p. 77) One wonders what good the exchange is.

In the AHRQ webinar, experts from the Oregon Health & Science University reported results of a large literature review, including:

  • HIE reduces the use of lab and radiology tests, as well as emergency department use. This should lead to improved outcomes as well as lower costs, although the literature couldn’t confirm that.

  • Disappointingly, there was little evidence that hospital admissions were reduced, or that medication adherence improved.

  • Two studies claimed that HIE was “associated with improved quality of care” (a very vague endorsement).

In the next section of this article, I’ll return to the ONC report for some clues as to the reasons HIE isn’t working well.

What is Quality in Health Care? (Part 2 of 2)

Posted on February 10, 2016 | Written By Andy Oram

The first part of this article described different approaches to quality–and in fact to different qualities. In this part, I’ll look at the problems with quality measures, and at emerging solutions.

Difficulties of assessing quality

The Methods chapter of a book from the National Center for Biotechnology Information at NIH lays out many of the hurdles that researchers and providers face when judging the quality of clinical care. I’ll summarize a few of the points from the Methods chapter here, but the chapter is well worth a read. The review showed how hard it is to measure accurately many of the things we’d like to know about.

For instance, if variations within a hospital approach (or exceed) the variations between hospitals, there is little benefit to comparing hospitals using that measure. If the same physician gets wildly different scores from year to year, the validity of the measure is suspect. When care is given by multiple doctors and care teams, it is unjust to ascribe the outcome to the patient’s principal caretaker. If random variations outweigh everything, the measure is of no use at all. One must also keep in mind practical considerations, such as making sure the process of collecting data would not cost too much.

Many measures apply to a narrow range of patients (for instance, those with pneumonia) and therefore may be skewed for doctors with a relatively small sample of those patients. And a severe winter could elevate mortality from pneumonia, particularly if patients have trouble getting adequate shelter and heat. In general, “For most outcomes, the impacts of random variation and patient factors beyond providers’ control often overwhelm differences attributable to provider quality.” ACMQ quality measures “most likely cannot definitively distinguish poor quality providers from high quality providers, but rather may illuminate potential quality problems for consideration of further investigation.”
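
Here is a rough sketch, using simulated numbers rather than anything from the chapter, of the signal-to-noise check the authors are describing: if the variance within hospitals rivals the variance between them, the measure cannot reliably rank hospitals.

```python
import statistics

def between_within_ratio(scores_by_hospital):
    """Variance of hospital means (the "signal") divided by the average
    within-hospital variance (the "noise")."""
    means = [statistics.mean(s) for s in scores_by_hospital.values()]
    between = statistics.variance(means)
    within = statistics.mean(
        statistics.variance(s) for s in scores_by_hospital.values()
    )
    return between / within

# Made-up readmission scores for three hospitals, purely for illustration:
scores = {
    "A": [0.18, 0.22, 0.25, 0.15],
    "B": [0.20, 0.16, 0.24, 0.21],
    "C": [0.19, 0.23, 0.17, 0.22],
}
# A ratio near (or below) 1.0 means patient-to-patient and year-to-year
# noise swamps any true quality difference between the hospitals.
print(between_within_ratio(scores))
```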

The chapter helps explain why many researchers fall back on standard of care. Providers don’t trust outcome-based measures because of random variations and factors beyond their control, including poverty and other demographics. It’s hard even to know what contributed to a death, because in the final months it may not have been feasible to complete the diagnoses of a patient. Thus, doctors prefer “process measures.”

Among the criteria for evaluating quality indicators we see, “Does the indicator capture an aspect of quality that is widely regarded as important?” and more subtly, “subject to provider or public health system control?” The latter criterion heeds physicians who say, “We don’t want to be blamed for bad habits or other reasons for noncompliance on the part of our patients, or for environmental factors such as poverty that resist quick fixes.”

The book’s authors are certainly aware of the bias created by gaming the reimbursement system: “systematic biases in documentation and coding practices introduced by awareness that risk-adjustment and reimbursement are related to the presence of particular complications.” The paper points out that diagnosis data is more trustworthy when it is informed by clinical information, not just billing information.

One of the most sensitive–and important–factors in quality assessment is risk adjustment, which means recognizing which patients have extra problems making their care more difficult and their recovery less certain. I have heard elsewhere the claim that CMS doesn’t cut physicians enough slack when they take on riskier patients. Although CMS tries to take poverty into account, hospital administrators suspect that institutions serving low-income populations–and safety-net hospitals in particular–are penalized for doing so.

Risk adjustment criteria are sometimes unpublished. But the most perverse distortion in the quality system comes when hospitals fail to distinguish iatrogenic complications (those introduced by medical intervention, such as infections incurred in the hospital) from the original diseases that the patient brought. CMS recognizes this risk in efforts such as penalties for hospital-acquired conditions. Unless these are flagged correctly, hospitals can end up being rewarded for treating sicker patients–patients that they themselves made sicker.

Distinguishing layers of quality

Theresa Cullen, associate director of the Regenstrief Institute’s Global Health Informatics Program, suggests that we think of quality measures as a stack, like those offered by software platforms (a toy example in code follows the list):

  1. The bottom of the stack might simply measure whether a patient receives the proper treatment for a diagnosed condition. For instance, is the hemoglobin A1C of each diabetic patient measured regularly?

  2. The next step up is to measure the progress of the first measure. How many patients’ A1C was under control for their stage of the disease?

  3. Next we can move to measuring outcomes: improvements in diabetic status, for instance, or prevention of complications from diabetes.

  4. Finally, we can look at the quality of the patient’s life–quality-adjusted life years.
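
Here is the toy example promised above, rendering the first three layers as simple panel-level rates over invented patient records. Real measure specifications (NCQA’s, for example) define populations, timeframes, and exclusions far more carefully.

```python
# Invented diabetic panel; flags correspond to layers 1-3 of the stack.
patients = [
    {"id": 1, "a1c_tested": True,  "a1c_controlled": True,  "complication_free": True},
    {"id": 2, "a1c_tested": True,  "a1c_controlled": False, "complication_free": True},
    {"id": 3, "a1c_tested": False, "a1c_controlled": False, "complication_free": False},
]

def rate(flag):
    """Share of the panel meeting a given layer of the stack."""
    return sum(p[flag] for p in patients) / len(patients)

print(f"Layer 1, process (A1C tested):       {rate('a1c_tested'):.0%}")
print(f"Layer 2, control (A1C in range):     {rate('a1c_controlled'):.0%}")
print(f"Layer 3, outcome (no complications): {rate('complication_free'):.0%}")
# Layer 4, quality-adjusted life years, requires longitudinal data that
# rarely lives in any single clinical system.
```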

Ultimately, to judge whether a quality measure is valid, one has to compare it to some other quality measure that is supposedly trustworthy. We are still searching for measures that we can rely on to prove quality–and as I have already indicated, there may be too many different “qualities” to find ironclad measures. McCallum offers the optimistic view that the US is just beginning to collect the outcomes data that will hopefully give us robust quality measures; patient ratings serve as a proxy in the interim.

When organizations claim to use quality measures for accountable care, ratings, or other purposes, they should keep their eyes open about the validity of those measures and how applicable they are. Better data collection and analysis over time should allow more refined and useful quality measures. We can celebrate each advance in the choices we have for measures and their meanings.

What is Quality in Health Care? (Part 1 of 2)

Posted on February 9, 2016 | Written By Andy Oram

Assessing the quality of medical care is one of the biggest analytical challenges in health today. Every patient expects–and deserves–treatment that meets the highest standards. Moreover, it is hard to find an aspect of health care reform that does not depend on accurate quality measurement. Without a firm basis for assessing quality, how can the government pay Accountable Care Organizations properly? How can consumer choice (the great hope of many reformers) become viable? How can hospitals and larger bodies of researchers become “learning health systems” and implement continuous improvement?

Ensuring quality, of course, is crucial in a fee-for-value system to ensure that physicians don’t cut costs just by withholding necessary care. But a lot of people worry that quality-based reimbursement plans won’t work. As this article will show, determining what works and who is performing well are daunting tasks.

A recent op-ed claims that quality measures are adding unacceptable stress to doctors, that the metrics don’t make a difference to ultimate outcomes, that the variability of individual patients can’t be reflected in the measures, that the assessments don’t take external factors adequately into account, and that the essential element of quality is unmeasurable.

Precision medicine may eventually allow us to tailor treatments to individual patients with unique genetic profiles. But in the meantime, much of the time we prescribe drugs, we’re guessing.

The term quality originally just distinguished things of different kinds, like the Latin word qualis from which it is derived. So there are innumerable different qualities (as in “The quality of mercy is not strained”). It took a while for quality to be seen as a single continuum, as in an NIH book I’ll cite later, which reduces all quality measures to a single number by weighting different measures and combining them. Given the lack of precision in individual measures and the subjective definitions of quality, it may be a fool’s quest to seek a single definition of quality in health care.

Many qualities in play
Some of the ways to measure quality and outcomes include:

  • Longitudinal research: this tracks a group of patients over many years, like the famous Framingham Heart Study that changed medical care. Modern “big data” research carries on this tradition, using data about patients in the field to supplement or validate conventional clinical research. In theory, direct measurement is the most reliable source of data about what works in public health and treatment. Obvious drawbacks include:

    • the time such studies take to produce reliable results

    • the large numbers of participants needed (although technology makes it more feasible to contact and monitor subjects)

    • the risk that unknown variations in populations will produce invalid results

    • inaccuracies introduced by the devices used to gather patient information

  • Standard of care: this is rooted in clinical research, which in turn tries to ensure rigor through double-blind randomized trials. Clinical trials, although the gold standard in research, are hampered by numerous problems of their own, which I have explored in another article. Reproducibility is currently being challenged in health care, as in many other areas of science.

  • Patient ratings: these are among the least meaningful quality indicators, as I recently explored. Patients can offer valuable insights into doctor/patient interactions and other subjective elements of their experience moving through the health care system–insights to which I paid homage in another article–but they can’t dissect the elements of quality care that went into producing their particular outcome, which in any case may require months or years to find out. Although the patient’s experience determines her perception of quality, it does not necessarily reflect the overall quality of care. The most dangerous aspect of patient ratings, as Health IT business consultant Janice McCallum points out, comes when patients’ views of quality depart from best practices. Many patients are looking for a quick fix, whether through pain-killers, antibiotics, or psychotropic medications, when other interventions are called for on the basis of both cost and outcome. So the popularity of ratings among patients just underscores how little we actually know about clinical quality.

Quality measures by organizations such as the American College of Medical Quality (ACMQ) and National Committee for Quality Assurance (NCQA) depend on a combination of the factors just listed. I’ll look more closely at these in the next part of this article.

#HIMSS16: Some Questions I Plan To Ask

Posted on February 1, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

As most readers know, health IT’s biggest annual event is just around the corner, and the interwebz are heating up with discussions about what #HIMSS16 will bring. The show, which will take place in Las Vegas from February 29 to March 4, offers a ludicrously rich opportunity to learn about new HIT developments — and to mingle with more than 40,000 of the industry’s best and brightest (You may want to check out the session Healthcare Scene is taking part in and the New Media Meetup).

While you can learn virtually anything healthcare IT related at HIMSS, it helps to have an idea of what you want to take away from the big event. In that spirit, I’d like to offer some questions that I plan to ask, as follows:

  • How do you plan to support the shift to value-based healthcare over the next 12 months? The move to value-based payment is inevitable now, be it via ACOs or Medicare incentive programs under the Medicare Access and CHIP Reauthorization Act. But succeeding with value-based payment is no easy task. And one of the biggest challenges is building a health IT infrastructure that supports data use to manage the cost of care. So how do health systems and practices plan to meet this technical challenge, and what vendor solutions are they considering? And how do key vendors — especially those providing widely-used EMRs — expect to help?
  • What factors are you considering when you upgrade your EMR? Signs increasingly suggest that this may be the year of the forklift upgrade for many hospitals and health systems. Those that have already invested in massiveware EMRs like Cerner and Epic may be set, but others are ripping out their existing systems (notably McKesson). While in previous years the obvious blue-chip choice was Epic, it seems that some health systems are going with other big-iron vendors based on factors like usability and lower long-term cost of ownership. So, given these trends, how are health systems’ HIT buying decisions shaping up this year, and why?
  • How much progress can we realistically expect to make with leveraging population health technology over the next 12 months? I’m sure that when I travel the exhibit hall at HIMSS16, vendor banners will be peppered with references to their population health tools. In the past, when I’ve asked concrete questions about how they could actually impact population health management, vendor reps got vague quickly. Health system leaders, for their part, generally admit that PHM is still more a goal than a concrete plan.  My question: Is there likely to be any measurable progress in leveraging population health tech this year? If so, what can be done, and how will it help?
  • How much impact will mobile health have on health organizations this year? Mobile health is at a fascinating moment in its evolution. Most health systems are experimenting with rolling out their own apps, and some are working to integrate those apps with their enterprise infrastructure. But to date, it seems that few (if any) mobile health efforts have made a real impact on key areas like management of chronic conditions, wellness promotion and clinical quality improvement. Will 2016 be the year mobile health begins to deliver large-scale, tangible health results? If so, what do vendors and health leaders see as the most promising mHealth models?

Of course, these questions reflect my interests and prejudices. What are some of the questions that you hope to answer when you go to Vegas?

Meaningful Use Holdover Could Be Good News For Healthcare

Posted on January 25, 2016 | Written By Anne Zieger

I know all of us are aflutter about the pending regulatory changes which will phase out Meaningful Use as we know it. And yes, without a doubt, the changes underway will have an impact that extends well beyond the HIT world. But while big shifts are underway in federal incentives programs, it’s worth noting that it could be a while before these changes actually fall into place.

As readers may know, the healthcare industry will be transitioning to working under value-based payment under the Medicare Access and CHIP Reauthorization Act, which passed last year. But as ONC’s Karen DeSalvo noted last week, the transition could take a while. In fact, proposed draft regulations for the MACRA rollout will be released this spring for public comment. When you toss in the time needed for those comments to be submitted, and for the feds to digest those comments and respond, my guess is that MACRA regs won’t go live until late this year at the earliest.

The truth is, this is probably a very good thing. While I don’t have to tell you folks that everyone and their cousin has a Meaningful Use gripe, the truth is that the industry has largely adapted to the MU mindset. Maybe Meaningful Use Stage 3 wouldn’t have provided a lot of jollies, but on the whole, arguably, most providers have come to terms with the level of process documentation required — and have bought their big-bucks EMRs, committing once and for all to the use of digital health records.

Value-based payment, on the other hand, is another thing entirely. From what I’ve read and researched to date, few health organizations have really sunk their teeth into VBP, though many are dabbling. When MACRA regs finally combine the Physician Quality Reporting System, the Value-based Payment Modifier and the Medicare EHR incentive program into a single entity, providers will face some serious new challenges.

Sure, on the surface the idea of providers being paid for the quality and efficiency they deliver sounds good. Rather than using a strict set of performance measures as proxies for quality, the new MACRA-based programs will focus on a mix of quality, resource use and clinical practice use measures, along with measuring meaningful use of certified EHR technology. Under these terms, health systems could conceivably enjoy both greater freedom and better payoffs.

However, given health systems’ experiences to date, particularly with ACOs, I’m skeptical that they’ll be able to pick up the ball and run with the new incentives off the bat. For example, health systems have been abandoning CMS’s value-based Pioneer ACO model at a brisk clip, after finding it financially unworkable. One recent case comes from Dartmouth-Hitchcock Medical Center, which dropped out of the program in October of last year after losing more than $3 million over the previous two years.

I’m not suggesting that health systems can afford to ignore VBP models, or that sticking to MU incentives as previously structured would make sense. But if the process of implementing MACRA gives the industry a chance to do more preparing for value-based payment, it’s probably a good thing.

Why Wouldn’t Doctors Be Happy?

Posted on January 13, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Imagine someone comes to your job and tells you that if you didn’t start participating in a bunch of government programs then you’re going to get a 9% pay cut. Plus, those government programs add little value to the work you do and it’s going to cost you time and money to meet the government requirements. How would you feel?

To add on top of that, we’re going to create a new system for how you’re going to get paid too. In fact, it’s actually going to be two new systems. One that applies to the old system of payment (which has been declining for years) and a new one which isn’t well defined yet.

Also, to add to the fun, you’re going to have become a collection agency as well since your usual A/R is going to go up as your payment portfolio changes from large reliable payers to a wide variety of small, less reliable people.

I forgot to mention that in order to get access to these new government programs and avoid the penalties, you’re likely going to have to use technology built in the ’80s. Yes, that means it was built before we even knew what the cloud or mobile would be, and it uses advanced technologies like MUMPS.

In case you missed the connection, I’m describing the life of a doctor today. The 9% penalties have arrived. ICD-10 is upon us. ACOs and value-based reimbursement are starting, but are not well defined yet. High deductible plans are shifting physician A/R from payers to patients. EHR software still generally doesn’t leverage technologies like the cloud and mobile devices.

All of this makes for the perfect storm. Is it any wonder physician dissatisfaction is at an all-time high? It’s not to me. It seems like even CMS’ Andy Slavitt finally realized it with the announcement that meaningful use is dead and going to be replaced. It’s a good first step, but the devil is in the details. I hope he’s able to execute, but let’s not be surprised that so many doctors are unhappy about what’s happening to healthcare.

Significant Articles in the Health IT Community in 2015

Posted on December 15, 2015 | Written By Andy Oram

Have you kept current with changes in device connectivity, Meaningful Use, analytics in healthcare, and other health IT topics during 2015? Here are some of the articles I find significant that came out over the past year.

The year kicked off with an ominous poll about Stage 2 Meaningful Use, with implications that came to a head later with the release of Stage 3 requirements. Out of 1800 physicians polled around the beginning of the year, more than half were throwing in the towel–they were not even going to try to qualify for Stage 2 payments. Negotiations over Stage 3 of Meaningful Use were fierce. A January 2015 letter from medical associations to ONC asked for more certainty around testing and certification, and mentioned the need for better data exchange (which the health field likes to call interoperability) in the C-CDA, the most popular document exchange format.

A number of expert panels asked ONC to cut back on some requirements, including public health measures and patient view-download-transmit. One major industry group asked for a delay of Stage 3 till 2019, essentially tolerating a lack of communication among EHRs. The final rules, absurdly described as a simplification, backed down on nothing from patient data access to quality measure reporting. Beth Israel CIO John Halamka–who has shuttled back and forth between his Massachusetts home and Washington, DC to advise ONC on how to achieve health IT reform–took aim at Meaningful Use and several other federal initiatives.

Another harbinger of emerging issues in health IT came in January with a speech about privacy risks in connected devices by the head of the Federal Trade Commission (not an organization we hear from often in the health IT space). The FTC is concerned about the security of recent trends in what industry analysts like to call the Internet of Things, and medical devices rank high in these risks. The speech was a lead-up to a major report issued by the FTC on protecting devices in the Internet of Things. Articles in WIRED and Bloomberg described serious security flaws. In August, John Halamka wrote his own warning about medical devices, whose manufacturers have not yet started taking security really seriously. Smart watches are just as vulnerable as other devices.

Because so much medical innovation is happening in fast-moving software, and low-budget developers are hankering for quick and cheap ways to release their applications, in February the FDA started to chip away at its bureaucratic gauntlet by releasing guidelines that exempt from FDA regulation medical apps with no impact on treatment, as well as apps used just to transfer data or do similarly non-transformative operations. They also released a rule for unique IDs on medical devices, a long-overdue measure that helps hospitals and researchers integrate devices into monitoring systems. Without clear and unambiguous IDs, one cannot trace which safety problems are associated with which devices. Other forms of automation may also now become possible. In September, the FDA announced a public advisory committee on devices.

Another FDA decision with a potential long-range impact was allowing 23andMe to market its genetic testing to consumers.

The Department of Health and Human Services has taken on exceedingly ambitious goals during 2015. In addition to the daunting Stage 3 of Meaningful Use, they announced a substantial increase in the use of fee-for-value, although they would still leave half of providers on the old system of doling out individual payments for individual procedures. In December, National Coordinator Karen DeSalvo announced that Health Information Exchanges (which limit themselves only to a small geographic area, or sometimes one state) would be able to exchange data throughout the country within one year. Observers immediately pointed out that the state of interoperability is not ready for this transition (and they could well have added the need for better analytics as well). HHS’s five-year plan includes the use of patient-generated and non-clinical data.

The poor state of interoperability was highlighted in an article about fees charged by EHR vendors just for setting up a connection and for each data transfer.

In the perennial search for why doctors are not exchanging patient information, attention has turned to rumors of deliberate information blocking. It’s a difficult accusation to pin down. Is information blocked by health care providers or by vendors? Does charging a fee, refusing to support a particular form of information exchange, or using a unique data format constitute information blocking? On the positive side, unnecessary imaging procedures can be reduced through information exchange.

Accountable Care Organizations are also having trouble, both because they are information-poor and because the CMS version of fee-for-value is too timid, along with other financial blows and perhaps an inability to retain patients. An August article analyzed the positives and negatives in a CMS announcement. On a large scale, fee-for-value may work. But a key component of improvement in chronic conditions is behavioral health, which EHRs are also unsuited for.

Pricing and consumer choice have become a major battleground in the current health insurance business. The steep rise in health insurance deductibles and copays has been justified (somewhat retroactively) by claiming that patients should have more responsibility to control health care costs. But the reality of health care shopping points in the other direction. A report card on state price transparency laws found the situation “bleak.” Another article shows that efforts to list prices are hampered by interoperability and other problems. One personal account of a billing disaster shows the state of price transparency today, and may be dangerous to read because it could trigger traumatic memories of your own interactions with health providers and insurers. Narrow and confusing insurance networks as well as fragmented delivery of services hamper doctor shopping. You may go to a doctor who your insurance plan assures you is in their network, only to be charged outrageous out-of-network costs. Tools are often out of date or overly simplistic.

In regard to the quality ratings that are supposed to allow intelligent choices to patients, a study found that four hospital rating sites have very different ratings for the same hospitals. The criteria used to rate them are inconsistent. Quality measures provided by government databases are marred by incorrect data. The American Medical Association, always disturbed by public ratings of doctors for obvious reasons, recently complained of incorrect numbers from the Centers for Medicare & Medicaid Services. In July, the ProPublica site offered a search service called the Surgeon Scorecard. One article summarized the many positive and negative reactions. The New England Journal of Medicine has called ratings of surgeons unreliable.

2015 was the year of the intensely watched Department of Defense upgrade to its health care system. One long article offered an in-depth examination of DoD options and their implications for the evolution of health care. Another article promoted the advantages of open-source VistA, an argument that was not persuasive enough for the DoD. Still, openness was one of the criteria sought by the DoD.

The remote delivery of information, monitoring, and treatment (which goes by the quaint term “telemedicine”) has been the subject of much discussion. Those concerned with this development can follow the links in a summary article to see the various positions of major industry players. One advocate of patient empowerment interviewed doctors to find that, contrary to common fears, they can offer email access to patients without becoming overwhelmed. In fact, they think it leads to better outcomes. (However, it still isn’t reimbursed.)

Laws permitting reimbursement for telemedicine continued to spread among the states. But a major battle shaped up around a ruling in Texas requiring doctors to have a pre-existing face-to-face meeting with any patient whom they want to treat remotely. The spread of telemedicine also depends on reform of state licensing laws to permit practice across state lines.

Much wailing and many tears welled up over the required transition from ICD-9 to ICD-10. The AMA, with some good arguments, suggested just waiting for ICD-11. But the transition cost much less than anticipated, making ICD-10 much less of a hot-button issue, although it may still be harmful to diagnosis.

Formal studies of EHR strengths and weaknesses are rare, so I’ll mention this survey finding that EHRs aid with public health but are ungainly for the sophisticated uses required for long-term, accountable patient care. Meanwhile, half of hospitals surveyed are unhappy with their EHRs’ usability and functionality, and doctors are increasingly frustrated with EHRs. Nurses complained about the technology’s time demands and the eternal lack of interoperability. A HIMSS survey turned up somewhat more positive feelings.

EHRs are also expensive enough to hurt hospital balance sheets and force them to forgo other important expenditures.

Electronic health records also took a hit from ONC’s Sentinel Events program. To err, it seems, is not only human but now computer-aided. A Sentinel Event Alert indicated that more errors in health IT products should be reported, claiming that many go unreported because patient harm was avoided. The FDA started checking self-reported problems on PatientsLikeMe for adverse drug events.

The ONC reported gains in patient ability to view, download, and transmit their health information online, but found patient portals still limited. Although one article praised patient portals by Epic, Allscripts, and NextGen, an overview of studies found that patient portals are disappointing, partly because elderly patients have trouble with them. A literature review highlighted where patient portals fall short. In contrast, giving patients full access to doctors’ notes increases compliance and reduces errors. HHS’s Office of Civil Rights released rules underlining patients’ rights to access their data.

While we’re wallowing in downers, review a study questioning the value of patient-centered medical homes.

Reuters published a warning about employee wellness programs, which are nowhere near as fair or accurate as they claim to be. They are turning into just another expression of unequal power between employer and employee, with tendencies to punish sick people.

An interesting article questioned the industry narrative about the medical device tax in the Affordable Care Act, saying that the industry is expanding robustly in the face of the tax. However, this tax is still a hot political issue.

Does anyone remember that Republican congressmen published an alternative health care reform plan to replace the ACA? An analysis found both good and bad points in its approach to mandates, malpractice, and insurance coverage.

Early reports on use of Apple’s open ResearchKit suggested problems with selection bias and diversity.

An in-depth look at the use of devices to enhance mental activity examined where they might be useful or harmful.

A major genetic data mining effort by pharma companies and Britain’s National Health Service was announced. The FDA announced a site called precisionFDA for sharing resources related to genetic testing. A recent site invites people to upload health and fitness data to support research.

As data becomes more liquid and is collected by more entities, patient privacy suffers. An analysis of web sites turned up shocking privacy practices, even at supposedly reputable sites like WebMD. Lax security in health care networks was addressed in a Forbes article.

Of minor interest to health IT workers, but eagerly awaited by doctors, was Congress’s “doc fix” to Medicare’s sustainable growth rate formula. The bill contained additional clauses that a number of observers, including no less than former National Coordinator Farzad Mostashari, called significant for opening up new initiatives in interoperability, telehealth, patient monitoring, and especially fee-for-value.

Connected health took a step forward when CMS issued reimbursement guidelines for patient monitoring in the community.

A wonky but important dispute concerned whether self-insured employers should be required to report public health measures, because public health by definition needs to draw information from as wide a population as possible.

Data breaches always make lurid news, sometimes under surprising circumstances, and not always caused by health care providers. The 2015 security news was dominated by a massive breach at the Anthem health insurer.

Along with great fanfare in Scientific American for “precision medicine,” another Scientific American article covered its privacy risks.

A blog posting promoted early and intensive interactions with end users during app design.

A study found that HIT implementations hamper clinicians, but could not identify the reasons.

Natural language processing was praised for its potential to simplify data entry and to discover useful side effects and treatment issues.
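
To make that potential concrete, here is a toy sketch of the simplest kind of extraction: pulling drug-and-dose mentions out of a free-text note with a regular expression. The note text and the pattern are invented for illustration; real clinical NLP relies on trained models and drug vocabularies such as RxNorm, not regexes.

```python
import re

# Toy pattern for "drug name followed by a dose", e.g. "metformin 500 mg".
# Illustrative only; production systems use trained models and terminologies.
DOSE_PATTERN = re.compile(
    r"\b([A-Za-z]+)\s+(\d+(?:\.\d+)?)\s*(mg|mcg|g|units)\b",
    re.IGNORECASE,
)

note = ("Patient reports nausea since starting metformin 500 mg twice daily; "
        "continue lisinopril 10 mg and monitor.")

for drug, amount, unit in DOSE_PATTERN.findall(note):
    print(drug.lower(), amount, unit.lower())
# Prints:
#   metformin 500 mg
#   lisinopril 10 mg
```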

CVS’s refusal to stock tobacco products was called “a major sea-change for public health” and part of a general trend of pharmacies toward whole care of the patient.

A long interview with FHIR leader Grahame Grieve described the progress of the project and the need for clinicians to take data exchange seriously. A quiet milestone was reached in October with a production version from Cerner.
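
For readers who have not seen FHIR on the wire, a minimal sketch follows: resources such as Patient are plain JSON documents retrieved over ordinary HTTP. The server address and patient ID below are placeholders invented for illustration, not a real endpoint.

```python
import requests

# Placeholder FHIR server; substitute a real endpoint to run this.
FHIR_BASE = "https://fhir.example.com/baseDstu2"

# Fetch one Patient resource as JSON over ordinary HTTPS.
response = requests.get(
    FHIR_BASE + "/Patient/1234",
    headers={"Accept": "application/json+fhir"},  # DSTU2-era media type
)
response.raise_for_status()
patient = response.json()

# Every FHIR resource is plain JSON with a declared resourceType.
print(patient["resourceType"])   # expected: "Patient"
print(patient.get("name"))       # list of name structures, if present
```

Much of FHIR’s appeal lies in exactly this ordinariness: any developer who can call a REST API can, in principle, read and write clinical data.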

Given the frequent invocation of Uber (even more than the Cheesecake Factory) as a model for health IT innovation, it’s worth seeing the reasons that model is inapplicable.

A number of hot new sensors and devices were announced, including a tiny sensor from Intel, a device from Google to measure blood sugar and another for multiple vital signs, enhancements to Microsoft products, a temperature monitor for babies, a headset for detecting epilepsy, cheap cameras from New Zealand and MIT for doing retinal scans, a smartphone app for recognizing respiratory illnesses, a smartphone-connected device for detecting brain injuries and one for detecting cancer, a sleep-tracking ring, bed sensors, ultrasound-guided needle placement, a device for detecting pneumonia, and a pill that can track heartbeats.

Although the medical field isn’t yet making extensive use of data collection and analysis (or uses analytics for financial gain rather than patient care), the potential is demonstrated by many isolated success stories, including one from a Johns Hopkins study using 25 patient measures to study sepsis and another from an Ontario hospital. In an intriguing peek at our possible future, IBM Watson has started to integrate patient data with its base of clinical research studies.
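
To give a flavor of what such analytics do, without pretending to reproduce the 25-measure Johns Hopkins model, here is a minimal scoring sketch loosely patterned on the classic SIRS screening criteria; the thresholds, alert cutoff, and patient data are illustrative only.

```python
# A deliberately simplified early-warning score over four vital measures,
# loosely patterned on the SIRS criteria. Not a clinical tool.

def warning_score(temp_c, heart_rate, resp_rate, wbc_count):
    score = 0
    if temp_c > 38.0 or temp_c < 36.0:        # abnormal temperature
        score += 1
    if heart_rate > 90:                       # tachycardia
        score += 1
    if resp_rate > 20:                        # tachypnea
        score += 1
    if wbc_count > 12.0 or wbc_count < 4.0:   # abnormal white-cell count
        score += 1
    return score

# Flag patients whose score crosses a (hypothetical) alert threshold.
patients = {"A": (38.6, 104, 24, 13.5), "B": (36.8, 72, 14, 7.0)}
for patient_id, vitals in patients.items():
    if warning_score(*vitals) >= 2:
        print("alert:", patient_id)   # prints: alert: A
```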

Frustrated enough with 2015? To end on an upbeat note, envision a future made bright by predictive analytics.