
Our Uncontrolled Health Care Costs Can Be Traced to Data and Communication Failures (Part 1 of 2)

Posted on April 12, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

A host of scapegoats, ranging from the Affordable Care Act to unscrupulous pharmaceutical companies, have been blamed for the rise in health care costs that are destroying our financial well-being, our social fabric, and our political balance. In this article I suggest a more appropriate target: the inability of health care providers to collaborate and share information. To some extent, our health care crisis is an IT problem–but with organizational and cultural roots.

It’s well known that large numbers of patients have difficulty with costs, and that employees’ share of the burden is rising. We’re going to have to update the famous Rodney Dangerfield joke:

My doctor said, “You’re going to be sick.” I said I wanted a second opinion. He answered, “OK, you’re going to be poor too.”

Most of us know about the insidious role of health care costs in holding down wages, in the fight by Wisconsin Governor Scott Walker over pensions that tore the country apart, in crippling small businesses, and in narrowing our choice of health care providers. Not all realize, though, that the crisis is leaching through the health care industry as well, causing hospitals to fail, insurers to push costs onto subscribers and abandon the exchanges where low-income people get their insurance, co-ops to close, and governments to throw people off of subsidized care, threatening the very universal coverage that the ACA aimed to achieve.

Lessons from a ground-breaking book by T.R. Reid, The Healing of America, suggest that we’re undergoing a painful transition that every country has traversed to achieve a rational health care system. Like us, other countries started by committing themselves to universal health care access. That commitment then creates both the pressure to control costs and the opportunities for coordination and economies of scale that eventually institute those controls. Solutions will take time, but we need to be smart about where to focus our efforts.

Even before the ACA, the 2009 HITECH Act established goals of data exchange and coordinated patient care. But seven years later, doctors still lag in:

  • Coordinating with other providers treating the patients.

  • Sending information that providers need to adequately treat the patients.

  • Basing treatment decisions on evidence from research.

  • Providing patients with their own health care data.

We’ll look next at the reports behind these claims, and at the effects of the problems.

Why doctors don’t work together effectively

A recent report released by the ONC, which I covered in a recent article, revealed the poor state of data sharing after decades of Health Information Exchanges and four years of Meaningful Use. Health IT observers expect interoperability to continue being a challenge, even as changes in technology, regulations, and consumer action push providers toward it.

If merely exchanging documents is so hard–and often unachieved–patient-focused, coordinated care is clearly impossible. Integrating behavioral care to address chronic conditions will remain a fantasy.

Evidence-based medicine is also more of an aspiration than a reality. Research is not always trustworthy, but we must have more respect for the science than hospitals were found to have in a recent GAO report. They fail to collect data either on the problems leading to errors or on the efficacy of solutions. There are incentive programs from payers, but no one knows whether they help. Doctors are still ordering far too many unnecessary tests.

Many companies in the health analytics space offer services that can bring more certainty to the practice of medicine, and I often cover them in these postings. Although increasingly cited as a priority, analytical services are still adopted by only a fraction of health care providers.

Patients across the country are suffering from disrupted care as insurers narrow their networks. It may be fair to force patients to seek less expensive providers–but not when all their records get lost during the transition. This is all too likely in the current non-interoperable environment. Of course, redundant testing and treatment errors caused by ignorance could erase the gains of going to low-cost providers.

Some have bravely tallied up the costs of waste and lack of care coordination in health care. Some causes, such as fraud and price manipulation, are not attributable to the health IT failures I describe. But an enormous chunk of costs, including administrative overhead, directly implicates communications and data handling problems. The next section of this article will explore what this means in day-to-day health care.

How Twine Health Found a Successful Niche for a Software Service in Health Care

Posted on April 1, 2016 | Written By Andy Oram

Apps and software services for health care are proliferating–challenges and hackathons come up with great ideas week after week, and the app store contains hundreds of thousands of apps. The hard thing is creating a business model that sustains a good idea. To this end, health care incubators bring in clinicians to advise software developers. Numerous schemes of questionable ethics abound among apps (such as collecting data on users and their contacts). In this article, I’ll track how Twine Health tried different business models and settled on the one that is producing impressive growth for them today.

Twine Health is a comprehensive software platform where patients and their clinicians can collaborate efficiently between visits to achieve agreed-upon goals. Patients receive support in a timely manner, including motivation for lifestyle changes and expertise for medication adjustments. I covered the company in a recent article that showed how the founders ran some clinical studies demonstrating the effectiveness of their service. Validation is perhaps the first step for any developer with a service or app they think could be useful. Randomized controlled trials may not be necessary, but you need to find out from potential users what they want to see before they feel secure prescribing, paying for, and using your service. Validation will differentiate you from the hordes of look-alike competitors with whom you’ll share your market.

Dr. John Moore, co-founder of Twine Health, felt in 2013 that it was a good time to start a company, because the long-awaited switch in US medicine from fee-for-service to value-based care was starting to take root. Blue Cross and Blue Shield were encouraging providers to switch to Alternative Quality Contracts. The Affordable Care Act of 2010 created the Medicare Shared Savings Program, which led to Accountable Care Organizations.

The critical role played by value-based care has been explained frequently in the health care space. Current fee-for-service programs pay only for face-to-face visits and occasionally for telehealth visits. The routine daily interventions of connected health, such as text messages, checks of vital signs, and supportive prompts, receive no remuneration. The long-term improvements of connected health receive no support in the fee-for-service model, as much as individual clinicians may wish to promote positive behavior among their patients.

Thus, Twine Health launched in 2014 with a service for clinicians. What they found, unfortunately, is that the hype about value-based care had gotten way ahead of its actual progress. The risk sharing taken on by Accountable Care Organizations, such as under the Medicare Shared Savings Program, wasn’t a full commitment to delivering value, as when clinicians receive full capitation for a population and are required to deliver minimum health outcomes. Instead, the organizations were still billing fee-for-service. Medicare compared their spending to a budget at the end of the year, and, if the organization accrued less fee-for-service billing than Medicare expected, the organization got back 50-60% of the savings. In the lowest track of the program, the organization didn’t even get penalized for exceeding costs–it was just rewarded for beating the estimates.
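To make that payment arithmetic concrete, here is a minimal sketch of a one-sided shared-savings calculation of the kind described above. The function name and all dollar figures are hypothetical, and the 55% rate is just a point inside the 50-60% range mentioned; this illustrates the incentive structure, not the actual MSSP formula.

```python
# Toy sketch of a one-sided ("upside only") shared-savings track, as
# described above. All figures are hypothetical; the 50-60% sharing
# range comes from the text, and 55% is an arbitrary point within it.

def shared_savings_payment(benchmark: float, actual_billing: float,
                           sharing_rate: float = 0.55) -> float:
    """Return the bonus an ACO receives for beating its spending benchmark.

    benchmark: the fee-for-service spending Medicare expected
    actual_billing: the fee-for-service spending the ACO accrued
    sharing_rate: the ACO's share of any savings (50-60% per the text)
    """
    savings = benchmark - actual_billing
    if savings <= 0:
        return 0.0  # lowest track: no penalty for exceeding the benchmark
    return savings * sharing_rate

# Example: a $100M benchmark against $96M of actual billing
print(shared_savings_payment(100_000_000, 96_000_000))  # 2200000.0
```

The point of the sketch is the asymmetry: the organization keeps billing fee-for-service, collects a bonus if it comes in under budget, and loses nothing if it doesn’t.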

In short, Twine Health found that clinicians in ACOs in 2014 were following the old fee-for-service model and that Twine Health’s service was not optimal for their everyday practices. A recent survey from the NEJM Catalyst Insights Council showed that risk sharing and quality improvement are employed in a minority of health care settings, and are especially low in hospitals.

Collaborative care requires a complete rededication of time and resources. One must be willing to track one’s entire patient panel on a regular basis, guiding them toward ongoing behavior modification in the context of their everyday lives, with periodic office visits every few months. One also needs to go beyond treating symptoms and learn several skills of a very different type that traditional clinicians haven’t been taught: motivational interviewing, shared decision making, and patient self-management.

Over a period of months, a new model for Twine’s role in healthcare delivery started to become apparent: progressive, self-insured employers were turning their attention to value-based care and taking matters into their own hands because of escalating healthcare costs. They were moving much more quickly than ACOs and taking on much greater risk.

The employers were contracting with innovative healthcare delivery organizations, which were building on-site primary care clinics (exclusive to that employer and located right at the place of work), near-site primary care clinics (shared across multiple employers), wellness and chronic disease coaching programs, etc. Unlike traditional healthcare providers, the organizations providing services to self-insured employers were taking fully capitated payments and, therefore, full risk for their set of services. Ironically, some of the self-insured employers were actually large health systems whose own business models still involved mostly fee-for-service payments.

With on-site clinics, wellness and chronic disease coaching organizations, and self-insured employers, Twine Health has found a firm and growing customer base. Dr. Moore is confident that the healthcare industry is on the cusp of broadly adopting value-based care. Twine Health and other connected health providers will be able to increase their markets vastly by working with traditional providers and insurers. But the Twine Health story is a lesson in how each software developer must find the right angle, the right time, and the right foothold to succeed.

Harvard Law Conference Surveys Troubles With Health Care

Posted on March 30, 2016 | Written By Andy Oram

It is salubrious to stretch oneself and regularly attend a conference in a related field. At the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, one can bask in the wisdom of experts who are truly interdisciplinary (as opposed to people like me, who are simply undisciplined). Their Tenth Anniversary Conference drew about 120 participants. The many topics–which included effects of the Supreme Court rulings on the Affordable Care Act and other cases, reasons that accountable care and other efforts haven’t lowered costs, stresses on the pharmaceutical industry, and directions in FDA regulation–contained several insights for health IT professionals.

From my perspective, the center of the conference was the panel titled “Health Innovation Policy and Regulating Novel Technology.” A better title might have been “How to Make Pharma Profitable Again,” because most of the panelists specialized in pharmaceuticals or patents. They spun out long answers to questions about how well patents can protect innovation (recognizing a controversy); the good, the bad, and the ugly of pricing; and how to streamline clinical trials, possibly adding risk. Their pulses really rose when they were asked a question about off-label drug use. But they touched on health IT and suggested many observations that could apply to it as well.

It is well known that drug development and regulatory approval take years–perhaps up to 20 years–and that high-tech companies developing fitness devices or software apps have a radically different product cycle. As one panelist pointed out, it would kill innovation to require renewed regulatory approval for each software upgrade. He suggested that the FDA define different tiers of changes, and that minor ones with little risk of disrupting care be allowed automatically.

I would go even further. It is also well known that disruptive inventions displace established technologies. Just as people with mobile devices get along without desktop computers and even TV sets, medicines have displaced many surgical procedures. Now the medicines themselves (particularly, controversial mental health medicines) can sometimes be replaced by interactive apps and online services. Although rigorous testing is still lacking for most of these alternatives, the biggest barrier to their adoption is lack of reimbursement in our antiquated health payment system.

Instead of trying to individually fix each distortion in payment, value-based care is the reformer’s solution to the field’s inefficient use of treatment options. Value-based care requires more accurate information on quality and effectiveness, as I recently pointed out. And this in turn may lead to the more flexible regulations suggested by the panelist, with a risk that is either unchanged or raised by an amount we can tolerate.

Comparisons between information and other medical materials can be revealing. For instance, as the public found out in the Henrietta Lacks controversy, biospecimens are treated as freely tradable material (so long as the specimen is de-identified) with no patient consent required. It’s assumed that we should treat de-identified patient information the same way, but in fact there’s a crucial difference. No one would expect the average patient to share and copy his own biospecimens, but doing so with information is trivially easy. Therefore, patients should have more of a say about how their information is used, even if biospecimens are owned by the clinician.

Some other insights I picked up from this conference were:

  • Regulations and policies by payers drive research more than we usually think. Companies definitely respond to what payers are interested in, not just to the needs of the patients. One panelist pointed out that the launch of Medicare Part D, covering drugs for the first time, led to big new investments in pharma.

  • Hotels and other service-oriented industries can provide a positive experience efficiently because they tightly control the activities of all the people they employ. Accountable Care Organizations, in contrast, contain loose affiliations and do not force their staff to coordinate care (even though that was the ideal behind their formation), and therefore cannot control costs.

  • Patents, which the pharma companies consider so important to their business model, are not normally available for diagnostic tests. (The attempt by Myriad Genetics to patent the BRCA1 gene in order to maintain a monopoly over testing proves this point: the Supreme Court overturned the patent.) However, as tests get more complex, the FDA has started regulating them. This has the side effect of boosting the value of tests that receive approval, an advantage over competitors.

Thanks to Petrie-Flom for generously letting the public in on events with such heft. Perhaps IT can make its way deeper into next year’s conference.

Research Shows that Problems with Health Information Exchange Resist Cures (Part 2 of 2)

Posted on March 23, 2016 | Written By Andy Oram

The previous section of this article introduced problems in HIE identified by two reports: one from the Office of the National Coordinator and another from experts at the Oregon Health & Science University. Tracing the causes of these problems is necessarily somewhat speculative, but the research helps to confirm impressions I have built up over the years.

The ONC noted that developing HIE is very resource intensive, and not yet sustainable. (p. 6) I attribute these problems to the persistence of the old-fashioned, heavyweight model of bureaucratic, geographically limited organizations hooking together clinicians. (If you go to another state, better carry your medical records with you.) Evidence of their continued drag on the field appeared in the report:

Grantees found providers did not want to login to “yet another system” to access data, for example; if information was not easily accessible, providers were not willing to divert time and attention from patients. Similarly, if the system was not user friendly and easy to navigate, or if it did not effectively integrate data into existing patient records, providers abandoned attempts to obtain data through the system. (pp. 76-77)

The Oregon researchers in the AHRQ webinar also confirmed that logging in tended to be a hassle.

Hidden costs further jacked up the burden of participation (p. 72). But even though HIEs already suck up unsustainable amounts of money for little benefit, “Informants noted that it will take many years and significantly more funding and resources to fully establish HIE.” (p. 62) “The paradox of HIE activities is that they need participants but will struggle for participants until the activities demonstrate value. More evidence and examples of HIE producing value are needed to motivate continued stakeholder commitment and investment.” (p. 65)

The adoption of the Direct protocol apparently hasn’t fixed these ongoing problems; hopefully FHIR will. The ONC hopes that, “Open standards, interfaces, and protocols may help, as well as payment structures rewarding HIE.” (p. 7) Use of Direct did increase exchange (p. 56), and directory services are also important (pp. 59-60). But “Direct is used mostly for ADT notifications and similar transitional documents.” (p. 35)

One odd complaint was, “While requirements to meet Direct standards were useful for some, those standards detracted attention from the development of query-based exchange, which would have been more useful.” (p. 77) I consider this observation to be a red herring, because Direct is simply a protocol, and the choice to use it for “push” versus “pull” exchanges is a matter of policy.

But even with better protocols, we’ll still need to fix the mismatch of the data being exchanged: “…the majority of products and provider processes do not support LOINC and SNOMED CT. Instead, providers tended to use local codes, and the process of mapping these local codes to LOINC and SNOMED CT codes was beyond the capacity of most providers and their IT departments.” (p. 77) This shows that the move to FHIR won’t necessarily improve semantic interoperability, unless FHIR requires the use of standard codes.
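To see what that mapping burden looks like in code, here is a minimal sketch of translating local lab codes into LOINC while building a FHIR-style Observation. The local codes and the mapping table are invented for illustration; the LOINC codes for glucose and hemoglobin A1c and the Observation structure follow the published standards, but treat the details as an assumption rather than a reference implementation.

```python
# Minimal sketch: translating a provider's local lab codes into LOINC
# before building a FHIR Observation. The local codes ("GLU-01", etc.)
# and the mapping table are hypothetical; maintaining such a table for
# thousands of codes is the burden the report describes.

LOCAL_TO_LOINC = {
    "GLU-01": ("2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
    "A1C-01": ("4548-4", "Hemoglobin A1c/Hemoglobin.total in Blood"),
}

def to_fhir_observation(local_code: str, value: float, unit: str) -> dict:
    """Build a FHIR-style Observation dict, keeping the local code
    alongside the standard LOINC coding so nothing is lost."""
    if local_code not in LOCAL_TO_LOINC:
        raise KeyError(f"No LOINC mapping for local code {local_code!r}")
    loinc_code, display = LOCAL_TO_LOINC[local_code]
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [
                {"system": "http://loinc.org", "code": loinc_code,
                 "display": display},
                # The original local code travels along for traceability.
                {"system": "http://example.org/local-codes",
                 "code": local_code},
            ]
        },
        "valueQuantity": {"value": value, "unit": unit},
    }

print(to_fhir_observation("A1C-01", 6.8, "%")["code"]["coding"][0]["code"])
# 4548-4
```

The hard part, of course, is not this function but curating the mapping table for thousands of local codes, exactly the work the report says is beyond the capacity of most providers and their IT departments.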

Trust among providers remains a problem (p. 69), as does data quality (pp. 70-71). But some informants blamed attitude above all: “Grantees questioned whether HIE developers and HIE participants are truly ready for interoperability.” (p. 71)

It’s bad enough that core health care providers–hospitals and clinics–make little use of HIE. But a wide range of other institutions that desperately need HIE have even less of it. “Providers not eligible for MU incentives consistently lag in HIE connectivity. These settings include behavioral health, substance abuse, long-term care, home health, public health, school-based settings, corrections departments, and emergency medical services.” (p. 75) The AHRQ webinar found very limited use of HIE for facilities outside the Meaningful Use mandate, such as nursing homes (Long Term and Post Acute Care, or LTPAC). Health information exchange was used 10% to 40% of the time in those settings.

The ONC report includes numerous recommendations for continuing the growth of health information exchange. Most of these are tweaks to bureaucratic institutions responsible for promoting HIE. These are accompanied by the usual exhortations to pay for value and improve interoperability.

But six years into the implementation of HITECH–and after the huge success of its initial goal of installing electronic records, which should have served as the basis for HIE–one gets the impression that the current industries are not able to take to the dance floor together. First, ways of collecting and sharing data are based on a 1980s model of health care. And even by that standard, none of the players in the space–vendors, clinicians, and HIE organizations–are thinking systematically.

Research Shows that Problems with Health Information Exchange Resist Cures (Part 1 of 2)

Posted on March 22, 2016 | Written By Andy Oram

Given that the Office of the National Coordinator for Health Information Technology (ONC) received 564 million dollars in the 2009 HITECH Act to promote health information exchange, one has to give them credit for carrying out a thorough evaluation of progress in that area. The results? You don’t want to know.

There are certainly glass-full as well as glass-empty indications in the 98-page report that the ONC just released. But I feel that failure dominated. Basically, there has been a lot of relative growth in the use of HIE, but the starting point was so low that huge swaths of the industry remain untouched by HIE.

Furthermore, usage is enormously skewed:

In Q2 2012, for example, three states (Indiana, Colorado, and New York) accounted for over 85 percent of total directed transactions; in Q4 2013, five states (Michigan, Colorado, Indiana, New York, and Vermont) accounted for over 85 percent of the total. Similarly, in Q2 2012 a single state (Indiana) accounted for over 65 percent of total directed transactions; in Q4 2013, four states (California, Indiana, Texas, and New York) accounted for over 65 percent of the total. (p. 42)

This is a pretty empty glass, with the glass-full aspect being that if some states managed to achieve high levels of participation, we should be able to do it everywhere. But we haven’t done it yet.

Why health information exchange is crucial

As readers know, health costs are eating up more and more of our income (in the US as well as elsewhere, thanks to aging populations and increasing chronic disease). Furthermore, any attempt to stem the problem requires coordinated care and long-term thinking. But the news in these areas has been disappointing as well. For instance:

  • Patient centered medical homes (PCMH) are not leading to better outcomes. One reason may be the limited use of health information exchange, because the success of treating a person in his own habitat depends on careful coordination.

  • Accountable Care Organizations are losing money and failing to attract new participants. A cynical series of articles explores their disappointing results. I suspect that two problems account for this: first, they have not made good use of health information exchange, and second, risk sharing is too minimal to cause a thoroughgoing change to long-term care.

  • Insurers are suffering too, because they have signed up enormous numbers of sick patients under the Affordable Care Act. The superficial adoption of fee-for-value and the failure of clinicians to achieve improvements in long-term outcomes are bankrupting the payers and pushing costs more and more onto ordinary consumers.

With these dire thoughts in mind, let’s turn to HIE.

HIE challenges and results

The rest of this article summarizes the information I find most salient in the ONC report, along with some research presented in a recent webinar by the Agency for Healthcare Research and Quality (AHRQ) on this timely topic. (The webinar itself hasn’t been put online yet.)

The ONC report covers the years 2011-2014, so possibly something momentous has happened over the past year to change the pattern. But I suspect that substantial progress will have to wait for widespread implementation of FHIR, which is too new to appear in the report.

You can read the report and parse the statistics until you get a headache, but I will cite just one more passage about the rate of HIE adoption in order to draw a broad conclusion.

As of 2015, the desire for actionable data, focus on MU 2 priorities, and exchange related to delivery system reform is in evidence. Care summary exchange rates facilitated through HIOs are high—for example, care record summaries (89%); discharge summaries (78%); and ambulatory clinical summaries (67%). Exchange rates are also high for test results (89%), ADT alerts (69%), and inpatient medication lists (68%). (p. 34)

What I find notable in the previous quote is that all the things where HIE use improved were things that clinicians have always done anyway. There is nothing new about sending out discharge summaries or reporting test results. (Nobody would take a test if the results weren’t reported–although I found it amusing to receive an email message recently from my PCP telling me to log into their portal to see results, and to find nothing on the portal but “See notes.” The notes, you might have guessed, were not on the portal.)

One hopes that using HIE instead of faxes and phone calls will lower costs and lead to faster action on urgent conditions. But a true leap in care will happen only when HIE is used for close team coordination and patient reporting–things that don’t happen routinely now. One sentence in the report hints at this: “Providers exchanged information, but they did not necessarily use it to support clinical decision-making.” (p. 77) One wonders what good the exchange is.

In the AHRQ webinar, experts from the Oregon Health & Science University reported results of a large literature review, including:

  • HIE reduces the use of lab and radiology tests, as well as emergency department use. This should lead to improved outcomes as well as lower costs, although the literature couldn’t confirm that.

  • Disappointingly, there was little evidence that hospital admissions were reduced, or that medication adherence improved.

  • Two studies claimed that HIE was “associated with improved quality of care” (a very vague endorsement).

In the next section of this article, I’ll return to the ONC report for some clues as to the reasons HIE isn’t working well.

What is Quality in Health Care? (Part 2 of 2)

Posted on February 10, 2016 | Written By Andy Oram

The first part of this article described different approaches to quality–and in fact to different qualities. In this part, I’ll look at the problems with quality measures, and at emerging solutions.

Difficulties of assessing quality

The Methods chapter of a book from the National Center for Biotechnology Information at NIH lays out many of the hurdles that researchers and providers face when judging the quality of clinical care. I’ll summarize a few of the points from the Methods chapter here, but the chapter is well worth a read. The review showed how hard it is to measure accurately many of the things we’d like to know about.

For instance, if variations within a hospital approach (or exceed) the variations between hospitals, there is little benefit to comparing hospitals using that measure. If the same physician gets wildly different scores from year to year, the validity of the measure is suspect. When care is given by multiple doctors and care teams, it is unjust to ascribe the outcome to the patient’s principal caretaker. If random variations outweigh everything, the measure is of no use at all. One must also keep in mind practical considerations, such as making sure the process of collecting data would not cost too much.

Many measures apply to a narrow range of patients (for instance, those with pneumonia) and therefore may be skewed for doctors with a relatively small sample of those patients. And a severe winter could elevate mortality from pneumonia, particularly if patients have trouble getting adequate shelter and heat. In general, “For most outcomes, the impacts of random variation and patient factors beyond providers’ control often overwhelm differences attributable to provider quality.” ACMQ quality measures “most likely cannot definitively distinguish poor quality providers from high quality providers, but rather may illuminate potential quality problems for consideration of further investigation.”
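That warning about random variation is easy to see in a toy calculation. In the sketch below, the hospitals and their year-by-year scores are invented; it simply compares the average year-to-year variance within each hospital to the variance between the hospitals’ averages.

```python
# Toy illustration of the variance point above: if year-to-year (within-
# hospital) variation rivals the spread between hospitals, the measure
# cannot reliably rank hospitals. All scores are invented.
from statistics import mean, pvariance

scores = {  # quality scores on the same measure over several years
    "Hospital A": [82, 74, 90, 78],
    "Hospital B": [85, 77, 88, 80],
    "Hospital C": [79, 86, 75, 84],
}

within = mean(pvariance(s) for s in scores.values())
between = pvariance([mean(s) for s in scores.values()])

print(f"average within-hospital variance: {within:.1f}")
print(f"between-hospital variance:        {between:.1f}")
# When 'within' approaches or exceeds 'between', a single year's
# ranking mostly reflects noise.
```

With numbers like these, the within-hospital noise dwarfs the between-hospital signal, so a one-year ranking of these three hospitals would be close to meaningless.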

The chapter helps explain why many researchers fall back on standard of care. Providers don’t trust outcome-based measures because of random variations and factors beyond their control, including poverty and other demographics. It’s hard even to know what contributed to a death, because in the final months it may not have been feasible to complete the diagnoses of a patient. Thus, doctors prefer “process measures.”

Among the criteria for evaluating quality indicators we see, “Does the indicator capture an aspect of quality that is widely regarded as important?” and more subtly, “subject to provider or public health system control?” The latter criterion heeds physicians who say, “We don’t want to be blamed for bad habits or other reasons for noncompliance on the part of our patients, or for environmental factors such as poverty that resist quick fixes.”

The book’s authors are certainly aware of the bias created by gaming the reimbursement system: “systematic biases in documentation and coding practices introduced by awareness that risk-adjustment and reimbursement are related to the presence of particular complications.” The paper points out that diagnosis data is more trustworthy when it is informed by clinical information, not just billing information.

One of the most sensitive–and important–factors in quality assessment is risk adjustment, which means recognizing which patients have extra problems making their care more difficult and their recovery less certain. I have heard elsewhere the claim that CMS doesn’t cut physicians enough slack when they take on more risky patients. Although CMS tries to take poverty into account, hospital administrators suspect that institutions serving low-income populations–and safety-net hospitals in particular–are penalized for doing so.

Risk adjustment criteria are sometimes unpublished. But the most perverse distortion in the quality system comes when hospitals fail to distinguish iatrogenic complications (those introduced by medical intervention, such as infections incurred in the hospital) from the original diseases that the patient brought. CMS recognizes this risk in efforts such as penalties for hospital-acquired conditions. Unless these are flagged correctly, hospitals can end up being rewarded for treating sicker patients–patients that they themselves made sicker.

Distinguishing layers of quality

Theresa Cullen, associate director of the Regenstrief Institute’s Global Health Informatics Program, suggests that we think of quality measures as a stack, like those offered by software platforms (a small sketch after the list illustrates the first two layers):

  1. The bottom of the stack might simply measure whether a patient receives the proper treatment for a diagnosed condition. For instance, is the hemoglobin A1C of each diabetic patient taken regularly?

  2. The next step up is to measure the success of the first measure. How many patients’ A1C was under control for their stage of the disease?

  3. Next we can move to measuring outcomes: improvements in diabetic status, for instance, or prevention of complications from diabetes.

  4. Finally, we can look at the quality of the patient’s life–quality-adjusted life years.
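As promised above, here is a toy sketch of the first two layers of that stack. The patient records, the 180-day testing window, and the 7.0 control threshold are all invented for illustration, not clinical guidance.

```python
# Sketch of the first two layers of the stack above, on invented records.
# 'a1c_tests' holds (days_ago, value) pairs; thresholds are illustrative.

patients = [
    {"name": "P1", "a1c_tests": [(45, 7.1), (140, 7.4)]},
    {"name": "P2", "a1c_tests": [(300, 8.9)]},           # overdue for testing
    {"name": "P3", "a1c_tests": [(60, 6.6), (170, 6.8)]},
]

def tested_regularly(patient, max_days=180):
    """Layer 1 (process): was an A1C taken within the last max_days days?"""
    return any(days_ago <= max_days for days_ago, _ in patient["a1c_tests"])

def in_control(patient, threshold=7.0):
    """Layer 2 (intermediate outcome): is the latest A1C under threshold?"""
    _, value = min(patient["a1c_tests"])  # smallest days_ago = most recent
    return value < threshold

tested = [p for p in patients if tested_regularly(p)]
controlled = [p for p in tested if in_control(p)]
print(f"layer 1: {len(tested)}/{len(patients)} patients tested recently")
print(f"layer 2: {len(controlled)}/{len(tested)} of those in control")
```

Layers 3 and 4 would require outcome data that takes years to accumulate, which helps explain why doctors prefer the process measures mentioned earlier.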

Ultimately, to judge whether a quality measure is valid, one has to compare it to some other quality measure that is supposedly trustworthy. We are still searching for measures that we can rely on to prove quality–and as I have already indicated, there may be too many different “qualities” to find ironclad measures. McCallum offers the optimistic view that the US is just beginning to collect the outcomes data that will hopefully give us robust quality measures; patient ratings serve as a proxy in the interim.

When organizations claim to use quality measures for accountable care, ratings, or other purposes, they should have their eyes open about the validity of those measures and how applicable they are. Better data collection and analysis over time should allow more refined and useful quality measures. We can celebrate each advance in the choices we have for measures and their meanings.

What is Quality in Health Care? (Part 1 of 2)

Posted on February 9, 2016 | Written By Andy Oram

Assessing the quality of medical care is one of the biggest analytical challenges in health today. Every patient expects–and deserves–treatment that meets the highest standards. Moreover, it is hard to find an aspect of health care reform that does not depend on accurate quality measurement. Without a firm basis for assessing quality, how can the government pay Accountable Care Organizations properly? How can consumer choice (the great hope of many reformers) become viable? How can hospitals and larger bodies of researchers become “learning health systems” and implement continuous improvement?

Ensuring quality, of course, is crucial in a fee-for-value system to ensure that physicians don’t cut costs just by withholding necessary care. But a lot of people worry that quality-based reimbursement plans won’t work. As this article will show, determining what works and who is performing well are daunting tasks.

A recent op-ed claims that quality measures are adding unacceptable stress to doctors, that the metrics don’t make a difference to ultimate outcomes, that the variability of individual patients can’t be reflected in the measures, that the assessments don’t take external factors adequately into account, and that the essential element of quality is unmeasurable.

Precision medicine may eventually allow us to tailor treatments to individual patients with unique genetic profiles. But in the meantime, we’re guessing much of the time we prescribe drugs.

The term quality originally just distinguished things of different kinds, like the Latin word qualis from which it is derived. So there are innumerable different qualities (as in “The quality of mercy is not strained”). It took a while for quality to be seen as a single continuum, as in an NIH book I’ll cite later, which reduces all quality measures to a single number by weighting different measures and combining them. Given the lack of precision in individual measures and the subjective definitions of quality, it may be a fool’s quest to seek a single definition of quality in health care.
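Mechanically, the single-number approach is simple, as the sketch below shows; the measures, their normalization, and the weights are all invented here, and the hard part in practice is justifying the weights, not computing the sum.

```python
# Miniature version of the composite approach mentioned above: weight
# each normalized measure and sum. Measures and weights are invented.
measures = {"mortality": 0.92, "readmission": 0.85, "infection": 0.78}
weights  = {"mortality": 0.50, "readmission": 0.30, "infection": 0.20}

composite = sum(measures[m] * weights[m] for m in measures)
print(f"composite quality score: {composite:.3f}")  # 0.871
```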

Many qualities in play
Some of the ways to measure quality and outcomes include:

  • Longitudinal research: this tracks a group of patients over many years, like the famous Framingham Heart Study that changed medical care. Modern “big data” research carries on this tradition, using data about patients in the field to supplement or validate conventional clinical research. In theory, direct measurement is the most reliable source of data about what works in public health and treatment. Obvious drawbacks include:

    • the time such studies take to produce reliable results

    • the large numbers of participants needed (although technology makes it more feasible to contact and monitor subjects)

    • the risk that unknown variations in populations will produce invalid results

    • inaccuracies introduced by the devices used to gather patient information

  • Standard of care: this is rooted in clinical research, which in turn tries to ensure rigor through double-blind randomized trials. Clinical trials, although the gold standard in research, are hampered by numerous problems of their own, which I have explored in another article. Reproducibility is currently being challenged in health care, as in many other areas of science.

  • Patient ratings: these are among the least meaningful quality indicators, as I recently explored. Patients can offer valuable insights into doctor/patient interactions and other subjective elements of their experience moving through the health care system–insights to which I paid homage in another article–but they can’t dissect the elements of quality care that went into producing their particular outcome, which in any case may require months or years to find out. Although the patient’s experience determines her perception of quality, it does not necessarily reflect the overall quality of care. The most dangerous aspect of patient ratings, as Health IT business consultant Janice McCallum points out, comes when patients’ views of quality depart from best practices. Many patients are looking for a quick fix, whether through pain-killers, antibiotics, or psychotropic medications, when other interventions are called for on the basis of both cost and outcome. So the popularity of ratings among patients just underscores how little we actually know about clinical quality.

Quality measures by organizations such as the American College of Medical Quality (ACMQ) and National Committee for Quality Assurance (NCQA) depend on a combination of the factors just listed. I’ll look more closely at these in the next part of this article.

#HIMSS16: Some Questions I Plan To Ask

Posted on February 1, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

As most readers know, health IT’s biggest annual event is just around the corner, and the interwebz are heating up with discussions about what #HIMSS16 will bring. The show, which will take place in Las Vegas from February 29 to March 4, offers a ludicrously rich opportunity to learn about new HIT developments — and to mingle with more than 40,000 of the industry’s best and brightest. (You may want to check out the session Healthcare Scene is taking part in and the New Media Meetup.)

While you can learn virtually anything healthcare IT related at HIMSS, it helps to have an idea of what you want to take away from the big event. In that spirit, I’d like to offer some questions that I plan to ask, as follows:

  • How do you plan to support the shift to value-based healthcare over the next 12 months? The move to value-based payment is inevitable now, be it via ACOs or Medicare incentive programs under the Medicare Access and CHIP Reauthorization Act. But succeeding with value-based payment is no easy task. And one of the biggest challenges is building a health IT infrastructure that supports data use to manage the cost of care. So how do health systems and practices plan to meet this technical challenge, and what vendor solutions are they considering? And how do key vendors — especially those providing widely-used EMRs — expect to help?
  • What factors are you considering when you upgrade your EMR? Signs increasingly suggest that this may be the year of the forklift upgrade for many hospitals and health systems. Those that have already invested in massiveware EMRs like Cerner and Epic may be set, but others are ripping out their existing systems (notably McKesson). While in previous years the obvious blue-chip choice was Epic, it seems that some health systems are going with other big-iron vendors based on factors like usability and lower long-term cost of ownership. So, given these trends, how are health systems’ HIT buying decisions shaping up this year, and why?
  • How much progress can we realistically expect to make with leveraging population health technology over the next 12 months? I’m sure that when I travel the exhibit hall at HIMSS16, vendor banners will be peppered with references to their population health tools. In the past, when I’ve asked concrete questions about how they could actually impact population health management, vendor reps got vague quickly. Health system leaders, for their part, generally admit that PHM is still more a goal than a concrete plan.  My question: Is there likely to be any measurable progress in leveraging population health tech this year? If so, what can be done, and how will it help?
  • How much impact will mobile health have on health organizations this year? Mobile health is at a fascinating moment in its evolution. Most health systems are experimenting with rolling out their own apps, and some are working to integrate those apps with their enterprise infrastructure. But to date, it seems that few (if any) mobile health efforts have made a real impact on key areas like management of chronic conditions, wellness promotion and clinical quality improvement. Will 2016 be the year mobile health begins to deliver large-scale, tangible health results? If so, what do vendors and health leaders see as the most promising mHealth models?

Of course, these questions reflect my interests and prejudices. What are some of the questions that you hope to answer when you go to Vegas?

Meaningful Use Holdover Could Be Good News For Healthcare

Posted on January 25, 2016 | Written By Anne Zieger

I know all of us are aflutter about the pending regulatory changes which will phase out Meaningful Use as we know it. And yes, without a doubt, the changes underway will have an impact that extends well beyond the HIT world. But while big shifts are underway in federal incentives programs, it’s worth noting that it could be a while before these changes actually fall into place.

As readers may know, the healthcare industry will be transitioning to working under value-based payment under the Medicare Access and CHIP Reauthorization Act, which passed last year. But as ONC’s Karen DeSalvo noted last week, the transition could take a while. In fact, proposed draft regulations for the MACRA rollout will be released this spring for public comment. When you toss in the time needed for those comments to be submitted, and for the feds to digest those comments and respond, my guess is that MACRA regs won’t go live until late this year at the earliest.

The truth is, this is probably a very good thing. While I don’t have to tell you folks that everyone and their cousin has a Meaningful Use gripe, the industry has largely adapted to the MU mindset. Maybe Meaningful Use Stage 3 wouldn’t have provided a lot of jollies, but on the whole, arguably, most providers have come to terms with the level of process documentation required — and have bought their big-bucks EMRs, committing once and for all to the use of digital health records.

Value-based payment, on the other hand, is another thing entirely. From what I’ve read and researched to date, few health organizations have really sunk their teeth into VBP, though many are dabbling. When MACRA regs finally combine the Physician Quality Reporting System, the Value-based Payment Modifier and the Medicare EHR incentive program into a single entity, providers will face some serious new challenges.

Sure, on the surface the idea of providers being paid for the quality and efficiency they deliver sounds good. Rather than using a strict set of performance measures as proxies for quality, the new MACRA-based programs will focus on a mix of quality, resource use and clinical practice improvement measures, along with measuring meaningful use of certified EHR technology. Under these terms, health systems could conceivably enjoy both greater freedom and better payoffs.

However, given health systems’ experiences to date, particularly with ACOs, I’m skeptical that they’ll be able to pick up the ball and run with the new incentives off the bat. For example, health systems have been abandoning CMS’s value-based Pioneer ACO model at a brisk clip, after finding it financially unworkable. One recent case comes from Dartmouth-Hitchcock Medical Center, which dropped out of the program in October of last year after losing more than $3 million over the previous two years.

I’m not suggesting that health systems can afford to ignore VBP models, or that sticking to MU incentives as previously structured would make sense. But if the process of implementing MACRA gives the industry a chance to do more preparing for value-based payment, it’s probably a good thing.

Why Wouldn’t Doctors Be Happy?

Posted on January 13, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Imagine someone comes to your job and tells you that if you don’t start participating in a bunch of government programs, you’re going to get a 9% pay cut. Plus, those government programs add little value to the work you do, and it’s going to cost you time and money to meet the government requirements. How would you feel?

To add on top of that, we’re going to create a new system for how you’re going to get paid too. In fact, it’s actually going to be two new systems. One that applies to the old system of payment (which has been declining for years) and a new one which isn’t well defined yet.

Also, to add to the fun, you’re going to have to become a collection agency as well, since your usual A/R is going to go up as your payment portfolio changes from large reliable payers to a wide variety of small, less reliable people.

I forgot to mention that in order to get access to these new government programs and avoid the penalties, you’re likely going to have to use technology built in the ’80s. Yes, that means it was built before we even knew what the cloud or mobile was going to be, using advanced technologies like MUMPS.

In case you missed the connection, I’m describing the life of a doctor today. The 9% penalties have arrived. ICD-10 is upon us. ACOs and value-based reimbursement are starting, but are not well defined yet. High deductible plans are shifting physician A/R from payers to patients. EHR software still generally doesn’t leverage technologies like the cloud and mobile devices.

All of this makes for the perfect storm. Is it any wonder physician dissatisfaction is at an all-time high? It’s not to me. It seems like even CMS’ Andy Slavitt finally realized it with the announcement that meaningful use is dead and going to be replaced. It’s a good first step, but the devil is in the details. I hope he’s able to execute, but let’s not be surprised that so many doctors are unhappy about what’s happening to healthcare.