
An Example Of ACO Deals Going Small And Local

Posted on January 2, 2018 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Until recently, ACOs have largely focused on creating large, sprawling structures linking giant providers together across multiple states. However, a news item that popped up on my radar screen reminded me that providers are quietly striking smaller local deals with hospitals and insurance companies as well.

In this case, cardiologists in Tupelo have begun to collaborate with Blue Cross & Blue Shield of Mississippi. Specifically, Cardiology Associates of North Mississippi will partner with Blue Cross affiliate Magellan Health to create Accountable Cardiac Care of Mississippi.

It’s easy to see why the two agreed to the deal. The cardiology group has outpatient clinics across a wide region, including centers in Tupelo, Starkville, Columbus, Oxford and Corinth, along with a hospital practice at North Mississippi Medical Center-Tupelo. That offers a nice range of coverage for the health plan by a much sought-after specialty.

Meanwhile, the cardiology group should get a great deal of help with using data mining to deliver more cost-effective care. Its new partner, Magellan Health, specializes in managing complex conditions using data analytics. “We think we have been practicing this way all along, [but] this will allow us to confirm it,” said Dr. Roger Williams, Cardiology Associates’ president.

Williams told the News Leader that the deal will help his group improve its performance and manage costs. So far it’s been difficult to dig into data which he can use to support these goals. “It’s hard for us as physicians to monitor data,” he told the paper.

The goals of the collaboration with Blue Cross include early diagnosis of conditions and management of patient risk factors. The new payment model the ACO partners are using will offer the cardiology practices bonuses for keeping people healthy and out of expensive ED and hospital settings. Blue Cross and the Accountable Cardiac Care entity will share savings generated by the program.

To address key patient health concerns, Cardiology Associates plans to use both case managers and a Chronic Care program to monitor less stable patients more closely between doctor visits. This tracking program includes protocols which will send out text messages asking questions that detect early warning signs.  The group’s EMR then flags patients who need a case management check-in.
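As a rough sketch of how such a tracking protocol might work (the questions, scoring, and threshold below are hypothetical illustrations, not details of the actual Cardiology Associates program):

```python
# Hypothetical sketch: score a patient's text-message symptom check-in
# and flag those who need a case-management follow-up.

def score_checkin(answers):
    """Count the number of concerning (True) answers in one check-in."""
    return sum(1 for a in answers.values() if a)

def flag_for_case_management(patients, threshold=2):
    """Return IDs of patients whose check-in score meets the threshold."""
    return [pid for pid, answers in patients.items()
            if score_checkin(answers) >= threshold]

# Invented patients and early-warning questions for illustration
patients = {
    "pt-001": {"swelling": True, "weight_gain": True, "breathless": False},
    "pt-002": {"swelling": False, "weight_gain": False, "breathless": False},
}

print(flag_for_case_management(patients))  # only pt-001 crosses the threshold
```

In a real deployment the threshold and question set would come from clinical protocols, and the flag would feed the group's EMR worklist rather than a print statement.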

What makes this neat is that the cardiologists won’t be in the dark about how these strategies have worked. Magellan will analyze group data which will measure how effective these interventions have been for the Blue Cross population. Seems like a good idea. I’d suggest that more should follow this ACO’s lead.

Embracing Quality: What’s Next in the Shift to Value-Based Care, and How to Prepare

Posted on June 13, 2017 | Written By

The following is a guest blog post by Brad Hill, Chief Revenue Officer at RemitDATA.

Whatever the future holds for the Affordable Care Act (ACA), the shift to value-based care is likely here to stay. The number of providers and payers implementing value-based reimbursement contracts has grown steadily over the past few years. A survey of 465 payers and hospitals conducted in 2016 by ORC International and McKesson revealed that 58 percent are moving forward with incorporating value-based reimbursement protocols. The study, “Journey to Value: The State of Value-Based Reimbursements in 2016,” further revealed that as healthcare continues its move toward full value-based reimbursement, bundled payments are the fastest-growing model, with projections that they will continue to grow fastest over the next five years. It also found that network strategies are changing, becoming narrower and more selective, which creates challenges for many payers and hospitals as they struggle to scale these complex strategies.

Given the growing adoption of value-based care, there are certainly many hurdles to clear in the near future as policymakers decide how they plan to repeal and replace the ACA. A January 2017 report by the Urban Institute, funded by the Robert Wood Johnson Foundation, identified the top concerns with potential scenarios being floated by policymakers: an immediate repeal of the individual mandate with delayed repeal of financial subsidies; delayed repeal of the ACA without a concurrent replacement; and a cutoff of cost-sharing subsidies in 2017.

With the assumption that value-based healthcare is here to stay, what steps can you take to continue to prepare for value-based payments? The best advice would be to continue on with a “business as usual” mindset, stay focused and ensure all business processes are ready for this shift by continuing to:

  1. Help providers understand their true cost of conducting business as a baseline for assuming risk.
  2. Analyze your revenue cycle. Look at the big picture for your practice: analyze service costs and reimbursements for each service, and determine whether margins are in line with peers. Identify internal staff processing times and turnaround times by payer. Evaluate whether there are any glaring issues or problems that need to be addressed to reduce A/R days and improve reimbursement rates.
  3. Determine whether there are reimbursement issues for specific payers or if the problem is broader in nature. Are your peers experiencing the same issues with the same payers?
  4. Capture data analysis for practice improvement. With emerging payment models, hospitals and practices will need expertise in evaluating data and knowledge in how to make business adjustments to keep the organization profitable.
  5. Determine how you can scale and grow specific payment models. Consider, for example, a provider group that maintains 4 different payment models and 10 different payers. The provider group will need to determine whether this system is sustainable once payment models shift.
  6. Break down department silos in determining cost allocation rules. Providers need a cost accounting system that can help determine exact costs needed to provide care and to identify highest cost areas. Cost accounting systems are typically managed by the finance team. There needs to be clinical and operational input from all departments to make a difference. Collaborate across all departments to determine costs, and design rules and methodologies that take each into account.
  7. Compare your financial health to that of your peers. Comparative analytics can help by giving you the insights and data to assess your practice’s operational health. Determine whether you take longer to submit claims than your peers, have a higher percentage of denied claims for a specific service, see a lower percentage of billed-to-allowed amounts, and more.
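Several of the steps above (2, 3, and 7 in particular) boil down to computing a few standard revenue-cycle ratios and comparing them against peer benchmarks. A minimal sketch, with all practice figures and benchmark values invented for illustration:

```python
def ar_days(ar_balance, annual_charges):
    """Days in accounts receivable: outstanding A/R over average daily charges."""
    return ar_balance / (annual_charges / 365)

def denial_rate(denied_claims, total_claims):
    """Share of submitted claims denied by payers."""
    return denied_claims / total_claims

def billed_to_allowed(allowed_amount, billed_amount):
    """Portion of billed charges the payer actually allowed."""
    return allowed_amount / billed_amount

# Hypothetical practice figures vs. an invented peer benchmark
practice_ar = ar_days(ar_balance=500_000, annual_charges=3_650_000)   # 50.0 days
peer_ar = 42.0

practice_denials = denial_rate(denied_claims=120, total_claims=1_000)  # 0.12
peer_denials = 0.08

if practice_ar > peer_ar or practice_denials > peer_denials:
    print("Revenue-cycle metrics lag peers; investigate payer-specific issues")
```

In practice these ratios would be computed per payer and per service line, which is what makes it possible to tell a payer-specific reimbursement issue from a broader problem.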

Though change is part of the healthcare industry’s DNA, ensuring business processes are in line, and leveraging data to do so, will help organizations adapt to anything that comes their way.

External Incentives Key Factor In HIT Adoption By Small PCPs

Posted on January 25, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience.

A new study appearing in The American Journal of Managed Care concludes that one of the key factors influencing health IT adoption by small primary care practices is the availability of external incentives.

To conduct the study, researchers surveyed 566 primary care groups with eight or fewer physicians on board. Their key assumption, based on previous studies, was that PCPs were more likely to adopt HIT if they had both external incentives to change and sufficient internal capabilities to move ahead with such plans.

The researchers collected several years’ worth of data, including one survey period between 2007 and 2010 and a second from 2012 to 2013. The proportion of practices reporting that they used only paper records fell by half from one time period to the other, from 66.8% to 32.3%. Meanwhile, the practices adopted higher levels of non-EMR health technology.

The mean health IT summary index – which tracks the number of positive responses to 18 questions on usage of health IT components – grew from 4.7 to 7.3. In other words, practices implemented an average of 2.6 additional health IT functions between the two periods.
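Since the summary index is simply a count of positive answers across 18 items, the reported change works out directly. A sketch with made-up survey responses (only the means, 4.7 and 7.3, come from the study itself):

```python
def summary_index(responses):
    """Health IT summary index: number of 'yes' answers across the 18 items."""
    assert len(responses) == 18
    return sum(responses)

# One hypothetical practice's answers across the two survey waves
wave1 = [True] * 5 + [False] * 13  # index of 5 in the 2007-2010 wave
wave2 = [True] * 7 + [False] * 11  # index of 7 in the 2012-2013 wave

print(summary_index(wave2) - summary_index(wave1))  # 2 functions added

# The study's reported means imply the same arithmetic on average:
print(round(7.3 - 4.7, 1))  # 2.6 additional health IT functions
```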

Utilization rates for specific health IT technologies grew across 16 of the 18 specific technologies listed. For example, while just 25% of practices reported using e-prescribing tech during the first period of the study, 70% reported doing so during the study’s second wave. Another tech category showing dramatic growth was the proportion of practices letting patients view their medical record, which climbed from 1% to 19% by the second wave of research.

Researchers also took a look at the impact factors like practice size, ownership and external incentives had on the likelihood of health IT use. As expected, practices owned by hospitals instead of doctors had higher mean health IT scores across both waves of the survey. Also, practices with 3 to 8 physicians onboard had higher scores than those with one or two doctors.

In addition, external incentives were another significant factor predicting PCP technology use. Researchers found that greater health IT adoption was associated with pay-for-performance programs, participation in public reporting of clinical quality data and a greater proportion of revenue from Medicare. (Researchers assumed that the latter meant they had greater exposure to CMS’s EHR Incentive Program.)

Along the way, the researchers found areas in which PCPs could improve their use of health IT, such as the use of email or online medical records to connect with patients. Only one-fifth of practices were doing so at the time of the second wave of surveys.

I would have liked to learn more about the “internal capabilities” primary care practices need, other than access to hospital dollars, to get the most out of health IT tools. I’d assume that elements such as a decent budget, some internal IT expertise and management support are important, but I’m just speculating. Still, the study gives us some interesting lessons on what future adoption of new technology in healthcare will look like and require.

Why We Store Data in an EHR

Posted on April 27, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Shereese Maynard offered this interesting stat about the data inside an EHR and how that data is used.


I then made up this statistic which isn’t validated, but I believe is directionally accurate:


Colin Hung then validated my tweet with his comment:

It’s a tricky world we live in, but the above discussion is not surprising. EHRs were created to make an office more efficient (many have largely failed at that goal) and to help a practice bill at the highest level. In the US, you get paid based on how you document. It’s safe to say that EHR software has made it easier to document at a higher level and get paid more.

Notice that the goals of EHR software weren’t to improve health outcomes or patient care. Those goals might have been desired by many, but it wasn’t the bill of goods sold to the practice. Now we’re trying to back all this EHR data into health outcomes and improved patient care. Is it any wonder it’s a challenge for us to accomplish these goals?

When was the last time a doctor chose an EHR based on how it could improve patient care? I think most were fine purchasing an EHR that they believed wouldn’t hurt patient care. Sadly, I can’t remember ever seeing a section of an RFP that talks about an EHR’s ability to improve patient care and clinical outcomes.

No, we store data in an EHR so we can improve our billing. We store data in the EHR to avoid liability. We store data in the EHR because we need appropriate documentation of the visit. Can and should that data be used to improve health outcomes and improve the quality of care provided? Yes, and most are heading that way. Although, it’s trailing since customers never demanded it. Plus, customers don’t really see an improvement in their business by focusing on it (we’ll see if that changes in a value based and high deductible plan world).

In my previous post about medical practice innovation, Dr. Nieder commented on the need for doctors to have “margin in their lives” which allows them to explore innovation. Medical billing documentation is one of the things that sucks the margin out of a doctor’s life. We need to simplify the billing requirements. That would give doctors more margin to innovate and explore ways EHR and other technology can improve patient care and clinical outcomes.

In response to yesterday’s post about Virtual ACOs, Randall Oates, MD and Founder of SOAPware (and a few other companies), commented “Additional complexity will not solve healthcare crises in spite of intents.” He, like me, fears that all of this value-based reimbursement and ACO movement is just adding more billing complexity as opposed to simplifying things so that doctors have more margin in their lives to improve healthcare. More complexity is not the answer. More room to innovate is the answer.

Our Uncontrolled Health Care Costs Can Be Traced to Data and Communication Failures (Part 2 of 2)

Posted on April 13, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article provided whatever detail I could find on the costs of poor communications and data exchange among health care providers. But in truth, it’s hard to imagine the toll taken by communications failures beyond certain obvious consequences, such as repeated tests and avoidable medical errors. One has to think about how the field operates and what we would be capable of with proper use of data.

As patients move from PCP to specialist, from hospital to rehab facility, and from district to district, their providers need not only discharge summaries but intensive coordination to prevent relapses. Our doctors are great at fixing a diabetic episode or heart-related event. Where we fall down is on getting the patient the continued care she needs, ensuring she obtains and ingests her medication, and encouraging her to make the substantial lifestyle changes that can prevent recurrences. Modern health care really is all about collaboration–but doctors are decades behind the times.

Clinicians were largely unprepared to handle the new patients brought to them by the Affordable Care Act. Examining the impact of new enrollees, who “have higher rates of disease and received significantly more medical care,” an industry spokesperson said, “The findings underscore the need for all of us in the health care system, and newly insured consumers, to work together to make sure that people get the right health care service in the right care setting and at the right time…Better communication and coordination is needed so that everyone understands how to avoid unnecessary emergency room visits, make full use of primary care and preventive services and learn how to properly adhere to their medications.” Just where the health providers fall short.

All these failures to communicate may explain the disappointing performance of patient centered medical homes and Accountable Care Organizations. While many factors go into the success or failure of such complex practices, a high rate of failure suggests that they’re not really carrying out the coordinated care they were meant to deliver. Naturally, problems persist in getting data from one vendor’s electronic health record to another.

Urgent care clinics, and other alternative treatment facilities offered in places such as pharmacies, can potentially lower costs, but not if the regular health system fails to integrate them.

Successes in coordinated care show how powerful it can be. Even so simple a practice as showing medical records to patients can improve care, but most clinicians still deny patients access to their data.

One care practice drastically lowered ER admissions through a notably low-tech policy–referring their patients to a clinic for follow-up care. This is only the beginning of what we could achieve. If modern communications were in place, hospitals would be linked so that a CDC warning could go to all of them instantly. And if clinicians and their record systems were set up to handle patient-generated data, they could discover a lot more about the patients and monitor behavior change.

How are the hospitals and clinics responding to this crisis and the public pressure to shape up? They push back as if it were not their problem. They claim they are moving toward better information sharing and teamwork, but never get there.

One of their favorite gambits is to ask the government to reward them for achieving interoperability 90 days out of the year. They make this request with no groveling, no tears of shame, no admission that they have failed in their responsibility to meet reasonable goals set seven years ago. If I delivered my projects only 25% of the time, I’d have trouble justifying myself to my employer, especially if I received my compensation plan seven years ago. Could the medical industry imagine that it owes us a modicum of effort?

Robert Schultz, a writer and entrepreneur in health care, says, “Underlying the broken communications model is a lack of empathy for the ultimate person affected–the patient. Health care is one of the few industries where the user is not necessarily the party paying for the product or service. Electronic health records and health information exchanges are designed around the insurance companies, accountable care organizations, or providers, instead of around understanding the challenges and obstacles that patients face on a daily basis. (There are so many!) The innovators who understand the role of the patient in this new accountable care climate will be winners. Those who suffer from the burden of legacy will continue to see the same problems and will become eclipsed by other organizations who can sustain patient engagement and prove value within accountable care contracts.”

Alternative factors

Of course, after such a provocative accusation, I should consider the other contributors that are often blamed for increasing health care costs.

An aging population

Older people have more chronic diseases, a trend that is straining health care systems from Cuba to Japan. This demographic reality makes intelligent data use even more important: remote monitoring for chronic conditions, graceful care transitions, and patient coordination.

The rising cost of drugs

Dramatically increasing drug prices are certainly straining our payment systems. Doctors who took research seriously could be pushing back against patient requests for drugs that work more often in TV ads than in real life. Doctors could look at holistic pain treatments such as yoga and biofeedback, instead of launching the worst opiate addiction crisis America has ever had.

Government bureaucracy

This seems to be a condition of life we need to deal with, like death and taxes. True, the Centers for Medicare & Medicaid Services (CMS) keeps adding requirements for data to report. But much of it could be automated if clinical settings adopted modern programming practices. Furthermore, this data appears to be a burden only because it isn’t exploited. Most of it is quite useful, and it just takes agile organizations to query it.

Intermediaries

Reflecting the Byzantine complexity of our payment systems, a huge number of middlemen–pharmacy benefits managers, medical billing clearinghouses, even the insurers themselves–enter the system, each taking its cut of the profits. Single-payer insurance has long been touted as a solution, but I’d rather push for better and cheaper treatments than attack the politically entrenched payment system.

Under-funded public health

Poverty, pollution, stress, and other external factors have huge impacts on health. This problem isn’t about clinicians, of course; it’s about all of us. But clinicians could be doing more to document these factors and intervene to improve them.

Clinicians like to point to barriers in their way of adopting information-based reforms, and tell us to tolerate the pace of change. But like the rising seas of climate change, the bite of health care costs will not tolerate complacency. The hard part is that merely wagging fingers and imposing goals–the ONC’s primary interventions–will not produce change. I think that reform will happen in pockets throughout the industry–such as the self-insured employers covered in a recent article–and eventually force incumbents to evolve or die.

The precision medicine initiative, and numerous databases being built up around the country with public health data, may contribute to a breakthrough by showing us the true quality of different types of care, and helping us reward clinicians fairly for treating patients of varying needs and risk. The FHIR standard may bring electronic health records in line. Analytics, currently a luxury available only to major health conglomerates, will become more commoditized and reach other providers.

But clinicians also have to do their part, and start acting like the future is here now. Those who make a priority of data sharing and communication will set themselves up for success long-term.

Our Uncontrolled Health Care Costs Can Be Traced to Data and Communication Failures (Part 1 of 2)

Posted on April 12, 2016 | Written By

Andy Oram is an editor at O'Reilly Media specializing in open source, software engineering, and health IT.

A host of scapegoats, ranging from the Affordable Care Act to unscrupulous pharmaceutical companies, have been blamed for the rise in health care costs that are destroying our financial well-being, our social fabric, and our political balance. In this article I suggest a more appropriate target: the inability of health care providers to collaborate and share information. To some extent, our health care crisis is an IT problem–but with organizational and cultural roots.

It’s well known that large numbers of patients have difficulty with costs, and that employees’ share of the burden is rising. We’re going to have to update the famous Rodney Dangerfield joke:

My doctor said, “You’re going to be sick.” I said I wanted a second opinion. He answered, “OK, you’re going to be poor too.”

Most of us know about the insidious role of health care costs in holding down wages, in the fight by Wisconsin Governor Scott Walker over pensions that tore the country apart, in crippling small businesses, and in narrowing our choice of health care providers. Not all realize, though, that the crisis is leaching through the health care industry as well, causing hospitals to fail, insurers to push costs onto subscribers and abandon the exchanges where low-income people get their insurance, co-ops to close, and governments to throw people off of subsidized care, threatening the very universal coverage that the ACA aimed to achieve.

The lessons of T.R. Reid’s ground-breaking book, The Healing of America, suggest that we’re undergoing a painful transition that every country has traversed to achieve a rational health care system. Like us, other countries started by committing themselves to universal health care access. That commitment then creates the pressure to control costs, as well as the opportunities for coordination and economies of scale that eventually institute those controls. Solutions will take time, but we need to be smart about where to focus our efforts.

Before even the ACA, the 2009 HITECH act established goals of data exchange and coordinated patient care. But seven years later, doctors still lag in:

  • Coordinating with other providers treating the patients.

  • Sending information that providers need to adequately treat the patients.

  • Basing treatment decisions on evidence from research.

  • Providing patients with their own health care data.

We’ll look next at the reports behind these claims, and at the effects of the problems.

Why doctors don’t work together effectively

A recent report released by the ONC, and covered by me in a recent article, revealed the poor state of data sharing, after decades of Health Information Exchanges and four years of Meaningful Use. Health IT observers expect interoperability to continue being a challenge, even as changes in technology, regulations, and consumer action push providers to do it.

If merely exchanging documents is so hard–and often unachieved–patient-focused, coordinated care is clearly impossible. Integrating behavioral care to address chronic conditions will remain a fantasy.

Evidence-based medicine is also more of an aspiration than a reality. Research is not always trustworthy, but we must have more respect for the science than hospitals were found to have in a recent GAO report. They fail to collect data either on the problems leading to errors or on the efficacy of solutions. There are incentive programs from payers, but no one knows whether they help. Doctors are still ordering far too many unnecessary tests.

Many companies in the health analytics space offer services that can bring more certainty to the practice of medicine, and I often cover them in these postings. Although increasingly cited as a priority, analytical services are still adopted by only a fraction of health care providers.

Patients across the country are suffering from disrupted care as insurers narrow their networks. It may be fair to force patients to seek less expensive providers–but not when all their records get lost during the transition. This is all too likely in the current non-interoperable environment. Of course, redundant testing and treatment errors caused by ignorance could erase the gains of going to low-cost providers.

Some have bravely tallied up the costs of waste and lack of care coordination in health care. Some causes, such as fraud and price manipulation, are not attributable to the health IT failures I describe. But an enormous chunk of costs directly implicate communications and data handling problems, including administrative overhead. The next section of this article will explore what this means in day-to-day health care.

Harvard Law Conference Surveys Troubles With Health Care

Posted on March 30, 2016 | Written By

Andy Oram is an editor at O'Reilly Media specializing in open source, software engineering, and health IT.

It is salubrious to stretch oneself and regularly attend a conference in a related field. At the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics, one can bask in the wisdom of experts who are truly interdisciplinary (as opposed to people like me, who are simply undisciplined). Their Tenth Anniversary Conference drew about 120 participants. The many topics–which included effects of the Supreme Court rulings on the Affordable Care Act and other cases, reasons that accountable care and other efforts haven’t lowered costs, stresses on the pharmaceutical industry, and directions in FDA regulation–contained several insights for health IT professionals.

From my perspective, the center of the conference was the panel titled “Health Innovation Policy and Regulating Novel Technology.” A better title might have been “How to Make Pharma Profitable Again,” because most of the panelists specialized in pharmaceuticals or patents. They spun out long answers to questions about how well patents can protect innovation (recognizing a controversy); the good, the bad, and the ugly of pricing; and how to streamline clinical trials, possibly adding risk. Their pulses really rose when they were asked a question about off-label drug use. But they touched on health IT and suggested many observations that could apply to it as well.

It is well known that drug development and regulatory approval take years–perhaps up to 20 years–and that high-tech companies developing fitness devices or software apps have a radically different product cycle. As one panelist pointed out, it would kill innovation to require renewed regulatory approval for each software upgrade. He suggested that the FDA define different tiers of changes, and that minor ones with little risk of disrupting care be allowed automatically.

I would look even further. It is also well known that disruptive inventions displace established technologies. Just as people with mobile devices get along without desktop computers and even TV sets, medicines have displaced many surgical procedures. Now the medicines themselves (particularly controversial mental health medicines) can sometimes be replaced by interactive apps and online services. Although rigorous testing is still lacking for most of these alternatives, the biggest barrier to their adoption is lack of reimbursement in our antiquated health payment system.

Instead of trying to individually fix each distortion in payment, value-based care is the reformer’s solution to the field’s inefficient use of treatment options. Value-based care requires more accurate information on quality and effectiveness, as I recently pointed out. And this in turn may lead to the more flexible regulations suggested by the panelist, with a risk that is either unchanged or raised by an amount we can tolerate.

Comparisons between information and other medical materials can be revealing. For instance, as the public found out in the Henrietta Lacks controversy, biospecimens are treated as freely tradable (so long as the specimen is de-identified), with no patient consent required. It’s assumed that we should treat de-identified patient information the same way, but in fact there’s a crucial difference. No one would expect the average patient to share and copy his own biospecimens, but doing so with information is trivially easy. Therefore, patients should have more of a say about how their information is used, even if biospecimens are owned by the clinician.

Some other insights I picked up from this conference were:

  • Regulations and policies by payers drive research more than we usually think. Companies definitely respond to what payers are interested in, not just to the needs of the patients. One panelist pointed out that the launch of Medicare Part D, covering drugs for the first time, led to big new investments in pharma.

  • Hotels and other service-oriented industries can provide a positive experience efficiently because they tightly control the activities of all the people they employ. Accountable Care Organizations, in contrast, contain loose affiliations and do not force their staff to coordinate care (even though that was the ideal behind their formation), and therefore cannot control costs.

  • Patents, which the pharma companies consider so important to their business model, are not normally available for diagnostic tests. (The attempt by Myriad Genetics to patent the BRCA1 gene in order to maintain a monopoly over testing proves this point: the Supreme Court overturned the patent.) However, as tests get more complex, the FDA has started regulating them. This has the side effect of boosting the value of tests that receive approval, an advantage over competitors.

Thanks to Petrie-Flom for generously letting the public in on events with such heft. Perhaps IT can make its way deeper into next year’s conference.

Another Quality Initiative Ahead of Its Time, From California

Posted on March 21, 2016 | Written By


When people seek health care–or any other service–they evaluate it for both cost and quality. But health care regulators have to recognize when the ingredients for quality assessment are missing. Otherwise, assessing quality becomes like the drunk who famously looked for his key under the lamplight instead of where the key actually lay. And sadly, as I read a March 4 draft of a California initiative to rate health care insurance, I find that once again the foundations for assessing quality are not in place, and we are chasing lamplights rather than the keys that will unlock better care.

The initiative I’ll discuss in this article comes out of Covered California, one of the United States’ 13 state-based marketplaces for health insurance mandated by the ACA. (All the other states use a federal marketplace or some hybrid solution.) As the country’s biggest state–and one known for progressive experiments–California is worth following to see how adept it is at promoting the universally acknowledged Triple Aim of health care.

An overview of health care quality

There’s no dearth of quality measurement efforts in health care–I gave a partial overview in another article. The Covered California draft cites many of these efforts and advises insurers to hook up with them.

Alas, there are problems with all of these quality efforts:

  • Problems with gathering accurate data (and as we’ll see in California’s case, problems with the overhead and bureaucracy created by this gathering)

  • Problems finding measures that reflect actual improvements in outcomes

  • Problems separating things doctors can control (such as follow-up phone calls) from things they can’t (lack of social supports or means of getting treatment)

  • Problems turning insights into programs that improve care.

But the biggest problem in health care quality, I believe, is the intractable variety of patients. How can you say that a particular patient with a particular combination of congestive heart failure, high blood pressure, and diabetes should improve by a certain amount over a certain period of time? How can you guess how many office visits it will take to achieve a change, how many pills, how many hospitalizations? How much should an insurer pay for this treatment?

The more sophisticated payers stratify patients, classifying them by the seriousness of their conditions. And of course, doctors have learned how to game that system. A cleverly designed study by the prestigious National Bureau of Economic Research has uncovered upcoding in the U.S.’s largest quality-based reimbursement program, Medicare Advantage. It demonstrates that doctors are gaming the system in two ways. First, as the use of Medicare Advantage goes up, so do the diagnosed risk levels of patients. Second, patients who transition from private insurance into Medicare Advantage show higher risk scores not seen in fee-for-service Medicare.

I don’t see any fixes in the Covered California draft to the problem of upcoding. Probably, like most government reimbursement programs, California will slap on some weighting factor that rewards hospitals with higher numbers of poor and underprivileged patients. But this is a crude measure and is often suspected of underestimating the extra costs these patients bring.

A look at the Covered California draft

Covered California certainly understands what the health care field needs, and one has to be impressed with the sheer reach and comprehensiveness of their quality plan. Among other things, they take on:

  • Patient involvement and access to records (how the providers hated that in the federal Meaningful Use requirements!)

  • Racial, ethnic, and gender disparities

  • Electronic record interoperability

  • Preventive health and wellness services

  • Mental and behavioral health

  • Pharmaceutical costs

  • Telemedicine

If there are any pet initiatives of healthcare reformers that didn’t make it into the Covered California plan, I certainly am having trouble finding them.

Being so extensive, the plan suffers from two more burdens. First, the reporting requirements are enormous–I would imagine that insurers and providers would balk simply at that. The requirements are burdensome partly because Covered California doesn’t seem to trust that the major thrust of health reform–paying for outcomes instead of for individual services–will provide an incentive for providers to do other good things. They haven’t forgotten value-based reimbursement (it’s in section 8.02, page 33), but they also insist on detailed reporting about patient engagement, identifying high-risk patients, and reducing overuse through choosing treatments wisely. All those things should happen on their own if insurers and clinicians adopt payments for outcomes.

Second, many of the mandates are vague. It’s not always clear what Covered California is looking for–let alone how the reporting requirements will contribute to positive change. For instance, how will insurers be evaluated in their use of behavioral health, and how will that use be mapped to meeting the goals of the Triple Aim?

Is rescue on the horizon?

According to a news report, the Covered California plan is “drawing heavy fire from medical providers and insurers.” I’m not surprised, given all the weaknesses I found, but I’m disappointed that their objections (as stated in the article) come from the worst possible motivation: they don’t like its call for transparent pricing. Hiding the padding of costs by major hospitals, the cozy payer/provider deals, and the widespread disparities unrelated to quality doesn’t put providers and insurers on the moral high ground.

To me, the true problem is that the health care field has not learned yet how to measure quality and cost effectiveness. There’s hope, though, with the Precision Medicine initiative that recently celebrated its first anniversary. Although analytical firms seem to be focusing on processing genomic information from patients–a high-tech and lucrative undertaking, but one that offers small gains–the real benefit would come if we “correlate activity, physiological measures and environmental exposures with health outcomes.” Those sources of patient variation account for most of the variability in care and in outcomes. Capture that, and quality will be measurable.

What is Quality in Health Care? (Part 2 of 2)

Posted on February 10, 2016 | Written By


The first part of this article described different approaches to quality–and in fact to different qualities. In this part, I’ll look at the problems with quality measures, and at emerging solutions.

Difficulties of assessing quality

The Methods chapter of a book from the National Center for Biotechnology Information at NIH lays out many of the hurdles that researchers and providers face when judging the quality of clinical care. I’ll summarize a few of the points from the Methods chapter here, but the chapter is well worth a read. The review showed how hard it is to measure accurately many of the things we’d like to know about.

For instance, if variations within a hospital approach (or exceed) the variations between hospitals, there is little benefit to comparing hospitals using that measure. If the same physician gets wildly different scores from year to year, the validity of the measure is suspect. When care is given by multiple doctors and care teams, it is unjust to ascribe the outcome to the patient’s principal caretaker. If random variations outweigh everything, the measure is of no use at all. One must also keep in mind practical considerations, such as making sure the process of collecting data does not cost too much.
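The within- versus between-hospital comparison can be sketched numerically. The following is a minimal illustration with invented readmission rates–not any agency’s actual method–showing how year-to-year noise inside each hospital can swamp the differences between hospitals:

```python
import statistics

# Hypothetical 30-day readmission rates (%) for three hospitals over
# four years. All names and numbers are invented for illustration.
rates = {
    "Hospital A": [12.1, 15.8, 11.4, 16.2],
    "Hospital B": [13.0, 14.9, 12.2, 15.5],
    "Hospital C": [12.8, 15.1, 11.9, 14.8],
}

# Within-hospital variance: how much one hospital's score moves year to year.
within = statistics.mean(statistics.variance(v) for v in rates.values())

# Between-hospital variance: how much the hospitals' long-run averages differ.
means = [statistics.mean(v) for v in rates.values()]
between = statistics.variance(means)

# If within-hospital noise swamps between-hospital differences, ranking
# hospitals on this measure tells us little.
print(f"within={within:.2f}, between={between:.2f}")
print("measure discriminates" if between > within else "measure is mostly noise")
```

With these invented numbers the within-hospital variance is roughly 200 times the between-hospital variance, so a league table built on this measure would mostly rank noise.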

Many measures apply to a narrow range of patients (for instance, those with pneumonia) and therefore may be skewed for doctors with a relatively small sample of those patients. And a severe winter could elevate mortality from pneumonia, particularly if patients have trouble getting adequate shelter and heat. In general, “For most outcomes, the impacts of random variation and patient factors beyond providers’ control often overwhelm differences attributable to provider quality.” ACMQ quality measures “most likely cannot definitively distinguish poor quality providers from high quality providers, but rather may illuminate potential quality problems for consideration of further investigation.”

The chapter helps explain why many researchers fall back on standard of care. Providers don’t trust outcome-based measures because of random variations and factors beyond their control, including poverty and other demographics. It’s hard even to know what contributed to a death, because in the final months it may not have been feasible to complete the diagnoses of a patient. Thus, doctors prefer “process measures.”

Among the criteria for evaluating quality indicators we see, “Does the indicator capture an aspect of quality that is widely regarded as important?” and more subtly, “subject to provider or public health system control?” The latter criterion heeds physicians who say, “We don’t want to be blamed for bad habits or other reasons for noncompliance on the part of our patients, or for environmental factors such as poverty that resist quick fixes.”

The book’s authors are certainly aware of the bias created by gaming the reimbursement system: “systematic biases in documentation and coding practices introduced by awareness that risk-adjustment and reimbursement are related to the presence of particular complications.” The paper points out that diagnosis data is more trustworthy when it is informed by clinical information, not just billing information.

One of the most sensitive–and important–factors in quality assessment is risk adjustment, which means recognizing which patients have extra problems making their care more difficult and their recovery less certain. I have heard elsewhere the claim that CMS doesn’t cut physicians enough slack when they take on more risky patients. Although CMS tries to take poverty into account, hospital administrators suspect that institutions serving low-income populations–and safety-net hospitals in particular–are penalized for doing so.

Risk adjustment criteria are sometimes unpublished. But the most perverse distortion in the quality system comes when hospitals fail to distinguish iatrogenic complications (those introduced by medical intervention, such as infections incurred in the hospital) from the original diseases that the patient brought. CMS recognizes this risk in efforts such as penalties for hospital-acquired conditions. Unless these are flagged correctly, hospitals can end up being rewarded for treating sicker patients–patients that they themselves made sicker.
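The importance of flagging iatrogenic complications can be shown in a toy risk-score calculation. The condition names, weights, and present-on-admission (POA) flag format below are all invented for illustration; they do not reflect any real CMS model:

```python
# Invented risk weights for a handful of conditions.
RISK_WEIGHTS = {"diabetes": 0.3, "heart_failure": 0.6, "sepsis": 1.2}

def risk_score(conditions):
    """Sum risk weights, counting only conditions the patient arrived with.

    Each condition is a (name, present_on_admission) pair. Hospital-acquired
    conditions (POA = False) are excluded, so a hospital cannot raise its
    expected-severity baseline with an infection it caused itself.
    """
    return sum(RISK_WEIGHTS[name] for name, poa in conditions if poa)

# This patient arrived with diabetes and heart failure, then acquired
# sepsis in the hospital: only the first two count toward the baseline.
patient = [("diabetes", True), ("heart_failure", True), ("sepsis", False)]
print(round(risk_score(patient), 2))  # 0.9: hospital-acquired sepsis not credited
```

Without the POA filter, the same patient would score 2.1, and the hospital would look as if it had treated a far sicker patient than the one who walked in.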

Distinguishing layers of quality

Theresa Cullen, associate director of the Regenstrief Institute’s Global Health Informatics Program, suggests that we think of quality measures as a stack, like those offered by software platforms:

  1. The bottom of the stack might simply measure whether a patient receives the proper treatment for a diagnosed condition. For instance, is the hemoglobin A1C of each diabetic patient taken regularly?

  2. The next step up is to measure the success of the first measure. How many patients’ A1C was under control for their stage of the disease?

  3. Next we can move to measuring outcomes: improvements in diabetic status, for instance, or prevention of complications from diabetes.

  4. Finally, we can look at the quality of the patient’s life–quality-adjusted life years.
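The first three tiers of this stack can be sketched with a few invented patient records (the field names and data are assumptions for illustration; tier 4, quality-adjusted life years, needs long-horizon data that a snapshot like this cannot supply):

```python
# Toy records for four diabetic patients. All fields are invented.
patients = [
    {"a1c_tested": True,  "a1c_controlled": True,  "complication": False},
    {"a1c_tested": True,  "a1c_controlled": False, "complication": True},
    {"a1c_tested": False, "a1c_controlled": False, "complication": False},
    {"a1c_tested": True,  "a1c_controlled": True,  "complication": False},
]

n = len(patients)

# Tier 1 (process): was the proper test performed at all?
tested = sum(p["a1c_tested"] for p in patients) / n

# Tier 2 (intermediate result): among those tested, was A1C under control?
tested_pts = [p for p in patients if p["a1c_tested"]]
controlled = sum(p["a1c_controlled"] for p in tested_pts) / len(tested_pts)

# Tier 3 (outcome): were complications of diabetes prevented?
complication_free = sum(not p["complication"] for p in patients) / n

print(f"tested={tested:.0%} controlled={controlled:.0%} "
      f"complication-free={complication_free:.0%}")
```

Notice how each tier asks a harder question of the same records: the process score (75% tested) says nothing by itself about whether the disease was actually controlled (67% of those tested) or whether complications were avoided (75%).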

Ultimately, to judge whether a quality measure is valid, one has to compare it to some other quality measure that is supposedly trustworthy. We are still searching for measures that we can rely on to prove quality–and as I have already indicated, there may be too many different “qualities” to find ironclad measures. McCallum offers the optimistic view that the US is just beginning to collect the outcomes data that will hopefully give us robust quality measures; patient ratings serve as a proxy in the interim.

When organizations claim to use quality measures for accountable care, ratings, or other purposes, they should have their eyes open about the validity of the validation measures, and how applicable they are. Better data collection and analysis over time should allow more refined and useful quality measures. We can celebrate each advance in the choices we have for measures and their meanings.

What is Quality in Health Care? (Part 1 of 2)

Posted on February 9, 2016 | Written By


Assessing the quality of medical care is one of the biggest analytical challenges in health today. Every patient expects–and deserves–treatment that meets the highest standards. Moreover, it is hard to find an aspect of health care reform that does not depend on accurate quality measurement. Without a firm basis for assessing quality, how can the government pay Accountable Care Organizations properly? How can consumer choice (the great hope of many reformers) become viable? How can hospitals and larger bodies of researchers become “learning health systems” and implement continuous improvement?

Ensuring quality, of course, is crucial in a fee-for-value system to ensure that physicians don’t cut costs just by withholding necessary care. But a lot of people worry that quality-based reimbursement plans won’t work. As this article will show, determining what works and who is performing well are daunting tasks.

A recent op-ed claims that quality measures are adding unacceptable stress to doctors, that the metrics don’t make a difference to ultimate outcomes, that the variability of individual patients can’t be reflected in the measures, that the assessments don’t take external factors adequately into account, and that the essential element of quality is unmeasurable.

Precision medicine may eventually allow us to tailor treatments to individual patients with unique genetic profiles. But in the meantime, we’re guessing much of the time when we prescribe drugs.

The term quality originally just distinguished things of different kinds, like the Latin word qualis from which it is derived. So there are innumerable different qualities (as in “The quality of mercy is not strained”). It took a while for quality to be seen as a single continuum, as in an NIH book I’ll cite later, which reduces all quality measures to a single number by weighting different measures and combining them. Given the lack of precision in individual measures and the subjective definitions of quality, it may be a fool’s quest to seek a single definition of quality in health care.
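The weighting-and-combining approach the NIH book takes can be sketched in a few lines. The measure names, scores, and weights below are invented; the point is only that the choice of weights, itself a subjective judgment, drives the single number that comes out:

```python
def composite(scores, weights):
    """Weighted average of normalized (0-1) measure scores."""
    total_w = sum(weights.values())
    return sum(scores[m] * w for m, w in weights.items()) / total_w

# Hypothetical per-measure scores and (subjective) importance weights.
scores  = {"readmission": 0.80, "infection_control": 0.95, "patient_rating": 0.60}
weights = {"readmission": 2.0,  "infection_control": 1.5,  "patient_rating": 0.5}

print(round(composite(scores, weights), 3))  # 0.831
```

Shift weight from readmissions to patient ratings and the same hospital gets a different “quality”–which is exactly why a single combined number should be read with skepticism.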

Many qualities in play
Some of the ways to measure quality and outcomes include:

  • Longitudinal research: this tracks a group of patients over many years, like the famous Framingham Heart Study that changed medical care. Modern “big data” research carries on this tradition, using data about patients in the field to supplement or validate conventional clinical research. In theory, direct measurement is the most reliable source of data about what works in public health and treatment. Obvious drawbacks include:

    • the time such studies take to produce reliable results

    • the large numbers of participants needed (although technology makes it more feasible to contact and monitor subjects)

    • the risk that unknown variations in populations will produce invalid results

    • inaccuracies introduced by the devices used to gather patient information

  • Standard of care: this is rooted in clinical research, which in turn tries to ensure rigor through double-blind randomized trials. Clinical trials, although the gold standard in research, are hampered by numerous problems of their own, which I have explored in another article. Reproducibility is currently being challenged in health care, as in many other areas of science.

  • Patient ratings: these are among the least meaningful quality indicators, as I recently explored. Patients can offer valuable insights into doctor/patient interactions and other subjective elements of their experience moving through the health care system–insights to which I paid homage in another article–but they can’t dissect the elements of quality care that went into producing their particular outcome, which in any case may require months or years to find out. Although the patient’s experience determines her perception of quality, it does not necessarily reflect the overall quality of care. The most dangerous aspect of patient ratings, as Health IT business consultant Janice McCallum points out, comes when patients’ views of quality depart from best practices. Many patients are looking for a quick fix, whether through pain-killers, antibiotics, or psychotropic medications, when other interventions are called for on the basis of both cost and outcome. So the popularity of ratings among patients just underscores how little we actually know about clinical quality.

Quality measures by organizations such as the American College of Medical Quality (ACMQ) and National Committee for Quality Assurance (NCQA) depend on a combination of the factors just listed. I’ll look more closely at these in the next part of this article.