How Much Patient Data Do We Truly Need?

Posted on November 23, 2015 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com, and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

As the demands placed on healthcare data increase, the drive to manage it effectively has of course grown as well. This has led to the collection of mammoth quantities of data — one trade group estimates that U.S. hospitals will manage 665 terabytes of data during 2015 alone — but not necessarily better information.

The assumption that we need to capture most, if not all, of a patient’s care history digitally is clearly driving this data accumulation process. As care moves into the digital realm, the volume of data generated by the healthcare industry is climbing 48% per year, according to one estimate. I can only assume that the rate of increase will grow as providers incorporate data feeds from mHealth apps, remote monitoring devices and wearables, the integration of which is not far in the future.

The thing is, most of the healthcare big data discussions I’ve followed assume that providers must manage, winnow and leverage all of this data. Few, if any, influencers seem to be considering the possibility that we need to set limits on what we manage, much less developing criteria for screening out needless data points.

As we all know, not all data is created equal. One conversation I had with a physician back in the early 1990s makes the point perfectly. At the time, I asked him whether he felt it would be helpful to put a patient’s entire medical history online someday, a distant but still imaginable possibility at the time. “I don’t know what we should keep,” he said. “But I know I don’t need to know what a patient’s temperature was 20 years ago.”

On the other hand, providers may not have access to all of the data they need either. According to research by EMC, while healthcare organizations typically import three years of legacy data into a new EMR, many other pertinent records are not available. Given the persistence of paper, poor integration of clinical systems and other challenges, only 25% of relevant data may be readily available, the vendor reports.

Because this problem (arguably) gets too little attention, providers grappling with it are being forced to set their own standards. Should hospitals and clinics expand that three years of legacy data integration to five years? Ten years? The patient’s entire lifetime? And how should institutions make such a decision? To my knowledge, there’s still no clear-cut way to make such decisions.

But developing best practices for data integration is critical. Given the costs of managing needless patient data — which may include sub-optimal outcomes due to data fog — it’s critical to develop some guidelines for setting limits on clinical data accumulation. While failing to collect relevant patient data has consequences, turning big data into astronomically big data does as well.

By all means, let’s keep our eye on how to leverage new patient-centric data sources like wearable health trackers. It seems clear that such data has a role in stepping up patient care, at least once we understand which part of it is wheat and which part chaff.

That being said, continuing to amass data at exponential rates is unsustainable and ultimately, harmful. Sometimes, setting limits is the only way that you can be sure that what remains is valuable.