
Using Healthcare Analytics to Achieve Strong Financial Performance

Posted on September 25, 2015 | Written By

John Lynn is the Founder of the blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of and John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Everyone is talking about analytics, but I’ve been looking for the solutions that take analytics and package it nicely. This is what I hoped for when I found this whitepaper called How Healthcare Providers Can Leverage Advanced Analytics to Achieve Strong Financial Performance. This is a goal that I think most of us in healthcare IT would like to achieve. We want healthcare providers to be able to leverage analytics to improve their business.

However, this illustration from the whitepaper shows exactly why we’re not seeing the results we want from our healthcare analytics efforts:
Advanced Analytics Impact on Healthcare

That’s a complex beast if I’ve ever seen one. Most providers I talk to want the results that this chart espouses, but they want it just to happen. They want all the back end processing of data to happen inside a black box and they just want to feed in data like they’ve always done and have the results spit out to them in a format they can use.

This is the challenge of the next century of healthcare IT. EHR is just the first step in the process of getting data. Now we have the hard work of turning that data into something more useful than the paper chart provided.

The whitepaper does suggest these three steps we need to take to get value from our analytics efforts:
1. Data capture, storage, and access
2. Big data and analytics
3. Cognitive computing

If you read the whitepaper they talk more about all three of these things. However, it’s very clear that most organizations are still at step 1 with only a few starting to dabble in step 2. Some might see this as frustrating or depressing. I see it as exciting since it means that the best uses of healthcare IT are still to come. However, we’re going to need these solutions to be packaged in a really easy to use package. Otherwise no one will adopt them.

Is Claims Data Really So Bad For Health Care Analytics?

Posted on June 12, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Two commonplaces heard in the health IT field are that the data in EHRs is aimed at billing, and that billing data is unreliable input to clinical decision support or other clinically related analytics. These statements form two premises to a syllogism for which you can fill in the conclusion. But at two conferences last week–the Health Datapalooza and the Health Privacy Summit–speakers indicated that smart analysis can derive a lot of value from claims data.

The Healthcare Cost and Utilization Project (HCUP), run by the government’s Agency for Healthcare Research and Quality (AHRQ), is based on hospital release data. Major elements include the payer, diagnoses, procedures, charges, length of stay, etc. along with potentially richer information such as patients’ ages, genders, and income levels. A separate Clinical Content Enhancement Toolkit does allow states to add clinical data, while American Hospital Association Linkage Files let hospitals upload data about their facilities.

But basically, HCUP data revolves around claims from all-payer databases. It is currently collected from 47 states, and varies on a state-by-state basis depending on what data each state allows to be released. HCUP goes back to 2006 and powers a lot of research, notably efforts to improve outreach to underserved racial and ethnic groups.

During an interview at the Health Privacy Summit, Lucia Savage, Chief Privacy Officer at ONC, mentioned that one can use claims data to determine what treatments doctors offer for various conditions (such as mammograms, which tend to be underused, and antibiotics, which tend to be overused). Thus, analysts can target providers who fail to adhere to standards of care and theoretically improve outcomes.
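To make that concrete, here's a rough sketch in Python of how an analyst might flag providers whose ordering rate for a screening procedure falls below their peers. The field names, the procedure code, and the benchmark are all made up for illustration; this is not ONC's or HCUP's actual tooling.

```python
from collections import defaultdict

def screening_rates(claims, procedure_code="SCREEN-MAMMO"):
    """Per-provider ordering rates for a procedure, computed from claims.

    Each claim is a dict with hypothetical fields: provider_id,
    procedure_code, and eligible (was the patient in the target group?).
    """
    ordered = defaultdict(int)
    eligible = defaultdict(int)
    for c in claims:
        if c["eligible"]:
            eligible[c["provider_id"]] += 1
            if c["procedure_code"] == procedure_code:
                ordered[c["provider_id"]] += 1
    return {p: ordered[p] / n for p, n in eligible.items() if n}

def below_benchmark(rates, threshold=0.5):
    """Providers whose rate falls below a chosen benchmark."""
    return sorted(p for p, r in rates.items() if r < threshold)
```

The same shape works in reverse for overused services like antibiotics: compute the rate per provider and flag those well *above* the benchmark instead.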

M1, a large data analytics company serving a number of industries, bases a number of products in the health care space on claims data. For instance, medical device companies contract with M1 to find out which devices doctors are ordering. Insurance companies use it to sniff out fraud.

M1’s business model, incidentally, is a bit different from that pursued by most analytics organizations in the health care arena. Most firms contract with some institution–an insurer, for instance–to analyze its data and provide it with unique findings. But M1 goes around buying up data from multiple institutions and combining it for deeper insights. It then sells results back to these institutions, often taking in payment from the same companies it paid for the data.

In short, smart organizations are shelling out money for data about billing and claims. It looks like, if you have a lot of this data, you can reliably lower costs, improve marketing, and–most important of all–improve care. But we mustn’t lose sight of the serious limitations and weaknesses of this data.

  • A scandalous amount of it is clinically just wrong. Doctors “upcode” to extract the largest possible reimbursement for what they treat. A number of them go further and assign codes that have no justification whatsoever. And that doesn’t even count outright fraud, which reaches into the billions of dollars each year and therefore must leave a lot of bad data in the system.

  • Data is atomized, each claim standing on its own. A researcher will find it difficult to impossible (if patient identifiers are totally stripped out) to trace a sequence of visits that tell you about the progress of treatment.

  • Data is relatively impoverished. Clinical records flesh out the diagnosis with related conditions, demographic information, and other things that make the difference between correct and incorrect treatments.

But on the other hand, to go beyond billing data and reach the data utopia that reformers dream about, we’d have to slurp up a lot of complex and sensitive patient data. This has pitfalls of its own. Little clinical data is structured, and the doctors who do take the effort to enter it into structured fields do so inconsistently. Privacy concerns also raise their threatening heads when you get deep into patient conditions and demographics. So perhaps we should see how far we can get with claims data.

Unlocking EHR Data to Accelerate Clinical Quality Reporting & Enhance Renal Care Management

Posted on March 18, 2015 | Written by John Lynn

The following is a guest blog post by Christina Ai Chang from DaVita and Vaishali Nambiar from CitiusTech Inc.
When healthcare providers began achieving Meaningful Use (MU) — the set of standards, defined by CMS, that allows providers to earn incentive dollars by complying with specific criteria — a health IT paradox emerged. The reports required for incentive payments are built on data the EHR captures; however, EHRs don’t typically have built-in support for automated reporting. This places a time-intensive manual burden on physicians as they report on MU quality measures. In other words, a program intended to increase the use of technology inadvertently created a new, non-technical burden. The need to manually assemble information for reports also extended to the CMS Physician Quality Reporting System (PQRS) incentive program. As with many providers, EHR reporting shortcomings for these CMS programs severely impacted the kidney care provider DaVita Healthcare Partners, Inc. (DaVita).

As one of the largest and most successful kidney care companies in the United States, DaVita has constantly focused on clinical outcomes to enhance the quality of care that it provides to its patients. In its U.S. operations that include 550 physicians, DaVita provides dialysis services to over 163,000 patients each year at more than 2,000 outpatient dialysis centers. These centers run Falcon Physician, DaVita’s nephrology-focused solution that largely eliminates paper charting by capturing data electronically and providing a shared patient view to caregivers within the DaVita network.

Falcon Physician serves DaVita very well in its design: renal-care specific EHR capabilities and workflows to support patients with chronic kidney disease (CKD). However, federal incentive programs like MU and Physician Quality Reporting System posed their own challenges. Falcon, like most EHRs, did not have the sophisticated data processing and analytics capabilities needed to meet the complex clinical quality reporting mandated by these programs. With limited built-in support for automated reporting, DaVita physicians had to manually calculate denominators and complete forms for submission to CMS for quality measures reporting, typically taking five to six days per report. With the organization averaging 800 encounters per physician each month, this placed a highly time-intensive and manual burden on physician offices. In addition, manual reporting often resulted in errors, since physician offices had to manage ten or more pieces of data to arrive at a single measure calculation, and do that over and over again.
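For a sense of what those manual calculations involve, here is a minimal sketch of a generic quality-measure computation. The patient fields and the CKD blood-pressure example are hypothetical, not DaVita's or CMS's actual measure logic:

```python
def measure_rate(patients, in_denominator, in_numerator):
    """Compute a clinical quality measure as numerator/denominator.

    `in_denominator` and `in_numerator` are predicates over a patient
    record. For a hypothetical CKD blood-pressure measure, the
    denominator is patients with a CKD diagnosis, and the numerator is
    those among them whose latest BP reading is controlled.
    """
    denom = [p for p in patients if in_denominator(p)]
    num = [p for p in denom if in_numerator(p)]
    return len(num), len(denom), (len(num) / len(denom) if denom else 0.0)

# Hypothetical patient records carrying just the fields the measure needs.
patients = [
    {"ckd": True, "bp_controlled": True},
    {"ckd": True, "bp_controlled": False},
    {"ckd": False, "bp_controlled": True},
]
num, denom, rate = measure_rate(
    patients, lambda p: p["ckd"], lambda p: p["bp_controlled"])
```

Even this toy version hints at the burden: a real measure juggles ten or more data elements per patient, which is exactly the arithmetic physician offices were doing by hand.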

The Need to Automate Reporting – But How?

To address the time and accuracy issues, DaVita recognized it would need to unlock the data captured by the EHR and use an effective data analytics and reporting tool. To begin evaluating options, the organization put together a team to explore two potential paths: creating a proprietary reporting capability within the EHR, or integrating a third-party solution.

It became clear that proprietary development would be challenging, mainly because of the technological expertise that would be needed to build and maintain sufficiently advanced analytics capabilities. It would require special skillsets to build the rules engine, the data mapping tools, and the visualizations for reporting. In addition, DaVita would need to maintain a clinical informatics and data validation team to assess the complex clinical quality measures, develop these measures, and test the overall application on an ongoing basis. Further, DaVita would also need to get this functionality certified by CMS and other regulatory agencies on a periodic basis.

While looking for a third-party solution that could easily integrate with Falcon, DaVita came across CitiusTech, whose offerings include the BI-Clinical healthcare business intelligence and analytics platform. This platform comes with pre-built apps for multiple reporting functions, including MU and PQRS. Its application programming interface (API) simplifies integration into software like Falcon. The platform aligned closely with DaVita’s needs, and with a high interest in avoiding the expense, time and skillset hiring needed to build a proprietary reporting function, the organization decided to move forward with third-party integration.

Accelerated Implementation and Integration

Implementation began with a small proof of concept that delivered a readily scalable integration in fewer than six weeks. DaVita provided the database views and related data according to the third-party solution’s specifications. This freed DaVita not just from development, but also from testing, installation, and configuration of the platform, thereby saving time and money and creating a more robust analytics platform for DaVita’s physicians. In the end, going with an off-the-shelf solution reduced implementation time and cost by as much as two-thirds.

Integration with the third-party platform enabled DaVita’s Falcon EHR system to completely automate the collection and reporting of clinical quality measures, freeing up tremendous physician time while improving report accuracy. With additional capabilities that go beyond solving the reporting problem, the new solution translates EHR data into meaningful performance dashboards that assist DaVita physicians in the transition to pay-for-performance medicine.

The platform with which DaVita integrated is ONC-certified for all MU measures for eligible professionals (EPs) and eligible hospitals (EHs). Falcon was able to leverage these certifications and achieve both MU Stage 1 and Stage 2 certification in record time. This also enabled Falcon to accelerate its PQRS program and offer PQRS reporting and data submission capabilities.

Automated Reporting and Dashboards in Action        

Today, hundreds of DaVita physicians use the upgraded EHR, and the integrated business intelligence and analytics function eliminates the need for these doctors to perform manual calculations for MU and PQRS measures. Where manually creating reports used to take five to six days, pre-defined measure sets now complete reports and submit data almost instantly.

With the manual reporting problem solved, DaVita’s physicians now take automation for granted. What they see on a daily basis are the quality-performance dashboards. These dashboards give them a visual, easily understood picture of how they’re doing relative to quality measures, and the feedback has been extremely positive. Many powerful reporting features are highly appreciated, such as key measurements appearing in red when it’s time to change course in care provision to meet a particular measure. Such information, provided in real-time with updates on a daily basis, has led to very strong adoption of the new reporting capabilities among physicians.
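A dashboard rule like the red-flag behavior described above can be sketched in a few lines. The measure names and targets here are hypothetical, not DaVita's actual configuration:

```python
def dashboard_status(rates, targets):
    """Color each quality measure green or red: red means performance
    is below target and it's time to change course in care provision.
    `rates` and `targets` map measure name -> fraction (0.0 to 1.0).
    """
    return {m: ("green" if rate >= targets[m] else "red")
            for m, rate in rates.items()}

statuses = dashboard_status(
    {"bp_control": 0.80, "ckd_screening": 0.40},
    {"bp_control": 0.75, "ckd_screening": 0.50},
)
```

Recomputing this against each day's refreshed measure rates is what turns a static report into the kind of daily feedback loop physicians actually watch.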

Currently, DaVita is working to develop a benchmarking tool that can rate all physicians within a location. The focus on quality-measurement rankings relative to their peers, with drill-downs to specific indicators such as hypertension and chronic kidney disease progression, will allow physicians to focus on enhancing care delivery.

Unlocking data located in the EHR has helped DaVita comply with MU and PQRS. In the coming years, the upgraded EHR will help physicians comply with evidence-based guidelines and optimize increasingly complex reimbursement requirements.

How Quick Can We Analyze Health IT Data?

Posted on October 9, 2014 | Written by John Lynn

While at the AHIMA Annual convention, I had a chance to sit down with Dr. Jon Elion, President and CEO of ChartWise, where we had a really interesting discussion about healthcare data. You might remember this video interview of Dr. Elion that I did a few years back. He’s a smart man with some interesting insights.

In our discussion, Dr. Elion walked me through an oft-repeated point about data warehouses: most contain data that’s a day (or more) old, since most batch their data load nightly. While I think this is beginning to change, it’s still true for many data warehouses. There’s good reason why the export to a data warehouse needs to occur. An EHR (or other IT system) is a transactional system built on a transactional database, which makes it a poor fit for serious data analysis. Hence the need to move the data from the transactional system to a data store designed for crunching data. Plus, most hospitals also combine data from a wide variety of systems into their data warehouse.

Dr. Elion then told me about how they’d worked hard to change this model and that their ChartWise system had been able to update a hospital’s data warehouse (I think they may call it something different) every 5 minutes. Think about how much more you can do with 5 minute old data than you can do with day old data. It makes a huge difference.
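A bare-bones sketch of that incremental pattern — not ChartWise's actual implementation — using SQLite and an `updated_at` watermark. Each run copies only the rows changed since the last run, so it's cheap enough to schedule every few minutes instead of nightly. Table and column names are hypothetical:

```python
import sqlite3

def incremental_load(src, wh, watermark):
    """Copy observation rows changed since `watermark` from the
    transactional source into the warehouse fact table, and return the
    new watermark. Run on a short timer (say, every 5 minutes) instead
    of a nightly batch to keep warehouse data fresh."""
    rows = src.execute(
        "SELECT id, patient_id, value, updated_at FROM observations "
        "WHERE updated_at > ?", (watermark,)).fetchall()
    # INSERT OR REPLACE de-duplicates rows re-sent after an update.
    wh.executemany("INSERT OR REPLACE INTO obs_facts VALUES (?, ?, ?, ?)",
                   rows)
    wh.commit()
    return max((r[3] for r in rows), default=watermark)
```

The watermark is the only state the loader has to remember between runs, which is what keeps each refresh fast no matter how large the source table grows.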

Data that’s this fresh becomes actionable data. A hospital’s risk management department could leverage it to identify at-risk patients who need a little extra attention. Unfortunately, if that data is a day old, it might be too late to act and prevent the issue from getting worse. That’s just one simple example of how fresh data can be analyzed to improve the care a patient receives. I’m sure you can come up with many others.

No doubt there are a bunch of other companies that are working to solve this problem as well. Certainly, day old healthcare data is valuable as well, but fresh data in your data warehouse is so much more actionable than day old data. I’m excited to see what really smart people will be able to do with all this fresh data in their data warehouse.

Digital Health: How to Make Every Clinician the Smartest in the Room

Posted on August 21, 2014 | Written By

The following is a guest blog post by Dr. Mike Zalis, practicing MGH Radiologist and co-founder of QPID Health.
Remember the “World Wide Web” before search engines? Less than two decades ago, you had to know exactly what you were looking for and where it was located in order to access information. There was no Google—no search engine that would find the needle in the haystack for you. Curated directories of URLs were a start, but very quickly failed to keep up with the explosion in growth of the Web. Now our expectation is that we will be led down the path of discovery by simply entering what’s on our mind into a search box. Ill-formed, half-baked questions quickly crystalize into a line of intelligent inquiry. Technology assists us by bringing the experience of others right to our screens.

Like the Internet, EHRs are a much-needed Web of information whose time has come. For a long time, experts preached the need to migrate from paper-based documentation systems – aka old school charts – to electronic records. Hats off to the innovators and the federal government who’ve made this migration a reality. We’ve officially arrived: the age of electronic records is here. A recent report in Health Affairs showed that 58.9% of hospitals have now adopted either a basic or comprehensive EHR — a four-fold increase since 2010 — and the number of adoptions is still growing. So, EHRs are here to stay. Now we’re left to answer the question: what’s next? How can we make this data usable in a timely, efficient way?

My career as a radiologist spanned a similar, prior infrastructure change and has provided perspective on what many practitioners need—what I need—to make the move to an all-electronic patient record most useful: the ability to quickly get my hands on the patient’s current status and relevant past history at the point-of-care and apply this intelligence to make the best decision possible. In addition to their transactional functions (e.g., order creation), EHRs are terrific repositories of information and they’ve created the means but not the end. But today’s EHRs are just that—repositories. They’re designed for storage, not discovery.

Twenty years ago, we radiologists went through a similar infrastructure transition in the move to the PACS systems that now form the core of all modern medical imaging. Initially, these highly engineered systems attempted to replicate the storage, display, and annotation functions that radiologists had until then performed on film, and they were clunky and in many ways inefficient to use. It wasn’t until several years after that initial digital transition that technological improvements yielded the value-adding capabilities that have since dramatically improved the capability, efficiency, and value of imaging services.

Something similar is happening to clinicians practicing in the age of EHRs. Publications from NEJM through InformationWeek have covered the issues of lack of usability, and increased administrative burden. The next frontier in Digital Health is for systems to find and deliver what you didn’t even know you were looking for. Systems that allow doctors to merge clinical experience with the technology, which is tireless and leaves no stone unturned. Further, technology that lets the less-experienced clinician benefit from the know-how of the more experienced.

To me, Digital Health means making every clinician the smartest in the room. It’s filtering the right information—organized fluidly according to the clinical concepts and complex guidelines that organize best practice—to empower clinicians to best serve our patients. Further, when Digital Health matures, the technology won’t make us think less—it will allow us to think more, by thinking alongside us. For the foreseeable future, human experience, intuition and judgment will remain pillars of excellent clinical practice. Digital tools that permit us to exercise those uniquely human capabilities more effectively and efficiently are key to delivering financially sustainable, high-quality care at scale.

At MGH, our team of clinical and software experts took it upon ourselves some 7 years ago to make our EHR more useful in the clinical trenches. The first application we launched reduced utilization of radiology studies by making clinicians aware of prior exams, saving time and money for the system and avoiding unnecessary exposure for patients. Our solution also permitted a novel, powerful search across the entirety of a patient’s electronic health record, and this capability “went viral”—starting in MGH, the application moved across departments and divisions of the hospital. Basic EHR search is a commodity, and our system has evolved well beyond its early capabilities to become an intelligent concept service platform, empowering workflow improvements all across a health care enterprise.

Now, when my colleagues move to other hospitals, they speak to how impossible it is to practice medicine without EHR intelligence—like suddenly being forced to navigate the Internet without Google again. Today at QPID Health, we are pushing the envelope to make it easy to find the Little Data about the patient that is essential to good care. Helping clinicians work smarter, not harder.

The reason I chose to become a physician was to help solve problems and deliver quality care—it’s immensely gratifying to contribute to a solution that allows physicians to do just that.

Dr. Mike Zalis is Co-founder and Chief Medical Officer of QPID Health, an associate professor at Harvard Medical School, and a board-certified Radiologist serving part-time at Massachusetts General Hospital in Interventional Radiology. Mike’s deep knowledge of what clinicians need to practice most effectively and his ability to translate those needs into software solutions inform QPID’s development efforts. QPID software uses a scalable cloud-based architecture and leverages advanced concept-based natural language processing to extract patient insights from data stored in EHRs. QPID’s applications support decision making at the point of care as well as population health and revenue cycle needs.

Hospital M&A Cost Boosted Significantly By Health IT Integration

Posted on August 18, 2014 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Most of the time, hospital M&A is sold as an exercise in saving money by reducing overhead and leveraging shared strengths. But new data from PricewaterhouseCoopers suggests that IT integration costs can undercut that goal substantially. (It also makes one wonder how ACOs can afford to merge their health IT infrastructure well enough to share risk, but that’s a story for another day.)

In any event, the cost of integrating the IT systems of hospitals that merge can add up to 2% to the annual operating costs of the facilities during the integration period, according to PricewaterhouseCoopers. That figure, which comes to $70,000 to $100,000 per bed over three to five years, is enough to reduce or even completely negate the benefits of doing some deals. And it clearly forces merging hospitals to think through their respective IT strategies far more thoroughly than they might have anticipated.
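The per-bed figure makes for easy back-of-the-envelope math; a tiny sketch using PwC's numbers:

```python
def integration_cost_range(beds, per_bed_low=70_000, per_bed_high=100_000):
    """Rough merger IT-integration cost range from PwC's per-bed
    estimate (cost incurred over three to five years)."""
    return beds * per_bed_low, beds * per_bed_high

# For a 300-bed hospital, that's $21M to $30M in integration cost alone.
low, high = integration_cost_range(300)
```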

As if that stat isn’t bad enough, other experts feel that PwC is understating the case. According to Dwayne Gunter, president of Parallon Technology Solutions — who spoke to Hospitals & Health Networks magazine — IT integration costs can be much higher than PwC’s estimate. “I think 2% is being very generous,” Gunter told the magazine. “For example, if the purchased hospital’s IT infrastructure is in bad shape, the expense of replacing it will raise costs significantly.”

Of course, hospitals have always struggled to integrate systems when they merge, but as PwC’s research notes, there’s a lot more to integrate these days, including not only core clinical and business operating systems but also EMRs, population health management tools and data analytics. (Given the extremely shaky state of cybersecurity in hospitals these days, merging partners had best feel out each other’s security systems very thoroughly as well, which obviously adds additional expense.) And what if the merging hospitals use different enterprise EMR systems? Do you rip and replace, integrate and pray, or do some mix of the above?

On top of all that, merging hospital systems have to make sure they have enough IT staffers available, or can contract with enough, to do a good job of the integration process. Given that in many hospitals, IT leaders barely have enough staff members to get the minimum done, the merger partners will likely need costly consultants if they want to finish the process before the next millennium.

My best guess is that many mergers have failed to take this massive expense into account. The aftermath has got to be pretty ugly.

Big Data is Like Teenage Sex

Posted on November 5, 2013 | Written by John Lynn

Yes, that is a catchy headline, but if you’ve read me for any time you also know I love a good analogy. This analogy comes from Dan Ariely, as shared by Iwona during #cikm2013.

For those who can’t load the image it says:
Big data is like teenage sex:
everyone talks about it,
nobody really knows how to do it,
everyone thinks everyone else is doing it,
so everyone claims they are doing it…

As a big proponent of no sex before marriage, this is a little out there for me, but the analogy illustrates the point so well. In fact, I think this is why in healthcare we’re seeing a new line of smaller data projects with meaningful outcomes.

What I wish we could change is the final part. How about we all stop hiding behind what we are and aren’t doing. We all deserve to be frank about our actual efforts. The fact is that many organizations aren’t doing anything with big data and quite frankly they shouldn’t be doing anything. Kind of like how many teenagers shouldn’t be having sex.

Healthcare Big Data and Meaningful Use Challenges Video

Posted on October 2, 2013 | Written by John Lynn

This Fall we decided to do a whole series of weekly video interviews with top healthcare IT thought leaders. Many of you may have come across our EHR video site and the Healthcare Scene YouTube channel where we host all of the videos. The next interview in that series is happening Thursday, October 3rd at 1:00 EST with Dr. Tom Giannulli, discussing the future of small physician practices. You can join us live or watch the recorded video after the event. Plus, you can see all the future interviews we have scheduled here.

Last week’s video interview was with Mandi Bishop, Principal at Adaptive Project Solutions and also a writer at EMR and HIPAA. Mandi does an amazing job sharing her insights into healthcare big data and the challenges of meaningful use. We also dig into EHR data sharing with insurance plans and ask Mandi whether meaningful use is completely devoid of value or not.

For those who missed the live interview, you can watch the recorded interview with Mandi Bishop embedded below.

Big Data Impacting Healthcare

Posted on July 19, 2013 | Written by John Lynn

The following is a guest post by Sarah E. Fletcher, BS, BSN, RN-BC, MedSys Group Consultant.
It is generally agreed that bigger is better.  When it comes to data, big data can be a challenge as well as a boon for healthcare.  As Meaningful Use drives electronic documentation and technologies grow to support it, big data is a reality that has to be managed to be meaningful.

Medical databases now hold petabytes of data from any number of sources, covering every aspect of a patient’s stay.  Hospitals can capture every medication, band-aid, or vital sign.  Image studies and reports are stored in imaging systems next to scanned documents and EKGs.

Each medication transaction includes drug, dose, and route details, which are sent to the dispensing cabinet.  The patient and medication can be scanned at the bedside and documentation added in real time.  Each step of the way is logged with a time stamp including provider entry, pharmacist verification, and nurse administration.  One dose of medication has dozens of individual data points.
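To make that concrete, a single dose’s audit trail might be modeled roughly like the sketch below. The field names and steps are illustrative, not taken from any particular EHR schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MedicationDose:
    """One administered dose with a timestamped audit trail (illustrative fields)."""
    drug: str
    dose: str
    route: str
    events: dict = field(default_factory=dict)  # step name -> timestamp

    def log(self, step: str, when: datetime) -> None:
        self.events[step] = when

# A single dose accumulates a timestamp for every step of the workflow.
dose = MedicationDose(drug="metoprolol", dose="25 mg", route="PO")
dose.log("provider_entry", datetime(2013, 7, 19, 8, 0))
dose.log("pharmacist_verification", datetime(2013, 7, 19, 8, 15))
dose.log("nurse_administration", datetime(2013, 7, 19, 9, 2))

print(len(dose.events))  # three logged workflow steps so far
```

Even this skeletal record carries a handful of discrete data points; a production system adds lot numbers, scanner IDs, user IDs, and more for every one of those tens of thousands of monthly doses.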

All of this data is captured for each medication dose administered in a hospital, which can be tens of thousands of doses per month. Translate the extent of data captured to every patient transfer, surgery, or bandage, and the scope of the big data becomes clearer.

As Health Information Exchanges (HIEs) mature, hospitals will have access not just to their own patient data, but to everyone else’s as well.  Personal health records (PHRs), maintained by the patients themselves, may also feed into big data, providing every mile run, every blood pressure or weight measured at home, and every medication taken.

One of the primary challenges with big data is that the clinicians who use the data do not speak the same language as the programmers who design the system and analyze the data.  Determining how much data should be displayed, and in what format, should be a partnership between the clinical and technical teams to ensure the clinical relevance of the data is maximized to improve patient outcomes.  Big data is a relatively new phenomenon, and data analysts able to manage these vast amounts of data are in short supply, especially those who understand clinical data needs.

Especially challenging is the mapping of data across disparate systems.  Much of the data are pooled into backend tables with little to no structure.  There are many different nomenclatures and databases used for diagnoses, terminology, and medications.  Ensuring that discrete data points pulled from multiple sources match in a meaningful way when the patient data are linked together is a programmatic challenge.
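A crude sketch of that mapping problem might look like the following, assuming two hypothetical source systems with their own local medication codes being reconciled against one shared vocabulary (the codes and the "shared:" namespace are made up; in practice this would be something like RxNorm):

```python
# Hypothetical local code tables from two disparate source systems.
system_a = {"MED-001": "lisinopril 10 mg tab", "MED-002": "aspirin 81 mg tab"}
system_b = {"RX9001": "lisinopril 10mg tablet", "RX9002": "aspirin 81mg tablet"}

# A hand-built crosswalk to a shared code, standing in for a real vocabulary.
crosswalk = {
    ("A", "MED-001"): "shared:lisinopril-10",
    ("B", "RX9001"): "shared:lisinopril-10",
    ("A", "MED-002"): "shared:aspirin-81",
    ("B", "RX9002"): "shared:aspirin-81",
}

def normalize(source: str, local_code: str) -> str:
    """Map a (source system, local code) pair to the shared vocabulary, or flag it."""
    return crosswalk.get((source, local_code), "UNMAPPED")

# Records from different systems now link on the same shared concept.
assert normalize("A", "MED-001") == normalize("B", "RX9001")
print(normalize("B", "RX9002"))
```

The toy crosswalk is the easy part; the real work is building and maintaining those mappings across thousands of codes, and deciding what to do with everything that comes back `UNMAPPED`.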

Now that clinicians have the last thousand pulse measurements for a group of patients, what does one do with that?  Dashboards are useful for recent patient data, but how quickly it populates is critical for patient care. The rendering of this data requires stable wireless with significant bandwidth, processing power, and storage, all of which come with a cost, especially when privacy regulations must be met.

Likely the biggest challenge of all, and one often overlooked, is the human factor.  The average clinician does not know about technology; they know about patients.  The computer or barcode scanner is a tool to them just like an IV pump, glucometer, or chemistry analyzer.  If it does not work well for them consistently, in a timely and intuitive fashion, they will find ways around the system in order to care for their patients, not caring that it may compromise the data captured in the system.

Most people would point out that the last thousand measurements of anything is overkill for patient care, even if it were graphed to show a visual trend. There are some direct benefits of big data for the average clinician, such as being able to compare every recent vital sign, medication administration, and lab result on the fly.  That said, most of the benefit is indirect via health systems and health outcomes improvements.

The traditional paper method of auditing was to pull a certain number of random charts, often a small fraction of one percent of patient visits.  This gives an idea of whether certain data elements are being collected consistently, documentation completed, and quality goals met.  With big data and proper analytics, the ability exists to audit every single patient chart at any time.

The quality department may have reports and trending graphics to ensure their measures were met, not just for a percentage of a population, but each and every patient visit for as long as the data is stored.  This can be done by age, gender, level of care, and even by eye color, if that data is captured and the reports exist to pull it.
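The shift from sampling to auditing everything can be sketched in a few lines. The chart records and the "smoking status" quality measure below are purely illustrative:

```python
from collections import defaultdict

# Toy chart records; fields are illustrative, not a real EHR schema.
charts = [
    {"id": 1, "age_group": "adult", "gender": "F", "smoking_status": "never"},
    {"id": 2, "age_group": "adult", "gender": "M", "smoking_status": None},
    {"id": 3, "age_group": "peds",  "gender": "F", "smoking_status": "never"},
]

# Audit EVERY chart for a required element, instead of pulling a random sample.
missing = [c["id"] for c in charts if not c["smoking_status"]]

# Completeness rate broken out by any captured attribute, e.g. age group.
by_group = defaultdict(lambda: [0, 0])  # group -> [complete, total]
for c in charts:
    by_group[c["age_group"]][1] += 1
    if c["smoking_status"]:
        by_group[c["age_group"]][0] += 1

rates = {group: done / total for group, (done, total) in by_group.items()}
print(missing, rates)
```

Swap `age_group` for gender, level of care, or any other captured field and the same loop produces the per-population breakdowns the quality department wants, over every visit rather than a sampled fraction.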

Researchers can use this data mining technique to develop new evidence to guide future care.  By reviewing the patients with the best outcomes in a particular group, correlations can be drawn, evaluated, and tested based on the data of a million patients.  Positive interventions discovered this way today can be turned into evidence-based practice tomorrow.
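At its simplest, that kind of retrospective comparison is just grouping outcomes by intervention. The patients, intervention, and length-of-stay numbers here are fabricated for illustration, and a real study would need far more than a difference of means:

```python
# Toy question: did patients who ambulated early have shorter lengths of stay?
patients = [
    {"early_ambulation": True,  "los_days": 3},
    {"early_ambulation": True,  "los_days": 4},
    {"early_ambulation": False, "los_days": 6},
    {"early_ambulation": False, "los_days": 5},
]

def mean_los(got_intervention: bool) -> float:
    """Average length of stay for one arm of the comparison."""
    days = [p["los_days"] for p in patients
            if p["early_ambulation"] == got_intervention]
    return sum(days) / len(days)

# A positive difference suggests the intervention is associated with shorter stays.
difference = mean_los(False) - mean_los(True)
print(difference)
```

With a million real patients instead of four toy ones, the same grouping becomes the starting point for the hypothesis generation the paragraph describes; turning a correlation like this into evidence-based practice still requires proper study design.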

The sheer scope of big data is its own challenge, but the benefits have the potential to change healthcare in ways that have yet to be considered.  Big data comes from technology, but Meaningful Use is not about implementing technology.  It is about leveraging technology in a meaningful way to improve the care and outcomes of our patients.  This is why managing big data is so critical to the future of healthcare.

MedSys Group Consultant Sarah E. Fletcher, BS, BSN, RN-BC has worked in technology for over fifteen years.  The last seven years have been within the nursing profession, beginning in critical care and transitioning quickly to Nursing Informatics.  She is a certified Nurse Informaticist and manages a regular Informatics Certification series for students seeking ANCC certification in Nursing Informatics.  Sarah currently works with MedSys Group Consulting supporting a multi-hospital system.

Is Skinny Data Harder Than Big Data?

Posted on May 24, 2013 | Written By


On my post about Improving the Quality of EHR data for Healthcare Analytics, Glenn made a really great comment that I think is worth highlighting.

Power to change outcomes starts with liberating the data. Then transforming all that data into information and finally into knowledge. Ok – Sorry, that’s probably blindingly obvious. But skinny-data is a good metaphor because you don’t need to liberate ALL the data. And in fact the skinny metaphor covers what I refer to as the data becoming information part (filter out the noise). Selective liberation and combination into a skinny warehouse or skinny data platform is also manageable. And then build on top of that the analytics that release the knowledge to enable better outcomes. Now …if only all those behemoth mandated products would loosen up on their data controls…

His simple comment “filter out the noise” made me realize that skinny data might actually be much harder to do than big data. If you ask someone to just aggregate all the data, that is generally a pretty easy task. Once you start selecting the data that really matters, it becomes much harder. This is likely why so many Enterprise Data Warehouses sit there basically idle. Knowing which data is useful, making sure it is collected in a useful way, and then putting that data to use is much harder than just aggregating all the data.
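The mechanics of "skinny" extraction are almost trivially simple, which is exactly the point: the hard part is deciding what to keep. A minimal sketch, with entirely made-up record fields:

```python
# A "fat" record with everything the system captured (illustrative fields).
full_rows = [
    {"mrn": "123", "pulse": 88,  "bp": "120/80", "room": "4A",
     "billing_code": "99213", "scanner_id": "S-17"},
    {"mrn": "456", "pulse": 110, "bp": "140/90", "room": "2B",
     "billing_code": "99214", "scanner_id": "S-03"},
]

# "Skinny" extraction: liberate only the fields that answer the question at hand.
KEEP = ("mrn", "pulse", "bp")
skinny = [{k: row[k] for k in KEEP} for row in full_rows]

print(skinny[0])
```

The one-line comprehension is the whole pipeline; choosing `KEEP`, and knowing that those fields were collected consistently enough to trust, is where the real difficulty lives.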

Dana Sellers commented on this in the Hospital EHR and Healthcare Analytics video interview I did (the whole video has some great insights). She said that data governance is going to be an important challenge going forward, and she defined data governance as making sure that you’re collecting the data in a way that you know what that data really means and how it can be used in the future. That’s a powerful concept and one that most people haven’t dug into very much. They’re going to have to if they want to start using their data for good.