
Healthcare Needs Clinician Data Experts

Posted on November 2, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

This week I read an interesting article by a physician about the huge challenges clinicians face coping with unthinkably large clinical data sets — and what we should do about it. The doctor who wrote the article argues for the creation of a next-gen clinician/health IT hybrid expert that will bridge the gaps between technology and medicine.

In the article, the doctor noted that while he could conceivably answer any question he had about his patients using big data, he would have to tame literally billions of data rows to do so.

Right now, logs of all EHR activity are dumped into large databases every day, notes Alvin Rajkomar, MD. In theory, clinicians can access the data, but in reality most of the analysis and taming of data is done by report writers. The problem is, the HIT staff compiling reports don’t have the clinical context they need to sort such data adequately, he says:

“Clinical data is complex and contextual,” he writes. “[For example,] a heart rate may be listed under the formal vital sign table or under nursing documentation, where it is listed as a pulse. A report writer without clinical background may not appreciate that a request for heart rate should actually include data from both tables.“
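Rajkomar's heart-rate example is easy to make concrete. Here is a minimal sketch, using hypothetical table and column names (nothing here reflects any real EHR schema), of how a clinically aware query would union both sources under a single name:

```python
import pandas as pd

# Hypothetical extracts: the same measurement lives in two places.
vitals = pd.DataFrame({
    "patient_id": [101, 102],
    "recorded_at": ["2016-10-01 08:00", "2016-10-01 08:15"],
    "heart_rate": [72, 88],
})
nursing = pd.DataFrame({
    "patient_id": [101, 103],
    "recorded_at": ["2016-10-01 09:00", "2016-10-01 09:30"],
    "pulse": [75, 64],
})

# A report writer without clinical context might query only `vitals`.
# A clinician knows "pulse" in nursing documentation is the same signal,
# so the request for heart rate must union both tables.
combined = pd.concat([
    vitals.rename(columns={"heart_rate": "hr"}),
    nursing.rename(columns={"pulse": "hr"}),
], ignore_index=True)
```

The fix is trivial once you know to apply it; the point of the article is that knowing to apply it requires clinical context.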

Frustrated with the limitations of this process, Rajkomar decided to take the EHR database problem on himself. He went through an intense training process, including 24 hours of in-person classes, a four-hour project and four hours of supervised training, to obtain the skills needed to work with large clinical databases. In other words, he jumped right into the middle of the game.

Even having a trained physician in the mix isn't enough, he argues. Ultimately, understanding such data calls for a multidisciplinary team. Clinicians need each other's perspectives on the masses of data coming in, which include not only EHR data but also sensor readings, app data and patient-reported outcomes. Moreover, a clinician data analyst is likely to be more comfortable than traditional IT staffers when working with nurses, pharmacists or laboratory technicians, he suggests.

Still, having even a single clinician in the mix can have a major impact, Rajkomar argues. He contends that the healthcare industry needs to create more people like him, a role he calls “clinician-data translator.” The skills needed by this translator would include expertise in clinical systems, the ability to extract data from large warehouses and deep understanding of how to rigorously analyze large data sets.

Not only would such a specialist help with data analysis, and help determine where to apply novel algorithms, they could also help other clinicians decide which questions are worth investigating in the first place. What's more, clinician data scientists would be well-equipped to integrate data-gathering activities into clinical workflows, he points out.

The thing is, there aren’t any well-marked pathways to becoming a clinician data scientist, with most data science degrees offering training that doesn’t focus on a particular domain. But if you believe Rajkomar – and I do – finding clinicians who want to be data scientists makes a lot of sense for health systems and clinics. While there will always be a role for health IT experts with purely technical training, we need clinicians who will work alongside them and guide their decisions.

Patients and Their Medical Data

Posted on April 4, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Sometimes they say a picture is worth a thousand words. That’s what I thought when I saw this image from Nature.com:
Patient Health Data Sharing and Big Healthcare Data

It’s great to see Nature.com talking about healthcare data. The authors are two people you likely know: Leonard Kish and Eric Topol.

This graphic shows the ideal. It’s interesting to think about what the reality would actually look like. Sadly, it would be much more complex, disconnected, and would lack the fluid sharing that this graphic shows.

It’s good to know what the ideal for data sharing and understanding data would look like. It shows the potential of what’s possible, and that’s exciting.

Uncovering Hidden Hospital Resources: Hospital IQ Shows that the First Resource is Data

Posted on January 4, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

We’ve become accustomed to the results that data mining has produced in fields ranging from retail to archeology to climate change. According to Rich Krueger, CEO of Hospital IQ, the health care industry is also rich with data that could be doing a lot more for strategic operations and care.

Many companies (perhaps even too many) already offer analytics for grouping patients into risk pools and predicting the most unwanted events, such as readmissions within 30 days of a hospital discharge. According to Krueger, Hospital IQ is the first company offering analytics to improve operations. From a policy standpoint, the service has three benefits:

  • Improving operations can significantly lower the cost of providing health care, and the savings can be passed on to payers and consumers.
  • Better operational efficiency means better outcomes. For instance, if the ER is staffed properly and the hospital has enough staff resources to meet each patient’s needs, they will get the right treatment faster. This means patients will recover sooner and spend less time in the hospital.
  • Hospital IQ’s service is an intriguing example of the creative use of clinical data, inspiring further exploration of data analysis in health care.

One of the essential problems addressed by Hospital IQ is unexpected peaks in demand, which strain emergency rooms, ICUs and inpatient units, and ultimately can contribute to poor patient outcomes. The mirror image of this problem is unexpected troughs in usage, where staff are underutilized and cost the healthcare system money.

“There’s no reason for surprise,” says Krueger. While it is difficult to predict demand for any given day, it is possible to forecast upper and lower boundaries for demand in a given week. Hospital staff often have an intuitive sense of this. Krueger says with his software, they can mine clinical data to make these forecasts based on actual live data.
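Krueger's point about forecasting a week's bounds rather than a day's exact demand can be sketched very simply. The approach below (my illustration, not Hospital IQ's actual method) takes empirical percentiles of historical daily arrival counts as the upper and lower boundaries:

```python
import random
import statistics

random.seed(1)

# Hypothetical history: daily ED arrival counts over the past year.
history = [random.gauss(120, 15) for _ in range(365)]

def demand_bounds(daily_counts, low_pct=5, high_pct=95):
    """Empirical lower/upper bounds for daily demand, from history.

    Any single day is hard to predict, but the distribution of daily
    counts is stable enough to bracket what a week will bring.
    """
    counts = sorted(daily_counts)
    lo = counts[int(len(counts) * low_pct / 100)]
    hi = counts[int(len(counts) * high_pct / 100)]
    return lo, hi

lo, hi = demand_bounds(history)
```

A real system would condition on weekday, season, and scheduled procedures, but even this crude bracket formalizes the "intuitive sense" staff already have.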

There are firms that make sensors that can tell you when a bed is occupied, when someone has come to clean the room or visit a patient, and so forth. The interesting thing about Hospital IQ is that it does its work without needing to install any special equipment or IT builds. Its inputs are the everyday bits of data that every hospital already collects and stores in its electronic record systems: when someone enters the ED, when they are seen, where they are transferred, when surgery started and ended. Telemetry beds for more at-risk patients generate information that is also used for analytics.

Hospital IQ uses mature operational analytics techniques, such as queueing theory, that have been refined over the decades since the age of W. Edwards Deming. Even though EHRs differ, Krueger has discovered that it’s not hard to retrieve the necessary data from them: “You tell us where the data is and we’ll get it.”

The service also provides dashboards designed for different types of staff, and tools for data analysis. For instance, a manager can quickly see how often patients are sent to the ICU not because they need ICU care but because all other beds are full. While this doesn’t compromise patient safety, it’s an unnecessary cost and also reduces patient satisfaction because ICUs generally have a more restricted visiting policy. This situation becomes more problematic when a patient arrives through the ED with a real need for an ICU bed and the unit is full (Figure 1). Hospitals can look at trends over time to optimize staffing and even out the load by scheduling elective interventions at non-peak times. There’s a potentially tremendous impact on patient safety, length of stay, and mortality.

Figure 1: Chart showing placements into the ICU. “Misplaced here” are patients that don’t belong in the ICU, whereas “Misplaced elsewhere” are patients that should have gone to the ICU but were sent somewhere else such as the PACU.

Taken to another level of sophistication, analytics can be used for long-term planning. For instance, if a hospital is increasing access to a service, it can forecast the additional number of beds, operating rooms, and staff the service will need based on historic demand and projected growth. Hospitals can set a policy, such as a maximum wait of three hours for a bed, and see the resources necessary to meet that goal. Figure 2 shows an example of a graph being tweaked to explore different possible futures.

Figure 2: Simulation using historical demand data showing the relationship between bed counts and average wait time for a patient to get a bed.
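The bed-count-versus-wait-time relationship in Figure 2 is exactly what classical queueing theory computes. As a rough sketch (the Erlang C formula for an M/M/c queue, with made-up arrival and length-of-stay numbers; Hospital IQ's actual simulation is surely richer), one can see how adding beds drives average wait down:

```python
from math import factorial

def erlang_c_wait(arrival_rate, service_rate, servers):
    """Average queueing wait for an M/M/c system (Erlang C).

    arrival_rate: patients arriving per unit time
    service_rate: 1 / average length of stay, per bed
    servers:      number of beds
    """
    a = arrival_rate / service_rate          # offered load in beds
    if servers <= a:
        return float("inf")                  # unstable: demand exceeds capacity
    top = (a ** servers / factorial(servers)) * servers / (servers - a)
    bottom = sum(a ** k / factorial(k) for k in range(servers)) + top
    p_wait = top / bottom                    # probability of having to wait
    return p_wait / (servers * service_rate - arrival_rate)

# Hypothetical unit: 4 admissions/hour, 12-hour average stay (load = 48 beds).
waits = {beds: erlang_c_wait(4, 1 / 12, beds) for beds in (50, 55, 60)}
```

The steep nonlinearity near full utilization is why a few extra beds, or shifting elective cases off peak, can collapse waiting times.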

Hospital IQ is a fairly young company whose customers cover a wide range of hospitals: from safety net hospitals to academic institutions, large and small. It also works across large systems with multiple institutions, aiding in such tasks as consolidating multiple hospitals’ services into one hospital.

My hope for Hospital IQ is that it will open up its system with an API that allows hospitals and third parties to design new services. It already offers insights that were hidden before, and we can only guess where other contributors could take it.

Using Healthcare Analytics to Achieve Strong Financial Performance

Posted on September 25, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network.

Everyone is talking about analytics, but I’ve been looking for the solutions that take analytics and package it nicely. This is what I hoped for when I found this whitepaper called How Healthcare Providers Can Leverage Advanced Analytics to Achieve Strong Financial Performance. This is a goal that I think most of us in healthcare IT would like to achieve. We want healthcare providers to be able to leverage analytics to improve their business.

However, this illustration from the whitepaper shows exactly why we’re not seeing the results we want from our healthcare analytics efforts:
Advanced Analytics Impact on Healthcare

That’s a complex beast if I’ve ever seen one. Most providers I talk to want the results that this chart espouses, but they want it just to happen. They want all the back end processing of data to happen inside a black box and they just want to feed in data like they’ve always done and have the results spit out to them in a format they can use.

This is the challenge of the next century of healthcare IT. EHR is just the first step in the process of getting data. Now we have the hard work of turning that data into something more useful than the paper chart provided.

The whitepaper does suggest these three steps we need to take to get value from our analytics efforts:
1. Data capture, storage, and access
2. Big data and analytics
3. Cognitive computing

If you read the whitepaper, they talk more about all three of these things. However, it’s very clear that most organizations are still at step 1, with only a few starting to dabble in step 2. Some might see this as frustrating or depressing. I see it as exciting, since it means that the best uses of healthcare IT are still to come. However, we’re going to need these solutions to be packaged in a really easy-to-use way. Otherwise no one will adopt them.

Is Claims Data Really So Bad For Health Care Analytics?

Posted on June 12, 2015 | Written By

Andy Oram is an editor at O'Reilly Media.

Two commonplaces heard in the health IT field are that the data in EHRs is aimed at billing, and that billing data is unreliable input to clinical decision support or other clinically related analytics. These statements form two premises to a syllogism for which you can fill in the conclusion. But at two conferences last week–the Health Datapalooza and the Health Privacy Summit–speakers indicated that smart analysis can derive a lot of value from claims data.

The Healthcare Cost and Utilization Project (HCUP), run by the government’s Agency for Healthcare Research and Quality (AHRQ), is based on hospital discharge data. Major elements include the payer, diagnoses, procedures, charges, length of stay, etc., along with potentially richer information such as patients’ ages, genders, and income levels. A separate Clinical Content Enhancement Toolkit does allow states to add clinical data, while American Hospital Association Linkage Files let hospitals upload data about their facilities.

But basically, HCUP data revolves around claims from all-payer databases. It is currently collected from 47 states, and varies on a state-by-state basis depending on what data each state allows to be released. HCUP goes back to 2006 and powers a lot of research, notably to improve outreach to underserved racial and ethnic groups.

During an interview at the Health Privacy Summit, Lucia Savage, Chief Privacy Officer at ONC, mentioned that one can use claims data to determine what treatments doctors offer for various conditions (such as mammograms, which tend to be underused, and antibiotics, which tend to be overused). Thus, analysts can target providers who fail to adhere to standards of care and theoretically improve outcomes.
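The kind of analysis Savage describes reduces, at its core, to comparing each provider's procedure mix against the population baseline. A toy sketch (entirely invented providers and codes, and far simpler than any real risk-adjusted analysis, which must control for case mix):

```python
import pandas as pd

# Hypothetical claims extract: one row per billed procedure.
claims = pd.DataFrame({
    "provider": ["A", "A", "B", "B", "B", "C"],
    "code": ["mammogram", "antibiotics", "antibiotics",
             "antibiotics", "mammogram", "mammogram"],
})

# Each provider's procedure rates, versus the overall rates.
rates = (claims.groupby("provider")["code"]
               .value_counts(normalize=True)
               .unstack(fill_value=0))
overall = claims["code"].value_counts(normalize=True)

# Positive deviation = provider orders this more than peers do;
# flag large deviations for outreach on standards of care.
deviation = rates - overall
```

Providers far above baseline on antibiotics, or far below on mammograms, become candidates for targeted outreach.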

M1, a large data analytics company serving a number of industries, bases a number of products in the health care space on claims data. For instance, medical device companies contract with M1 to find out which devices doctors are ordering. Insurance companies use it to sniff out fraud.

M1’s business model, incidentally, is a bit different from that pursued by most analytics organizations in the health care arena. Most firms contract with some institution–an insurer, for instance–to analyze its data and provide it with unique findings. But M1 goes around buying up data from multiple institutions and combining it for deeper insights. It then sells the results back to these institutions, often both paying money to and taking in payment from the same company.

In short, smart organizations are shelling out money for data about billing and claims. It looks like, if you have a lot of this data, you can reliably lower costs, improve marketing, and–most important of all–improve care. But we mustn’t lose sight of the serious limitations and weaknesses of this data.

  • A scandalous amount of it is clinically just wrong. Doctors “upcode” to extract the largest possible reimbursement for what they treat. A number of them go further and assign codes that have no justification whatsoever. And that doesn’t even count outright fraud, which reaches into the billions of dollars each year and therefore must leave a lot of bad data in the system.

  • Data is atomized, each claim standing on its own. A researcher will find it difficult, if not impossible (if patient identifiers are totally stripped out), to trace a sequence of visits that tells the story of a course of treatment.

  • Data is relatively impoverished. Clinical records flesh out the diagnosis with related conditions, demographic information, and other things that make the difference between correct and incorrect treatments.

But on the other hand, to go beyond billing data and reach the data utopia that reformers dream about, we’d have to slurp up a lot of complex and sensitive patient data. This has pitfalls of its own. Little clinical data is structured, and the doctors who do take the effort to enter it into structured fields do so inconsistently. Privacy concerns also raise their threatening heads when you get deep into patient conditions and demographics. So perhaps we should see how far we can get with claims data.

Unlocking EHR Data to Accelerate Clinical Quality Reporting & Enhance Renal Care Management

Posted on March 18, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network.

The following is a guest blog post by Christina Ai Chang from DaVita and Vaishali Nambiar from CitiusTech Inc.
When healthcare providers began achieving Meaningful Use (MU) — the set of standards, defined by CMS, that allows providers to earn incentive dollars by complying with a set of specific criteria — a health IT paradox emerged. The reports required for incentive payments are built on data the EHR captures; however, EHRs don’t typically have built-in support for automated reporting. This places a time-intensive manual burden on physicians as they report for MU quality measures. In other words, a program intended to increase the use of technology inadvertently created a new, non-technical burden. The need to manually assemble information for reports also extended to the CMS Physician Quality Reporting System (PQRS) incentive program. As with many providers, EHR reporting shortcomings for these CMS programs severely impacted the kidney care provider DaVita Healthcare Partners, Inc. (DaVita).

As one of the largest and most successful kidney care companies in the United States, DaVita has constantly focused on clinical outcomes to enhance the quality of care that it provides to its patients. In its U.S. operations that include 550 physicians, DaVita provides dialysis services to over 163,000 patients each year at more than 2,000 outpatient dialysis centers. These centers run Falcon Physician, DaVita’s nephrology-focused solution that largely eliminates paper charting by capturing data electronically and providing a shared patient view to caregivers within the DaVita network.

Falcon Physician serves DaVita very well in its design: renal-care specific EHR capabilities and workflows to support patients with chronic kidney disease (CKD). However, federal incentive programs like MU and Physician Quality Reporting System posed their own challenges. Falcon, like most EHRs, did not have the sophisticated data processing and analytics capabilities needed to meet the complex clinical quality reporting mandated by these programs. With limited built-in support for automated reporting, DaVita physicians had to manually calculate denominators and complete forms for submission to CMS for quality measures reporting, typically taking five to six days per report. With the organization averaging 800 encounters per physician each month, this placed a highly time-intensive and manual burden on physician offices. In addition, manual reporting often resulted in errors, since physician offices had to manage ten or more pieces of data to arrive at a single measure calculation, and do that over and over again.
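The "ten or more pieces of data to arrive at a single measure calculation" comes from the structure every clinical quality measure shares: a denominator of patients who meet inclusion criteria and a numerator of those who also hit the target. A deliberately tiny sketch with invented fields (real eCQM logic layers on exclusions, exceptions, and value sets):

```python
# Hypothetical encounter records for one reporting period.
encounters = [
    {"patient": 1, "age": 70, "ckd": True,  "bp_controlled": True},
    {"patient": 2, "age": 55, "ckd": True,  "bp_controlled": False},
    {"patient": 3, "age": 62, "ckd": False, "bp_controlled": True},
]

# Denominator: patients meeting the measure's inclusion criteria.
denominator = [e for e in encounters if e["ckd"] and e["age"] >= 18]

# Numerator: the subset that achieved the target outcome.
numerator = [e for e in denominator if e["bp_controlled"]]

rate = len(numerator) / len(denominator)
```

Doing this by hand, per measure, per physician, across 800 encounters a month is exactly the five-to-six-day burden described above; automating it is a straightforward win.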

The Need to Automate Reporting – But How?

To address the time and accuracy issues, DaVita recognized it would need to unlock the data captured by the EHR and use an effective data analytics and reporting tool. To begin evaluating options, the organization put together a team to explore two potential paths: creating a proprietary reporting capability within the EHR, or integrating a third-party solution.

It became clear that proprietary development would be challenging, mainly because of the technological expertise that would be needed to build and maintain sufficiently advanced analytics capabilities. It would require special skillsets to build the rules engine, the data mapping tools, and the visualizations for reporting. In addition, DaVita would need to maintain a clinical informatics and data validation team to assess the complex clinical quality measures, develop these measures, and test the overall application on an ongoing basis. Further, DaVita would also need to get this functionality certified by CMS and other regulatory agencies on a periodic basis.

While looking for a third-party solution that could easily integrate with Falcon, DaVita came across CitiusTech, whose offerings include the BI-Clinical healthcare business intelligence and analytics platform. This platform comes with pre-built apps for multiple reporting functions, including MU and PQRS. Its application programming interface (API) simplifies integration into software like Falcon. The platform aligned closely with DaVita’s needs, and with a high interest in avoiding the expense, time and skillset hiring needed to build a proprietary reporting function, the organization decided to move forward with third-party integration.

Accelerated Implementation and Integration

Implementation began with a small proof of concept that delivered a readily scalable integration in fewer than six weeks. DaVita provided the database views and related data according to the third-party solution’s specifications. This freed DaVita not just from development, but also from testing, installation, and configuration of the platform, thereby saving time and money and creating a more robust analytics platform for DaVita’s physicians. In the end, going with an off-the-shelf solution reduced implementation time and cost by as much as two-thirds.

Integration with the third-party platform enabled DaVita’s Falcon EHR system to completely automate the collection and reporting of clinical quality measures, freeing up tremendous physician time while improving report accuracy. With additional capabilities that go beyond solving the reporting problem, the new solution translates EHR data into meaningful performance dashboards that assist DaVita physicians in the transition to pay-for-performance medicine.

The platform with which DaVita integrated is ONC-certified for all MU measures for eligible professionals (EPs) and eligible hospitals (EHs). Falcon was able to leverage these certifications and achieve both MU Stage 1 and Stage 2 certification in record time. This also enabled Falcon to accelerate its PQRS program and offer PQRS reporting and data submission capabilities.

Automated Reporting and Dashboards in Action

Today, hundreds of DaVita physicians use the upgraded EHR, and the integrated business intelligence and analytics function eliminates the need for these doctors to perform manual calculations for MU and PQRS measures. Where manually creating reports used to take five to six days, pre-defined measure sets now complete reports and submit data almost instantly.

With the manual reporting problem solved, DaVita’s physicians now take automation for granted. What they see on a daily basis are the quality-performance dashboards. These dashboards give them a visual, easily understood picture of how they’re doing relative to quality measures, and the feedback has been extremely positive. Many powerful reporting features are highly appreciated, such as key measurements appearing in red when it’s time to change course in care provision to meet a particular measure. Such information, provided in real-time with updates on a daily basis, has led to very strong adoption of the new reporting capabilities among physicians.

Currently, DaVita is working to develop a benchmarking tool that can rate all physicians within a location. The focus on quality-measurement rankings relative to their peers, with drill-downs to specific indicators such as hypertension and chronic kidney disease progression, will allow physicians to focus on enhancing care delivery.

Unlocking data located in the EHR has helped DaVita comply with MU and PQRS. In the coming years, the upgraded EHR will help physicians comply with evidence-based guidelines and optimize increasingly complex reimbursement requirements.

How Quickly Can We Analyze Health IT Data?

Posted on October 9, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network.

While at the AHIMA Annual convention, I had a chance to sit down with Dr. Jon Elion, President and CEO of ChartWise, where we had a really interesting discussion about healthcare data. You might remember this video interview of Dr. Elion that I did a few years back. He’s a smart man with some interesting insights.

In our discussion, Dr. Elion led me through an oft-repeated data warehouse discussion: most data warehouses have data that’s a day (or more) old, since most of them batch their data-load function nightly. While I think this is beginning to evolve, it’s still true for many data warehouses. There’s good reason why the export to a data warehouse needs to occur. An EHR system (or other IT system) is a transactional system built on a transactional database, which makes it difficult to do really good data analysis. Thus the need to move the data from a transactional system to a data store designed for crunching data. Plus, most hospitals also combine data from a wide variety of systems into their data warehouse.

Dr. Elion then told me about how they’d worked hard to change this model and that their ChartWise system had been able to update a hospital’s data warehouse (I think they may call it something different) every 5 minutes. Think about how much more you can do with 5 minute old data than you can do with day old data. It makes a huge difference.
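The shift from nightly batch loads to five-minute refreshes usually comes down to incremental extraction: instead of re-dumping everything overnight, each cycle copies only rows changed since a high-water mark. A toy sketch with an invented schema, using one in-memory SQLite database to stand in for both the transactional source and the warehouse (real pipelines span two systems and often use change-data-capture instead):

```python
import sqlite3
from datetime import datetime, timedelta

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER, updated_at TEXT, payload TEXT)")
db.execute("CREATE TABLE warehouse (id INTEGER, updated_at TEXT, payload TEXT)")

now = datetime(2014, 10, 9, 12, 0)
db.executemany("INSERT INTO events VALUES (?,?,?)", [
    (1, (now - timedelta(hours=26)).isoformat(), "yesterday's admit"),
    (2, (now - timedelta(minutes=3)).isoformat(), "admit 3 minutes ago"),
])

# Incremental load: copy only rows newer than the last watermark,
# so the warehouse can refresh every few minutes instead of nightly.
watermark = (now - timedelta(minutes=5)).isoformat()
rows = db.execute(
    "SELECT id, updated_at, payload FROM events WHERE updated_at > ?",
    (watermark,),
).fetchall()
db.executemany("INSERT INTO warehouse VALUES (?,?,?)", rows)
```

Because each cycle touches only a few minutes of changes, it is cheap enough to run continuously, which is what makes near-real-time warehouse data feasible.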

Data that’s this fresh becomes actionable data. A hospital’s risk management department could leverage this data to identify at risk patients that need a little extra attention. Unfortunately, if that data is a day old, it might be too late for you to be able to act and prevent the issue from getting worse. That’s just one simple example of how the fresh data can be analyzed and improve the care a patient receives. I’m sure you can come up with many others.

No doubt there are a bunch of other companies that are working to solve this problem as well. Certainly, day old healthcare data is valuable as well, but fresh data in your data warehouse is so much more actionable than day old data. I’m excited to see what really smart people will be able to do with all this fresh data in their data warehouse.

Digital Health: How to Make Every Clinician the Smartest in the Room

Posted on August 21, 2014 | Written By

The following is a guest blog post by Dr. Mike Zalis, practicing MGH Radiologist and co-founder of QPID Health.
Remember the “World Wide Web” before search engines? Less than two decades ago, you had to know exactly what you were looking for and where it was located in order to access information. There was no Google—no search engine that would find the needle in the haystack for you. Curated directories of URLs were a start, but very quickly failed to keep up with the explosion in growth of the Web. Now our expectation is that we will be led down the path of discovery by simply entering what’s on our mind into a search box. Ill-formed, half-baked questions quickly crystalize into a line of intelligent inquiry. Technology assists us by bringing the experience of others right to our screens.

Like the Internet, EHRs are a much-needed web of information whose time has come. For a long time, experts preached the need to migrate from paper-based documentation systems – aka old school charts – to electronic records. Hats off to the innovators and the federal government who’ve made this migration a reality. We’ve officially arrived: the age of electronic records is here. A recent report in Health Affairs showed that 58.9% of hospitals have now adopted either a basic or comprehensive EHR — a four-fold increase since 2010, and the number of adoptions is still growing. So, EHRs are here to stay. Now we’re left to answer the question of what’s next. How can we make this data usable in a timely, efficient way?

My career as a radiologist spanned a similar, prior infrastructure change, and it has given me perspective on what many practitioners need—what I need—to make the move to an all-electronic patient record most useful: the ability to quickly get my hands on the patient’s current status and relevant past history at the point of care, and to apply this intelligence to make the best decision possible. In addition to their transactional functions (e.g., order creation), EHRs are terrific repositories of information, but they’ve created the means, not the end. Today’s EHRs are just that—repositories. They’re designed for storage, not discovery.

Twenty years ago, we radiologists went through a similar infrastructure transition in the move to the PACS systems that now form the core of all modern medical imaging. Initially, these highly engineered systems attempted to replicate the storage, display, and annotation functions that radiologists had until then performed on film, and they were clunky and in many ways inefficient to use. It wasn’t until several years after that initial digital transition that technological improvements yielded the value-adding features that have since dramatically improved the capability, efficiency, and value of imaging services.

Something similar is happening to clinicians practicing in the age of EHRs. Publications from NEJM to InformationWeek have covered the lack of usability and the increased administrative burden. The next frontier in Digital Health is systems that find and deliver what you didn’t even know you were looking for—systems that let doctors merge clinical experience with technology that is tireless and leaves no stone unturned, and that let the less-experienced clinician benefit from the know-how of the more experienced.

To me, Digital Health means making every clinician the smartest in the room. It’s filtering the right information—organized fluidly according to the clinical concepts and complex guidelines that define best practice—to empower clinicians to best serve our patients. Further, when Digital Health matures, the technology won’t make us think less—it will let us think more, by thinking alongside us. For the foreseeable future, human experience, intuition and judgment will remain pillars of excellent clinical practice. Digital tools that permit us to exercise those uniquely human capabilities more effectively and efficiently are key to delivering financially sustainable, high-quality care at scale.

At MGH, our team of clinical and software experts took it upon ourselves some 7 years ago to make our EHR more useful in the clinical trenches. The first application we launched reduced utilization of radiology studies by making clinicians aware of prior exams, saving time and money for the system and avoiding unnecessary exposure for patients. Our solution also permitted a novel, powerful search across the entirety of a patient’s electronic health record, and this capability “went viral”—starting in MGH, the application moved across departments and divisions of the hospital. Basic EHR search is now a commodity, and our system has evolved well beyond its early capabilities to become an intelligent concept service platform, empowering workflow improvements across a health care enterprise.
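To make the idea of concept-based search concrete: the difference from plain keyword search is that synonyms ("pulse", "heart rate") resolve to a single clinical concept before matching. The sketch below is a deliberately toy illustration of that principle, assuming a hand-built synonym table and made-up record snippets—it is not QPID's actual NLP implementation, which is far more sophisticated.

```python
# Toy sketch of concept-based search over EHR text snippets.
# The synonym table and records are hypothetical illustrations.

CONCEPT_SYNONYMS = {
    "heart_rate": {"heart rate", "pulse"},
    "blood_pressure": {"blood pressure", "bp"},
}

def concepts_in(text):
    """Return the set of clinical concepts mentioned in a snippet."""
    lowered = text.lower()
    return {
        concept
        for concept, synonyms in CONCEPT_SYNONYMS.items()
        if any(s in lowered for s in synonyms)
    }

def search(records, concept):
    """Find every record mentioning the concept under any synonym."""
    return [r for r in records if concept in concepts_in(r)]

records = [
    "Vital signs: heart rate 72, BP 120/80.",
    "Nursing note: pulse regular at 68.",
    "Radiology: prior chest CT available.",
]
# A query for heart_rate finds both the vital-sign and nursing notes,
# even though only one of them uses the words "heart rate".
print(search(records, "heart_rate"))
```

Real systems replace the synonym table with curated terminologies and natural language processing, but the normalization step—many surface forms, one concept—is the core trick that lets a clinical query span both the vital-sign table and the nursing documentation.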

Now, when my colleagues move to other hospitals, they speak to how impossible it is to practice medicine without EHR intelligence—like suddenly being forced to navigate the Internet without Google again. Today at QPID Health, we are pushing the envelope to make it easy to find the Little Data about the patient that is essential to good care. Helping clinicians work smarter, not harder.

The reason I chose to become a physician was to help solve problems and deliver quality care—it’s immensely gratifying to contribute to a solution that allows physicians to do just that.

Dr. Mike Zalis is Co-founder and Chief Medical Officer of QPID Health, an associate professor at Harvard Medical School, and a board-certified Radiologist serving part-time at Massachusetts General Hospital in Interventional Radiology. Mike’s deep knowledge of what clinicians need to practice most effectively and his ability to translate those needs into software solutions inform QPID’s development efforts. QPID software uses a scalable cloud-based architecture and leverages advanced concept-based natural language processing to extract patient insights from data stored in EHRs. QPID’s applications support decision making at the point of care as well as population health and revenue cycle needs.

Hospital M&A Cost Boosted Significantly By Health IT Integration

Posted on August 18, 2014 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Most of the time, hospital M&A is sold as an exercise in saving money by reducing overhead and leveraging shared strengths. But new data from PricewaterhouseCoopers suggests that IT integration costs can undercut that goal substantially. (It also makes one wonder how ACOs can afford to merge their health IT infrastructure well enough to share risk, but that’s a story for another day.)

In any event, the cost of integrating the IT systems of hospitals that merge can add up to 2% of the facilities’ annual operating costs during the integration period, according to PricewaterhouseCoopers. That figure, which comes to $70,000 to $100,000 per bed over three to five years, is enough to reduce or even completely negate the benefits of doing some deals. And it clearly forces merging hospitals to think through their respective IT strategies far more thoroughly than they might have anticipated.
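To see how quickly the PwC figures add up, here's a back-of-the-envelope sketch. The 500-bed hospital size is a hypothetical assumption for illustration; the per-bed range and the three-to-five-year window come from the article.

```python
# Rough illustration of PwC's estimate: $70k-$100k per bed over 3-5 years.
# The 500-bed example is a hypothetical assumption.

def integration_cost(beds, per_bed_low=70_000, per_bed_high=100_000):
    """Total IT integration cost range for a merged hospital."""
    return beds * per_bed_low, beds * per_bed_high

low, high = integration_cost(500)
print(f"500-bed hospital: ${low:,} to ${high:,} total")
for years in (3, 5):
    print(f"  spread over {years} years: ${low // years:,} to ${high // years:,} per year")
```

For a 500-bed hospital that works out to $35 million to $50 million in total—$7 million to nearly $17 million per year depending on how long the integration drags on—which makes it easy to see how a merger's projected overhead savings can evaporate.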

As if that stat isn’t bad enough, other experts feel that PwC is understating the case. According to Dwayne Gunter, president of Parallon Technology Solutions — who spoke to Hospitals & Health Networks magazine — IT integration costs can be much higher than PwC’s estimate predicts. “I think 2% is being very generous,” Gunter told the magazine. “For example, if the purchased hospital’s IT infrastructure is in bad shape, the expense of replacing it will raise costs significantly.”

Of course, hospitals have always struggled to integrate systems when they merge, but as PwC research notes, there’s a lot more to integrate these days, including not only core clinical and business operating systems but also EMRs, population health management tools and data analytics. (Given the extremely shaky state of cybersecurity in hospitals these days, merging partners had best feel out each other’s security systems very thoroughly as well, which obviously adds additional expense.) And what if the merging hospitals use different enterprise EMR systems? Do you rip and replace, integrate and pray, or do some mix of the above?

On top of all that, merging hospital systems have to make sure they have enough IT staffers available, or can contract for enough, to do a good job of the integration process. Given that in many hospitals IT leaders barely have enough staff to get the minimum done, the merger partners will likely need costly consultants if they want to finish the process before the next millennium.

My best guess is that many mergers have failed to take this massive expense into account. The aftermath has got to be pretty ugly.

Big Data is Like Teenage Sex

Posted on November 5, 2013 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Yes, that is a catchy headline, but if you’ve read me for any length of time you also know I love a good analogy. This analogy comes from Dan Ariely as shared by Iwona during #cikm2013.

For those who can’t load the image, it says:
Big data is like teenage sex:
everyone talks about it,
nobody really knows how to do it,
everyone thinks everyone else is doing it,
so everyone claims they are doing it…

As a big proponent of no sex before marriage, this is a little out there for me, but the analogy illustrates the point so well. In fact, I think this is why in healthcare we’re seeing a new line of smaller data projects with meaningful outcomes.

What I wish we could change is the final part. How about we all stop hiding behind what we are and aren’t doing? We’d all be better served by being frank about our actual efforts. The fact is that many organizations aren’t doing anything with big data, and quite frankly they shouldn’t be. Kind of like how many teenagers shouldn’t be having sex.