
Doctors and Disaster Relief: the Value of Technology and Data for HealthTap

Posted on February 2, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

In November 2015, when Tamil areas of southeastern India suffered serious monsoon-related flooding that killed hundreds and caused the major city of Chennai to essentially shut down for a week, local residents asked for help from an unusual source: HealthTap, the online service that offers medical advice and concierge care. This article explains the unique technical and organizational resources HealthTap offered, which made it a valuable source of information for anyone in the disaster area with a cell phone or Internet access. At the end I will ask: what can public health institutions do to replicate HealthTap’s success in aiding the people of Chennai?

Disaster response from an unusual quarter
HealthTap, an online service connecting doctors and patients, has grown gradually but steadily in scope over the five years since its launch. It began by providing personalized answers, directly from practicing doctors, to individuals dealing with routine health issues such as pregnancy. HealthTap supported an ever-growing library of answers and continually added new services, leading to a concierge service. It expanded its services for doctors as well, providing easy consultations and discussion forums. It added a corporate (B-to-B) service that companies could offer to all their employees. And now, generalizing from their experience in Chennai, they have launched HealthTap SOS for disaster relief.

The Chennai intervention was requested by one of HealthTap’s clients, a company called Flex that had some employees in Chennai. When HealthTap started looking around for doctors in its network who had flood-related expertise, it turned up so much useful help that even the HealthTap leadership was surprised. CEO Ron Gutman explained to me that many doctors grew up in India, perhaps even got their degrees and practiced there, then moved to the US and joined HealthTap’s network. They are now able to help their country and local communities without actually traveling back.

HealthTap then discovered that their 85,000 doctors, located primarily in the United States, have come from 101 countries. Many languages are spoken, and many doctors intimately understand the cultures of other countries, as well as the medical conditions and disaster-related problems faced in them. HealthTap even organized psychiatrists and psychologists to advise and calm residents in the disaster area.

The organizational and technical elements of marshalling expertise
Public health and disaster agencies have networks of experts too, of course. But Gutman explained that these institutions can’t maintain a network as large and diverse as HealthTap just to prepare for occasional disasters. HealthTap’s strength is that it can redeploy a network developed to handle everyday medical conditions and turn it into a resource for communities struck by flooding or other disasters.

Doing so depends on the generosity and humane response of the doctors, of course, but it also requires a detailed understanding of the expertise offered by each of the 85,000 doctors in the network. According to HealthTap, they obtained crucial information on disaster recovery through crowdsourcing: they reached out to their network and asked the doctors to provide tips and checklists for managing during disaster situations. This turned up an abundance of information and offers to help.

Thus, HealthTap exemplifies the highly connected, intelligent expert network described in Beth Simone Noveck’s book Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing. Such a network is more than a loose association of people in a given discipline: it is highly structured using details provided by individuals about themselves, or information collected from routine interactions.

In addition to this information-rich database of physicians, HealthTap has developed another technical advantage: once again, a set of tools built up rigorously over time to facilitate routine care, but that also proves invaluable in emergencies. Their sophisticated search service can quickly turn up information relevant to the person logged into the system, based on what that person reveals about himself or herself. HealthTap’s rating system (similar to those used on travel sites and other crowdsourced recommendation systems) surfaces the best information out of millions of potential answers in their database. Although most of the Chennai residents asking for help found answers quickly in HealthTap’s database, HealthTap can also connect a person quickly with a clinician for one-to-one service. Because of the immense value of personalization, HealthTap suggests that public health workers set up an account with HealthTap before emergencies develop (an account it offers for a very modest charge).
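The search-plus-rating pipeline described above can be sketched in a few lines. This is a hypothetical illustration only: the data model, field names, and scoring rule are my own invention, since HealthTap’s actual algorithm is not public.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    topics: set       # tags describing what the answer covers
    rating: int       # accumulated peer/patient endorsements

def rank_answers(answers, query_topics, limit=3):
    """Return the best-rated answers whose topics overlap the query."""
    relevant = [a for a in answers if a.topics & query_topics]
    return sorted(
        relevant,
        # rank by topic overlap first, then by crowd rating
        key=lambda a: (len(a.topics & query_topics), a.rating),
        reverse=True,
    )[:limit]

library = [
    Answer("Boil or purify water before drinking after a flood.", {"flood", "water"}, 42),
    Answer("Watch for fever and diarrhea, signs of waterborne illness.", {"flood", "infection"}, 17),
    Answer("Morning sickness usually eases by the second trimester.", {"pregnancy"}, 88),
]

top = rank_answers(library, {"flood"})
```

Here the irrelevant pregnancy answer is filtered out, and the two flood-related answers are ordered by rating, which is the basic shape of any crowdsourced recommendation system.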

General lessons
HealthTap did a great thing in Chennai, and their SOS service promises to be widely useful, especially in a world increasingly hit by climate change. But a private company such as HealthTap shouldn’t be the only institution with these resources for public health. Public agencies should take a leaf from Noveck’s book to set up expert networks with background on potentially useful experts.

Public health agencies already offer information during emergencies over the phone, broadcast media (do you ever hear “The following is just a test” announcements on the radio?), and popular information dissemination networks such as Twitter, but they could collect more information (voluntarily) from residents and allow them to connect to experts to answer specific questions when there is a need. For instance, if you depend on a medication and are running low in the aftermath of a major storm, you could find out from a specialist how to cope without it.

Technology and modern social organization offer a lot of tools to help the world deal with emergencies. Consider the well-known Ushahidi service, created in 2008 to coordinate input from local residents suffering from political violence and now used in a variety of situations. OpenStreetMap has also served disaster relief, used as a resource alongside Ushahidi during the 2010 Haiti earthquake. Public health agencies can learn from organizations such as these, along with HealthTap, to save lives.

Significant Articles in the Health IT Community in 2015

Posted on December 15, 2015 | Written By Andy Oram

Have you kept current with changes in device connectivity, Meaningful Use, analytics in healthcare, and other health IT topics during 2015? Here are some of the articles I find significant that came out over the past year.

The year kicked off with an ominous poll about Stage 2 Meaningful Use, with implications that came to a head later with the release of the Stage 3 requirements. Out of 1800 physicians polled around the beginning of the year, more than half were throwing in the towel: they were not even going to try to qualify for Stage 2 payments. Negotiations over Stage 3 of Meaningful Use were intense and fierce. A January 2015 letter from medical associations to ONC asked for more certainty around testing and certification, and mentioned the need for better data exchange (which the health field likes to call interoperability) in the C-CDA, the most popular document exchange format.

A number of expert panels asked ONC to cut back on some requirements, including public health measures and patient view-download-transmit. One major industry group asked for a delay of Stage 3 until 2019, essentially tolerating a lack of communication among EHRs. The final rules, absurdly described as a simplification, backed down on nothing, from patient data access to quality measure reporting. Beth Israel CIO John Halamka, who has shuttled back and forth between his Massachusetts home and Washington, DC to advise ONC on how to achieve health IT reform, took aim at Meaningful Use and several other federal initiatives.

Another harbinger of emerging issues in health IT came in January with a speech about privacy risks in connected devices by the head of the Federal Trade Commission (not an organization we hear from often in the health IT space). The FTC is concerned about the security of recent trends in what industry analysts like to call the Internet of Things, and medical devices rank high in these risks. The speech was a lead-up to a major report issued by the FTC on protecting devices in the Internet of Things. Articles in WIRED and Bloomberg described serious security flaws. In August, John Halamka wrote his own warning about medical devices, whose makers have not yet started taking security really seriously. Smart watches are just as vulnerable as other devices.

Because so much medical innovation is happening in fast-moving software, and low-budget developers are hankering for quick and cheap ways to release their applications, in February the FDA started to chip away at its bureaucratic gauntlet by releasing guidelines that exempt from FDA regulation medical apps that have no impact on treatment, as well as apps used just to transfer data or perform similarly non-transformative operations. The agency also released a rule for unique IDs on medical devices, a long-overdue measure that helps hospitals and researchers integrate devices into monitoring systems. Without clear and unambiguous IDs, one cannot trace which safety problems are associated with which devices. Other forms of automation may also now become possible. In September, the FDA announced a public advisory committee on devices.

Another FDA decision with a potential long-range impact was allowing 23andMe to market its genetic testing to consumers.

The Department of Health and Human Services has taken on exceedingly ambitious goals during 2015. In addition to the daunting Stage 3 of Meaningful Use, they announced a substantial increase in the use of fee-for-value, although they would still leave half of providers on the old system of doling out individual payments for individual procedures. In December, National Coordinator Karen DeSalvo announced that Health Information Exchanges (which limit themselves only to a small geographic area, or sometimes one state) would be able to exchange data throughout the country within one year. Observers immediately pointed out that the state of interoperability is not ready for this transition (and they could well have added the need for better analytics as well). HHS’s five-year plan includes the use of patient-generated and non-clinical data.

The poor state of interoperability was highlighted in an article about fees charged by EHR vendors just for setting up a connection and for each data transfer.

In the perennial search for why doctors are not exchanging patient information, attention has turned to rumors of deliberate information blocking. It’s a difficult accusation to pin down. Is information blocked by health care providers or by vendors? Does charging a fee, refusing to support a particular form of information exchange, or using a unique data format constitute information blocking? On the positive side, unnecessary imaging procedures can be reduced through information exchange.

Accountable Care Organizations are also having trouble, both because they are information-poor and because the CMS version of fee-for-value is too timid, along with other financial blows and perhaps an inability to retain patients. An August article analyzed the positives and negatives in a CMS announcement. On a large scale, fee-for-value may work. But a key component of improvement in chronic conditions is behavioral health, which EHRs are also unsuited for.

Pricing and consumer choice have become a major battleground in the current health insurance business. The steep rise in health insurance deductibles and copays has been justified (somewhat retroactively) by claiming that patients should have more responsibility to control health care costs. But the reality of health care shopping points in the other direction. A report card on state price transparency laws found the situation “bleak.” Another article shows that efforts to list prices are hampered by interoperability and other problems. One personal account of a billing disaster shows the state of price transparency today, and may be dangerous to read because it could trigger traumatic memories of your own interactions with health providers and insurers. Narrow and confusing insurance networks as well as fragmented delivery of services hamper doctor shopping. You may go to a doctor who your insurance plan assures you is in their network, only to be charged outrageous out-of-network costs. Tools are often out of date or overly simplistic.

In regard to the quality ratings that are supposed to allow patients to make intelligent choices, a study found that four hospital rating sites have very different ratings for the same hospitals. The criteria used to rate them are inconsistent. Quality measures provided by government databases are marred by incorrect data. The American Medical Association, always disturbed by public ratings of doctors for obvious reasons, recently complained of incorrect numbers from the Centers for Medicare & Medicaid Services. In July, the ProPublica site offered a search service called the Surgeon Scorecard. One article summarized the many positive and negative reactions. The New England Journal of Medicine has called ratings of surgeons unreliable.

2015 was the year of the intensely watched Department of Defense upgrade to its health care system. One long article offered an in-depth examination of DoD options and their implications for the evolution of health care. Another article promoted the advantages of open-source VistA, an argument that was not persuasive enough for the DoD. Still, openness was one of the criteria sought by the DoD.

The remote delivery of information, monitoring, and treatment (which goes by the quaint term “telemedicine”) has been the subject of much discussion. Those concerned with this development can follow the links in a summary article to see the various positions of major industry players. One advocate of patient empowerment interviewed doctors to find that, contrary to common fears, they can offer email access to patients without becoming overwhelmed. In fact, they think it leads to better outcomes. (However, it still isn’t reimbursed.)

Laws permitting reimbursement for telemedicine continued to spread among the states. But a major battle shaped up around a ruling in Texas that doctors must have a pre-existing face-to-face meeting with any patient whom they want to treat remotely. The spread of telemedicine depends also on reform of state licensing laws to permit practices across state lines.

Much wailing and many tears welled up over the required transition from ICD-9 to ICD-10. The AMA, with some good arguments, suggested just waiting for ICD-11. But the transition cost much less than anticipated, making ICD-10 much less of a hot button, although it may be harmful to diagnosis.

Formal studies of EHR strengths and weaknesses are rare, so I’ll mention this survey finding that EHRs aid with public health but are ungainly for the sophisticated uses required for long-term, accountable patient care. Meanwhile, half of hospitals surveyed are unhappy with their EHRs’ usability and functionality, and doctors are increasingly frustrated with EHRs. Nurses complained about the technology’s time demands and the eternal lack of interoperability. A HIMSS survey turned up somewhat more positive feelings.

EHRs are also expensive enough to hurt hospital balance sheets and force them to forgo other important expenditures.

Electronic health records also took a hit from ONC’s Sentinel Events program. To err, it seems, is not only human but now computer-aided. A Sentinel Event Alert indicated that more errors in health IT products should be reported, claiming that many go unreported because patient harm was avoided. The FDA started checking self-reported problems on PatientsLikeMe for adverse drug events.

The ONC reported gains in patient ability to view, download, and transmit their health information online, but found patient portals still limited. Although one article praised patient portals by Epic, Allscripts, and NextGen, an overview of studies found that patient portals are disappointing, partly because elderly patients have trouble with them. A literature review highlighted where patient portals fall short. In contrast, giving patients full access to doctors’ notes increases compliance and reduces errors. HHS’s Office of Civil Rights released rules underlining patients’ rights to access their data.

While we’re wallowing in downers, review a study questioning the value of patient-centered medical homes.

Reuters published a warning about employee wellness programs, which are nowhere near as fair or accurate as they claim to be. They are turning into just another expression of unequal power between employer and employee, with tendencies to punish sick people.

An interesting article questioned the industry narrative about the medical device tax in the Affordable Care Act, saying that the industry is expanding robustly in the face of the tax. However, this tax is still a hot political issue.

Does anyone remember that Republican congressmen published an alternative health care reform plan to replace the ACA? An analysis finds both good and bad points in its approach to mandates, malpractice, and insurance coverage.

Early reports on use of Apple’s open ResearchKit suggested problems with selection bias and diversity.

An in-depth look at the use of devices to enhance mental activity examined where they might be useful or harmful.

A major genetic data mining effort by pharma companies and Britain’s National Health Service was announced. The FDA announced a site called precisionFDA for sharing resources related to genetic testing. A recent site invites people to upload health and fitness data to support research.

As data becomes more liquid and is collected by more entities, patient privacy suffers. An analysis of web sites turned up shocking practices, even at supposedly reputable sites like WebMD. Lax security in health care networks was addressed in a Forbes article.

Of minor interest to health IT workers, but eagerly awaited by doctors, was Congress’s “doc fix” to Medicare’s sustainable growth rate formula. The bill did contain additional clauses that were called significant by a number of observers, including former National Coordinator Farzad Mostashari no less, for opening up new initiatives in interoperability, telehealth, patient monitoring, and especially fee-for-value.

Connected health took a step forward when CMS issued reimbursement guidelines for patient monitoring in the community.

A wonky but important dispute concerned whether self-insured employers should be required to report public health measures, because public health by definition needs to draw information from as wide a population as possible.

Data breaches always make lurid news, sometimes under surprising circumstances, and not always caused by health care providers. The 2015 security news was dominated by a massive breach at the Anthem health insurer.

Along with great fanfare in Scientific American for “precision medicine,” another Scientific American article covered its privacy risks.

A blog posting promoted early and intensive interactions with end users during app design.

A study found that HIT implementations hamper clinicians, but could not identify the reasons.

Natural language processing was praised for its potential to simplify data entry and to discover useful side effects and treatment issues.

CVS’s refusal to stock tobacco products was called “a major sea-change for public health” and part of a general trend of pharmacies toward whole care of the patient.

A long interview with FHIR leader Grahame Grieve described the progress of the project, and the need for clinicians to take data exchange seriously. A quiet milestone was reached in October with a production version from Cerner.

Given the frequent invocation of Uber (even more than the Cheesecake Factory) as a model for health IT innovation, it’s worth seeing the reasons that model is inapplicable.

A number of hot new sensors and devices were announced, including a tiny sensor from Intel, a device from Google to measure blood sugar and another for multiple vital signs, enhancements to Microsoft products, a temperature monitor for babies, a headset for detecting epilepsy, cheap cameras from New Zealand and MIT for doing retinal scans, a smart phone app for recognizing respiratory illnesses, a smart-phone connected device for detecting brain injuries and one for detecting cancer, a sleep-tracking ring, bed sensors, ultrasound-guided needle placement, a device for detecting pneumonia, and a pill that can track heartbeats.

Although the medical field isn’t yet making extensive use of data collection and analysis (or uses analytics for financial gain rather than patient care), the potential is demonstrated by many isolated success stories, including one from a Johns Hopkins study using 25 patient measures to study sepsis and another from an Ontario hospital. In an intriguing peek at our possible future, IBM Watson has started to integrate patient data with its base of clinical research studies.

Frustrated enough with 2015? To end on an upbeat note, envision a future made bright by predictive analytics.

Why Meaningful Use Should Balance Interoperability With More Immediate Concerns

Posted on March 12, 2015 | Written By Andy Oram

Frustration over the stubborn blockage of patient data sharing is spreading throughout the health care field; I hear it all the time. Many reformers have told me independently that the Office of the National Coordinator should refocus their Meaningful Use incentives totally on interoperability and give up on all the other nice stuff in the current requirements. Complaints have risen so high up that the ONC is now concentrating on interoperability, while a new Congressional bill proposes taking the job out of their hands.

Why ICD-10?

Posted on March 24, 2014 | Written By

Kyle is Founder and CEO of Pristine, a company in Austin, TX that develops telehealth communication tools optimized for Google Glass in healthcare environments. Prior to founding Pristine, Kyle spent years developing, selling, and implementing electronic medical records (EMRs) into hospitals. He also writes for EMR and HIPAA, TechZulu, and Svbtle about the intersections of healthcare, technology, and business. All of his writing is reproduced at kylesamani.com

At least half a dozen folks have asked me to explain why HHS is mandating the transition to ICD-10. So I thought I’d write a blog post about the subject.

First, I’ll examine some of the benefits that proponents of ICD-10 cite. Then, I’ll examine the cost of transitioning from ICD-9 to ICD-10.

There are about a dozen frequently cited reasons to switch from ICD-9 to ICD-10. But they can be summarized into three major categories:

1) The US needs to catch up to the rest of the world.

2) The more granular nature of ICD-10 will lend itself to data analysis of all forms – claims processing, population health, improved interoperability, clinical trials, research, etc.

3) ICD-9 doesn’t support the latest diagnoses and procedures, and ICD-10 does.

Regarding #1, who cares? Coding standards are intrinsically arbitrary. Sequels are not necessarily better than their predecessors.

Although #2 sounds nice, there are a lot of problems with the supposed “value” of more granular data in practice. Following the classic 80-20 rule of life (80% of value comes from 20% of activity), the majority of codes are rarely used. By increasing the number of codes six-fold, the system creates six times the opportunities to code inaccurately. There is no reason to believe that providers will code more accurately, but the chances of an incorrect diagnosis code are now significantly higher than they were before. Garbage in, garbage out.

Below are some specific examples of how increasing the number of codes will affect processes in the healthcare system:

Payers – payers argue that making codes more granular will improve efficiency in the reimbursement process by removing ambiguity. Nothing could be further from the truth. Payers will use the new granularity to further discriminate against providers and reject claims for what will appear to be no reason. With six times the number of codes, there are at least six times as many opportunities for payers to reject claims.

Clinical trials – ICD-10 proponents like to argue that with more granular diagnosis codes, companies like ePatientFinder can more effectively find patients and match them to clinical trials. This notion is predicated on the ability of providers to enter the correct diagnosis codes into EMRs, which is a poor assumption. Further, it doesn’t actually address the fundamental challenges of clinical trials recruitment, namely provider education, patient education, and the fact that most patients aren’t limited to trials by diagnosis codes, but rather by other data points (such as number of years with a given disease and comorbidities).

Public health – ICD-10 proponents also claim that the new coding system will help public health officials make better decisions. Again, this is predicated on accuracy of data, which is a poor assumption. But the greater challenge is that the most pressing public health issues of our time simply don’t need any more granularity in diagnosis codes. Public health officials already know what the top 20 public health problems are. Adding 6x the number of codes will not help address public health issues.

Regarding #3, why do we need to reinvent the entire coding system and make the entire system more granular to accommodate new diagnoses and procedures? Why can’t we continue to use the existing structure and simply create new branches of the ICD tree using alphanumeric characters? Why do we need to complicate every existing diagnosis and procedure to support new diagnoses and treatments? We don’t. There are plenty of letters left to be utilized in ICD-9 to accommodate new discoveries in medicine.
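The alternative proposed above, grafting new alphanumeric branches onto the existing hierarchy rather than replacing it, can be sketched as a simple code tree. The codes and labels below are invented purely for illustration and are not real ICD entries.

```python
class CodeNode:
    """One node in a hierarchical diagnosis-code tree (hypothetical codes)."""

    def __init__(self, code, label):
        self.code = code
        self.label = label
        self.children = {}

    def add_child(self, suffix, label):
        """Graft a new branch (numeric or alphanumeric) onto an existing code."""
        child = CodeNode(self.code + suffix, label)
        self.children[suffix] = child
        return child

    def lookup(self, code):
        """Find the node for a full code by walking down matching branches."""
        if code == self.code:
            return self
        for child in self.children.values():
            if code.startswith(child.code):
                return child.lookup(code)
        return None

# Existing numeric branches stay untouched...
root = CodeNode("250", "Diabetes mellitus (illustrative)")
root.add_child(".0", "Without complication")
# ...while a newly discovered condition gets an alphanumeric suffix
# instead of forcing a wholesale new coding scheme.
root.add_child(".A", "Newly described variant")
```

The point of the sketch is that every pre-existing code keeps its meaning; only new leaves are added, which is the backward-compatible path the author argues ICD-9 could have taken.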

Next, I’ll provide a very brief summary of the enormity of the cost associated with transitioning from ICD-9 to ICD-10. The root of the challenge is that a string of interconnected entities, none of whom want to work with one another or even see one another, must execute in sync for the months and years leading up to the transition. Below is a synopsis of how the stars must align:

EMR vendors – EMR vendors must upgrade their entire client base to ICD-10 compliant versions of their systems in the next couple of months to begin testing ICD-10 based claims. Given the timescales at which providers move, the burden of MU2 on vendors, and the upgrade cycles for EMR vendors, this is a daunting challenge.

Providers – providers don’t want to learn a new coding system, and don’t want to see 6 times the number of codes when they search for basic clinical terms. Companies such as IMO can mitigate a lot of this, but only a small percentage of providers use EMRs that have integrated with IMO.

Coding vendors – like EMR vendors, auto-coding vendors must upgrade their clients’ systems now to versions that support dual coding for ICD-9 and ICD-10. They must also incur significant costs to add a host of new ICD-10 based rules and mappings.

Coders – coders must achieve dual certification in ICD-9 and ICD-10, and must double-code all claims during the transition period to ensure no hiccups when the final cut over takes place.

Clearinghouses – clearinghouses must upgrade their systems to support both ICD-9 and ICD-10 and all of the new rules behind ICD-10, and must process an artificially inflated number of claims because of the volume of double-coded claims coming from providers.

Payers – payers must upgrade their systems to receive both ICD-9 and ICD-10 claims, process both, and provide results to clearinghouses and providers about accuracy to help providers ensure that everyone will be ready for the cut over to ICD-10.

The paragraphs above do not describe even 10% of the complexity involved in the transition. Reality is far more nuanced and complicated. It’s clear from the above that it is dubious whether all of the parties can upgrade their systems, train their staff, and double-code claims in time. The system is simply too convoluted, with too many intertwined but unaligned puzzle pieces, to make such a dramatic transition by a fixed drop-dead date.

Lastly, switching to ICD-10 now seems a bit shortsighted in light of the changes going on in the US healthcare system today. ICD-10 is already a decade old, and in no way reflects what we’re learning as we transition from volume to value models of care. It will make sense to change coding schemes at some point, but only when it’s widely understood what the future of healthcare delivery in the US will look like. As of today, no one knows what healthcare delivery will look like in 10 years, let alone 20. Why should we incur the enormous costs of the ICD-10 transition when we know that what we’re transitioning to was never designed to accommodate the future we’re heading toward?

At the end of the day, the biggest winners in this transition are the consultants and vendors who are supporting providers in making it, along with the payers, who can come up with more reasons not to pay claims. Some have claimed that HHS is doing this to reduce Medicare reimbursements and artificially lower costs. Although the incentives are aligned to encourage malicious behavior, I think it’s unlikely the feds are being malicious. There are far easier ways to save money than this painful transition.

The ICD-10 transition may be one of the largest and most complex IT coordination projects in the history of mankind. And it creates almost no value. If you can think of a larger transition in technology history that has destroyed more value than the ICD-9 to ICD-10 transition in the US, please leave a comment. I’m always curious to learn more.

Value of Meaningful Use – Perspective from EHR Executive at simplifyMD

Posted on February 11, 2014 | Written By

The following is a guest blog post by Michael Brozino, in response to the question I posed in my “State of the Meaningful Use” call to action.

If MU were gone (i.e., no more EHR incentive money or penalties), which parts of MU would you remove from your EHR immediately and which parts would you keep?

Michael Brozino, CEO of simplifyMD

If Meaningful Use were no longer a requirement, we would keep our software the same. I say this because nothing in our system that was built within the requirements of Meaningful Use was deployed without the intent of improving patient engagement, enhancing public health, promoting health record portability, and improving software interoperability. The fact that 100 percent of our users who have chosen to attest for MU have been successful is certainly a benefit, but we don’t see much reason to remove any functionality unless the user’s benefit diminishes significantly and it becomes too expensive to support and implement.

Rather, if MU disappeared, we would hope that the entire healthcare industry or government would implement centralized, universal healthcare data exchanges and/or hubs to ease interoperability and promote uniformity. Features such as personal health records, electronic delivery of labs and receipt of orders, syndromic surveillance data, state immunization records, clinical decision support rules, secure patient messaging, and many others could have been easily implemented if the government fully committed to putting them in place. By only going halfway, standardization is left to a group of players that by their very nature are opinionated and independent.

ePrescribing, for example, has thrived because a central hub exists to allow competitors to quickly join the network. Competition between these organizations is now about who delivers the best user experience, the best customer support, and who adds the most value to their product. If such a central exchange were created for the interoperability requirements in MU, we would have already surpassed the goals of Stages 2 and 3 by 2014.

See other responses to this question here.