
Could Clinicians Create Better HIE Tools?

Posted on August 13, 2014 | Written By

The following is a guest blog post by Andy Oram. His post reminds me of when I asked “Is Full Healthcare Interoperability a Pipe Dream?”

A tense and flustered discussion took place on Monday, August 11 during a routine meeting of the HIT Standards Committee Implementation Workgroup, a subcommittee set up by the Office of the National Coordinator (ONC), which takes responsibility for U.S. government efforts to support new IT initiatives in the health care field. The subject of their uncomfortable phone call was the interoperability of electronic health records (EHRs), the leading issue of health IT. A number of “user experience” reports from the field revealed that the situation is not good.

We have to look at the depth of the problem before hoping to shed light on a solution.

An interoperability showcase occupies the literal center of the major health IT conference each year, HIMSS. When I have attended, the sessions were physically arranged around a large pavilion filled with booths and computer screens. But the material on display at the showcase is not the whiz-bang features and glossy displays found at most IT conventions (those appear on the exhibition floor at HIMSS), but simply demonstrations of document exchange among EHR vendors.

The hoopla over interoperability at HIMSS suggests its importance to the health care industry. The ability to share coordination of care documents is the focus of current government incentives (Meaningful Use), anchoring Stage 2 and destined to be even more important (if Meaningful Use lasts) in Stage 3.

And for good reason: every time we see a specialist, or our parent moves from a hospital to a rehab facility, or our doctor even moves to another practice (an event that recently threw my wife’s medical records into exasperating limbo), we need record exchange. If we ever expect to track epidemics better or run analytics that can lower health care costs, interoperability will matter even more.

But take a look at extensive testing done by a team for the Journal of the American Medical Informatics Association, recently summarized in a posting by health IT expert Brian Ahier. When they dug into the documents being exchanged, researchers found that many vendors inserted the wrong codes for diagnoses or drugs, placed results in the wrong fields (leaving them inaccessible to recipients), and failed to include relevant data. You don’t have to be an XML programmer or standards expert to get the gist from a list of sample errors included with the study.

And that list covers only the problems found in the 19 organizations who showed enough politeness and concern for the public interest to submit samples–what about the many who ignored the researchers’ request?

A slightly different list of complaints came up at the HIT Standards Committee Implementation Workgroup meeting, although along similar lines. The participants in the call were concerned with errors, but also pointed out the woeful inadequacy of the EHR implementations in representing the complexities and variety of patient care. Some called for changes I find of questionable ethics (such as the ability to exclude certain information from the data exchange while leaving it in the doctor’s records) and complained that the documents exchanged were not easy for patients to read, a goal that was not part of the original requirements.

However, it’s worth pointing out that document exchange would fall far short of true coordinated care, even if everything worked as the standards called for. Continuity of care documents, the most common format in current health information exchange, carry only a superficial sliver of diagnoses, treatments, and other immediate concerns, with no space for patient histories. Data that patients can now collect, either through fitness devices or self-reporting, has no place to be recorded. This is why many health reformers call for adopting an entirely new standard, FHIR, a suggestion recognized by the ONC as valid but postponed indefinitely because it’s such a big change. The failure to adopt current formats thus becomes the justification for staying on the same path.

Let’s take a step back. After all those standards, all those certifications, all those interoperability showcases, why does document exchange still fail?

The JAMIA article indicated that blame for the failure can be spread widely. There are rarely villains in health care, only people pursuing business as usual when that is insufficient. Thus:

  • The Consolidated CDA standard itself could have been more precisely defined, indicating, for instance, what to do when values are missing from the record.

  • Certification tests could look deeper into documents, testing, for instance, that codes are recorded correctly (see the sketch after this list). Although I don’t know why the interoperability showcase results don’t translate into real-world success, I would find it quite believable that vendors focus on superficial goals (such as using the Direct protocols to exchange data) without determining whether that data is actually usable.

  • Meaningful Use requirements (already hundreds of pages long) could specify more details. One caller in the HIT Standards Committee session mentioned medication reconciliation as one such area.
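To make that concrete, here is a rough Python sketch of the kind of deeper check a certification test could run against a C-CDA document. The element paths are simplified from the C-CDA medication template and real validators are far more thorough, but the idea is to look past whether a document parses and ask whether its contents are usable:

```python
import xml.etree.ElementTree as ET

NS = {"hl7": "urn:hl7-org:v3"}
RXNORM_OID = "2.16.840.1.113883.6.88"  # RxNorm code system

def check_medications(ccda_xml):
    """Flag medication entries missing the data the JAMIA study found
    absent in practice: a recognized drug code, a dose, and a route."""
    problems = []
    root = ET.fromstring(ccda_xml)
    for sa in root.iter("{urn:hl7-org:v3}substanceAdministration"):
        code = sa.find("hl7:consumable/hl7:manufacturedProduct/"
                       "hl7:manufacturedMaterial/hl7:code", NS)
        if code is None or code.get("codeSystem") != RXNORM_OID:
            problems.append("medication without an RxNorm code")
        if sa.find("hl7:doseQuantity", NS) is None:
            problems.append("medication without a dose")
        if sa.find("hl7:routeCode", NS) is None:
            problems.append("medication without a route")
    return problems
```

A test like this catches exactly the failures the researchers reported: codes from the wrong vocabulary, and medications reported with no dose or route.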

The HIT Standards Committee agonized over whether to pursue broad goals, necessarily at a slow pace, or to seek a few achievable improvements in the process right away. In either case, what we have to look forward to is more meetings of committees, longer and more mind-numbing documents, heavier and heavier tests–infrastructure galore.

Meanwhile, the structure facilitating all this bureaucracy is crumbling. Many criticisms of Meaningful Use Stage 2 have been publicly aired–some during the HIT Standards Committee call–and Stage 3 now looks like a faint hope. Some journalists predict a doctor’s revolt. Instead of continuing on a path hated by everybody, including the people laying it out, maybe we need a new approach.

Software developers over the past couple of decades have adopted a range of ways to involve the users of software in its design. Sometimes called agile or lean methodologies, these strategies roll out prototypes and even production systems for realistic testing. The strategies call for a whole retooling of the software development process, a change that would not come easily to slow-moving proprietary companies such as those dominating the EHR industry. But how would agile programming look in health care?

Instead of bringing a doctor in from time to time to explain what a clinical workflow looks like or to approve the screens put up by a product, clinicians would be actively designing the screens and the transitions between them as they work. They would discover what needs to be in front of a resident’s eyes as she enters the intensive care ward and what needs to be conveyed to the nurses’ station when an alarm goes off sixty feet away.

Clinicians can ensure that the information transferred is complete and holds value. They would not tolerate, as the products tested by the JAMIA team do, a document that reports a medication without including its dose, timing, and route of administration.

Not being software experts (for the most part), doctors can’t be expected to anticipate all problems, such as changes of data versions. They still need to work closely with standards experts and programmers.

It also should be mentioned that agile methods include rigorous testing, sometimes to the extent that programmers write tests before writing the code they are testing. So the process is by no means lax about programming errors and patient safety.
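As a toy illustration of that test-first style, the test pins down the safety requirement before any product code exists. All names here are invented for the example:

```python
# The test comes first and fails until the code below exists.
def test_medication_requires_dose():
    entry = {"drug": "lisinopril", "dose": None}
    assert validate_medication(entry) == ["missing dose"]

# Minimal implementation written afterward to make the test pass.
def validate_medication(entry):
    return ["missing dose"] if entry.get("dose") is None else []
```

The discipline matters less for the toy rule than for the habit: every safety requirement becomes an executable check that runs on every change.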

Finally, modern software teams maintain databases–often open to the users and even the general public–of reported errors. The health care field needs this kind of transparency. Clinicians need to be warned of possible problems with a software module.

What we’re talking about here is a design that creates a product intimately congruent with each site’s needs and workflow. The software is not imported into a clinical environment–much less imposed on one–but grows organically from it, as early developers of the VistA software at the Veterans Administration claimed to have done. Problems with document exchange would be caught immediately during such a process, and the programmers would work out a common format cooperatively–because that’s what the clinicians want them to do.

The Random Results of Clinical Trials

Posted on June 23, 2014 | Written By

The following is a guest blog post by Andy Oram, writer and editor at O’Reilly Media.

For more than a century, doctors have put their faith in randomized, double-blind clinical trials. But this temple is being shaken to its foundations while radical sects of “big data” analysts challenge its orthodoxy. The schism came to a head earlier this month at the Health Datapalooza, the main conference covering the use of data in health care.

The themes of the conference–open data sets, statistical analysis, data sharing, and patient control over research–represent an implicit challenge to double-blind trials at every step of the way. Whereas trials recruit individuals using stringent criteria, ensuring proper matches, big data slurps in characteristics from everybody. Whereas trials march through rigid stages with niggling oversight, big data shoots files through a Hadoop computing cluster and spits out claims. Whereas trials scrupulously separate patients, big data analysis often draws on communities of people sharing ideas freely.

This year, the tension between clinical trials and big data was unmistakable. One session was even called “Is the Randomized Clinical Trial (RCT) Dead?”

The background to the session is just as important as the points raised during the session. Basically, randomized trials have taken it on the chin for the past few years. Most have been shown to be unreproducible. Others have been suppressed because they don’t show the results that their funders (usually pharmaceutical companies) would like to see. Scandals sometimes reach heights of absurdity that even a satirical novelist would have trouble matching.

We know that the subjects recruited to RCTs are unrepresentative of most people who receive treatments based on the results. The subjects tend to be healthier (no comorbidities), younger, whiter, and more male than the general population. At the Datapalooza session, Robert Kaplan of NIH pointed out that a large number of clinical trials recruit patients from academic settings, even though only 1 in 100 people suffering from a condition is treated in such settings. He also pointed out that, since the federal government began requiring clinical trials to register a few years ago, it has become clear that most don’t produce statistically significant results.

Two speakers from the Oak Ridge National Laboratory pushed the benefits of big data even further. Georgia Tourassi claimed that so far as data is concerned, “bigger can be better” even if the data is “unusual, noisy, or sparse.” She suggested, however, that data analysis has roles to play before and after RCTs–on the one side, for instance, to generate hypotheses, and on the other to conduct longitudinal studies. Mallikarjun Shankar pointed out that we use big data successfully in areas where randomized trials aren’t available, notably in enforcing test ban treaties and modeling climate change.

Robert Temple of the FDA came to the podium to defend RCTs. He opined that trials are required to establish clinical effectiveness–although I thought one of his examples undermined his claim–and pointed out that big data can have trouble finding important but small differences in populations. For example, an analysis of widely varying patients might miss the difference between two drugs that cause adverse effects in only 3 percent versus 4 percent of the population, respectively. But for the people who suffer the adverse effects, that’s a 25 percent difference–something they’d like to know about.
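To spell out the arithmetic behind that figure: the absolute gap between the two drugs is one percentage point, but measured against the 4 percent baseline, (0.04 − 0.03) / 0.04 = 0.25, a 25 percent relative difference (or a third, measured against the 3 percent rate). An analysis coarse enough to blur a one-point absolute gap hides a large relative risk for the affected subgroup.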

RCTs received a battering in other parts of the Datapalooza as well, particularly in the keynote by Vinod Khosla, who has famously suggested that computing can replace doctors. While repeating the familiar statistics about the failures of RCTs, he waxed enthusiastic about the potential of big data to fix our ills. In his scenario, we will all collect large data sets about ourselves and compare them to other people to self-diagnose. Kathleen Sebelius, keynoting at the Datapalooza in one of her last acts as Secretary of Health and Human Services, said “We’ve been making health policy in this country for years based on anecdote, not information.”

Less present at the Datapalooza was the idea that there are ways to improve clinical trials. I have reported extensively on efforts at reform, which include getting patients involved in the goals and planning of trials, sharing raw data sets as well as published results, and creating teams that cross multiple organizations. The NIH is rightly proud of its open access policy, which requires publicly funded research to be published for free download at PubMed Central. But the policy doesn’t go far enough: it leaves a one-year gap after publication, which may itself take place a year after the paper was written, and it says nothing about the data used by the researchers.

I believe data analysis has many of the universe’s secrets to unlock, but its effectiveness in many areas is unproven. One may find a correlation between a certain gene and an effective treatment, but we still don’t know what other elements of the body have an impact. RCTs also have well-tested rules for protecting patients that we need to explore and adapt to statistical analysis. It will be a long time before we know who is right, and I hope for a reconciliation along the way.

How Technological Backwardness Wastes Health Care Money

Posted on May 23, 2014 | Written By

The following is a guest blog post by Andy Oram, writer and editor at O’Reilly Media.

A rather disconcerting report on the state of health care payments has been released by InstaMed, a billing network that connects payers, health care providers, patients, and third-party billing services. (You can download the report after just filling out a few fields or watch some of the report details in this video.)

I think we all know that the adoption of computing technology to coordinate treatment and payments in health care lags behind most industries. This report reveals the progress the field has made, along with the substantial distance it has yet to go–and the effects of the lag on all of us. Patients and doctors alike are suffering financially from a continued reliance on paper.

We should be charitable: the field is making progress. Half of insurers conform (p. 11) to federally mandated standards covering the complex dance by which doctors request payments, insurers report back the status of the request (Electronic Remittance Advice), sometimes repeatedly over many months, and–when the doctor wins the jackpot and gets the procedure approved–insurers remit payment (Electronic Funds Transfer). Moreover, when the survey was conducted in 2013, 86 percent of health providers accepted payments by credit card or similar mechanisms (p. 9), although fewer than half of their payments actually came in that way (p. 5).

Huge amounts of time and effort are still being wasted, though. Even as patient responsibility for payment rises–because plans have been increasing copays and deductibles–there is still a tremendous lack of transparency. “In 2013, 72 percent of consumers said that they did not know their payment responsibility during a provider visit.” (p. 14) Perhaps even worse, “42 percent of providers said that they did not know patient responsibility during the patient visit.” (p. 7)

What is the result? Providers get fewer payments at the time of the visit, have to send multiple bills to the patient by snail mail, and often even make phone calls (p. 17). About one third of the time, providers couldn’t collect payment when the service was provided because of “patient resistance” (p. 9), probably a way to blame the victim because the patient was broke. But another third of the time, the provider admitted it didn’t know how much to charge the patient.

All this adds up to large costs for the provider. Moreover, patients can’t make intelligent choices. (We’ll leave aside for now the larger destructive consequences of fee for service.) It’s worth noting that the American College of Cardiology and American Heart Association recently recommended that doctors consider costs when recommending treatments for heart problems–certainly a harbinger of a trend. None of this can happen with the Byzantine payment systems in place.

I mentioned earlier that half of payers follow standards to accept electronic payments. Well, that means that half don’t. The use of paper or fax adds an extra tax to negotiations that sometimes take months, as invoices go back and forth and payers reject invoices for a blank space or miscoding in a single field.

InstaMed’s recommendations include: “payers and providers must work together to help consumers take control of their healthcare payments–or risk further consumer dissatisfaction and lost revenue.” (p. 16) This is an audacious enough agenda, but I go much deeper in my call for change:

  • Publish open data on costs, hospital errors, and outcomes for common procedures. We already know that no correlation exists between cost and quality.

  • Collect detailed data about outcomes, deidentified in the best manner we know, in order to supplement clinical trials, which suffer from their own distortions. Find out where we’re wasting money just by assigning the wrong treatments.

  • Create better interfaces for submitting doctors’ bills, to eliminate the absurd ritual of multiple submissions that get rejected repeatedly by payers and create an entire third-party market just to get invoices right. Standardize billing procedures across payers. (I’m not taking on the issue of single-payer here.)

  • Eliminate fee-for-service and complete the payers’ current trend toward paying for outcome. This requires a lot more of the data mentioned in the second item, so we know what illnesses actually should cost to treat.

Modern Information Technology Endorsed by Government Health Quality Agency

Posted on April 22, 2014 | Written By

The following is a guest blog post by Andy Oram, writer and editor at O’Reilly Media.

If you want to see a blueprint for real health reform, take the time to read through the white paper, “A Robust Health Data Infrastructure,” written by an independent set of experts in various areas of health and information technology. They home in, more intently than any other official document I’ve seen, on the weaknesses of our health IT systems and the modernizations required to fix them.

The paper fits very well into the contours of my own recent report, The Information Technology Fix for Health. I wish that my report could have cited the white paper, but even though it is dated November 2013, it was announced only last week. Whether this is just another instance of the contrasting pace between technologists and a government operating in a typically non-agile manner, or whether the paper’s sponsor (the Agency for Healthcare Research and Quality) spent five months trying to figure out what to do with this challenging document, I have no way of knowing.

The Robert Wood Johnson Foundation played an important role in organizing the white paper, and MITRE, which does a lot in the health care space, played some undescribed role. The paper’s scope can almost be described as sprawling, with forays into side topics such as billing fraud, but its key points concern electronic health records (EHRs), patient ownership of information, and health data exchange.

Why do I like this white paper so much? Two reasons. First, it highlights current problems in health information technology. The authors:

  • Decry “the current lack of interoperability among the data resources for EHRs” as leading to a “crippled” health data infrastructure (p. 2), and demand that “EHR software vendors should be required to develop and publish APIs for medical records data, search and indexing, semantic harmonization and vocabulary translation, and user interface applications” (p. 44).

  • Report with caution that “The evidence for modest, but consistent, improvements in health care quality and safety is growing.” Although calling these “encouraging findings,” the authors can credit only “the potential for improved efficiency” (p. 2 of the paper).

  • Warn that the leading government program to push health care providers into a well-integrated health care system, Meaningful Use, fails to meet its goals “in any practical sense.” Data is still not available to most patients, to biomedical researchers, or even to the institutions that currently exchange it except as inert paper-based documents (p. 6). The authors recommend fixes to add into the next stage of Meaningful Use.

  • Lament the underpopulated landscape of business opportunities for better interventions in patient care. “Current approaches for structuring EHRs and achieving interoperability have largely failed to open up new opportunities for entrepreneurship and innovation” (p. 6).

Second, the paper lays out eminently feasible alternatives. The infrastructure it recommends is completely recognizable to people who have seen how data exchange works in other fields: open standards, APIs, modern security, etc. There is nothing surprising about the recommendations, except that they are made in the context of our current dysfunction in handling health information.

A central principle in the white paper is that “the ultimate owner of a given health care record is the patient him/herself” (p. 4), a leading demand of health reformers and a major conclusion in my own report. Patient control solves at one stroke the current abuse of patient data for marketing, and allows patients to become partners in research instead of just subjects.

The principle of patient control leads to data segmentation, a difficult but laudable attempt to protect the patient from bias or exploitation. Patients may want to “restrict access to certain types of information to designated individuals or groups only (e.g., mental health records, family history, history of drug abuse) while making other types of information more generally available to medical personnel (e.g., known allergies, vaccination records, surgical history)” (p. 33).

This in turn leads to the most novel suggestion in the paper, the notion of a “patient privacy bundle.” Because most people have trouble deciding how to protect sensitive parts of their records, and don’t want to cull through all their records each time someone asks for research data, the health care field can define privacy policies that meet common needs and let patients make simple choices. Unfortunately, a lot of hurdles may make it unfeasible to segment data, as I have pointed out.
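One way to picture a privacy bundle is as a named set of withheld record categories that a data-release filter consults. This Python sketch is entirely illustrative–the bundle names and categories are invented here, not taken from the white paper:

```python
# Entirely illustrative bundles: each maps a simple patient choice
# to the record categories it withholds from release.
PRIVACY_BUNDLES = {
    "share-everything": set(),
    "standard": {"mental_health", "substance_abuse"},
    "strict": {"mental_health", "substance_abuse",
               "family_history", "genomics"},
}

def visible_records(records, bundle_name):
    """Return only the records a patient's chosen bundle allows out."""
    withheld = PRIVACY_BUNDLES[bundle_name]
    return [r for r in records if r["category"] not in withheld]

records = [
    {"category": "allergies", "text": "penicillin"},
    {"category": "mental_health", "text": "(sensitive)"},
]
print(visible_records(records, "standard"))  # only the allergy record
```

The appeal of the bundle idea is visible even in this toy: the patient makes one choice up front instead of re-reviewing every record at every request.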

Other aspects of the white paper are also questionable, such as its blithe suggestion that patients offer deidentified data to researchers, although this does appeal to some patients, as shown by the Personal Genome Project. (By the way, the authors of the white paper mischaracterized that project as anonymous.) Deidentification expert Khaled El Emam (author of O’Reilly’s Anonymizing Health Data) pointed out to me that clinical and administrative data involves completely different privacy risks from genomic data, but the white paper fails to distinguish them.

I was a bit disappointed that the paper makes only brief mentions of patient-generated data, which I see as a crucial wedge to force open a provider-dominated information system.

The paper is very research-friendly, though, recognizing that EHRs “are already being supplemented by genomic data, expression data, data from embedded and wireless sensors, and population data gleaned from open sources, all of which will become more pervasive in the years ahead” (p. 5). Several other practical features of health information also appear. The paper recognizes the strains of storing large amounts of genomics and related “omics” data, pointing out that modern computing infrastructures can scale and use cloud computing in a supple way. The authors also realize the importance of provenance, which marks the origin of data (p. 28).

Technologists are already putting in place the tools for a modern health IT system. The white paper did not mention SMART, but it’s an ideal API–open source, government-sponsored, and mature–through which to implement the white paper’s recommendations. The HL7 committee is working on a robust API-friendly standard, FHIR, and there are efforts to tie SMART and FHIR together. The Data Distribution Service has been suggested as a standard to tie medical devices to other data stores.
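For readers who haven’t seen what the API style behind FHIR looks like, here is a minimal Python sketch of FHIR’s RESTful “read” interaction. The server URL and patient id are hypothetical, and resource layouts and media types vary across FHIR versions (the draft current in 2014 differs from later releases):

```python
import requests

# Hypothetical FHIR server -- substitute a real base URL.
FHIR_BASE = "https://fhir.example-hospital.org/api"

def read_patient(patient_id):
    """Fetch one Patient resource via FHIR's RESTful read interaction:
    GET [base]/Patient/[id], returning a JSON resource."""
    resp = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
    )
    resp.raise_for_status()
    return resp.json()

patient = read_patient("12345")   # hypothetical id
print(patient["resourceType"])    # "Patient"
```

The point is how ordinary this looks: plain HTTP, plain JSON, discrete resources–the same conventions the rest of the web settled on years ago.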

So the computer field is rising to its mission to support better treatment. The AHRQ white paper can reinforce the convictions of patient advocates and other reformers that better computer systems are feasible and can foster better patient interventions and research.

Barriers and Pathways to Healthcare IT

Posted on April 3, 2014 | Written By

The following is a guest blog post by Andy Oram, writer and editor at O’Reilly Media.

Those who have followed health IT for a long time can easily oscillate between overenthusiasm and despair. Electronic records will bring us into the 21st century! No, electronic records just introduce complexity and frustration! Big data will find new cures! No, our data’s no good!

Indeed, a vast gulf looms between the demands that health reformers make on information technology and the actual status of that technology. But if we direct a steady vision at what’s available to us and what it provides, we can plan a path to the future.

This is the goal of a report I recently wrote for O’Reilly Media: The Information Technology Fix for Health: Barriers and Pathways to the Use of Information Technology for Better Health Care. As part of a comprehensive overview, it dissects the issues on some topics that often appear on this blog:

  • Patient empowerment. After looking at the various contortions hospitals go through to provide portals and pump up patients’ interest in following treatment regimes, I conclude that the best way to get patients involved in their care is to leave their data in their own hands.

    But wresting data out of doctors’ grip will be heavy exercise. Well aware that previous attempts at giving patients control over data (Google Health and Microsoft HealthVault) have shriveled up, and that new efforts by Box and Apple seem to be taking the same path, I suggest a way forward by encouraging people to collect health data that will hopefully become indispensable to doctors.

  • What’s wrong with current EHRs? We know that doctors grab any opportunity handed them to complain about their EHRs. Even more distressing, the research bears out their pique; my report cites examples from the medical literature finding only scattered benefits from EHRs. Sometimes their opacity and awkward interfaces contribute to horrific medical errors.

    One might think that nobody is actually getting what they want from their EHR, but in fact plenty of providers are quietly enjoying their records–success has a lot to do with their preparation and whether they take the extra effort to make effective use of data gathered by the EHRs.

    New interfaces such as tablets, convenient storage in the cloud, and agile programming may be producing a new crop of EHRs that will meet the needs of more clinicians. But open source software would lead to the most widespread advances, enabling more customization and a better response to bug reports.

  • The viability of ACOs. Accountable care, pretty much a synonym for the notion of pay-for-value, is on the agendas of nearly all payers, from CMS on down. It certainly makes sense to combine data and keep close tabs on people as they move from one institution to another. But it’s really a job to be done on a national level, or at least a regional one. Can a loose collection of hospitals and related institutions muster the data and the resources to analyze patient data, create viable health information exchanges, and perform data analysis? I don’t think the current crop of ACOs will meet their goals, but they’ll provide valuable insights while they try.

  • Can standards such as ICD-10 improve the data we collect? What about the promise of new standards, such as FHIR? I’m a big believer in standards, but I’ve seen enough of them fail to know they must be simple, lithe, and unambiguous.

    That doesn’t characterize ICD-10 to be sure. Perhaps it does pretty well in the unambiguous department. But like most classifications, it’s a weak representation of the real world: a crude hierarchy trying to reflect many vectors of interlocking effects–for instance, the various complications associated with diabetes. And although ICD-10 may lead to more precise records, the cost of conversion is so burdensome that the American Medical Association has asked the government to just let doctors spend their money on more pressing needs. The conversion has also been ruthlessly criticized on the EMR & EHR site.

    FHIR is a radical change of direction for the HL7 standards body. For the first time, a standard is being built from the ground up to be web-friendly as well as sleek. It currently looks like a replacement for C-CDA, so I hope it is extended to hold patient-generated data. What we don’t need is another hundred vendors going off to create divergent formats.

    For real innovation, we should look to the open SMART Platform. Its cleverness is that it functions as a one-way valve channeling data from silo’d EHRs at health providers to patient-controlled sites.

We need to know what current systems are capable of contributing to innovative health solutions, and when to enhance what we have versus seeking a totally disruptive solution. I look forward to more discussion of these trends. Comment on this article, write your own articles on the topics in the report, and if you like, comment to me privately by writing to the infofix alias @ the domain.

Interoperability vs. Coordinated Care

Posted on August 19, 2013 | Written By

John Lynn is the Founder of the blog network, which currently consists of 10 blogs containing over 8,000 articles, with John having written over 4,000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

Andy Oram asked me the following question: “Is the exchange of continuity of care documents really interoperability or coordinated care?”

As it stands now, it seems like CCDs (continuity of care documents) are going to be the backbone of what healthcare information we exchange. We’ll see if something like CommonWell changes this, but for now much of the interoperability of healthcare data is in CCDs (lab and radiology data are separate). The question I think Andy is asking is what we can really accomplish with CCDs.

Transferring a CCD from one doctor to the next is definitely a form of healthcare interoperability. Regardless of the form of the CCD, it would be a huge step in the right direction for all of the healthcare endpoints to be on a system that can share documents. Whether they share CCDs or start sharing other data doesn’t really matter; that will certainly evolve over time. Just having everyone connected so they can share will be of tremendous value.

It’s kind of like the fax machine or email. Just getting people on the system and able to communicate was the first step. What people actually send through those channels will continue to improve over time. However, until everyone was on email, it had limited value. This is the first key step to interoperable patient records.

The second step is what information is shared. In the foreseeable future I don’t see us ever reaching a full standard for all healthcare data. Sure, we can do a pretty good job putting together a standard for lab results, radiology, prescriptions, allergies, past medical history, diagnoses, etc. I’m not sure we’ll ever get a standard for the narrative sections of the chart. However, that doesn’t mean we can’t make that information interoperable. We can, are, and will share that data between systems. It just won’t be in the granular way that many would love to see happen.

The idea of coordinated care is a much harder one. I honestly haven’t seen any systems out there that have really nailed what a coordinated care system would look like. I’ve seen very specific coordinated care elements. Maybe if we dug into Kaiser’s system we’d find some coordinated care. However, the goal of most software systems hasn’t been to coordinate care, and so we don’t see much on the market today that achieves this goal.

The first step in coordinating care is opening the lines of communication between care providers. Technology can really make an impact in this area. Secure text messaging companies like docBeat (which I advise) are making good headway in opening up these lines of communication. It’s amazing the impact that a simple secure text message can have on the care a patient receives. Secure messaging will likely be the basis of all sorts of coordinated care.

The challenge is that secure messaging is just the start of care coordination. Healthcare is so far behind that secure messaging can make a big impact, but I’m certain we can create more sophisticated care coordination systems that will revolutionize healthcare. The biggest thing holding us back is that we’re missing the foundation to build out these more sophisticated models.

Let me use a simple example. My wife has been seeing a specialist recently. She’s got an appointment with her primary care doctor next week. I’ll be interested to see how much information my wife’s primary care doctor has gotten from the specialist. Have they communicated at all? Will my wife’s visit to her primary care doctor be basically my wife informing her primary care doctor about what the specialist found?

I think the answers to these questions are going to be disappointing. What’s even more disappointing is that what I described is incredibly basic care coordination. However, until the basic care coordination starts to happen we’ll never reach a more advanced level of care coordination.

Going back to Andy’s question about CCDs and care coordination: no doubt a CCD from my wife’s specialist to her primary care doctor would meet the basic care coordination I described. But does it provide an advanced level of care coordination? It does not. However, it does lay the foundation for advanced care coordination. What if some really powerful workflow were applied to the incoming CCD that made processing incoming CCDs easier for doctors? What if the CCD were also passed to any other doctors who might be seeing that patient, based upon the results shared in the CCD? You can start to see how the granular data of a CCD can facilitate care coordination.
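To make that “what if” concrete, here is a deliberately simple Python sketch–every name in it is hypothetical–of routing an incoming CCD out to the rest of a patient’s care team:

```python
# Entirely hypothetical: a registry of which clinicians follow which
# patient, used to fan an incoming CCD out to the rest of the team.
CARE_TEAM = {
    "pt-001": {"primary@example.org", "cardiology@example.org"},
}

def notify(recipient, message):
    # Stand-in for a secure-messaging API call.
    print(f"secure message to {recipient}: {message}")

def route_ccd(patient_id, sender, summary):
    """Send a CCD summary to every other clinician treating the patient."""
    for clinician in CARE_TEAM.get(patient_id, set()) - {sender}:
        notify(clinician, summary)

route_ccd("pt-001", "cardiology@example.org", "new CCD: echo results")
```

Even this toy depends on the foundation the post describes: a shared document format plus a reliable channel to deliver it.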

I feel like we’re on the precipice where everyone knows that we have to start sharing data. CCDs are the start of that sharing, but far from the end of how sophisticated we will get at truly coordinating care.

The Online Medical Visit … For Free

Posted on January 3, 2012 | Written By John Lynn


In every situation online it seems like at some point someone takes the business model as deep as it goes and then someone just finally says, “Let’s make it free.” Readers of this site will be familiar with the leading Free EHR companies Mitochon and Practice Fusion (both advertisers on this site). They both seem to be doing really well and are working on some really interesting business models.

With my familiarity with the Free EHR business model, I was intrigued when I read about HealthTap’s model for basically providing an online medical visit for free. This was particularly interesting since I knew that HealthTap had received $11.5 million in funding recently.

Andy Oram summarizes what HealthTap is trying to solve really well:

In this digital age, HealthTap asks, why should a patient have to make an appointment and drive to the clinic just to find out whether her symptoms are probably caused by a recent medication? And why should a doctor repeat the same advice for each patient when the patient can go online for it?

Plus, he makes two important observations about what HealthTap has found:
1. Doctors will take the time to post information online for free.
2. Doctors are willing to rate each other.

It’s pretty interesting when you think about how many doctor visits could be saved using something like HealthTap. On its face, I’d think that a site like this wouldn’t make much sense. Although, as I think back on my medical experiences, I can think of about a dozen or so times where I tapped into my physician friends before going to the doctor. Basically, I wanted to know whether going to the doctor would be worth my time or not. In about 90% of those cases I ended up not going to the doctor, since the doctor wouldn’t have really been able to do much for me anyway.

As I think through these experiences, I realize that many people aren’t lucky enough to be like me and have lots of physician friends around to ask the casual medical question. I could see how HealthTap could fill that role.

One key to this model is that it doesn’t always replace the visit to the physician. In fact, in a few cases I was told that I’d need an X-ray and that I’d better go see the doctor. In those cases I was more likely to go to the physician, since I knew I needed to get something done. I already knew the physician would do something for me when I went, so I didn’t have the fear that they’d just tell me to take some Tylenol and be careful with it.

I’m not quite sure whether doctors would actually be glad to have only sick people visiting their office. Maybe they enjoy the break of the easy patient who doesn’t require much effort on their part.

I think there are still questions about the quality of information that patients will get on HealthTap. This is going to be the most interesting issue to follow. No doubt they’re going to be walking a fine line when it comes to medical advice. However, whether it’s HealthTap or some other online source that someone likely finds through Google, people are going to be looking for this kind of health information online. The idea of a free online medical visit sounds good to me.

Let’s also not be surprised if the Free EHR vendors eventually get into online visits as well. Seems like a natural progression for them to offer this service if they wanted to go that direction. From what I understand they have plenty on their plates right now, but a few years from now it could get pretty interesting.

The Perfect EMR is Mythology

Posted on November 9, 2011 | Written By John Lynn


I don’t know about the rest of you, but ever since David Blumenthal left ONC he’s had plenty of interesting things to say. I think he’s still somewhat cautious, but you can tell he’s given himself more freedom to comment on the state of EHR software and how it could be improved.

One example of this was in Andy Oram’s write-up of David Blumenthal’s speech in Boston a little while back. Here’s one section of Andy’s write-up that really hit me (emphasis mine):

Perhaps Blumenthal’s enthusiasm for putting electronic records in place and seeking interoperability later may reflect a larger pragmatism he brought up several times yesterday. He praised the state of EHRs (pushing back against members of the audience with stories to tell of alienated patients and doctors quitting the field in frustration), pointing to a recent literature survey where 92% of studies found improved outcomes in patient care, cost control, or user satisfaction. And he said we would always be dissatisfied with EHRs because we compare them to some abstract ideal.

I don’t think his assurances or the literature survey can assuage everyone’s complaints. But his point that we should compare EHRs to paper is a good one. Several people pointed out that before EHRs, doctors simply lacked basic information when making decisions, such as what labs and scans the patient had a few months ago, or even what diagnosis a specialist had rendered. How can you complain that EHRs slow down workflow? Before EHRs there often was no workflow! Many critical decisions were stabs in the dark.

Lots of interesting discussion points there, but the one I take away from it is that there’s no such thing as the perfect EMR. Blumenthal is dead on that many doctors have an abstract ideal of what an EMR should be, and it will never be that way. Certainly there are benefits to implementing an EMR, but there are also challenges to using one as well. No amount of programming and design is ever going to change that.

I wish I could find a description I read 4-5 years ago from an EHR vendor talking about the doctors they liked to work with. In it they described that they liked working with doctors who had reasonable expectations of the EHR implementation. They wanted to work with doctors who wanted to go electronic. They wanted to work with clinics that understood that some change was required as part of any IT implementation. From what I can tell, that EHR vendor has basically done just that.

Reminds me of trying to force my kids to do something they don’t want to do. Never seems to end well. Instead, it’s a much more satisfying experience for all when I help them understand why we’re doing what we’re doing. They still don’t like some of the details in many cases, but at least they understand the purpose for what we’re doing.

As long as doctors cling to some abstract ideal of EMR perfection, no EMR vendor will ever be able to satisfy them. A perfect EMR is not reasonable. Just because an EMR doesn’t offer everything that you could dream, doesn’t mean it’s not an incremental improvement over what you’re doing today.

Don’t let the quest for perfection get in the way of incremental improvement. Perfection is more nearly attained through many incremental improvements than through giant leaps.