Clinical Decision Support Should Be Open Source

Posted on January 26, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Clinical decision support has long occupied the medical setting. It got in the door with electronic medical records, and has recently received a facelift under the term “evidence-based medicine.” We are told that CDS or EBM is becoming fine-tuned and energized through powerful analytics that draw on a growing number of patient and public health data sets out in the field. But how does the clinician know that the advice given for a treatment or test is well-founded?

Most experts reaffirm that the final word lies with the physician–that each patient is unique, and thus no canned set of rules can substitute for the care that the physician must give to a patient’s particular conditions (such as a compromised heart or a history of suicidal ideation) and the sustained attention that the physician must give to the effects of treatment. Still, when the industry gives a platform to futurists such as Vinod Khosla who suggest that CDS can become more reliable than a physician’s judgment, we have to start demanding a lot more reliability from the computer.

It’s worth stopping a moment to consider the various inputs to CDS. Traditionally, it was based on the results of randomized, double-blind clinical trials. But these have come under scrutiny in recent years for numerous failings: the questionable validity of extending the results found on selected test subjects to a broader population, problems reproducing results for as many as three quarters of the studies, and of course the bias among pharma companies and journals alike for studies showing positive impacts.

More recently, treatment recommendations are being generated from “big data”: analyses that trawl through real-life patient experiences instead of trying to isolate a phenomenon in the lab. These can turn up excellent nuggets of unexpected impacts–such as Vioxx’s famous fatalities–but they also suffer from the biases of the researchers designing the algorithms, difficulties collecting accurate data, the risk of making invalid correlations, and the risk of inappropriately attributing causation.

A third kind of computerized intervention has recently been heralded: IBM’s Watson. However, Watson does not constitute CDS (at least not in the demo I saw at HIMSS a couple years ago). Rather, Watson just does the work every clinician would ideally do but doesn’t have time for: it consults thousands of clinical studies to find potential diagnoses relevant to the symptoms and history being reported, and ranks these diagnoses by probability. Both of those activities hijack a bit of the clinician’s human judgment, but they do not actually offer recommendations.

So there are clear and present justifications for demanding that CDS vendors demonstrate the reliability of their systems. We don’t really know what goes into CDS and how it works. Meanwhile, doctors are getting sick and tired of bearing the liability for all the tools they use, and the burden of their malpractice insurance is becoming a factor in doctors leaving the field. The doctors deserve some transparency and auditing, and so do the patients who ultimately incorporate the benefits and risks of CDS into their bodies.

CDS, like other aspects of the electronic health records into which it is embedded, has never been regulated or subjected to public safety tests and audits. The argument trotted out by EHR vendors–like every industry–when opposing regulation is that it will slow down innovation. But economic arguments have fuzzy boundaries–one can always find another consideration that can reverse the argument. In an industry that people can’t trust, regulation can provide a firm floor on which a new market can be built, and the assurance that CDS is working properly can open up the space for companies to do more of it and charge for it.

Still, there seems to be a pendulum swing away from regulation at present. The FDA has never regulated electronic health records as it has other medical software, and has been carving out classes of medical devices that require little oversight. When it took up EHR safety last year, the FDA asked merely for vendors to participate voluntarily in a “safety center.”

The prerequisite for gauging CDS’s reliability is transparency. Specifically, two aspects should be open:

  • The vendor must specify which studies, or analytics and data sets, went into the recommendation process.

  • The code carrying out the recommendation process must be openly published.

These fundamentals are just the start of the medical industry’s responsibilities. Independent researchers must evaluate the sources revealed in the first step and determine whether they are the best available choices. Programmers must check the code in the second step for accuracy. These grueling activities should be funded by the clinical institutions that ultimately use the CDS, so that they are on a firm financial basis and free from bias.
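To make the second requirement concrete, here is a minimal sketch of what openly published rule code with explicit evidence provenance might look like. Everything in it is hypothetical: the rule name, the thresholds, and the citation are invented for illustration, not clinical guidance. The point is structural–every recommendation the code emits carries the studies behind it, so an auditor can trace advice back to evidence:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Evidence:
    """One source behind a rule: a study citation plus its study type."""
    citation: str
    kind: str  # e.g. "RCT", "observational", "registry analysis"

@dataclass
class Recommendation:
    advice: str
    evidence: List[Evidence]  # a recommendation with no cited evidence is invalid

def statin_rule(ldl_mg_dl: float, has_cvd_history: bool) -> Optional[Recommendation]:
    """Illustrative rule only: thresholds and the citation are placeholders."""
    if has_cvd_history or ldl_mg_dl >= 190:
        return Recommendation(
            advice="Consider discussing statin therapy with the patient",
            evidence=[Evidence("Example et al., Journal of Hypothetical Medicine, 2014", "RCT")],
        )
    return None  # no rule fires; the clinician's judgment proceeds unassisted
```

With code published in this form, the independent reviewers described above have something to audit: researchers can weigh each cited study, and programmers can verify that the logic faithfully implements what the studies support.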

The requirement for transparent studies raises the question of open access to medical journals, which is still rare. But that is a complex issue in the fields of research and publishing that I can’t cover here.

Finally, an independent service has to collect reports of CDS failures and make them public, like the FDA Adverse Event Reporting System (FAERS) for drugs, and the FDA’s Manufacturer and User Facility Device Experience (MAUDE) for medical devices.

These requirements are reasonably lightweight, although instituting them will seem like a major upheaval to industries accustomed to working in the dark. What the requirements can do, though, is put CDS on the scientific basis it has never had, and push the industry forward more than any “big data” can do.