
The Exciting Future of Healthcare IT #NHITWeek

Posted on September 28, 2016 | Written By

John Lynn is the Founder of the blog network, which currently consists of 10 blogs containing over 8,000 articles, with John having written over 4,000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of and is highly involved in social media; in addition to his blogs, he can also be found on Twitter (@techguy and @ehrandhit) and on LinkedIn.

One time I went to my wife’s OB/GYN appointment and was in shock and awe at how well the doctor remembered my wife’s past pregnancies. Literally down to the tear that occurred. The reason I was in shock was that she prefaced her memory of my wife’s medical history with “Your old chart is off in storage, but as I recall you had a…”

While years later I’m still impressed with this OB/GYN’s ability to remember her patients, I know that this is not always the case. Doctors are human and can’t possibly remember everything that occurred with every patient. Humans have limits. In fact, doctors deserve credit for having provided such amazing medical care to so many patients despite these limits.

My esteem for doctors grows even greater when I think of the challenges associated with diagnosing computer problems (Yes, I am the nerd formerly known as @techguy). It’s not easy to diagnose a computer problem and then apply the fix that will remedy it. In fact, you often find yourself fixing the problem without really knowing what’s causing it (i.e., reinstall or reboot). While fixing computers is challenging, diagnosing and treating the human body has to be at least one order, and probably two or more orders, of magnitude more complex.

My point is that the work doctors do is really hard and they’ve generally done great work.

While I acknowledge the history of medicine, I also can’t help but think that technology is the pathway to solving many of the challenges that make doctors’ lives so difficult today. It seems fitting to me that IT stands for Information Technology, since the core of healthcare’s challenges revolves around information.

Here are some of the ways technology can and will help:

Quality Information
The story of my wife’s OB/GYN is the perfect illustration of this potential. Doctors who have the right information at the point of care can provide better care. That’s a simple but powerful principle that can become a reality with healthcare IT. Instead of relying on this OB/GYN’s memory, she could have had that information readily available to her in an EHR.

Certainly, we’re not perfect at this yet. EHR software can go down. EHRs can perpetuate misinformation. EHRs can paint an incorrect picture of a patient. However, on the whole, I believe an EHR’s data is more accessible and available when and where it’s needed. Plus, this is going to get dramatically better over time. In some cases, it already is.

Deep Understanding of Individual Health Metrics
Health sensors are just starting to come into their own. As these health sensors create more and more clinically relevant data, healthcare providers will be empowered with a much deeper understanding of the specific health metrics that matter for each unique patient. Currently, doctors are often driving in the dark. This new wave of health sensors will be like turning the lights on in places that have never seen light before. In some cases, it already is.

Latest Medical Research
Doctors do an incredible job keeping up on the latest research in their specialty, but how can they keep up with the full body of medical knowledge? Even if they study all day and all night (which they can’t do because they have to see patients), the body of medical knowledge is so complex that the human mind can’t comprehend, process, and remember it all. Technology can.

I’m not suggesting that technology will replace humans. Not for the foreseeable future anyway. However, it can certainly assist, inform, and remind humans. My phone already does this for me in my personal life. Technology will do the same for doctors in their clinical life. In some cases, it already is.

Patient Empowerment
Think about how dramatic a shift it’s been from a patient chart the patient never saw to EHR software that makes the entire record available to patients all the time. If that doesn’t empower patients, nothing will. I love reading about how many kings used to suppress their people by suppressing information. Information is power, and technology can make access to your health information possible.

Related to this trend is the way patients become more empowered through communities of patients with similar conditions and challenges. The obvious example is Patients Like Me, but it’s happening all over the internet and on social media. This is true for patients with chronic or rare conditions who want to find others like them, but it’s also true for patients who find the healthcare system a challenge to navigate. There is nothing more empowering than finding someone in a similar situation who can help you find the best opportunities and solutions to your problems.

In some cases, patient empowerment is already happening today.

Yes, I know that many of the technologies implemented to date don’t meet this ambitious vision of what technology can accomplish in healthcare. In fact, many health technologies have actually made things worse instead of better. This is a problem that must be dealt with, but it doesn’t deter me from the major hope I have that technology can solve many of the challenges that make being a doctor so hard. It doesn’t deter me from the dream that patients will be empowered to take a more active role in their care. It doesn’t deter me from the desire to leverage technology to make our healthcare system better.

The best part of my 11 years in healthcare IT has been seeing technology make things better on a small scale (“N of 1” –@cancergeek). My hope for the next decade is to see these benefits blow up on a much larger scale.

Correlations and Research Results: Do They Match Up? (Part 1 of 2)

Posted on May 26, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Eight years ago, a widely discussed issue of WIRED Magazine proclaimed cockily that current methods of scientific inquiry, dating back to Galileo, were becoming obsolete in the age of big data. Running controlled experiments on limited samples just has too many limitations and takes too long. Instead, we will take any data we have conveniently at hand–purchasing habits for consumers, cell phone records for everybody, Internet-of-Things data generated in the natural world–and run statistical methods over them to find correlations.

Correlations were spotlighted at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Although the speakers expressed a healthy respect for big data techniques, they pinpointed their limitations and affirmed the need for human intelligence in choosing what to research, as well as how to use the results.

Petrie-Flom annual 2016 conference

A word from our administration

A new White House report also warns that “it is a mistake to assume [that big data techniques] are objective simply because they are data-driven.” The report highlights the risks of inherent discrimination in the use of big data, including:

  • Incomplete and incorrect data (particularly common in credit rating scores)

  • “Unintentional perpetuation and promotion of historical biases”

  • Poorly designed algorithmic matches

  • “Personalization and recommendation services that narrow instead of expand user options”

  • Assuming that correlation means causation

The report recommends “bias mitigation” (page 10) and “algorithmic systems accountability” (page 23) to overcome some of these distortions, and refers to a larger FTC report that lays out the legal terrain.

Like the WIRED issue mentioned earlier, this gives us some background for discussions of big data in health care.

Putting the promise of analytical research under the microscope

Conference speaker Tal Zarsky offered both fulsome praise and specific cautions regarding correlations. As the WIRED Magazine issue suggested, modern big data analysis can find new correlations between genetics, disease, cures, and side effects. The analysis can find them much more cheaply and quickly than randomized clinical trials. This can lead to more cures, and has the other salutary effect of opening a way for small, minimally funded start-up companies to enter health care. Jeffrey Senger even suggested that, if analytics such as those used by IBM Watson are good enough, doing diagnoses without them may constitute malpractice.

W. Nicholson Price, II focused on the danger of the FDA placing too many strict limits on the use of big data for developing drugs and other treatments. Instead of making data analysts back up everything with expensive, time-consuming clinical trials, he suggested that the FDA could set up models for the proper use of analytics and check that tools and practices meet requirements.

One of the exciting impacts of correlations is that they bypass our assumptions and can uncover associations we never would have expected. The poster child for this effect is the notorious beer-and-diapers connection found by one retailer. This story has many nuances that tend to get lost in the retelling, but perhaps the most important point to note is that a retailer can depend on a correlation without having to ascertain the cause. In health, we feel much more comfortable knowing the cause of the correlation. Price called this aspect of big data search “black box medicine.” Saying that something works, without knowing why, raises a whole list of ethical concerns.

A correlation between stomach pain and a disease can’t tell us whether the stomach pain led to the disease, the disease caused the stomach pain, or both are symptoms of a third underlying condition. Causation can make a big difference in health care. It can warn us when to avoid a treatment that works 90% of the time (we’d like to know who the other 10% of patients are before they get a treatment that fails). It can help uncover side effects and other long-term effects–and perhaps valuable off-label uses as well.
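
To make that distinction concrete, here is a minimal simulation (a hypothetical sketch in Python, not drawn from the conference) in which a hidden third condition drives both the stomach pain and the disease. The two end up strongly associated even though neither causes the other; the prevalence and symptom probabilities are invented purely for illustration.

    # Hypothetical sketch: a hidden underlying condition produces a
    # pain-disease association even though neither causes the other.
    import random

    random.seed(42)

    def simulate_patient():
        underlying = random.random() < 0.3                          # hidden third condition
        pain = random.random() < (0.8 if underlying else 0.1)       # symptom 1
        disease = random.random() < (0.7 if underlying else 0.05)   # symptom 2
        return pain, disease

    patients = [simulate_patient() for _ in range(10000)]

    def disease_rate(group):
        return sum(1 for _, d in group if d) / len(group)

    with_pain = [p for p in patients if p[0]]
    without_pain = [p for p in patients if not p[0]]

    print(f"Disease rate with stomach pain:    {disease_rate(with_pain):.2f}")
    print(f"Disease rate without stomach pain: {disease_rate(without_pain):.2f}")
    # The rates differ sharply, yet in this model the pain never causes the
    # disease: both are downstream of the same underlying condition.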

Zarsky laid out several reasons why a correlation might be wrong.

  • It may reflect errors in the collected data. Good statisticians control for error through techniques such as discarding outliers, but if the original data contains enough bad apples, the barrel will go rotten.

  • Even if the correlation is accurate for the collected data, it may not be accurate in the larger population. The correlation could be a fluke, or the statistical sample could be unrepresentative of the larger world.

Zarsky suggests using correlations as a starting point for research, but backing them up by further randomized trials or by mathematical proofs that the correlation is correct.
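
As a rough illustration of the first point, the sketch below (hypothetical, plain Python) computes a Pearson correlation on two genuinely unrelated measurements, then again after a handful of extreme data-entry errors are mixed in. The apparent relationship exists only because of the bad records.

    # Hypothetical sketch: a few bad records manufacture a spurious correlation.
    import random
    import statistics

    def pearson(xs, ys):
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(0)
    # 200 unrelated measurements, e.g. a lab value and a symptom score
    lab = [random.gauss(100, 10) for _ in range(200)]
    score = [random.gauss(50, 5) for _ in range(200)]
    print(f"Correlation in clean data:    {pearson(lab, score):+.2f}")  # near zero

    # Five data-entry errors that are extreme on both axes
    lab_bad = lab + [500] * 5
    score_bad = score + [400] * 5
    print(f"Correlation with bad records: {pearson(lab_bad, score_bad):+.2f}")  # strongly positive

Discarding the outliers restores the near-zero correlation, which is exactly the kind of check, followed by a confirmatory trial, that Zarsky’s caution calls for.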

Isaac Kohane described, from the clinical side, some of the pros and cons of using big data. For instance, data collection helps us see that choosing a gender for intersex patients right after birth produces a huge amount of misery, because the doctor guesses wrong half the time. However, he also cited times when data collection can be confusing for the reasons listed by Zarsky and others.

Senger pointed out that after drugs and medical devices are released into the field, data collected on patients can teach developers more about risks and benefits. But this also runs into the classic risks of big data. For instance, if a patient dies, did the drug or device contribute to death? Or did he just succumb to other causes?

We already have enough to make us puzzle over whether we can use big data at all–but there’s still more, as the next part of this article will describe.

Streamlining Pharmaceutical and Biomedical Research in Software Agile Fashion

Posted on January 18, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider.

Medical research should not be in a crisis. More people than ever before want its products, and have the money to pay for them. More people than ever want to work in the field as well, and they’re uncannily brilliant and creative. It should be a golden era. So the myriad of problems faced by this industry–sources of revenue slipping away from pharma companies, a shift of investment away from cutting-edge biomedical firms, prices of new drugs going through the roof–must lie with the development processes used in the industry.

Like many other industries, biomedicine is contrasted with the highly successful computer industry. Although the financial prospects of this field have sagged recently (with hints of an upcoming dot-com bust similar to the early 2000s), there’s no doubt that computer people have mastered a process for churning out new, appealing products and services. Many observers dismiss the comparison between biomedicine and software, pointing out that the former has to deal much more with the prevalence of regulations, the dominance of old-fashioned institutions, and the critical role of intellectual property (patents).

Still, I find a lot of intriguing parallels between how software is developed and how biomedical research becomes products. Coding up a software idea is so simple now that it’s done by lots of amateurs, and Web services can try out and throw away new features on a daily basis. What’s expensive is getting the software ready for production, a task that requires strict processes designed and carried out by experienced professionals. Similarly, in biology, promising new compounds pop up all the time–the hard part is creating a delivery mechanism that is safe and reliable.

Generating Ideas: An Ever-Improving Environment

Software development has benefited in the past decade from an incredible degree of evolving support:

  • Programming languages that encapsulate complex processes in concise statements, embody best practices, and facilitate maintenance through modularization and support for testing

  • Easier development environments, especially in the cloud, which offer sophisticated test tools (such as ways to generate “mock” data for testing and rerun tests automatically upon each change to the code; a small sketch of this appears after the list), easy deployment, and performance monitoring

  • An endless succession of open source libraries to meet current needs, so that any problem faced by programmers in different settings is solved by the first wave of talented programmers that encounter it

  • Tools for sharing and commenting on code, allowing massively distributed teams to collaborate
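
As a toy example of the “mock” data point above, the sketch below (every function and field name is invented for illustration, not taken from any real system) generates synthetic patient records so a unit test can run, and be rerun automatically on every code change, without ever touching real data.

    # Hypothetical sketch: synthetic records let tests run without real patient data.
    import random
    import unittest

    def average_systolic(records):
        """Toy function under test: mean systolic blood pressure."""
        readings = [r["systolic"] for r in records if r.get("systolic") is not None]
        if not readings:
            raise ValueError("no readings available")
        return sum(readings) / len(readings)

    def make_mock_records(n, seed=1):
        """Generate synthetic records in place of real patient data."""
        rng = random.Random(seed)
        return [{"patient_id": i, "systolic": rng.randint(95, 160)} for i in range(n)]

    class AverageSystolicTest(unittest.TestCase):
        def test_mean_falls_within_generated_range(self):
            records = make_mock_records(100)
            self.assertTrue(95 <= average_systolic(records) <= 160)

        def test_missing_readings_raise(self):
            with self.assertRaises(ValueError):
                average_systolic([{"patient_id": 1, "systolic": None}])

    if __name__ == "__main__":
        unittest.main()  # a CI job or file watcher can rerun this on every change

A continuous integration service or a local file watcher would rerun tests like these automatically, which is the “rerun tests upon each change” part of the bullet above.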

Programmers have a big advantage over most fields, in that they are experts in the very skills that produce the tools they use. They have exploited this advantage over the years to make software development cheaper, faster, and more fun. Treated by most of the industry as a treasure of intellectual property, software is actually becoming a commodity.

Good software still takes skill and experience, no doubt about that. Some research has discovered that a top programmer is one hundred times as productive as a mediocre one. And in this way, the programming field also resembles biology. In both cases, it takes a lot of effort and native talent to cross the boundary from amateur to professional–and yet more than enough people have done so to provoke unprecedented innovation. The only thing holding back medical research is lack of funding–and that in turn is linked to costs. If we lowered the costs of drug development and other treatments, we’d free up billions of dollars to employ the thousands of biologists, chemists, and others striving to enter the field.

Furthermore, there are encouraging signs that biologists in research labs and pharma companies are using open source techniques as software programmers do to cut down waste and help each other find solutions faster, as described in another recent article and my series on Sage Bionetworks. If we can expand the range of what companies call “pre-competitive research” and sign up more of the companies to join the commons, innovation in biotech will increase.

On the whole, most programming teams practice agile development, which is creative, circles around a lot, and requires a lot of collaboration. Some forms of development still call for a more bureaucratic process of developing requirements, approving project plans, and so forth–you can’t take an airplane back to the hangar for a software upgrade if a bug causes it to crash into a mountain. All of those steps exist in agile development too, but in a looser, more fluid form. The descriptions I’ve read of drug development hint at similar serendipity and unanticipated twists.

The Chasm Between Innovation and Application

The reason salaries for well-educated software developers are skyrocketing is that going from idea to implementation is an entirely different job from idea generation.

Software that works in a test environment often wilts when exposed to real-life operating conditions. It has to deal with large numbers of requests, with ill-formed or unanticipated requests from legions of new users, with physical and operational interruptions that may result from a network glitch halfway around the world, with malicious banging from attackers, and with cost considerations associated with scaling up.

In recent years, the same developers who created great languages and development tools have put a good deal of ingenuity into tools to solve these problems as well. Foremost, as I mentioned before, are cloud offerings–Infrastructure as a Service or Platform as a Service–that take hardware headaches out of consideration. At the cost of increased complexity, cloud solutions let people experiment more freely.

In addition, a bewildering plethora of tools address every task an operations person must face: creating new instances of programs, scheduling them, apportioning resources among instances, handling failures, monitoring them for uptime and performance, and so on. You can’t count the tools built just to help operations people collect statistics and create visualizations so they can respond quickly to problems.

In medicine, what happens to a promising compound? It suddenly runs into a maze of complicated and costly requirements:

  • It must be tested on people, animals, or (best of all) mock environments to demonstrate safety.

  • Researchers must determine what dose, delivered in what medium, can withstand shipping and storage, get into the patient, and reach its target.

  • Further testing must reassure regulators and the public that the drug does its work safely and effectively, a process that involves enormous documentation.

As when deploying software, developing and testing a treatment involves much more risk and many more people than the original idea required. But software developers are making progress on their deployment problem. Perhaps better tools and more agile practices can cut down the time taken by the various phases of pharma development. Experiments being run now include:

  • Sharing data about patients more widely (with their consent) and using big data to vastly increase the pool of potential test subjects. This is crucial because a large number of trials fail for lack of subjects.

  • Using big data also to track patients better and more quickly find side effects and other show-stoppers, as well as potential off-label uses.

  • Tapping into patient communities to better determine what products they need, run tests more efficiently, and reduce the number of participants who drop out.

There’s hope for pharma and biomedicine. The old methods are reaching the limits of their effectiveness, as we demand ever more proof of safety and effectiveness. The medical field can’t replicate what software developers have done for themselves, but it can learn a lot from them nevertheless.