
So, EMRs Do Reduce Tests Ordered? Partners Says Yes

Posted on April 16, 2012 | Written By

Priya Ramachandran is a Maryland-based freelance writer. In a former life, she wrote software code and managed Sarbanes-Oxley-related audits for IT departments. She now enjoys writing about healthcare, science and technology.

About this time last month, I brought you some unwelcome news: that physician access to electronic records perhaps doesn’t reduce the number of tests subsequently ordered, and hence doesn’t reduce healthcare costs as much as previously thought.

Except that maybe it does. At least that’s according to an article in the Chicago Tribune that summarizes the findings of a study by Partners Healthcare, published as a research letter in the Archives of Internal Medicine (full text, PDF).

According to the study:
– It examined health information exchange and test data between Mass. General Hospital and Brigham and Women’s over a 5-year period, from Jan. 1, 1999 to Dec. 31, 2004.
– The study covered 117,606 patients during this period. Of these, 346 patients had recent off-site tests, of which 44 were done prior to the HIE rollout.
– For patients with recent off-site tests, the study found a 49% reduction in the number of tests ordered. In absolute terms, tests ordered per person fell from 7 in 1999 to 4 in 2004.
– There was, however, a slight increase in the number of tests ordered for the population that didn’t have any prior testing done during the same period, rising from 5 per person to 6 per person.
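For readers who like to check the arithmetic, here is a quick back-of-the-envelope sketch of the per-person changes reported above. Note that the raw 7-to-4 drop works out to roughly 43%, not 49%; the study’s headline figure is presumably its own adjusted estimate, which can’t be reproduced from these two numbers alone.

```python
# Back-of-the-envelope check of the reported per-person test counts.
# Figures come from the study as summarized above; the 49% headline
# number is not derivable from these raw counts.

def percent_change(before, after):
    """Percentage change from `before` to `after` (negative = reduction)."""
    return (after - before) / before * 100

# Patients with recent off-site tests: 7 tests/person (1999) -> 4 (2004)
offsite = percent_change(7, 4)

# Patients without prior testing: 5 tests/person -> 6
no_prior = percent_change(5, 6)

print(f"off-site group: {offsite:.1f}%")   # about -42.9%
print(f"no-prior group: {no_prior:.1f}%")  # +20.0%
```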

These findings directly contradict the Health Affairs study that I mentioned earlier. The Chicago Tribune article has a little researchers-play-nice subsection at the end where the Health Affairs and Partners researchers try to interpret each other’s contradictory results.

If I may add my $0.02:
– Even though the Partners study follows a larger population of patients, the subsets actually used to calculate the reduction (346 and 44 patients) are way too small.
– The Health Affairs study covered some 28,000 patients spread across 1,187 doctors’ offices, while the Partners study followed a larger population at two huge Mass. hospitals that had entered into a partnership with each other.

While this doesn’t directly discount anything either group has found, I would think the HA study is more representative of what’s going on in different parts of the country, where doctors are using EMRs and labs that differ in capability and cost to get their results. In Partners’ case there may well be a tacit agreement on EMR brand, or even tacit trust between the labs/facilities that each hospital uses.

Very interesting though, and I’d really love to see what else comes out on EMR and healthcare costs.

Bad research? Flawed conclusions from Harvard-based EMR study?

Posted on June 1, 2011 | Written By

Dr. West is an endocrinologist in private practice in Washington, DC. He completed fellowship training in Endocrinology and Metabolism at the Johns Hopkins University School of Medicine. Dr. West opened The Washington Endocrine Clinic, PLLC in 2009. He can be contacted at

Recently I read a post over at Medwire News citing a study that investigated whether the copy-and-paste methodology could have detrimental effects on diabetes control. Since I previously blogged on this topic, of course I found the study very interesting. In their study, the authors used a software program to correlate hemoglobin A1c reductions (a marker of diabetes control) with how often copy-and-paste was likely used. The software guessed at whether parts of a note were similar enough to previous notes to suggest that they might have originated by copy-and-paste. Because the diabetic patients being followed in the copy-and-paste group did not have lower hemoglobin A1c levels over time, the authors concluded that copy-and-paste had a role in causing bad patient outcomes.
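The study’s actual detection software isn’t described here, but to make the idea concrete, here is a minimal, purely illustrative sketch of how note-to-note similarity might be flagged. The word-overlap (Jaccard) score and the 0.8 cutoff are my assumptions, not anything from the authors’ method.

```python
# Hypothetical sketch: flag a note as possibly copy-and-pasted if its word
# overlap with any earlier note exceeds a threshold. The real study's
# algorithm is not described here; Jaccard similarity and the 0.8 cutoff
# are illustrative assumptions only.

def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two notes (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not (wa or wb):
        return 0.0
    return len(wa & wb) / len(wa | wb)

def looks_copied(note: str, prior_notes: list[str], threshold: float = 0.8) -> bool:
    """True if `note` closely matches any earlier note."""
    return any(jaccard(note, p) >= threshold for p in prior_notes)

prior = ["Patient stable. A1c 7.2. Continue metformin 500 mg twice daily."]
new_note = "Patient stable. A1c 7.2. Continue metformin 500 mg twice daily."
print(looks_copied(new_note, prior))  # identical text -> True
```

Any real implementation would of course need to handle templated boilerplate, per-patient grouping, and legitimate repetition, which is exactly where the error potential discussed below comes in.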

Although interesting, this study is concerning for several reasons. It casts a bad light (deserved or not) on doctors who use copy-and-paste responsibly and effectively. Since I do this all the time and have not noted any particularly bad outcomes in my diabetes patients, I have to question whether the conclusion is valid. There is so much potential for error in this type of statistical correlation research that I think big disclaimers should be noted. What kind of counseling was being done in the subgroup of diabetics that had an allegedly poorer outcome? Was that group of diabetics different from the ones with a good outcome? What approaches to treatment were used? What years were they being treated? Who was treating them? Etc., etc. Perhaps the two most important questions in my mind are: Who are the authors responsible for the study? What are the authors’ personal biases?

The authors conclude that “These results lead us to question whether copied electronic documentation is a reliable representation of patient care,” in a letter to the Archives of Internal Medicine. “If it is not, it could be either an honest mistake or deliberate falsification.  In the latter case, copied documentation that does not reflect the actual events is a serious breach of medical ethics. In either case, it carries a significant financial and legal risk.”  There seems to be such a negative slant here that I have to again ask about the personal biases of the authors and how this may have affected their study design.

What would be my motivation to enter documentation saying I did things that I didn’t actually do? That, too, is an important question with multiple possible answers. How much time does the doctor have? Is the doctor a resident trainee? How protective of the practice does the doctor feel? Does the quality of electronic notes in general (which is highly varied) directly correlate with patient outcomes? Maybe, since the data from Brigham and Women’s Hospital suggests falsification in the authors’ eyes, they should take a good, long look at the context of their research methodology (who was involved in writing the clinical notes and under what circumstances) rather than relying on a possibly very flawed method to generalize negative study results to all doctors using reasonable and responsible documentation methods. Or maybe I’m being too rash?
