
Streamlining Pharmaceutical and Biomedical Research in Software Agile Fashion

Posted on January 18, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://radar.oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Medical research should not be in a crisis. More people than ever before want its products, and have the money to pay for them. More people than ever want to work in the field as well, and they’re uncannily brilliant and creative. It should be a golden era. So the myriad of problems faced by this industry–sources of revenue slipping away from pharma companies, a shift of investment away from cutting-edge biomedical firms, prices of new drugs going through the roof–must lie with the development processes used in the industry.

Like many other industries, biomedicine is contrasted with the highly successful computer industry. Although the financial prospects of this field have sagged recently (with hints of an upcoming dot-com bust similar to the early 2000s), there’s no doubt that computer people have mastered a process for churning out new, appealing products and services. Many observers dismiss the comparison between biomedicine and software, pointing out that the former has to deal much more with the prevalence of regulations, the dominance of old-fashioned institutions, and the critical role of intellectual property (patents).

Still, I find a lot of intriguing parallels between how software is developed and how biomedical research becomes products. Coding up a software idea is so simple now that it’s done by lots of amateurs, and Web services can try out and throw away new features on a daily basis. What’s expensive is getting the software ready for production, a task that requires strict processes designed and carried out by experienced professionals. Similarly, in biology, promising new compounds pop up all the time–the hard part is creating a delivery mechanism that is safe and reliable.

Generating Ideas: An Ever-Improving Environment

Software development has benefited in the past decade from an incredible degree of evolving support:

  • Programming languages that encapsulate complex processes in concise statements, embody best practices, and facilitate maintenance through modularization and support for testing

  • Easier development environments, especially in the cloud, which offer sophisticated test tools (such as ways to generate “mock” data for testing and rerun tests automatically upon each change to the code), easy deployment, and performance monitoring

  • An endless succession of open source libraries to meet current needs, so that any problem faced by programmers in different settings is solved by the first wave of talented programmers that encounter it

  • Tools for sharing and commenting on code, allowing massively distributed teams to collaborate
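The "mock" data testing mentioned above is easy to sketch. Here is a minimal Python example using the standard library's `unittest.mock`; the EHR-flavored function and endpoint path are invented for illustration, not any real vendor's API:

```python
from unittest import mock

# Hypothetical function under test: look up a patient's name through
# an EHR client object. The client interface and endpoint path are
# illustrative inventions, not a real API.
def patient_name(client, patient_id):
    record = client.get(f"/patients/{patient_id}")
    return record["name"]

# "Mock" data stands in for a live EHR connection, so the test runs
# instantly and deterministically on every change to the code.
client = mock.Mock()
client.get.return_value = {"id": 42, "name": "Ada Lovelace"}

assert patient_name(client, 42) == "Ada Lovelace"
client.get.assert_called_once_with("/patients/42")
print("mock-backed test passed")
```

Hooked into a test runner that re-executes on every commit, checks like this are what let web teams try out and throw away features daily.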

Programmers have a big advantage over most fields, in that they are experts in the very skills that produce the tools they use. They have exploited this advantage over the years to make software development cheaper, faster, and more fun. Treated by most of the industry as a treasure of intellectual property, software is actually becoming a commodity.

Good software still takes skill and experience, no doubt about that. Some research has discovered that a top programmer is one hundred times as productive as a mediocre one. And in this way, the programming field also resembles biology. In both cases, it takes a lot of effort and native talent to cross the boundary from amateur to professional–and yet more than enough people have done so to provoke unprecedented innovation. The only thing holding back medical research is lack of funding–and that in turn is linked to costs. If we lowered the costs of drug development and other treatments, we’d free up billions of dollars to employ the thousands of biologists, chemists, and others striving to enter the field.

Furthermore, there are encouraging signs that biologists in research labs and pharma companies are using open source techniques as software programmers do to cut down waste and help each other find solutions faster, as described in another recent article and my series on Sage Bionetworks. If we can expand the range of what companies call “pre-competitive research” and sign up more of the companies to join the commons, innovation in biotech will increase.

On the whole, most programming teams practice agile development, which is creative, circles around a lot, and requires a lot of collaboration. Some forms of development still call for a more bureaucratic process of developing requirements, approving project plans, and so forth–you can’t take an airplane back to the hangar for a software upgrade if a bug causes it to crash into a mountain. All those processes exist in agile development too, but folded into a more chaotic flow. The descriptions I’ve read of drug development tell of similar serendipity and unanticipated twists.

The Chasm Between Innovation and Application

The reason salaries for well-educated software developers are skyrocketing is that going from idea to implementation is an entirely different job from idea generation.

Software that works in a test environment often wilts when exposed to real-life operating conditions. It has to deal with large numbers of requests, with ill-formed or unanticipated requests from legions of new users, with physical and operational interruptions that may result from a network glitch halfway around the world, with malicious banging from attackers, and with cost considerations associated with scaling up.

In recent years, the same developers who created great languages and development tools have put a good deal of ingenuity into tools to solve these problems as well. Foremost, as I mentioned before, are cloud offerings–Infrastructure as a Service or Platform as a Service–that take hardware headaches out of consideration. At the cost of increased complexity, cloud solutions let people experiment more freely.

In addition, a bewildering plethora of tools address every task an operations person must face: creating new instances of programs, scheduling them, apportioning resources among instances, handling failures, monitoring them for uptime and performance, and so on. You can’t count the tools built just to help operations people collect statistics and create visualizations so they can respond quickly to problems.

In medicine, what happens to a promising compound? It suddenly runs into a maze of complicated and costly requirements:

  • It must be tested on people, animals, or (best of all) mock environments to demonstrate safety.

  • Researchers must determine what dose, delivered in what medium, can withstand shipping and storage, get into the patient, and reach its target.

  • Further testing must reassure regulators and the public that the drug does its work safely and effectively, a process that involves enormous documentation.

As when deploying software, developing and testing a treatment involves much more risk and many more people than the original idea took. But software developers are making progress on their deployment problem. Perhaps better tools and more agile practices can cut down the toll taken by the various phases of pharma development. Experiments being run now include:

  • Sharing data about patients more widely (with their consent) and using big data to vastly increase the pool of potential test subjects. This is crucial because a large number of tests fail for lack of subjects.

  • Using big data also to track patients better and more quickly find side effects and other show-stoppers, as well as potential off-label uses.

  • Tapping into patient communities to determine better what products they need, run tests more efficiently, and keep fewer from dropping out.

There’s hope for pharma and biomedicine. The old methods are reaching the limits of their effectiveness, as we demand ever more proof of safety and effectiveness. The medical field can’t replicate what software developers have done for themselves, but it can learn a lot from them nevertheless.

Significant Articles in the Health IT Community in 2015

Posted on December 15, 2015 | Written By

Andy Oram

Have you kept current with changes in device connectivity, Meaningful Use, analytics in healthcare, and other health IT topics during 2015? Here are some of the articles I find significant that came out over the past year.

The year kicked off with an ominous poll about Stage 2 Meaningful Use, with implications that came to a head later with the release of Stage 3 requirements. Out of 1800 physicians polled around the beginning of the year, more than half were throwing in the towel–they were not even going to try to qualify for Stage 2 payments. Negotiations over Stage 3 of Meaningful Use were intense and fierce. A January 2015 letter from medical associations to ONC asked for more certainty around testing and certification, and mentioned the need for better data exchange (which the health field likes to call interoperability) in the C-CDA, the most popular document exchange format.

A number of expert panels asked ONC to cut back on some requirements, including public health measures and patient view-download-transmit. One major industry group asked for a delay of Stage 3 till 2019, essentially tolerating a lack of communication among EHRs. The final rules, absurdly described as a simplification, backed down on nothing, from patient data access to quality measure reporting. Beth Israel CIO John Halamka–who has shuttled back and forth between his Massachusetts home and Washington, DC to advise ONC on how to achieve health IT reform–took aim at Meaningful Use and several other federal initiatives.

Another harbinger of emerging issues in health IT came in January with a speech about privacy risks in connected devices by the head of the Federal Trade Commission (not an organization we hear from often in the health IT space). The FTC is concerned about the security of recent trends in what industry analysts like to call the Internet of Things, and medical devices rank high in these risks. The speech was a lead-up to a major report issued by the FTC on protecting devices in the Internet of Things. Articles in WIRED and Bloomberg described serious security flaws. In August, John Halamka wrote his own warning about medical devices, which have not yet started taking security really seriously. Smart watches are just as vulnerable as other devices.

Because so much medical innovation is happening in fast-moving software, and low-budget developers are hankering for quick and cheap ways to release their applications, in February the FDA started to chip away at its bureaucratic gamut by releasing guidelines that exempt from FDA regulation medical apps that have no impact on treatment, along with apps used just to transfer data or perform similarly non-transformative operations. The agency also released a rule for unique IDs on medical devices, a long-overdue measure that helps hospitals and researchers integrate devices into monitoring systems. Without clear and unambiguous IDs, one cannot trace which safety problems are associated with which devices. Other forms of automation may also now become possible. In September, the FDA announced a public advisory committee on devices.

Another FDA decision with a potential long-range impact was allowing 23andMe to market its genetic testing to consumers.

The Department of Health and Human Services has taken on exceedingly ambitious goals during 2015. In addition to the daunting Stage 3 of Meaningful Use, they announced a substantial increase in the use of fee-for-value, although they would still leave half of providers on the old system of doling out individual payments for individual procedures. In December, National Coordinator Karen DeSalvo announced that Health Information Exchanges (which limit themselves only to a small geographic area, or sometimes one state) would be able to exchange data throughout the country within one year. Observers immediately pointed out that the state of interoperability is not ready for this transition (and they could well have added the need for better analytics as well). HHS’s five-year plan includes the use of patient-generated and non-clinical data.

The poor state of interoperability was highlighted in an article about fees charged by EHR vendors just for setting up a connection and for each data transfer.

In the perennial search for why doctors are not exchanging patient information, attention has turned to rumors of deliberate information blocking. It’s a difficult accusation to pin down. Is information blocked by health care providers or by vendors? Does charging a fee, refusing to support a particular form of information exchange, or using a unique data format constitute information blocking? On the positive side, unnecessary imaging procedures can be reduced through information exchange.

Accountable Care Organizations are also having trouble, both because they are information-poor and because the CMS version of fee-for-value is too timid, along with other financial blows and perhaps an inability to retain patients. An August article analyzed the positives and negatives in a CMS announcement. On a large scale, fee-for-value may work. But a key component of improvement in chronic conditions is behavioral health which EHRs are also unsuited for.

Pricing and consumer choice have become a major battleground in the current health insurance business. The steep rise in health insurance deductibles and copays has been justified (somewhat retroactively) by claiming that patients should have more responsibility to control health care costs. But the reality of health care shopping points in the other direction. A report card on state price transparency laws found the situation “bleak.” Another article shows that efforts to list prices are hampered by interoperability and other problems. One personal account of a billing disaster shows the state of price transparency today, and may be dangerous to read because it could trigger traumatic memories of your own interactions with health providers and insurers. Narrow and confusing insurance networks as well as fragmented delivery of services hamper doctor shopping. You may go to a doctor who your insurance plan assures you is in their network, only to be charged outrageous out-of-network costs. Tools are often out of date or overly simplistic.

In regard to the quality ratings that are supposed to let patients make intelligent choices, a study found that four hospital rating sites have very different ratings for the same hospitals. The criteria used to rate them are inconsistent. Quality measures provided by government databases are marred by incorrect data. The American Medical Association, always disturbed by public ratings of doctors for obvious reasons, recently complained of incorrect numbers from the Centers for Medicare & Medicaid Services. In July, the ProPublica site offered a search service called the Surgeon Scorecard. One article summarized the many positive and negative reactions. The New England Journal of Medicine has called ratings of surgeons unreliable.

2015 was the year of the intensely watched Department of Defense upgrade to its health care system. One long article offered an in-depth examination of DoD options and their implications for the evolution of health care. Another article promoted the advantages of open-source VistA, an argument that was not persuasive enough for the DoD. Still, openness was one of the criteria sought by the DoD.

The remote delivery of information, monitoring, and treatment (which goes by the quaint term “telemedicine”) has been the subject of much discussion. Those concerned with this development can follow the links in a summary article to see the various positions of major industry players. One advocate of patient empowerment interviewed doctors to find that, contrary to common fears, they can offer email access to patients without becoming overwhelmed. In fact, they think it leads to better outcomes. (However, it still isn’t reimbursed.)

Laws permitting reimbursement for telemedicine continued to spread among the states. But a major battle shaped up around a ruling in Texas requiring doctors to have a pre-existing face-to-face meeting with any patient whom they want to treat remotely. The spread of telemedicine depends also on reform of state licensing laws to permit practices across state lines.

Much wailing and tears welled up over the required transition from ICD-9 to ICD-10. The AMA, with some good arguments, suggested just waiting for ICD-11. But the transition cost much less than anticipated, making ICD-10 much less of a hot button, although it may be harmful to diagnosis.

Formal studies of EHR strengths and weaknesses are rare, so I’ll mention this survey finding that EHRs aid with public health but are ungainly for the sophisticated uses required for long-term, accountable patient care. Meanwhile, half of hospitals surveyed are unhappy with their EHRs’ usability and functionality and doctors are increasingly frustrated with EHRs. Nurses complained about the technology’s time demands and the eternal lack of interoperability. A HIMSS survey turned up somewhat more positive feelings.

EHRs are also expensive enough to hurt hospital balance sheets and force them to forgo other important expenditures.

Electronic health records also took a hit from ONC’s Sentinel Events program. To err, it seems, is not only human but now computer-aided. A Sentinel Event Alert indicated that more errors in health IT products should be reported, claiming that many go unreported because patient harm was avoided. The FDA started checking self-reported problems on PatientsLikeMe for adverse drug events.

The ONC reported gains in patient ability to view, download, and transmit their health information online, but found patient portals still limited. Although one article praised patient portals by Epic, Allscripts, and NextGen, an overview of studies found that patient portals are disappointing, partly because elderly patients have trouble with them. A literature review highlighted where patient portals fall short. In contrast, giving patients full access to doctors’ notes increases compliance and reduces errors. HHS’s Office of Civil Rights released rules underlining patients’ rights to access their data.

While we’re wallowing in downers, review a study questioning the value of patient-centered medical homes.

Reuters published a warning about employee wellness programs, which are nowhere near as fair or accurate as they claim to be. They are turning into just another expression of unequal power between employer and employee, with tendencies to punish sick people.

An interesting article questioned the industry narrative about the medical device tax in the Affordable Care Act, saying that the industry is expanding robustly in the face of the tax. However, this tax is still a hot political issue.

Does anyone remember that Republican congressmen published an alternative health care reform plan to replace the ACA? An analysis finds both good and bad points in its approach to mandates, malpractice, and insurance coverage.

Early reports on use of Apple’s open ResearchKit suggested problems with selection bias and diversity.

An in-depth look at the use of devices to enhance mental activity examined where they might be useful or harmful.

A major genetic data mining effort by pharma companies and Britain’s National Health Service was announced. The FDA announced a site called precisionFDA for sharing resources related to genetic testing. A recent site invites people to upload health and fitness data to support research.

As data becomes more liquid and is collected by more entities, patient privacy suffers. An analysis of web sites turned up shocking practices, even at supposedly reputable sites like WebMD. Lax security in health care networks was addressed in a Forbes article.

Of minor interest to health IT workers, but eagerly awaited by doctors, was Congress’s “doc fix” to Medicare’s sustainable growth rate formula. The bill did contain additional clauses that were called significant by a number of observers, including former National Coordinator Farzad Mostashari no less, for opening up new initiatives in interoperability, telehealth, patient monitoring, and especially fee-for-value.

Connected health took a step forward when CMS issued reimbursement guidelines for patient monitoring in the community.

A wonky but important dispute concerned whether self-insured employers should be required to report public health measures, because public health by definition needs to draw information from as wide a population as possible.

Data breaches always make lurid news, sometimes under surprising circumstances, and not always caused by health care providers. The 2015 security news was dominated by a massive breach at the Anthem health insurer.

Along with great fanfare in Scientific American for “precision medicine,” another Scientific American article covered its privacy risks.

A blog posting promoted early and intensive interactions with end users during app design.

A study found that HIT implementations hamper clinicians, but could not identify the reasons.

Natural language processing was praised for its potential to simplify data entry and to discover useful side effects and treatment issues.

CVS’s refusal to stock tobacco products was called “a major sea-change for public health” and part of a general trend of pharmacies toward whole care of the patient.

A long interview with FHIR leader Grahame Grieve described the progress of the project and the need for clinicians to take data exchange seriously. A quiet milestone was reached in October with a production version from Cerner.
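For readers who haven't seen FHIR payloads, its resources are ordinary structured documents (JSON or XML). Here is a stripped-down sketch of parsing a FHIR Patient resource in Python; the field names follow the standard, but a real resource from a server would carry many more elements:

```python
import json

# Simplified FHIR Patient resource. Field names (resourceType, name,
# family, given, birthDate) come from the FHIR specification; the
# content is abbreviated for illustration.
payload = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "birthDate": "1974-12-25"
}
"""

patient = json.loads(payload)
given = " ".join(patient["name"][0]["given"])
print(f'{given} {patient["name"][0]["family"]}, born {patient["birthDate"]}')
```

The appeal for data exchange is exactly this plainness: any system that can parse JSON can read the resource, without a vendor-specific toolkit.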

Given the frequent invocation of Uber (even more than the Cheesecake Factory) as a model for health IT innovation, it’s worth seeing the reasons that model is inapplicable.

A number of hot new sensors and devices were announced, including a tiny sensor from Intel, a device from Google to measure blood sugar and another for multiple vital signs, enhancements to Microsoft products, a temperature monitor for babies, a headset for detecting epilepsy, cheap cameras from New Zealand and MIT for doing retinal scans, a smart phone app for recognizing respiratory illnesses, a smart-phone connected device for detecting brain injuries and one for detecting cancer, a sleep-tracking ring, bed sensors, ultrasound-guided needle placement, a device for detecting pneumonia, and a pill that can track heartbeats.

Although the medical field isn’t yet making extensive use of data collection and analysis–or uses analytics for financial gain rather than patient care–the potential is demonstrated by many isolated success stories, including one from a Johns Hopkins study using 25 patient measures to study sepsis and another from an Ontario hospital. In an intriguing peek at our possible future, IBM Watson has started to integrate patient data with its base of clinical research studies.

Frustrated enough with 2015? To end on an upbeat note, envision a future made bright by predictive analytics.

How Much Patient Data Do We Truly Need?

Posted on November 23, 2015 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

As the demands placed on healthcare data increase, the drive to manage it effectively has of course grown as well. This has led to the collection of mammoth quantities of data — one trade group estimates that U.S. hospitals will manage 665 terabytes of data during 2015 alone — but not necessarily better information.

The assumption that we need to capture most, if not all, of a patient’s care history digitally is clearly driving this data accumulation process. As care moves into the digital realm, the volume of data generated by the healthcare industry is climbing 48% per year, according to one estimate. I can only assume that the rate of increase will grow as providers incorporate data feeds from mHealth apps, remote monitoring devices and wearables, the integration of which is not far in the future.
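To see where those two estimates lead, here is a quick back-of-the-envelope projection. Both the 665-terabyte baseline and the 48% growth rate are the estimates cited above, so treat the outputs as rough orders of magnitude:

```python
import math

# Project the cited figures forward: ~665 TB managed in 2015,
# growing ~48% per year. Both inputs are estimates, not measurements.
base_tb = 665.0
growth = 1.48

for year in range(2015, 2021):
    projected = base_tb * growth ** (year - 2015)
    print(f"{year}: ~{projected:,.0f} TB")

# At 48%/year the stockpile doubles roughly every 1.8 years.
doubling = math.log(2) / math.log(growth)
print(f"doubling time: {doubling:.1f} years")
```

Even on these rough numbers, hospitals would be managing several petabytes within five years, which is why the "what do we actually need to keep?" question below matters.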

The thing is, most of the healthcare big data discussions I’ve followed assume that providers must manage, winnow and leverage all of this data. Few, if any, influencers seem to be considering the possibility that we need to set limits on what we manage, much less developing criteria for screening out needless data points.

As we all know, not all data is created equal. One conversation I had with a physician back in the early 1990s makes the point perfectly. At the time, I asked him whether he felt it would be helpful to put a patient’s entire medical history online someday, a distant but still imaginable possibility at the time. “I don’t know what we should keep,” he said. “But I know I don’t need to know what a patient’s temperature was 20 years ago.”

On the other hand, providers may not have access to all of the data they need either. According to research by EMC, while healthcare organizations typically import 3 years of legacy data into a new EMR, many other pertinent records are not available. Given the persistence of paper, poor integration of clinical systems and other challenges, only 25% of relevant data may be readily available, the vendor reports.

Because this problem (arguably) gets too little attention, providers grappling with it are being forced to set their own standards. Should hospitals and clinics expand that three years of legacy data integration to five years? 10 years? The patient’s entire lifetime? And how should institutions make such decisions? To my knowledge, there’s still no clear-cut way to make them.
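One modest step toward a standard is to make the cutoff an explicit, auditable parameter rather than an implicit vendor default. A hypothetical Python sketch of such a policy (the function, dates, and three-year choice are all invented for illustration):

```python
from datetime import date

# Hypothetical legacy-import policy: keep only records newer than
# cutoff_years. Naming the cutoff as a parameter makes the
# institution's choice (3 years? 5? 10?) visible and reviewable.
def within_cutoff(record_date, cutoff_years, today):
    cutoff = date(today.year - cutoff_years, today.month, today.day)
    return record_date >= cutoff

records = [date(2010, 5, 1), date(2013, 8, 15), date(2015, 2, 3)]
kept = [d for d in records if within_cutoff(d, 3, today=date(2015, 11, 23))]
print(kept)  # the 2010 record falls outside a three-year window
```

The code does not answer the policy question, of course; it only forces someone to write the answer down where it can be debated.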

But developing best practices for data integration is critical. Given the costs of managing needless patient data — which may include sub-optimal outcomes due to data fog — it’s critical to develop some guidelines for setting limits on clinical data accumulation. While failing to collect relevant patient data has consequences, turning big data into astronomically big data does as well.

By all means, let’s keep our eye on how to leverage new patient-centric data sources like wearable health trackers. It seems clear that such data has a role in stepping up patient care, at least once we understand what part of it is wheat and which part chaff.

That being said, continuing to amass data at exponential rates is unsustainable and ultimately, harmful. Sometimes, setting limits is the only way that you can be sure that what remains is valuable.

Some Methods For Improving EMR Alerts

Posted on June 25, 2015 | Written By

Anne Zieger

A new study appearing in the Journal of the American Medical Informatics Association has made some points that may turn out to be helpful in designing those pesky but helpful alerts for clinicians.

Making alerts useful and appropriate is no small matter. As we reported a couple of years ago, EMR alert fatigue had already become a major source of possible medical errors. In fact, a Pediatrics study published around that time found that clinicians were ignoring or overriding many alerts in an effort to stay focused.

Despite warnings from researchers and important industry voices like The Joint Commission, little has changed since then. But the issue can’t be ignored forever, as it’s a car crash waiting to happen.

The JAMIA study may offer some help, however. While it focuses on making drug-drug interaction warnings more usable, the principles it offers can serve as a model for designing other alerts as well.

For what it’s worth, the strategies I’m about to present came from a DDI Clinical Decision Support conference attended by experts from ONC, health IT vendors, academia and healthcare organizations.

While the experts offered several recommendations applying specifically to DDI alerts, their suggestions for presenting such alerts seem to apply to a wide range of notifications available across virtually all EMRs. These suggestions include:

  • Consistent use of color and visual cues: Like road signs, alerts should come in a limited and predictable variety of colors and styles, and use only color and symbols for which the meaning is clear to all clinicians.
  • Consistent use of terminology and brevity: Alerts should be consistently phrased and use the same terms across platforms. They should also be presented concisely, with minimal text, allowing for larger font sizes to improve readability.
  • Avoid interruptions wherever possible: Rather than freezing clinician workflow over actions already taken, save interruptive alerts that require action to proceed for the most serious situations. The system should proactively guide decisions to safer alternatives, taking away the need for interruption.
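Those presentation principles translate naturally into a small, fixed style table in code. Here is a hypothetical sketch; the tier names, colors, and symbols are invented for illustration, not taken from the JAMIA paper:

```python
from dataclasses import dataclass
from enum import Enum

# A fixed, predictable palette keyed to severity, plus an explicit
# flag for the one tier allowed to interrupt workflow. All names and
# styles here are illustrative assumptions.
class Severity(Enum):
    INFO = 1
    WARNING = 2
    CONTRAINDICATED = 3

@dataclass(frozen=True)
class AlertStyle:
    color: str
    symbol: str
    interruptive: bool

STYLES = {
    Severity.INFO: AlertStyle("blue", "i", False),
    Severity.WARNING: AlertStyle("yellow", "!", False),
    Severity.CONTRAINDICATED: AlertStyle("red", "X", True),
}

def render(severity, message, max_len=80):
    """Render an alert consistently and concisely (the brevity rule)."""
    style = STYLES[severity]
    if len(message) > max_len:
        message = message[: max_len - 3] + "..."  # trim to max_len chars
    return f"[{style.symbol}] {message}", style.interruptive

text, interrupts = render(Severity.WARNING,
                          "Possible drug-drug interaction: warfarin + aspirin")
print(text, "| interruptive:", interrupts)
```

Because every alert passes through one table and one renderer, the consistency rules are enforced by construction rather than by each developer's memory.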

The research also offers input on where and when to display alerts.

Where to display alert information:  The most critical information should be displayed on the alert’s top-level screen, with links to evidence — rather than long text — to back up the alert justification.

When to display alerts: The group concluded that alerts should be displayed at the point when a decision is being made, rather than jumping on the physician later.

The paper offers a great deal of additional information, and if you’re at all involved in addressing alerting issues or designing alerts, I strongly suggest you review the entire paper.

But even the excerpts above offer a lot to consider. If most alerts met these usability and presentation standards, they might offer more value to clinicians and greater safety to patients.

Doctors, Not Patients, May Be Holding Back mHealth Adoption

Posted on June 24, 2015 I Written By

Anne Zieger is veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Clearly, mHealth technology has achieved impressive momentum among a certain breed of health-conscious, self-monitoring consumer. Still, aside from wearable health bands, few mHealth technologies or apps have achieved a critical level of adoption.

The reason for this, according to a new survey, may lie in doctors’ attitudes toward these tools. According to the study, by market research firm MedPanel, only 15% of physicians are suggesting wearables or health apps as tools for improving their patients’ health.

It’s not that the tools themselves aren’t useful. According to a separate study by Research Now summarized by HealthData Management, 86% of 500 medical professionals said mHealth apps gave them a better understanding of a patient’s medical condition, and 76% said that they felt that apps were helping patients manage chronic illnesses. Also, HDM reported that 46% believed that apps could make patient transitions from hospital to home care simpler.

While doctors could do more to promote the use of mHealth technology — and patients might benefit if they did — the onus is not completely on doctors. MedPanel president Jason LaBonte told HDM that vendors are positioning wearables and apps as “a fad” by treating them as solely consumer-driven markets. (Not only does this turn doctors off, it also makes it less likely that consumers would think of asking their doctor about mHealth tool usage, I’d submit.)

But doctors aren’t just concerned about mHealth’s image. They also aren’t satisfied with current products, though that would change rapidly if there were a way to integrate mobile health data into EMR platforms directly. Sure, platforms like HealthKit exist, but it seems like doctors want something more immediate and simple.

Doctors also told MedPanel that mHealth devices need to be easier to use and generate data that has greater use in clinical practice.  Moreover, physicians wanted to see these products generate data that could help them meet practice manager and payer requirements, something that few if any of the current roster of mHealth tools can do (to my knowledge).

When it comes to physician awareness of specific products, only a few seem to have stood out from the crowd. MedPanel found that while 82% of doctors surveyed were aware of the Apple Watch, even more were familiar with Fitbit.

Meanwhile, the Microsoft Band scored highest of all wearables for satisfaction with ease of use and generating useful data. Given the fluid state of physicians’ loyalties in this area, Microsoft may not be able to maintain its lead, but it is interesting that it won out this time over usability champ Apple.

Industry Tries To Steamroll Physician Complaints About EMR Impact On Patient Face Time

Posted on June 9, 2015 I Written By Anne Zieger

Some doctors — and a goodly number of consumers, too — argue that the use of EMRs inevitably impairs the relationship between doctors and patients. After all, it’s just common sense that forcing a doctor to glue herself to the keyboard during an encounter undercuts that doctor’s ability to assess the patient, critics say.

Of course, EMR vendors don’t necessarily agree. And some researchers don’t share that view either. But having reviewed some comments by a firm studying physician EMR use, and the argument an EMR vendor made that screen-itis doesn’t worry docs, it seems to me that the “lack of face time” complaint remains an important one.

Consider how some analysts are approaching the issue. While admitting that doctors spend one-third to one-half of their time with patients using an EMR, and that physicians have complained about this extensively over the past several years, James Avallone, Director of Physician Research, told EHRIntelligence.com that doctors are at least using these systems more efficiently.

What’s important is that doctors are getting adjusted to using EMRs, Avallone suggests:

Whether [time spent with EMRs] is too much or too little, it’s difficult for us to say from our perspective…It’s certainly something that physicians are getting used to as it becomes more ingrained in their day-to-day behaviors. They’ve had more time to streamline workflow and that’s something that we’re seeing in terms of how these devices are being used at the point of care.

Another attempt to minimize the impact of EMRs on patient encounters comes from ambulatory EMR vendor NueMD. In a recent blog post, the editor quoted a study suggesting that other issues were far more important to doctors:

According to a 2013 study published in Health Affairs, only 25.8 percent of physicians reported that EHRs were threatening the doctor-patient relationship. Administrative burdens like the ICD-10 transition and HIPAA compliance regulations, on the other hand, were noted by more than 41 percent of those surveyed.

It’s certainly true that doctors worry about HIPAA and ICD-10 compliance, and that they could threaten the patient relationship, but only to the extent that they affect the practice overall. Meanwhile, if one in four respondents to the Health Affairs study said that EMRs were a threat to patient relationships, that should be taken quite seriously.

Of course, both of the entities quoted in this story are entitled to their perspective. And yes, there are clearly benefits to physician use of EMRs, especially once they become adjusted to the interface and workflow.

But if this quick sample of opinions is any indication, the healthcare industry as a whole seems to be blowing past physicians’ (and patients’) well-grounded concerns about the role EMR documentation plays in patient visits.

Someday, a new form factor for EMRs will arise — maybe augmented or virtual reality encounters, for example — which will alleviate the eyes-on-the-screen problem. Until then, I’d submit, it’s best to tackle the issue head on, not brush it off.

When Will Genomic Medicine Become As Common As Antibiotics?

Posted on April 29, 2015 I Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I’m completely and utterly fascinated by the work that so many companies are doing with genomic medicine. I think that’s a good thing since I believe genomic medicine is just now starting to make its way into mainstream medicine. Plus, over the next couple of years, genomic medicine is going to be a huge part of what every doctor does in healthcare. Maybe it won’t be as common as antibiotics (what is?), but it will be extremely important to healthcare.

With that in mind, I’ve been devouring this whitepaper on the evolving promise of genomic medicine. It offers such a great overview of what’s happening with genomic medicine.

For example, they offer a great list of reasons why genomic medicine has become so important today: decreased cost of sequencing, speed of sequencing, availability of genomic tests, ways the genome can be used, reimbursement by payors, etc. That’s such a powerful cocktail of improvements. Does anyone doubt that widespread genomic medicine is near?

I also love how the whitepaper highlights the three pillars of genomic medicine: sequencing, translational medicine and personalized healthcare. That provides a great framework for starting to understand what’s happening with genomic medicine. Plus, the whitepaper offers these places where we’re seeing real benefits in healthcare: prediction of drug response, diagnosis of disease, and identification of targeted therapies. While much of this is still being tested, I’m excited by its progress.

I still have a lot to learn about genomic medicine, but the evolving promise of genomic medicine whitepaper has me even more interested in what’s happening. I’d be interested to hear what companies you think are most interesting in the genomic medicine space.

By Supporting Digital Health, EMRs To Create Collective Savings of $78B Over Next Five Years

Posted on December 1, 2014 I Written By Anne Zieger

Here’s the news EMR proponents have been insisting would emerge someday, justifying their long-suffering faith in the value of such systems.  A new study from Juniper Research has concluded that EMRs will save $78 billion cumulatively across the globe over the next five years, largely by connecting digital health technologies together.

While I’m tempted to get cynical about this — my poor heart has been broken by so many unsupportable or conflicting claims regarding EMR savings over the years — I think the study definitely bears examination. If digital health technologies like smart watches, fitness trackers, sensor-laden clothing, smart mobile health apps, remote monitoring and telemedicine share a common backbone that serves clinicians, the study’s conclusions look reasonable on first glance.

According to Juniper, the growth of ACOs is pushing providers to think on a population health level and that, in turn, is propelling them to adopt digital health tech.  And it’s not just top healthcare leaders that are getting excited about digital health. Juniper found that over the last 18 months, healthcare workers have become significantly more engaged in digital healthcare.

But how will providers come to grips with the floods of data generated by these emerging technologies? Why, EMRs will do the job. “Advanced EHRs will provide the ‘glue’ to bring together the devices, stakeholders and medical records in the future connected healthcare environment,” according to Juniper report author Anthony Cox.

But it’s important to note that at present, EMRs aren’t likely to have the capacity to sort out the growing flood of connected health data on their own. Instead, it appears that healthcare providers will have to rely on data intermediary platforms like Apple’s HealthKit, Samsung’s SAMI (Samsung Architecture for Multimodal Interactions) and Microsoft Health. In reality, it’s platforms like these, not EMRs, that are truly serving as the glue for far-flung digital health data.

I guess what I’m trying to say is that on reflection, my cynical take on the study is somewhat justified. While they’ll play a very important role, I believe that it’s disingenuous to suggest that EMRs themselves will create huge healthcare savings.

Sure, EMRs are ultimately where the buck stops, and unless digital health data can be consumed by doctors at an EMR console, they’re unlikely to use it. But even though using EMRs as the backbone for digital health collection and population health management sounds peachy, the truth is that EMR vendors are nowhere near ready to offer robust support for these efforts.

Yes, I believe that the combination of EMRs and digital health data will prove to be very powerful over time. And I also believe that platforms like HealthKit will help us get there. I even believe that the huge savings projected by Juniper is possible. I just think getting there will be a lot more awkward than the study makes it sound.

Are Researchers Ready to Use Patient Health Records?

Posted on October 20, 2014 I Written By Andy Oram

There’s a groundswell of opinion throughout health care that to improve outcomes, we need to share clinical data from patients’ health records with researchers who are working on cures or just better population health measures. One recommendation in the much-studied JASON report–an object of scrutiny at the Office of the National Coordinator and throughout the field of health IT–called on the ONC to convene a conference of biomedical researchers.

At this conference, presumably, the health care industry will find out what researchers could accomplish once they had access to patient data and how EHRs would have to change to meet researchers’ needs. I decided to contact some researchers in medicine and ask them these very questions–along with the equally critical question of how research itself would have to evolve to make use of the new flood of data.

Safety Issues Remain Long After EMR Rollout

Posted on June 24, 2014 I Written By Anne Zieger

The following is a bit depressing, but shouldn’t come as a surprise. A new study published in the Journal of the American Medical Informatics Association has concluded that patient safety issues related to EMR rollouts continue long after the EMR has been implemented, according to a report in iHealthBeat.

Now, it’s worth noting that the study focused solely on the Veterans Health Administration’s EMR, which doubtless has quirks of its own. That being said, the analysis is worth a look.

To do the study, researchers drew on the Veterans Health Administration’s Informatics Patient Safety Office, which has tracked EMR safety issues since the VA’s EMR was implemented in 1999. Researchers chose 100 closed patient safety investigations related to the EMR that took place between August 2009 and May 2013, covering 344 incidents.

Researchers analyzed not only safety problems related to EMR technology, but also human operational factors such as workflow demands, organizational guidelines and user behavior, according to a BMJ release.

After reviewing the data, researchers found that 74 events were related to safety problems with EMR technology, including false alarms, computer glitches and system failures. They also discovered problems with “hidden dependencies,” situations in which a change in one part of the EMR system inadvertently changed important aspects in another part of the system.

The data also suggested that 25 other events were related to the unsafe use of technology, including mistakes in interpreting screens or human input errors.

All told, 70% of the investigations had found at least two reasons for each problem.

Commonly found safety issues included data transmission between different parts of the EMR system, problems related to software upgrades and EMR information display issues (the most commonly identified  problem), iHealthBeat noted.

After digging into this data, researchers recommended that healthcare organizations build “a robust infrastructure to monitor and learn from” EMRs, because EMR-related safety concerns have complicated social and technical origins. They stressed that this infrastructure is valuable not only for providers with newly installed EMRs, but also for those whose EMRs have been in place for a while, as both carry significant safety concerns.
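To make the recommendation concrete, here is a minimal sketch of the kind of monitoring infrastructure the researchers describe. It is not the VA’s actual tooling — the factor labels and the `summarize` helper are hypothetical — but it shows the core idea: tally closed investigations by contributing factor, and track how often events have multiple causes, as most did in the study:

```python
# Illustrative sketch: aggregate closed EMR safety investigations,
# where each case lists its contributing factors (technical and human),
# and report the share of cases with more than one factor.

from collections import Counter

def summarize(investigations: list[list[str]]) -> tuple[Counter, float]:
    """Count contributing factors and the share of multi-factor cases."""
    factor_counts = Counter(f for case in investigations for f in case)
    multi = sum(1 for case in investigations if len(case) >= 2)
    return factor_counts, multi / len(investigations)

# Hypothetical sample data, loosely echoing the factor types in the study.
cases = [
    ["display issue", "workflow demand"],
    ["software upgrade"],
    ["display issue", "user behavior"],
    ["data transmission", "hidden dependency"],
]
counts, multi_share = summarize(cases)
```

Even a simple tally like this surfaces the study’s central point: because most events have at least two causes, a monitoring program that tracks only technical faults (or only human factors) will systematically undercount the problem.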

They concede, however, that building such an infrastructure could prove quite difficult at this time, with organizations struggling with meaningful use compliance and the transition from ICD-9 to ICD-10.

However, the takeaway from this is that providers probably need to put safety monitoring — for both human and technical factors — closer to the top of their list of concerns. It stands to reason that both newly-installed and mature EMR implementations will face points of failure such as those described in the study, and these should not be ignored. (In the meantime, here’s one research effort going on which might be worth exploring.)