Where Medical Devices Fall Short: Can More Testing Help? (Part 2 of 2)

Posted on April 6, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://radar.oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

As we saw in the previous article, networks of medical devices suffer from many problems intrinsic to the use of wireless technology. But testimony at the joint workshop held by the FCC and FDA on March 31 revealed that problems with the devices themselves run deep. One speaker reported uncovering departures from the standards for transmitting information, which led to incompatibilities and failures. Another speaker found repeated violations of security standards. As a trivial example, many still use the insecure and long-deprecated WEP authentication instead of WPA.
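
As a rough illustration of what such a security audit might catch, here is a minimal Python sketch that flags devices still configured for WEP in a device inventory. The inventory format and device names are invented for this example; a real audit would pull this data from a network scanner or asset-management system.

    # Hypothetical sketch: flag devices still configured for the deprecated
    # WEP authentication scheme. The inventory structure is invented for
    # illustration only.

    DEPRECATED_AUTH = {"WEP", "OPEN"}  # WEP has been deprecated since 2004

    inventory = [
        {"device": "infusion-pump-12", "auth": "WEP"},
        {"device": "telemetry-node-3", "auth": "WPA2"},
        {"device": "patient-monitor-7", "auth": "WEP"},
    ]

    for entry in inventory:
        if entry["auth"].upper() in DEPRECATED_AUTH:
            print(f"INSECURE: {entry['device']} uses {entry['auth']}")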

Most devices incorporate generic radio transmitters from third parties, just as refrigerators use replaceable compressors and lawnmowers depend on engines from just a few manufacturers. When markets become commoditized in this way, one would expect reliability. Whatever problems the radio transmitters have, though, are compounded by the software layered on top: each device needs unique software, and that software can affect the transmissions.

The WiFi Alliance is a consortium of manufacturers that tests devices for reliability and interoperability. But because it includes no users or government representatives, some panelists thought it was too lenient toward manufacturers. The test plan itself is a trade secret (although it was described at a high level in the workshop by Mick Conley). Several speakers testified that devices could be certified by the Alliance and still fail to connect.

To my mind, testing is a weak response to design problems. It happens after the fact, and can punish a poor engineering process but not fix it. You can test-drive a car and note that the steering is a bit sluggish, but can you identify the software or the part that is causing the problem? And can you explain it to the salesman, presumably to be conveyed back to the engineers?

Cars tend to be reliable partly because of widespread competition that extends internationally, and partly because lawsuits keep the managers of the automobile companies alert to engineering problems. It would be a shame to need lawsuits to correct technical problems with medical devices, but refusing to buy them might do the trick. Test beds do provide warnings that can aid purchase decisions.

Unfortunately, the forum produced no real progress on the leading question of the day: whether a national test bed would be a good idea. It was generally recognized that test beds have to reflect the particular conditions at different institutions, and that multiple test beds would be needed to cover a useful range of settings. Even without further clarification of what a test bed would look like, or who would build it, a couple of panelists called for the creation of national test beds. More usefully, in my opinion, one speaker suggested a public repository of tests, which are currently the proprietary secrets of vendors or academic researchers.

So none of the questions about test beds received answers at the workshop, and no practical recommendations emerged. One would expect that gathering the leading experts in medical devices for seven and a half hours would allow them to come up with actionable next steps, or at least a framework for proceeding, but much of the workshop was given over to rhetoric about the importance of medical devices, the need for them to interoperate, and other standard rallying cries of health care reform. I sometimes felt that I was hearing a pitch for impressionable financial backers. And of course, there was always time taken up by vendors, providers, and regulators trying to point the finger at someone else for the problems.

Device and networking expert David Höglund has written up how the workshop fell short. I would like someone to add up all the doctors, all the senior engineers, and all the leading policy makers in the room, calculate how much they are paid per minute, and total the money wasted every time a speaker extols patient engagement, interoperability, or some other thesis that is already well known to everybody in the room. (Or perhaps they aren’t well known–another challenge to the medical field.)

Personally, I would write off most of the day as a drain on the US economy. But I have tried to synthesize the points we need to look at going forward, so that, I hope, the time you devoted to reading this article was well spent.

Where Medical Devices Fall Short: Can More Testing Help? (Part 1 of 2)

Posted on April 3, 2015 | Written By Andy Oram

Clinically, medical devices do amazing things–they monitor vital signs (which, as the term indicates, can have life-or-death implications), deliver care, and measure health in the form of fitness devices. But technologically, medical devices fall way short–particularly in areas of interference, interoperability, and security.

The weaknesses of devices, their networks, and the settings where they reside came up over and over again in a joint workshop held by the FCC and FDA on March 31. I had a chance to hear most of it via live broadcast, a modern miracle of networking in itself.

Officially, the topic of the gathering was test beds for medical devices. Test beds are physical centers set up to mimic real-life environments in which devices are used, hosting large numbers of devices from different manufacturers running the popular software and protocols that they would employ out in the field. The workshop may have been an outgrowth of a 2012 report from an FCC mHealth Task Force which recommended “FCC should encourage and lend its expertise for the creation and implementation of wireless test beds.” (Goal 4.4, page 13) I thought the workshop had little new to offer on test beds, however, as the panelists concentrated on gaps between clinical needs and the current crop of devices and networks.

Medical settings are notoriously difficult places to employ technology. One panelist even referred to them as “hostile environments,” which I think is going a bit far. After all, other industries employ devices outdoors where temperatures drop below zero or rise precariously, or underwater, or even on battlefields (which actually are also medical settings).

I don’t dispute that medical networks present their own particular challenges. Hospitals crowd many devices into small spaces (one picture displayed at the workshop showed 15 wireless devices in a hospital room). Some last for decades, churning away while networks, environments, and requirements change around them. Walls and equipment may contain lead, blocking signals. Meanwhile, patient safety requires correct operation, resilience, and iron-clad security, even as patients and their families expect access to a WiFi network just like the one in the cafe down the street.

And yet Shawn Jackman, Director of Strategic Planning at Kaiser Permanente IT, said that problems are usually not in the infrastructure but in the devices. Let’s look at the main issue, interference (on which the panelists spent much more time than on interoperability or security), and then at the ideas emerging from the workshop.

All the devices we associate with everyday network use (the IEEE 802.11 devices called WiFi) are squeezed into two bands of the radio spectrum, at 2.4 Gigahertz and 5 Gigahertz. When the inventors of WiFi told the world’s regulators that they had a new technology requiring spectrum in which to operate, freeing up existing bandwidth was hard to do, and the inventors were mere engineers, not powerful institutions such as the military or television broadcasters. So they resigned themselves to the use of the 2.4 and 5 Gigahertz bands, which were known as “junk spectrum” because all sorts of other equipment were allowed to emit radio-frequency noise in those bands.

Thus, because the bands are relatively narrow and crowded with all sorts of radio emissions, interference is hard to avoid. But you don’t want to enter a patient’s room and find her comatose because a key monitor was unable to send out its signal.
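
The arithmetic behind the crowding is easy to sketch. The following Python snippet is a toy calculation, not anything presented at the workshop: it computes the center frequencies of the 2.4 GHz channels and shows why only channels spaced at least five apart (the classic 1, 6, 11 in the US) avoid overlapping one another.

    # Back-of-the-envelope look at why the 2.4 GHz band is so crowded:
    # channel centers sit only 5 MHz apart, while each transmission is
    # roughly 20-22 MHz wide, so most channel pairs overlap.

    CHANNEL_WIDTH_MHZ = 22  # approximate 802.11b spectral width

    def center_mhz(channel: int) -> int:
        """Center frequency of 2.4 GHz channels 1-13."""
        return 2407 + 5 * channel

    def overlaps(ch_a: int, ch_b: int) -> bool:
        return abs(center_mhz(ch_a) - center_mhz(ch_b)) < CHANNEL_WIDTH_MHZ

    non_overlapping = [
        (a, b) for a in range(1, 14) for b in range(a + 1, 14)
        if not overlaps(a, b)
    ]
    # Only pairs at least 5 channels apart survive, which is why
    # 1, 6, and 11 form the classic non-overlapping trio.
    print(non_overlapping)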

Ironically, at the request of health IT companies, the FCC set aside two sets of spectrum for medical use, the Medical Device Radiocommunications Service (MedRadio) established in 1999 and the Wireless Medical Telemetry Service (WMTS) established in 2002. But these are almost completely ignored.

According to Shahid Shah, a medical device and software development expert, technologies that are dedicated to narrow markets such as health are crippled from the outset. They can’t benefit from the economies of scale enjoyed by mass market technologies, so they tend to be expensive, poorly designed, and locked into their vendors. Just witness the market for electronic health records. So the medical profession found devices designed for the medical bands unsatisfactory and turned to devices that used the WiFi spectrum.

In 2010, by the way, the FCC relaxed its rules and permitted new devices to enter the little-used spectrum at the edges of television channels, known as white spaces, but commercial exploitation of the new spectrum is still in its infancy.

Furthermore, the FCC has freed up the enormous bandwidth used for decades to broadcast TV networks, by kicking off the stubborn users (known irreverently as the “last grandmas”) who didn’t want to pay more for cable. An enormous stretch of deliciously long-range spectrum is theoretically available for public use–but the FCC won’t release it that way. Instead, it will auction it off to other large corporations.

Networks are unreliable across the board. How often do you notice the wireless Internet go down at a conference? (It happened to me at a conference I attended the day after the FCC/FDA workshop. At one conference, somebody even stole the hubs!) Further problems include network equipment of different ages that uses slightly different protocols, which prove particularly troublesome when devices have to change location. (Think of wheeling a patient down the hall.) And you can’t just make sure everything is working the first time a device is deployed. Changes in the environment and surrounding equipment can lead to a communications failure that never turned up before, or that turned up and you thought you had fixed.

Medical device and wireless expert David Höglund claims that WLAN can work in a healthcare environment for medical devices. He lays out three overarching tasks that administrators must do for success:

  • They have to understand how each application works and its communications patterns: real-time delivery of small packets, batch delivery of large volumes of data, etc.

  • They have to provide the coverage required for each device or application. Is it used in the hallways, the patient rooms, the labs? How about the elevators on which patients are transported?

  • They need to meet the application’s quality-of-service requirements. For instance, how long is a failure tolerable? For a device monitoring a patient’s heart in the ICU, a five-second interruption may be too long. (A sketch of how such requirements might be recorded follows this list.)
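
These three tasks could plausibly be captured in a per-device network profile maintained by a hospital’s network team. The sketch below is my own illustration, not anything Höglund proposed; all field names and numbers are invented.

    # Hypothetical per-device network profile covering traffic pattern,
    # required coverage, and quality-of-service tolerance.

    from dataclasses import dataclass, field

    @dataclass
    class DeviceNetworkProfile:
        name: str
        traffic_pattern: str  # e.g. "realtime-small-packets" or "batch-bulk"
        coverage_areas: list[str] = field(default_factory=list)
        max_outage_seconds: float = 5.0  # longest tolerable interruption

    icu_monitor = DeviceNetworkProfile(
        name="ICU cardiac monitor",
        traffic_pattern="realtime-small-packets",
        coverage_areas=["ICU rooms", "hallways", "elevators"],
        max_outage_seconds=5.0,
    )

    def outage_acceptable(profile: DeviceNetworkProfile, outage: float) -> bool:
        return outage <= profile.max_outage_seconds

    print(outage_acceptable(icu_monitor, 12.0))  # False: needs escalation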

Medical devices and hospital networks need to be more robust and more secure than the average WiFi network. This calls for redundant equipment, separate networks for different purposes, and lots of testing. Hence the need for test beds, which many hospitals and conglomerates set up for themselves. Should the FCC create a national test bed? We’ll look at that in the next installment of this article.

FTC Gingerly Takes On Privacy in Health Devices (Part 2 of 2)

Posted on February 11, 2015 | Written By Andy Oram

The first part of this series of articles laid out the difficulties of securing devices in the Internet of Things (particularly those used in the human body). Accepting that usability and security have to be traded off against one another sometimes, let’s look at how to make decisions most widely acceptable to the public.

The recent FTC paper on the Internet of Things demonstrates that they have developed a firm understanding of the problems in security and privacy. For this paper, they engaged top experts who had seen what happens when technology gets integrated into daily life, and they covered all the issues I know of. As devices grow in sophistication and spread to a wider population, the kinds of discussion the FTC held should be extended to the general public.

For instance, suppose a manufacturer planning a new way of tracking people–or a new use for their data–convened some forums in advance, calling on potential users of the device to discuss the benefits and risks. Collectively, the people most affected by the policies chosen by the manufacturer would determine which trade-offs to adopt.

Can ordinary people off the street develop enough concern about their safety to put in the time necessary to grasp the trade-offs? We should try asking them–we may be pleasantly surprised. Here are some of the issues they need to consider.

  • What can malicious viewers determine from data? We all may feel nervous about our employer learning that we went to a drug treatment program, but how much might the employer learn just by knowing we went to a psychotherapist? We now know that many innocuous bits of data can be combined to show a pattern that exposes something we wished to keep secret.

  • How guarded do people feel about their data? This depends largely on the answer to the previous question–it’s not so much the individual statistics reported, but the patterns that can emerge.

  • What data does the device need to collect to fulfill its function? If the manufacturer, clinician, or other data collector gathers up more than the minimal amount, how are they planning to use that data, and do we approve of that use? This is an ethical issue faced constantly by health care researchers, because most patients would like their data applied to finding a cure, but both the researchers and the patients have trouble articulating what’s kosher and what isn’t. Even collecting data for marketing purposes isn’t necessarily evil. Some patients may be willing to share data in exchange for special deals.

  • How often do people want to be notified about the use of their data, or asked for permission? Several researchers are working on ways to let patients express approval for particular types of uses in advance.

  • How long is data being kept? Most data users, after a certain amount of time, want only aggregate data, which is supposedly anonymized. Are they using well-established techniques for anonymizing the data? (Yes, trustworthy techniques exist. Check out a book I edited for my employer, Anonymizing Health Data. A toy illustration of one such technique follows this list.)
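
To make the anonymization point concrete, here is a toy Python version of one well-established step: generalizing quasi-identifiers (age, ZIP code) until at least k people share each released combination, the idea behind k-anonymity. This is a deliberately simplified sketch, not the full methodology covered in Anonymizing Health Data; the records are invented.

    # Toy k-anonymity check: generalize age into bands and truncate ZIP
    # codes, then verify that every released combination covers at least
    # k individuals.

    from collections import Counter

    def generalize(record: dict) -> tuple:
        decade = (record["age"] // 10) * 10
        age_band = f"{decade}-{decade + 9}"
        zip3 = record["zip"][:3] + "**"  # keep only the first 3 digits
        return (age_band, zip3)

    records = [
        {"age": 34, "zip": "02139"},
        {"age": 37, "zip": "02142"},
        {"age": 36, "zip": "02139"},
    ]

    k = 2
    groups = Counter(generalize(r) for r in records)
    safe = all(count >= k for count in groups.values())
    print(groups, f"meets k={k}: {safe}")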

I believe that manufacturers can find a cross-section of users to form discussion groups about the devices they use, and that these users can come to grips with the issues presented here. But even an engaged, educated public is not a perfect solution. For instance, a privacy-risking choice that’s OK for 95% of users may turn out harmful to the other 5%. Still, education for everyone–a goal expressed by the FTC as well–will undoubtedly help us all make safer choices.

FTC Gingerly Takes On Privacy in Health Devices (Part 1 of 2)

Posted on February 10, 2015 | Written By Andy Oram

Are you confused about risks to privacy when everything from keystrokes to footsteps is being monitored? The Federal Trade Commission is confused too. In January they released a 55-page paper summarizing results of discussions with privacy experts about the Internet of Things, plus some recommendations. After a big build-up citing all sorts of technological and business threats, the report kind of fizzles out. It rejected legislation specific to the IoT, but made several suggestions for “general privacy legislation,” such as requiring security on devices.

Sensors and controls are certainly popping up everywhere, so the FTC investigation comes at an appropriate time. My senator, Ed Markey, who has been a leader in telecom and technology for decades in Congress, recently released a report focused on automobiles. But the same concerns show up everywhere in various configurations. In this article I’ll focus on health care, and on the dilemma of security in that area.

No doubt about it, pacemakers and other critical devices can be hacked. It could be a movie: in Scene 1, a nondescript individual moves through a crowded city street, thumbing over a common notepad. In Scene 2, later, numerous people fall to the ground as their pacemakers fail. They just had the bad luck to be in the vicinity of the individual with the notepad, who had infected their implants with malicious code that took effect later.

But here are the problems with requiring more security. First, security in computers almost always rests on encryption, which leads to an increase in the size of the data being protected. The best-known FTC case regarding device security, in which the agency forced changes to cameras used in baby monitors, was appropriate for those external devices, which could absorb the extra overhead. But increased data size leads to an increase in memory use, which in turn requires more storage and computing power on a small embedded device, as well as more transmission time over the network. In the end, devices may have to be heavier and more costly, serious barriers to adoption.
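
To put a rough number on that overhead: with AES-GCM, a common authenticated cipher, each message carries a nonce and an authentication tag on top of the payload. The Python sketch below (using the third-party cryptography package) shows a 5-byte vital-sign reading growing to 33 bytes on the wire; the reading format is invented for illustration.

    # Rough illustration of per-message encryption overhead using AES-GCM
    # from the "cryptography" package (pip install cryptography). For tiny
    # sensor readings, the fixed nonce + tag can dwarf the payload itself.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)

    reading = b"HR=72"      # a 5-byte vital-sign sample (invented format)
    nonce = os.urandom(12)  # must travel with the message
    ciphertext = aesgcm.encrypt(nonce, reading, None)  # adds a 16-byte tag

    payload = nonce + ciphertext
    print(len(reading), "->", len(payload))  # 5 -> 33 bytes on the wire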

Furthermore, software always has bugs. Some lie dormant for years, like the notorious Heartbleed bug in the very software that web sites around the world depend on for encrypted communications. To provide security fixes, a manufacturer has to make it easy for embedded devices to download updated software–and any bug in that procedure leaves a channel for attack.

Perhaps there is a middle ground, where devices could be designed to accept updates only from particular computers in particular geographic locations. A patient would then be notified through email or a text message to hike it down to the doctor, where the fix could be installed. And the movie scene where malicious code gets downloaded from the street would be less likely to happen.
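
One way to imagine enforcing that middle ground in software: the device installs only firmware images that verify against a manufacturer public key baked in at production, with policy layers (such as requiring a clinic-side programming device) sitting on top. This is a speculative sketch using Ed25519 signatures from the cryptography package, not a description of any real pacemaker’s update path.

    # Speculative sketch of signature-gated firmware updates.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.exceptions import InvalidSignature

    # The manufacturer signs the firmware image at build time...
    manufacturer_key = Ed25519PrivateKey.generate()
    firmware = b"pacemaker-fw-v2.3"
    signature = manufacturer_key.sign(firmware)

    # ...and the device ships with only the public key baked in.
    device_trusted_key = manufacturer_key.public_key()

    def install_update(image: bytes, sig: bytes) -> bool:
        try:
            device_trusted_key.verify(sig, image)
        except InvalidSignature:
            return False  # reject tampered images
        # A clinic-only policy could additionally require a physical
        # programming wand or proximity check before this point.
        return True

    print(install_update(firmware, signature))         # True
    print(install_update(firmware + b"X", signature))  # False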

In the next part of this article I’ll suggest how the FTC and device manufacturers can engage the public to make appropriate privacy and security decisions.

The Future of Health Involves Human-Agent Collectives (Part 2 of 2)

Posted on February 3, 2015 | Written By Andy Oram

The first part of this article looked at the basic idea of devices and computer systems that can deal with loosely connected actors, human and mechanical. This part takes it further into current experiments in health care.

Devices Must Adapt to Collaborators’ Needs

To be a useful agent, a computer system must understand the context in which it is operating. Take pulse oximetry–the measurement of oxygen in the blood. It’s an easy procedure to perform, and is used in hospitals to tell whether a sick patient, such as one with lung problems, is in danger. The same technology can also be used by fitness buffs to tell whether they’re getting a good workout.

These are obviously very different goals–and the device used for pulse oximetry will also be used in different ways. In a risk monitoring situation, samples may be taken less often than during a healthy fitness workout. At a minimum, a device should be configurable so that it gives the timing and accuracy needed in a particular setting. It should also be easy to turn a device on and off if it is needed for a limited time period, such as a workout.

Diego Alonso, a researcher at MD PnP, points to analgesia (the administration of painkillers) in the hospital as an example of competing needs that must be reconciled by a supervisor, human or machine. So long as the patient is stable, the painkiller should be administered. But if a monitor notices a drop in the patient’s vital signs, the painkiller’s dose must be reduced.

A popular standard for exchanging data among devices is the Data Distribution Service (DDS). The standard is rich and complex, typical of those produced by the Object Management Group. But among its virtues is the ability to specify how often you want data from a particular device. OpenICE, among many other systems, uses DDS.

In short, the frequency and accuracy of data collection should be configurable. As patterns of human behavior are better understood, devices may become even more responsive to the contexts in which they are needed.
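
As a small illustration of such configurability, the sketch below lets the same pulse-oximetry read loop run under different contexts with different sampling intervals. The context names and rates are invented; following the article’s example, the risk-monitoring setting samples less often than the workout setting.

    # Hedged sketch of context-dependent sampling. A real system might
    # receive these intervals through a DDS QoS profile or a device
    # configuration service; here they are hard-coded for illustration.

    import time

    SAMPLE_INTERVALS = {
        "risk_monitoring": 30.0,  # seconds between spot checks
        "fitness_workout": 5.0,   # tighter feedback loop during exercise
    }

    def read_spo2() -> float:
        """Placeholder for the real pulse-oximeter read."""
        return 97.0

    def sample(context: str, n_samples: int) -> list[float]:
        interval = SAMPLE_INTERVALS[context]
        readings = []
        for _ in range(n_samples):
            readings.append(read_spo2())
            time.sleep(interval)
        return readings

    print(sample("fitness_workout", n_samples=2))  # samples ~5 s apart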

Even before the current move to standards, Capsule Tech managed to get devices to talk to EHRs through the grueling effort of interpreting the inputs and outputs of each system and crafting protocols to make them work together.

Started in 1997, the company has recently expanded from merely sharing data to developing useful tools based on data, such as alerts and a modest amount of analytics. Some of these tools demonstrate a kind of adaptability reminiscent of a human-agent collective.

For instance, alerts are crucial in any hospital environment, but notorious for crying wolf–90% can be false. In addition to sending data to the EHR, Capsule’s SmartLinx Medical Device Information System sends near-real-time alarm data to its Alarm Management System. This helps hospitals manage their alarms, in line with the Joint Commission’s National Patient Safety Goals.

SmartLinx does not suppress any information, but when reporting it through the Alarm Management System to the clinician’s mobile device, it includes some context to help the clinician decide whether the alert needs a response. Some context involves basics such as who, where, when, and which device was activated. Other context can consist of physiological data such as the patient’s heart rate and how long the alarm has been sounding.

Additionally, to provide actionable, timely information that aids in human decision making, Capsule has built an early warning scoring system application that uses vital sign information to calculate an immediate general health status score for patients and to identify those likely to deteriorate. The application also guides the care team through appropriate actions. This may be the beginning of an intelligent, integrated health system.

Computer Systems Must Be Sensitive to Bad Input and Failure

An unfortunate tenet of human-agent collectives is that agents can’t be trusted. The most basic example is system failure. If you don’t hear from a device, does that mean the patient is fine or that the device’s battery has run out of power? DDS offers a handshake or heartbeat, the common way for distributed computing systems to determine whether part of the system has gone bust.
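
A toy version of that heartbeat logic: every device must check in before a deadline, and silence past the deadline is treated as a fault rather than as good news. DDS expresses this through its liveliness and deadline QoS policies; the sketch below just compares timestamps and is purely illustrative.

    # Minimal heartbeat monitor: silence past the deadline is a fault.

    import time

    DEADLINE_SECONDS = 10.0
    last_heartbeat: dict[str, float] = {}

    def heartbeat(device_id: str) -> None:
        last_heartbeat[device_id] = time.monotonic()

    def silent_devices() -> list[str]:
        now = time.monotonic()
        return [
            dev for dev, seen in last_heartbeat.items()
            if now - seen > DEADLINE_SECONDS
        ]

    heartbeat("pulse-ox-4")
    # ...later, a monitoring loop would alarm on anything returned here:
    print(silent_devices())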

Provenance is another requirement for collaborative environments. This means recording when a measurement was taken, and what person or device was responsible. There must also be ways to protect against data that arrives late or is assigned the wrong timestamp. When data is entered by humans, errors can be assumed as a matter of course, even in something as simple as spelling the name of a medication manufactured by your company.

More subtle is input from inexact devices, and worse still is the potential for malicious manipulation. I have heard of instances where people who got rewards from their employers for reporting exercise put their fitness devices on their dogs. Using analytics, a health care system should be able to tell that a series of sudden 20-mile-per-hour rushes interrupted by inactivity is not human activity.
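
A plausibility check of that kind could be as simple as the sketch below, which flags activity samples whose implied speed exceeds a human ceiling; the record also carries the provenance fields (timestamp and source) discussed above. All names and thresholds are invented.

    # Toy plausibility check for wearable activity data.

    from dataclasses import dataclass

    HUMAN_MAX_MPH = 15.0  # generous ceiling for sustained human running

    @dataclass
    class ActivitySample:
        speed_mph: float
        timestamp: str  # provenance: when the measurement was taken
        source: str     # provenance: which device reported it

    samples = [
        ActivitySample(3.1, "2015-02-03T09:00", "fitband-881"),
        ActivitySample(20.4, "2015-02-03T09:05", "fitband-881"),  # the dog
    ]

    for s in samples:
        if s.speed_mph > HUMAN_MAX_MPH:
            print(f"Implausible: {s.speed_mph} mph from {s.source} at {s.timestamp}")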

Ethical and Technical Considerations

Lots of issues come up as simple human-computer interaction evolves into collaboration among agents. I’ve already mentioned error detection and provenance. Other issues include flexibility in computers taking or relinquishing control (agile teaming), legal responsibility, providing each agent with the right incentives, considering when to engage the user’s attention (instead of taking action behind the scenes), and offering the proper interface to do so. Connected health is a deep concept offering a lot to explore, and technologies will get better as we understand more of it.

ONC Annual Meeting – Who’s Going?

Posted on January 28, 2015 | Written By

When Carl Bergman isn't rooting for the Washington Nationals or searching for a Steeler bar, he’s Managing Partner of EHRSelector.com, a free service for matching users and EHRs. For the last dozen years, he’s concentrated on EHR consulting and writing. He spent the 80s and 90s as an itinerant project manager doing his small part for the dot com bubble. Prior to that, Bergman served a ten year stretch in the District of Columbia government as a policy and fiscal analyst.

ONC’s Agenda – February 2-3, Washington, DC

Next Monday, ONC holds its annual meeting in downtown DC. I’m going, one small advantage of living here. Here’s the agenda. To see day two, click on the agenda header.

I’m particularly interested in these topics:

  • Adverse event reporting,
  • Interoperability standards,
  • Meaningful Use program’s future, and
  • Usability.

Looking at the agenda, I should stay busy, with one exception: there isn’t much on usability. The word appears on the agenda only once. That’s not a surprise, since ONC has pretty much relinquished any role in usability to the vendors.

How important do you think the ONC meeting and the ONC-run Health Datapalooza will be now that meaningful use has pretty much run its course? Will these two meetings gain steam and influence, or will organizations start to go other places? I’ll be interested to watch that trend as I attend the event.

If you can’t attend, you can follow along on various webcasts and Twitter. If you do plan to attend, I’d love to see you there. To reach me, click on my name in my profile blurb, or email carl@ehrselector.com.

A Rub On Tattoo for Diabetics

Posted on January 27, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I’ve been covering a lot of wearables and sensors over on Smart Phone Healthcare through the years. It’s been great to see the evolution, and I still think we’re just at the very beginning of what is going to be possible with these health sensors. However, the leaks in the dam are starting to appear, and soon we’ll have a tidal wave of amazing health data from these health sensors.

Don’t believe me? Check out this story on Gizmodo about a Rub On Tattoo that measures a person’s blood glucose levels. For those too busy to click over, here’s an excerpt:

Pricking your finger for a blood glucose test will never, ever be fun. Thankfully, scientists have been hard at work on a bloodless and needleless alternative: a rub-on temporary tattoo that, as weird as it sounds, gently sucks glucose through the surface of the skin.

The thin, flexible device created by nanoengineers at UCSD is based on the much bulkier GlucoWatch, a now-discontinued wristband that worked through the same glucose-sucking principle. But the electric current GlucoWatch used to attract glucose to the surface of the skin was too high, and wearers were not keen on the discomfort. This temporary tattoo gets around the problem by using a gentler but still effective current.

Unfortunately, we’re still a few years out from this becoming a market-ready product, but it’s another illustration of the kind of research and ingenuity that’s being put into the health sensors marketplace. I’m personally concerned about my risk for diabetes, and so I’m extremely excited about new developments around diabetes. However, this is just one of many more developments that are going to change the world of healthcare as we know it.

What do you think of this new wave of sensors? How will the medical establishment integrate all this new data? What other changes are happening which we should keep an eye on? I don’t think most doctors, practices, hospitals, EMR companies, etc are ready for what’s happening.

Adverse Event Reporting and EHRs: The MEDTECH Act’s Effects

Posted on December 18, 2014 | Written By Carl Bergman

Medical systems generate adverse event (AE) reports to improve service delivery and public safety.

As I described in my first blog post on Adverse Events, these reports are both a record of what went wrong and a rich source for improving workflow, process and policy. They can nail responsibility not only for bad acts, but also for bad actors, and can help distinguish between the two. The FDA gathers AE reports to look for important health related patterns and, if needed, to trigger recalls, modifications and public alerts.

EHRs generate AEs, but the FDA doesn’t require reporting them. Reporting is required only for medical devices as defined by the FDA, and EHRs aren’t so defined. However, users sometimes report EHR-related AEs. Now, there’s proposed legislation that would rule out classifying EHRs as medical devices and stop any consideration of EHR reports.

MEDTECH Act’s Impact

EHRs are benign software systems that need minimal oversight. At least that’s what the MEDTECH Act’s congressional sponsors, Senators Orrin Hatch (R-Utah) and Michael Bennet (D-Colorado), think. If they have their way – and much of the EHR industry hopes they do – the FDA can forget about regulating EHRs and tracking any EHR-related AEs.

EHRs and Adverse Events

Currently, if you ask MAUDE, the FDA’s device adverse event tracking system, about EHRs, you don’t get much, as you might expect. Up to October, MAUDE has 320,000 AEs. Of these, about 30 mention an EHR in passing. (There may be many more, but you can’t search for phrases such as “electronic health,” etc.) While the FDA hasn’t defined EHRs as a device, vendors are afraid it may. Their fear is based on this part of the FDA’s device definition standard:

[A]n instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory which is:

…[I]ntended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals…

I think this section clearly covers EHRs. They are intended for diagnosis, cure, mitigation, etc., of disease. Consistent public policy in general and a regard for protecting the public’s health, I think, augur for mandatory reporting of EHR-caused AEs.

Why then aren’t EHRs devices that require AE reporting? In a word, politics. The FDA’s been under pressure from vendors who contend their products aren’t devices, just software. They also don’t want their products subject to criticism for failures, especially in instances where they have no control over the process. That may be understandable from a corporate point of view, but there are several reasons for rejecting that point of view. Consider what the FDA currently defines as a medical device.

Other Devices. The FDA captures AE reports on an incredible number of devices. A few examples:

  • Blood pressure computers
  • Crutches
  • Drug dose calculators
  • Ice bags
  • Lab gear – practically all
  • Robotic telemedicine devices, and many, many more.

ECRI on EHR Adverse Events

The respected patient safety NGO, the ECRI Institute, puts the issue squarely. Each year, it publishes its Top Ten Health Technology Hazards. Number one is inadequate alarm configuration policies and practices. Number two: “Incorrect or missing data in electronic health records and other health IT systems.” Its report says:

Many care decisions today are based on data in an electronic health record (EHR) or other IT-based system. When functioning well, these systems provide the information clinicians need for making appropriate treatment decisions. When faults or errors exist, however, incomplete, inaccurate, or out-of-date information can end up in a patient’s record, potentially leading to incorrect treatment decisions and patient harm. What makes this problem so troubling is that the integrity of the data in health IT (HIT) systems can be compromised in a number of ways, and once errors are introduced, they can be difficult to spot and correct. Examples of data integrity failures include the following:

  • Appearance of one patient’s data in another patient’s record (i.e., a patient/data mismatch)
  • Missing data or delayed data delivery (e.g., because of network limitations, configuration errors, or data entry delays)
  • Clock synchronization errors between different medical devices and systems
  • Default values being used by mistake, or fields being prepopulated with erroneous data
  • Inconsistencies in patient information when both paper and electronic records are used
  • Outdated information being copied and pasted into a new report

Programs for reporting and reviewing HIT-related problems can help organizations identify and rectify breakdowns and failures.

ECRI spells out why AE reporting is so important for EHRs:

…[S]uch programs face some unique challenges. Chief among these is that the frontline caregivers and system users who report an event—as well as the staff who typically review the reports—may not understand the role that an HIT system played in an event…

The MEDTECH Act’s Effects

The move to curtail the FDA’s EHR jurisdiction is heating up. Senators Hatch and Bennet’s proposed act exempts EHRs from FDA jurisdiction by defining EHRs as passive data repositories.

Most industry chatter about the act has concerned its exempting EHRs and others from the ACA’s medical device tax. However, by removing FDA’s jurisdiction, it would also exempt EHRs from AE reports. Repealing a tax is always popular. Preventing AE reports may make vendors happy, but clinicians, patients and the public may not be as sanguine.

The act’s first two sections declare that any software whose main purpose is administrative or financial won’t come under device reporting.

Subsection (c) is the heart of the act, which exempts:

Electronic patient records created, stored, transferred, or reviewed by health care professionals or individuals working under supervision of such professionals that functionally represent a medical chart, including patient history records,

Subsection (d) says that software that conveys lab or other test results is exempt.

Subsection (e) exempts any software that makes recommendations for patient care.

There are several problems with this language. The first is that while it goes to lengths to say what is not a device, it is silent about what is. Where is the line drawn? If an EHR includes workflow, as all do, is it exempt because it also has a chart function? The bill doesn’t say.

Subsection (d) on lab gear is also distressing. Currently, most lab gear is an FDA device. Now, if your blood chemistry report is fouled by the lab’s equipment and ends up harming you, it’s reportable. Under MEDTECH, it may not be.

Then there’s the question of who’s going to decide what’s in and what’s out. Is it the FDA or ONC, or both? Who knows? Most important, the bill’s negative approach fails to account for AEs such as the one ECRI describes: “Default values being used by mistake, or fields being prepopulated with erroneous data.”

Contradictory Terms

The act has a fascinating proviso in subsection (c):

…[P]rovided that software designed for use in maintaining such patient records is validated prior to marketing, consistent with the standards for software validation relied upon by the Secretary in reviewing premarket submissions for devices.

This language refers to information that device manufacturers file with HHS prior to marketing. Oddly, it implies that EHRs are medical devices under the FDA’s strictest purview, though the rest of the act says they are not. Go figure.

What’s It Mean?

The loud applause for the MEDTECH Act coming from the EHR industry is due to its letting vendors off the medical device hook. I think the industry should be careful about what it’s wishing for. Without effective reporting, adverse events will still occur, but without corrective action. In that case, everything will seem to go swimmingly. Vendors will be happy. Congress can claim to be responsive. All will be well.

However, this legislative penny in the fuse box will prove that keeping the lights on, regardless of consequences, isn’t the best policy. When something goes terribly wrong but isn’t reported, patients will pay a heavy price. Don’t be surprised when some member of Congress demands to know why the FDA didn’t catch it.

Fitbit Data Being Used In Personal Injury Case

Posted on December 8, 2014 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Lately, there’s been a lot of debate over whether data from wearable health bands is useful to clinicians or only benefits the consumer user. On the one hand, there are those who say that a patient’s medical care could be improved if doctors had data on their activity levels, heart rate, respirations and other standard metrics. Others, meanwhile, suggest that unless it can be integrated into an EMR and made usable, such data is just a distraction from other more important health indicators.

What hasn’t come up in these debates, but might come up far more frequently in the future, is the idea that health band data can be used in personal injury cases to show the effects of an accident on a plaintiff. According to Forbes, a law firm in Calgary is working on what may be the first personal injury case to leverage smart band data, in this case activity data from a Fitbit.

The plaintiff, a young woman, was injured in an accident four years ago. While Fitbit hadn’t entered the market yet at the time, her lawyers at McLeod Law believe they can establish that she led an active lifestyle prior to her accident. They’ve now started processing data from her Fitbit to show that her activity levels have fallen under the baseline for someone of her age and profession.

It’s worth noting that rather than using Fitbit data directly, they’re processing it using analytics platform Vivametrica, which uses public research to compare people’s activity data with that of the general population. (Its core business is to analyze data from wearable sensor devices for the assessment of health and wellness.) The plaintiff will share her Fitbit data with Vivametrica for several months to present a rich picture of her activities.

Using even analyzed, processed data generated by a smart band is “unique,” according to her attorneys. “Till now we’ve always had to rely on clinical interpretation,” says Simon Muller of McLeod Law. “Now we’re looking at longer periods of time through the course of the day, and we have hard data.”

But even if the woman wins her case, there could be a downside to this trend. As Forbes notes, insurers will want wearable device data as much as plaintiffs will, and while they can’t force claimants to wear health bands, they can request a court order demanding the data from whoever holds the data. Dr. Rick Hu, co-founder and CEO of Vivametrica, tells Forbes that his company wouldn’t release such data, but doesn’t explain how he will be able to refuse to honor a court-ordered disclosure.

In fact, wearable devices could become a “black box” for the human body, according to Matthew Pearn, an associate lawyer with Canadian claims processing firm Foster & Company. In a piece for an insurance magazine, Pearn points out that it’s not clear, at least in his country, what privacy rights the wearers of health bands maintain over the data they generate once they file a personal injury suit.

Meanwhile, it’s still not clear how HIPAA protections apply to such data in the US. When FierceHealthIT recently spoke with Deven McGraw, a partner in the healthcare practice of Manatt, Phelps & Phillips, she pointed out that HIPAA only regulates data “in the hands of, with the control of, or within the purview of a medical provider, a health plan or other covered entity under the law.”  In other words, once the wearable data makes it into the doctor’s record, HIPAA protections are in force, but until then they are not.

All told, it’s pretty sobering to consider that millions of consumers are generating wearables data without knowing how vulnerable it is.

Adverse Event Reporting: What Is It?

Posted on December 3, 2014 | Written By Carl Bergman

Eric Duncan’s Ebola death in Dallas was, to say the least, an adverse event (AE). Famously now, when he had a high fever, pronounced pain, etc., he went to the ER at Texas Health Presbyterian Hospital and was sent home with antibiotics. Three days later, much worse, he came back by ambulance.

In the aftermath of Duncan’s death, the hospital’s EHR, Epic, came in for blame, though it was later cleared. Many questions have come from Duncan’s death, including how our medical system handles such problems. Articles often use the term adverse event, but rarely mention reporting. I think it’s important to take a direct look at our adverse event reporting systems and where EHRs and AEs are headed. This blog post looks at AE systems. The next will look at where EHRs fit in.

The FDA: Ground Zero for Adverse Event Reports

HHS’ Food and Drug Administration has prime, but not exclusive, jurisdiction over adverse event reports, breaking them into three classes:

  • Medicines
  • Medical Devices, and
  • Vaccines.

Four FDA systems cover these classes:

  • FAERS. This is the FDA’s system for drug related adverse reports. It collects information for the FDA’s post-marketing surveillance of drug and biologic products. For example, if there’s a problem with Prozac, it’s reported here.
  • MAUDE. The Manufacturer and User Facility Device Experience reporting system. If an X-Ray machine malfunctions or lab equipment operates defectively, this is where the report goes.
  • VAERS. Vaccine adverse reports are collected here.
  • MEDSUN. This voluntary device reporting system gathers more detailed information than MAUDE. It’s run as a collaboration of the FDA and several hundred hospitals, clinics, etc. (Disclosure: My wife was a MAUDE project system developer.) MEDSUN captures details and incidents, such as close calls or events that had a potential for harm but did not cause any. MEDSUN has two subsystems: HeartNet for electrophysiology labs and KidNet for neonatal and pediatric ICUs.

MEDSUN Reporting Poster

State Adverse Event Reporting Systems

Several states require Adverse Event reporting in addition to FDA reports. Twenty-seven states and DC require Adverse Event reports, with varying coverage and reporting requirements. Some states, such as Pennsylvania, have an extensive, public system for reporting and analysis.

Patient Safety Organizations

Added to federal and state organizations are many patient safety organizations (PSOs) with an interest in adverse events. Some are regional or state groups. Others are national nonprofits, such as the ECRI Institute.

The Safety Reporting Paradox

If you delve into adverse event reporting systems, you’ll quickly see that some institutions are more present than others. That doesn’t necessarily mean they are prone to bad events. In fact, they may be the most safety conscious, reporting more of their events than others do. Moreover, high reporters often have policies that encourage AE reporting to find systemic problems without punitive consequences.

Many safety prevention systems work this way. Those in charge recognize it’s important to get all the facts out. They realize adopting a punitive approach drives behavior underground.

For example, the FAA has learned this the hard way. Recently on vacation, I met two air traffic controllers who contrasted the last Bush administration’s approach with today’s. Under Bush’s FAA, errors were subject to public shaming. The result was that many systemic problems were hidden. Now, the FAA encourages reporting and treats individual behavior separately. The result is that incidents are reported and analyzed more often. If individual behavior is culpable, it’s addressed as needed.

In the next part, I’ll look at how EHRs fit into the current system and the congressional efforts to exempt them from reporting AEs, a move that I think is akin to putting pennies in a fuse box.