Health IT Security: What Can the Association for Computing Machinery (ACM) Contribute?

Posted on February 24, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

A dazed awareness of security risks in health IT has bubbled up from the shop-floor administrators and conformance directors (who have always worried about them) to C-suite offices and the general public, thanks to a series of oversized data breaches that recently peaked in the Anthem Health Insurance break-in. Now the US Senate Health Committee is taking up security, explicitly referring to Anthem. The inquiry is extremely broad, though, promising to address “electronic health records, hospital networks, insurance records, and network-connected medical devices.”

FTC Gingerly Takes On Privacy in Health Devices (Part 2 of 2)

Posted on February 11, 2015 | Written By

The first part of this series of articles laid out the difficulties of securing devices in the Internet of Things (particularly those used in the human body). Accepting that usability and security have to be traded off against one another sometimes, let’s look at how to make decisions most widely acceptable to the public.

The recent FTC paper on the Internet of Things demonstrates that they have developed a firm understanding of the problems in security and privacy. For this paper, they engaged top experts who had seen what happens when technology gets integrated into daily life, and they covered all the issues I know of. As devices grow in sophistication and spread to a wider population, the kinds of discussion the FTC held should be extended to the general public.

For instance, suppose a manufacturer planning a new way of tracking people–or a new use for their data–convened some forums in advance, calling on potential users of the device to discuss the benefits and risks. Collectively, the people most affected by the policies chosen by the manufacturer would determine which trade-offs to adopt.

Can ordinary people off the street develop enough concern for their safety to put in the time necessary to grasp the trade-offs? We should try asking them–we may be pleasantly surprised. Here are some of the issues they need to consider.

  • What can malicious viewers determine from data? We all may feel nervous about our employer learning that we went to a drug treatment program, but how much might the employer learn just by knowing we went to a psychotherapist? We now know that many innocuous bits of data can be combined to show a pattern that exposes something we wished to keep secret.

  • How guarded do people feel about their data? This depends largely on the answer to the previous question–it’s not so much the individual statistics reported, but the patterns that can emerge.

  • What data does the device need to collect to fulfill its function? If the manufacturer, clinician, or other data collector gathers up more than the minimal amount, how are they planning to use that data, and do we approve of that use? This is an ethical issue faced constantly by health care researchers, because most patients would like their data applied to finding a cure, but both the researchers and the patients have trouble articulating what’s kosher and what isn’t. Even collecting data for marketing purposes isn’t necessarily evil. Some patients may be willing to share data in exchange for special deals.

  • How often do people want to be notified about the use of their data, or asked for permission? Several researchers are working on ways to let patients express approval for particular types of uses in advance.

  • How long is data being kept? Most data users, after a certain amount of time, want only aggregate data, which is supposedly anonymized. Are they using well-established techniques for anonymizing the data? (Yes, trustworthy techniques exist. Check out a book I edited for my employer, Anonymizing Health Data.)
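To make “well-established techniques” a little more concrete: one common building block is k-anonymity, where quasi-identifiers such as age and ZIP code are generalized until every combination appears at least k times in the released data. Here is a minimal, illustrative sketch in Python; the field names and generalization rules are my own invention, not taken from the book or any particular standard.

    from collections import Counter

    # Minimal k-anonymity check: every combination of generalized
    # quasi-identifiers (age band, 3-digit ZIP prefix) must appear at
    # least k times. Field names and rules are illustrative only.
    def generalize(record):
        decade = (record["age"] // 10) * 10
        return (f"{decade}-{decade + 9}", record["zip"][:3] + "**")

    def is_k_anonymous(records, k=5):
        counts = Counter(generalize(r) for r in records)
        return all(c >= k for c in counts.values())

    patients = [
        {"age": 34, "zip": "02139", "diagnosis": "asthma"},
        {"age": 37, "zip": "02134", "diagnosis": "diabetes"},
    ]
    print(is_k_anonymous(patients, k=2))  # True: both generalize to (30-39, 021**)

Real anonymization work layers several such techniques together and measures re-identification risk, but even this toy check shows the basic trade: the coarser the generalization, the lower the risk and the less useful the data.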

I believe that manufacturers can find a cross-section of users to form discussion groups about the devices they use, and that these users can come to grips with the issues presented here. But even an engaged, educated public is not a perfect solution. For instance, a privacy-risking choice that’s OK for 95% of users may turn out harmful to the other 5%. Still, education for everyone–a goal expressed by the FTC as well–will undoubtedly help us all make safer choices.

FTC Gingerly Takes On Privacy in Health Devices (Part 1 of 2)

Posted on February 10, 2015 | Written By

Are you confused about risks to privacy when everything from keystrokes to footsteps is being monitored? The Federal Trade Commission is confused too. In January they released a 55-page paper summarizing results of discussions with privacy experts about the Internet of Things, plus some recommendations. After a big build-up citing all sorts of technological and business threats, the report kind of fizzles out. It rejects legislation specific to the IoT, but it does offer several suggestions for “general privacy legislation,” such as requiring security on devices.

Sensors and controls are certainly popping up everywhere, so the FTC investigation comes at an appropriate time. My senator, Ed Markey, who has been a leader in telecom and technology for decades in Congress, recently released a report focused on automobiles. But the same concerns show up everywhere in various configurations. In this article I’ll focus on health care, and on the dilemma of security in that area.

No doubt about it, pacemakers and other critical devices can be hacked. It could be a movie: in Scene 1, a nondescript individual moves through a crowded city street, thumbing away at a common notepad. In Scene 2, later, numerous people fall to the ground as their pacemakers fail. They just had the bad luck to be in the vicinity of the individual with the notepad, who had infected their implants with malicious code that took effect later.

But here are the problems with requiring more security. First, security in computers almost always rests on encryption, which leads to an increase in the size of the data being protected. The best-known FTC case regarding device security, where they forced changes for cameras used in baby monitors, was appropriate for these external devices that could absorb the extra overhead. But increased data size leads to an increase in memory use, which in turn requires more storage and computing power on a small embedded device, as well as more transmission time over the network. In the end, devices may have to be heavier and more costly, serious barriers to adoption.
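To put a number on that overhead: with an authenticated cipher such as AES-GCM, every message carries a nonce and an authentication tag on top of the plaintext, so even a tiny sensor reading grows before it hits the radio. A rough sketch using the widely used Python cryptography package; the 24-byte “reading” is a made-up payload, not any real device format.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Illustrative only: a tiny "sensor reading" encrypted with AES-GCM.
    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)

    reading = b"HR=072;SPO2=98;BP=120/80"   # 24-byte hypothetical payload
    nonce = os.urandom(12)                  # must travel with the ciphertext
    ciphertext = aesgcm.encrypt(nonce, reading, None)

    # Ciphertext = plaintext + 16-byte auth tag, plus 12 bytes of nonce on the wire.
    print(len(reading), len(ciphertext), len(nonce) + len(ciphertext))  # 24 40 52

Twenty-eight extra bytes is nothing on a laptop, but on a battery-powered implant that transmits thousands of readings a day, the added storage, computation, and radio time all come out of the same tight budget.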

Furthermore, software always has bugs. Some lie dormant for years, like the notorious Heartbleed bug in the very software that web sites around the world depend on for encrypted communications. To provide security fixes, a manufacturer has to make it easy for embedded devices to download updated software–and any bug in that procedure leaves a channel for attack.

Perhaps there is a middle ground, where devices could be designed to accept updates only from particular computers in particular geographic locations. A patient would then be notified through email or a text message to hike it down to the doctor, where the fix could be installed. And the movie scene where malicious code gets downloaded from the street would be less likely to happen.
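One way to picture that middle ground: the device applies a firmware image only if it arrives from an allowlisted clinic endpoint and carries a valid signature from the manufacturer’s key baked into the device. A minimal sketch, again with the Python cryptography package; the endpoint names and policy are hypothetical, not something the FTC or any manufacturer has specified.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )
    from cryptography.exceptions import InvalidSignature

    # Hypothetical policy: updates are accepted only from approved clinic hosts
    # and only if signed by the manufacturer's key stored on the device.
    APPROVED_ENDPOINTS = {"updates.clinic-a.example", "updates.clinic-b.example"}

    def accept_update(source_host: str, firmware: bytes, signature: bytes,
                      manufacturer_key: Ed25519PublicKey) -> bool:
        if source_host not in APPROVED_ENDPOINTS:
            return False
        try:
            manufacturer_key.verify(signature, firmware)
        except InvalidSignature:
            return False
        return True

    # Demo: the manufacturer signs a firmware blob; the device verifies it.
    manufacturer_private = Ed25519PrivateKey.generate()
    firmware = b"\x00" * 64                      # stand-in for a firmware image
    signature = manufacturer_private.sign(firmware)
    device_key = manufacturer_private.public_key()

    print(accept_update("updates.clinic-a.example", firmware, signature, device_key))  # True
    print(accept_update("evil.example", firmware, signature, device_key))              # False

Of course, the code that enforces this policy can itself have bugs, which is exactly the author’s point: every update channel is also an attack surface, so narrowing who can use it and when is a mitigation, not a cure.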

In the next part of this article I’ll suggest how the FTC and device manufacturers can engage the public to make appropriate privacy and security decisions.

Healthcare During and After 9/11

Posted on September 11, 2011 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

If you’re interested in reading a more personal post about 9/11, you can check out this post I did on EMR and HIPAA about teaching the new generation about 9/11.

As I’ve watched the various news stories, documentaries and memorials about 9/11, this 60 Minutes news story about a doctor caring for 9/11 survivors was incredibly fascinating. Turns out, he set up a free clinic for the survivors and also started doing interviews with these people so that their stories would be recorded for others to hear. If you didn’t see it, you should watch it below.

The opening of the 60 Minutes video had me wondering how healthcare dealt with all the injuries in the aftermath of September 11th. So many angles of September 11th have been covered, yet I can’t remember ever seeing the stories of the hospitals and doctors trying to treat the influx of patients that no doubt overwhelmed their doors. If you know of some, I’d love to see them.

Maybe it’s not such a terrible thing that the focus hasn’t been on the healthcare stories. Maybe it’s better that we focus on the heroes who lost their lives that day. Although I’m sure we’re going to hear more and more healthcare-related stories about 9/11 illnesses as time passes. Too bad we don’t have an integrated EMR with HIE that could help track all those who were exposed to the gases and dust at Ground Zero. That might help their cause, since the 9/11 First Responders bill only covers the next 5 years.

John Halamka also has a post up about the impact of 9/11 on Healthcare IT. He concludes that “Disaster recovery, security, and emergency support efforts will continue, inspired by the memories of those who perished 10 years ago.”

Balancing Privacy and Security with Patient Care

Posted on December 23, 2009 | Written By

InformationWeek Healthcare has an article that discusses the challenges of EMR security and privacy. A lot of it is nothing new to those of us in the healthcare space, although it’s interesting to see how they summarize things like the goal of full EMR adoption by 2014 and the EMR stimulus money.

However, the article did include these interesting stats on the number of breaches that happen in healthcare and the focus IT managers put on privacy and data security in healthcare.

Healthcare providers and other health businesses aren’t stepping up to protect privacy, according to a recent study. Some 80% of healthcare organizations have experienced at least one incident of lost or stolen health information in the past year, according to the study, released this month from security management company LogLogic and the Ponemon Institute, which conducts privacy and information management research.

Also, some 70% of IT managers surveyed said senior management doesn’t view privacy and data security as a priority, and 53% say their organizations don’t take appropriate steps to protect patient privacy. Less than half judge their existing security measures as “effective or very effective.”

I was surprised that 80% of organizations have had an incident of lost or stolen health information. However, I honestly don’t see this ever changing. Stuff happens even with the very best efforts.

I also liked this quote from John Halamka about the challenge of balancing privacy and security against sharing patient information to provide better patient care.

“You want to protect the patient’s preferences for confidentiality,” Halamka said. But you also need to get information where it’s needed. “If you come to the emergency department in a coma, and you have a record that includes psychiatric treatment, HIV, drug abuse, and other information, would you share part of it or all of it? My preference would be all of it, with the hope that emergency workers would use it discreetly, to save my life.” But other people may feel differently, Halamka said, and healthcare policy needs to serve all those needs.

I’m a little surprised that Halamka has had psychiatric treatment, HIV, and drug abuse. He’s doing quite well considering that history. (That’s sarcasm, in case you didn’t catch it.) His history aside, I’m totally with him on wanting that information available as well. However, he’s totally correct that many people wouldn’t want that stuff shared. Enabling the consumer to make that decision, though, is a hard nut to crack.
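If consumers ever do get that control, one simple way to represent it is a per-category sharing preference that gets checked before anything is released, with an emergency “break the glass” override that is always logged, which is roughly the trade-off Halamka describes. A toy sketch in Python; the category names and rules are mine, not any real data-segmentation standard.

    from dataclasses import dataclass, field

    # Toy model of per-category sharing preferences ("data segmentation").
    # Categories and the break-the-glass rule are illustrative only.
    SENSITIVE_CATEGORIES = {"psychiatric", "hiv", "substance_abuse"}

    @dataclass
    class SharingPreferences:
        share_sensitive: bool = False            # the patient's default choice
        audit_log: list = field(default_factory=list)

        def may_release(self, category: str, emergency: bool = False) -> bool:
            if category not in SENSITIVE_CATEGORIES or self.share_sensitive:
                return True
            if emergency:
                # "Break the glass": release the data, but record that it happened.
                self.audit_log.append(f"emergency release of {category}")
                return True
            return False

    prefs = SharingPreferences(share_sensitive=False)
    print(prefs.may_release("hiv"))                   # False: patient opted out
    print(prefs.may_release("hiv", emergency=True))   # True, and logged
    print(prefs.audit_log)

The hard part isn’t the data structure, of course; it’s getting every system along the way to honor it, and getting patients to understand what each choice really means.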