
Health IT Security: What Can the Association for Computing Machinery (ACM) Contribute?

Posted on February 24, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://radar.oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

A dazed awareness of security risks in health IT has bubbled up from the shop floor administrators and conformance directors (who have always worried about them) to C-suite offices and the general public, thanks to a series of oversized data breaches that recently peaked in the Anthem Health Insurance break-in. Now the US Senate Health Committee is taking up security, explicitly referring to Anthem. The inquiry is extremely broad, though, promising to address “electronic health records, hospital networks, insurance records, and network-connected medical devices.”

FTC Gingerly Takes On Privacy in Health Devices (Part 2 of 2)

Posted on February 11, 2015 | Written By Andy Oram

The first part of this series of articles laid out the difficulties of securing devices in the Internet of Things (particularly those used in the human body). Accepting that usability and security have to be traded off against one another sometimes, let’s look at how to make decisions most widely acceptable to the public.

The recent FTC paper on the Internet of Things demonstrates that they have developed a firm understanding of the problems in security and privacy. For this paper, they engaged top experts who had seen what happens when technology gets integrated into daily life, and they covered all the issues I know of. As devices grow in sophistication and spread to a wider population, the kinds of discussion the FTC held should be extended to the general public.

For instance, suppose a manufacturer planning a new way of tracking people–or a new use for their data–convened some forums in advance, calling on potential users of the device to discuss the benefits and risks. Collectively, the people most affected by the policies chosen by the manufacturer would determine which trade-offs to adopt.

Can ordinary people off the street develop enough concern for their safety to put in the time necessary to grasp the trade-offs? We should try asking them–we may be pleasantly surprised. Here are some of the issues they would need to consider.

  • What can malicious viewers determine from data? We all may feel nervous about our employer learning that we went to a drug treatment program, but how much might the employer learn just by knowing we went to a psychotherapist? We now know that many innocuous bits of data can be combined to show a pattern that exposes something we wished to keep secret.

  • How guarded do people feel about their data? This depends largely on the answer to the previous question–it’s not so much the individual statistics reported, but the patterns that can emerge.

  • What data does the device need to collect to fulfill its function? If the manufacturer, clinician, or other data collector gathers up more than the minimal amount, how are they planning to use that data, and do we approve of that use? This is an ethical issue faced constantly by health care researchers, because most patients would like their data applied to finding a cure, but both the researchers and the patients have trouble articulating what’s kosher and what isn’t. Even collecting data for marketing purposes isn’t necessarily evil. Some patients may be willing to share data in exchange for special deals.

  • How often do people want to be notified about the use of their data, or asked for permission? Several researchers are working on ways to let patients express approval for particular types of uses in advance.

  • How long is data being kept? Most data users, after a certain amount of time, want only aggregate data, which is supposedly anonymized. Are they using well-established techniques for anonymizing the data? (Yes, trustworthy techniques exist. Check out a book I edited for my employer, Anonymizing Health Data.)
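To make the anonymization bullet concrete: one well-established family of techniques generalizes “quasi-identifiers” such as age and ZIP code until every combination covers at least k people (so no single record stands out). This is only a toy sketch of the idea, and the records in it are entirely made up:

```python
from collections import Counter

# Hypothetical patient records: age and ZIP are quasi-identifiers,
# dx is the sensitive attribute we want to release in aggregate.
records = [
    {"age": 34, "zip": "02139", "dx": "asthma"},
    {"age": 36, "zip": "02139", "dx": "asthma"},
    {"age": 35, "zip": "02141", "dx": "diabetes"},
    {"age": 62, "zip": "02141", "dx": "asthma"},
]

def generalize(r):
    # Coarsen quasi-identifiers: 10-year age bands, 3-digit ZIP prefix.
    low = (r["age"] // 10) * 10
    return (f"{low}-{low + 9}", r["zip"][:3])

groups = Counter(generalize(r) for r in records)
k = min(groups.values())  # smallest group size = the k in k-anonymity
# Here k = 1: the lone 60-69 record is still re-identifiable, so a real
# pipeline would coarsen further or suppress that record before release.
print(groups, "k =", k)
```

The point of the sketch is that anonymization is measurable: you keep generalizing or suppressing until the smallest group reaches an acceptable k, trading data detail for privacy.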

I believe that manufacturers can find a cross-section of users to form discussion groups about the devices they use, and that these users can come to grips with the issues presented here. But even an engaged, educated public is not a perfect solution. For instance, a privacy-risking choice that’s OK for 95% of users may turn out harmful to the other 5%. Still, education for everyone–a goal expressed by the FTC as well–will undoubtedly help us all make safer choices.

FTC Gingerly Takes On Privacy in Health Devices (Part 1 of 2)

Posted on February 10, 2015 | Written By Andy Oram

Are you confused about risks to privacy when everything from keystrokes to footsteps is being monitored? The Federal Trade Commission is confused too. In January they released a 55-page paper summarizing the results of discussions with privacy experts about the Internet of Things, plus some recommendations. After a big build-up citing all sorts of technological and business threats, the report kind of fizzles out. It rejects legislation specific to the IoT, but offers several suggestions for “general privacy legislation,” such as requiring security on devices.

Sensors and controls are certainly popping up everywhere, so the FTC investigation comes at an appropriate time. My senator, Ed Markey, who has been a leader in telecom and technology for decades in Congress, recently released a report focused on automobiles. But the same concerns show up everywhere in various configurations. In this article I’ll focus on health care, and on the dilemma of security in that area.

No doubt about it, pacemakers and other critical devices can be hacked. It could be a movie: in Scene 1, a nondescript individual moves through a crowded city street, thumbing over a common notepad. In Scene 2, later, numerous people fall to the ground as their pacemakers fail. They just had the bad luck to be in the vicinity of the individual with the notepad, who had infected their implants with malicious code that took effect later.

But here are the problems with requiring more security. First, security in computers almost always rests on encryption, which leads to an increase in the size of the data being protected. The best-known FTC case regarding device security, where they forced changes for cameras used in baby monitors, was appropriate for these external devices that could absorb the extra overhead. But increased data size leads to an increase in memory use, which in turn requires more storage and computing power on a small embedded device, as well as more transmission time over the network. In the end, devices may have to be heavier and more costly, serious barriers to adoption.
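To make that overhead concrete, here is a minimal sketch (not any particular device’s protocol; the JSON sensor reading is hypothetical) of what protecting a single reading with a random nonce and an authentication tag does to its size on the wire:

```python
import hashlib
import hmac
import os

key = os.urandom(32)    # device key, provisioned out of band
nonce = os.urandom(12)  # fresh per message, so replays are detectable

reading = b'{"hr": 72, "ts": 1424000000}'  # a 28-byte sensor reading

# Tag binds the nonce and the reading together under the key.
tag = hmac.new(key, nonce + reading, hashlib.sha256).digest()  # 32 bytes

packet = nonce + reading + tag  # what actually crosses the network
print(len(reading), "->", len(packet))  # 28 -> 72 bytes
```

A 28-byte reading grows to 72 bytes: the fixed 44 bytes of nonce and tag more than double small messages, which is exactly why the overhead pinches hardest on tiny, battery-constrained implants rather than on roomier external devices like baby-monitor cameras.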

Furthermore, software always has bugs. Some lie dormant for years, like the notorious Heartbleed bug in the very software that web sites around the world depend on for encrypted communications. To provide security fixes, a manufacturer has to make it easy for embedded devices to download updated software–and any bug in that procedure leaves a channel for attack.

Perhaps there is a middle ground, where devices could be designed to accept updates only from particular computers in particular geographic locations. A patient would then be notified through email or a text message to hike it down to the doctor, where the fix could be installed. And the movie scene where malicious code gets downloaded from the street would be less likely to happen.
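One way to sketch that “accept updates only from an authorized source” idea is a device that refuses to install any update blob whose signature doesn’t verify. Real devices would use asymmetric (public-key) firmware signatures; this toy sketch substitutes a shared-secret HMAC, and every name in it (the device key, the firmware bytes) is hypothetical:

```python
import hashlib
import hmac

DEVICE_KEY = b"provisioned-at-manufacture"  # hypothetical per-device secret

def verify_update(blob: bytes, signature: bytes) -> bool:
    """Accept an update only if its signature checks out; reject everything else."""
    expected = hmac.new(DEVICE_KEY, blob, hashlib.sha256).digest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature)

firmware = b"pacemaker-fw-v2.1-security-patch"
good_sig = hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

print(verify_update(firmware, good_sig))         # True: authorized update
print(verify_update(firmware + b"!", good_sig))  # False: tampered blob rejected
```

Combined with the geographic restriction above, even an attacker who can reach the device over the air has nothing to gain from pushing a blob the device will simply refuse to install.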

In the next part of this article I’ll suggest how the FTC and device manufacturers can engage the public to make appropriate privacy and security decisions.

The Future of Privacy

Posted on December 24, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I realize that it’s Christmas Eve, but this tweet caught my eye and so I thought I’d share.

This is the season of over sharing. With that in mind, I think the discussion of privacy is an important one to have.

Fitbit Data Being Used In Personal Injury Case

Posted on December 8, 2014 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Lately, there’s been a lot of debate over whether data from wearable health bands is useful to clinicians or only benefits the consumer user. On the one hand, there are those who say that a patient’s medical care could be improved if doctors had data on their activity levels, heart rate, respirations and other standard metrics. Others, meanwhile, suggest that unless it can be integrated into an EMR and made usable, such data is just a distraction from other more important health indicators.

What hasn’t come up in these debates, but may come up far more frequently in the future, is the idea that health band data can be used in personal injury cases to show the effects of an accident on a plaintiff. According to Forbes, a law firm in Calgary is working on what may be the first personal injury case to leverage smart band data, in this case activity data from a Fitbit.

The plaintiff, a young woman, was injured in an accident four years ago. While the Fitbit hadn’t yet entered the market at the time, her lawyers at McLeod Law believe they can establish that she led an active lifestyle prior to her accident. They’ve now started processing data from her Fitbit to show that her activity levels have fallen under the baseline for someone of her age and profession.

It’s worth noting that rather than using Fitbit data directly, they’re processing it using analytics platform Vivametrica, which uses public research to compare people’s activity data with that of the general population. (Its core business is to analyze data from wearable sensor devices for the assessment of health and wellness.) The plaintiff will share her Fitbit data with Vivametrica for several months to present a rich picture of her activities.

Using even analyzed, processed data generated by a smart band is “unique,” according to her attorneys. “Till now we’ve always had to rely on clinical interpretation,” says Simon Muller of McLeod Law. “Now we’re looking at longer periods of time over the course of the day, and we have hard data.”

But even if the woman wins her case, there could be a downside to this trend. As Forbes notes, insurers will want wearable device data as much as plaintiffs will, and while they can’t force claimants to wear health bands, they can request a court order demanding the data from whoever holds it. Dr. Rick Hu, co-founder and CEO of Vivametrica, tells Forbes that his company wouldn’t release such data, but doesn’t explain how he could refuse to honor a court-ordered disclosure.

In fact, wearable devices could become a “black box” for the human body, according to Matthew Pearn, an associate lawyer with Canadian claims processing firm Foster & Company. In a piece for an insurance magazine, Pearn points out that it’s not clear, at least in his country, what privacy rights the wearers of health bands maintain over the data they generate once they file a personal injury suit.

Meanwhile, it’s still not clear how HIPAA protections apply to such data in the US. When FierceHealthIT recently spoke with Deven McGraw, a partner in the healthcare practice of Manatt, Phelps & Phillips, she pointed out that HIPAA only regulates data “in the hands of, with the control of, or within the purview of a medical provider, a health plan or other covered entity under the law.”  In other words, once the wearable data makes it into the doctor’s record, HIPAA protections are in force, but until then they are not.

All told, it’s pretty sobering to consider that millions of consumers are generating wearables data without knowing how vulnerable it is.

OCR Didn’t Meet HIPAA Security Requirements

Posted on December 27, 2013 | Written By Anne Zieger

Oops — this doesn’t sound good. According to a report from the HHS OIG, the agency’s Office for Civil Rights has failed to meet the requirements for oversight and enforcement of the HIPAA security rule.

The 26-page report spells out several problems with OCR’s enforcement of the security rule, which was expanded by the HITECH Act of 2009 to demand regular audits of covered healthcare organizations and their business associates. The vulnerabilities found leave procedural holes that could hamper OCR’s ability to do its job regarding the security rule, the OIG said.

What was OCR failing to do? Well, for one thing, the report contends, OCR had not assessed the risks, established priorities or implemented controls for the audits to ensure compliance. Another example: OCR’s investigation files didn’t contain the required documentation supporting key decisions made by staff, because the staff didn’t consistently follow the office’s procedures for reviewing case documentation.

What’s more, the OCR apparently hasn’t been implementing sufficient controls, including supervisory review and documentation retention, to make sure investigators follow policies and procedures for properly managing security rule investigations.

The OIG also found that OCR wasn’t complying with federal cyber security requirements for its own information systems used to process and store data on investigations. Requirements it was neglecting included getting HHS authorizations to operate the system used to oversee and enforce the security rule. OCR also failed to complete privacy impact assessments, risk analyses or system security plans for two of its three systems, the OIG concluded.

All told, it seems that if the OCR is going to oversee the privacy rule properly, it had better get its own act together.

100% Interoperability, Quantified Self Data, and Data Liquidity – #HITsm Chat Highlights

Posted on March 30, 2013 | Written By

Katie Clark is originally from Colorado and currently lives in Utah with her husband and son. She writes primarily for Smart Phone Health Care, but contributes to several Health Care Scene blogs, including EMR Thoughts, EMR and EHR, and EMR and HIPAA. She enjoys learning about Health IT and mHealth, and finding ways to improve her own health along the way.

Topic 1: Do you think the healthcare system WANTS 100% interoperability & data liquidity? Why/why not?

 

Topic 2: As consumer, what are YOUR fears about your health data being shared across providers/payers/government?

 

Topic 3: What do you think payers will do with #quantifiedself data if integrated into EHR? Actuarial/underwriting?

 

Topic 4: Could there be a correlation between your fear of data liquidity and your health?

 

Topic 5: What could assuage your fears? Education? Legislation? Regulation? Healthcare system withdrawal?

Sending PHI Over SMS

Posted on February 26, 2013 | Written By John Lynn

I was recently talking with a doctor who told me about a healthcare communications company called YouCall MD. The doctor liked many of the features YouCall MD provided. He loved that they would answer live calls, transcribe a message, and send that message to you by SMS. Well, he loved all of it except the part where YouCall MD was using insecure SMS messages to send protected health information (PHI).

I wrote about this before in my post called “Texting is Not HIPAA Secure.” I know that many doctors sit on all sides of this. I heard one doctor tell me, “They’re not going to throw us all in jail.” Other doctors won’t use SMS at all because of the HIPAA violations.

While a doctor probably won’t get thrown in jail for sending PHI over SMS, they could face large fines. I think the risk is even greater when sending PHI over SMS becomes institutionalized through a service like YouCall MD. This isn’t a risk I’d want to take if I were a doctor.

Plus, the thing that baffles me is that there are a lot of secure text messaging services out there. Using one of these services would accomplish the same thing for the doctor and YouCall MD, and it wouldn’t put a doctor or institution at risk of violating HIPAA. Soon the day will come when doctors can send SMS-like messages on their phones in a secure way without having to worry about it. I just think it’s a big mistake for them to be using their phone’s default SMS.

Privacy Group Seeks Rules For Healthcare Clouds

Posted on January 4, 2013 | Written By Anne Zieger

It’s time for HHS’ Office for Civil Rights to release “strong guidance” on cloud computing in healthcare, according to a letter sent by advocacy group Patient Privacy Rights. The letter, sent by PPR president Deborah Peel, argues that the transition to EMRs will be hampered if patients aren’t confident that their medical information is protected wherever it goes, including the cloud.

“More specific guidance in the health care ecosystem would help ensure that cloud providers, health care professionals and patients alike are aware of how the privacy and security rules apply to clouds,” Peel writes.

Peel suggests that HHS rely on lessons learned from the recently-settled Phoenix Cardiac Surgery case, in which a medical group was fined $100,000 for HIPAA violations including exposing clinical and surgical appointments on a publicly-available Internet calendar.

Specifically, Peel recommends the following standards be established:

Security Standards: Security standards must be implemented that are consistent and compatible with standards required of federal agencies including the HIPAA Security Rule and the HITECH breach notification requirements.

Privacy of Protected Health Information: Standards must be included that establish the appropriate use, disclosure, and safeguarding of individually identifiable information, which take into account stronger state and federal requirements, Constitutional rights to health information privacy, and the fact that HIPAA is the “floor” for privacy protections and was never intended to replace stronger ethical, or professional standards or “best practices.”

BAA Requirement and Standardization: Consistent with prior OCR guidance, any software company given access to protected health information by a HIPAA-covered entity to perform a service for the covered entity is a business associate. Thus, as OCR representatives have publicly stated on several occasions, a Business Associate Agreement (BAA) is required between a cloud computing provider and any customer entity that uses or discloses protected health information or de-identified health information. It is imperative that these BAA standards promote the protection of privacy and security of health information to ensure public trust in health IT systems and promote quality health care, health care innovation and health provider collaboration.

I was particularly interested to note her suggestion that software companies given access to ePHI sign Business Associate Agreements.  My guess is that some cloud providers would fail miserably if asked to uphold HIPAA standards, simply because they aren’t prepared.  If Peel’s recommendations were enacted, in other words, it could shake up the cloud services industry.  Maybe that’s a good thing, but it won’t be a pleasant one for some.

Disaster Planning, Horrors of Generic HIT Training, and Snap.MD: Around Healthcare Scene

Posted on November 25, 2012 | Written By Katie Clark

EMR and HIPAA

Disaster Planning and HIPAA

Unfortunately, it appears that far too many healthcare providers don’t follow this rule. Not very many even have an emergency plan in place. However, this will soon need to be remedied. HIPAA’s general security rules state not only that a patient’s privacy must be protected, but also that ePHI be available at all times — even in the case of an emergency. All healthcare providers, regardless of size, will need to implement some kind of disaster planning in order to be in compliance with these regulations.

EMR Add-On’s that Provide Physician Benefit

MedCPU is part of the inaugural NYC Digital Health Accelerator class. The company has developed a new concept that is likely to be very helpful to many: it analyzes free-text notes and structured data, checking for compliance with clinical rules and identifying any deviations. The company described one hospital that offers its service as a benefit to doctors who use the EHR. This is just one of many add-ons available, but some see them as a large part of why some doctors want to adopt EMRs.

Hospital EMR and EHR

Video: The Horrors of Generic HIT Training

Need a break from the day-to-day monotony? Be sure to check out this video on the horrors of generic HIT training. It “offers a wry take on what happens when EMR training isn’t relevant for the doctor who’s getting the training. In this case, we witness the plight of a heart surgeon who’s forced through a discussion on primary care functions that she neither wants nor needs.”

Study: EMR ROI Stronger In Low-Income Setting

A recent study revealed something interesting: hospitals in low-income areas may actually see a decent return on investment when an EMR is integrated. Three different areas were analyzed, and it was found that after five years of having an EMR, the hospital examined had a net benefit of over $600,000. Not all hospitals will benefit this much, but it’s encouraging to see more EMR success stories popping up.

Smart Phone Healthcare

Get Peace of Mind and Avoid the ER With Snap.MD

It’s the middle of the night, and your child breaks out in a rash all over his or her body. The doctor’s office doesn’t have middle-of-the-night on-call doctors, so the only option is the ER, right? Maybe not for long. Snap.MD, a new telemedicine system, may help parents decide whether the emergency room is the best course of action. Parents of pediatric patients are connected to a physician, who helps evaluate the situation via video conferencing.