
Healthcare Orgs Must Do Better With Mobile Data Security Education

Posted on November 15, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

A new study finds that while most healthcare professionals use mobile messaging at work, many aren’t sure what their organization’s mobile messaging policies are, and a large number have transmitted Protected Health Information via insecure channels. In other words, it seems that health IT leaders still have a lot of work to do in locking down these channels.

According to a report by Scrypt, 65% of health professionals who use a mobile device at work also use the same device for personal use, the standard BYOD compromise, which still gives healthcare CIOs the willies. Underscoring the security risks, 52% of respondents said that they had free rein over which applications they downloaded and used at work.

To be fair, virtually all respondents (96%) use at least one security measure to protect their mobile device. However, their one-factor efforts — usually passcode or PIN-based — may not be secure enough to protect such sensitive data.

The research also blows the whistle on the frequency with which health professionals share PHI using mobile messaging clients (not surprising, given that the vendor sells a secure mobile messaging solution). It notes that just a quarter of those who reported using mobile messaging use a secure client, and that one in five have sent or received PHI via mobile message with names (24%), telephone numbers (19%) and email addresses (13%) included in the content.
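Secure messaging vendors typically pair policy with automated screening of outgoing messages. As a rough illustration only — the patterns, the sample message, and the integration point are my own assumptions, not anything described in the Scrypt report — here is a minimal sketch of a pre-send check that flags text that looks like it contains common PHI identifiers:

```python
import re

# Hypothetical pre-send check: flag message text that appears to contain PHI
# identifiers (phone numbers, email addresses, dates of birth) before it
# leaves the device. The patterns are illustrative, not an exhaustive filter.
PHI_PATTERNS = {
    "phone number": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "date of birth": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def phi_warnings(message: str) -> list[str]:
    """Return the identifier types that appear to be present in the message."""
    return [label for label, pattern in PHI_PATTERNS.items() if pattern.search(message)]

if __name__ == "__main__":
    draft = "Pt John Smith, DOB 04/12/1961, call 555-867-5309 re: lab results"
    hits = phi_warnings(draft)
    if hits:
        print("Warning: message appears to contain", ", ".join(hits))
        print("Route it through the approved secure messaging client instead.")
```

A real deployment would route flagged drafts into the organization's approved secure client rather than simply printing a warning.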

Researchers found that 78% of healthcare professionals use mobile messaging at work. However, few understand how their organizations expect them to use these services. Fifty-two percent of respondents who use mobile messaging said they didn’t know or weren’t sure of what their organization’s policies were on the subject.

Showing some awareness of data security vulnerabilities, 56% of the survey respondents said they believe their organization could do more to educate employees on the rules around sharing PHI and HIPAA compliance. On the other hand, it seems like most consider this to be everybody else’s problem, as 80% of respondents reported that their own knowledge of HIPAA compliance was either good or very good.

Clearly, as self-serving as the vendor’s conclusion is, they’re onto something important. Not only are CIOs facing huge challenges in establishing a smart BYOD policy, they’re confronted with a major educational problem when it comes to sharing of PHI. While the professionals on their team may have been handed a mobile policy, they may not have absorbed it. And if they haven’t been given a policy, you have to be conservative and assume they’re not doing a great job protecting data on their own.

If nothing else, healthcare organizations can remind their staff members to be careful when texting at work – heck, why not text them the reminder so it’s in context? Bottom line, even highly intelligent and educated team members can succumb to habit and transmit PHI. So a nudge never hurts!

A 2 Prong Strategy for Healthcare Security – Going Beyond Compliance

Posted on November 7, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

This post is sponsored by Samsung Business. All thoughts and opinions are my own.

As if our security senses weren’t on heightened alert enough, I think all of us were hit by the recent distributed denial of service attacks that took down a number of major sites on the internet. The unique part of this attack was that it used a “botnet” of internet of things (IoT) devices. It’s amazing how creative these security attacks have become and healthcare is often the target.

The problem for healthcare is that too many organizations have spent their time and money on compliance versus security. Certainly, compliance is important (HIPAA Audits are real and expensive if you fail), but just because you’re compliant doesn’t mean you’re secure. Healthcare organizations need to move beyond compliance and make efforts to make their organizations more secure.

Here’s a 2 prong strategy that organizations should consider when it comes to securing their organization’s data and technology:

Build Enough Barriers
The first piece of every healthcare organization’s security strategy should be to ensure that you’ve created enough barriers to protect your organization’s health data. While we’ve seen an increase in targeted hacks, the most common attack on healthcare organizations is still the hacker who randomly finds a weakness in your technology infrastructure. Once they find that weakness, they exploit it and do their damage.

The reality is that you’ll never make your health IT 100% secure. That’s impossible. However, if you create enough barriers to entry, you’ll keep out the majority of hackers that are just scouring the internet for opportunities. Building the right barriers to entry means that most hackers will move on to a more vulnerable target and leave you alone. Some of these barriers might be a high quality firewall, AI security, integrated mobile device security, user training, encryption (device and in transit), and much more.

Building these barriers has to be ingrained into your culture. You can’t just change to a secure organization overnight. It needs to be deeply embedded into everything you do as a company and all the decisions you make.

Create a Mitigation and Response Strategy
While we’d like to dream that a breach will never occur to us, hacks are becoming more a question of when and not if they will happen. This is why it’s absolutely essential that healthcare organizations create a proper mitigation and response strategy.

I recently heard about a piece of ransomware that hit a healthcare organization. Within 60 seconds of the ransomware hitting, six devices were infected before the organization could mitigate any further spread. That’s incredible. Imagine if they didn’t have a mitigation strategy in place. The ransomware would have spread like wildfire across the organization. Do you have a mitigation strategy that will identify breaches so you can stop them before they spread?
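We don’t know what tooling that organization used, but the underlying idea — spot the burst of file changes that mass encryption produces and react fast — can be sketched simply. The directory, polling window, and threshold below are illustrative assumptions, not a production design:

```python
import os
import time

# Rough sketch: poll a shared directory and alert when an unusually large
# number of files change within a short window -- the kind of burst that
# mass file encryption by ransomware tends to produce.
WATCH_DIR = "/mnt/shared"     # hypothetical file share to monitor
WINDOW_SECONDS = 10           # how far back to look for modifications
ALERT_THRESHOLD = 50          # changed files per window that triggers an alert

def recently_modified(root: str, window: float) -> int:
    """Count files under root modified within the last `window` seconds."""
    cutoff = time.time() - window
    count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                    count += 1
            except OSError:
                continue  # file vanished or is unreadable; skip it
    return count

def alert(changed: int) -> None:
    # Stand-in for paging the on-call team and isolating the host or share.
    print(f"ALERT: {changed} files changed in the last {WINDOW_SECONDS}s -- possible ransomware")

if __name__ == "__main__":
    while True:
        if (changed := recently_modified(WATCH_DIR, WINDOW_SECONDS)) >= ALERT_THRESHOLD:
            alert(changed)
        time.sleep(WINDOW_SECONDS)
```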

Creating an appropriate response to breaches, infections, and hacks is just as important. While no incident of this nature is fun, it is much better to be ahead of the incident than to learn about it when a news story, patient, or government agency brings it to you. Make sure you have a well-thought-out strategy for how you’ll handle a breach. Breaches are quickly becoming a reality for every organization.

As healthcare moves beyond compliance and focuses more on security, we’ll be much better positioned to protect patients’ data. Not only is this the right thing to do for our patients, it’s also the right thing to do for our businesses. Creating a good security plan which prevents incidents and then backing that up with a mitigation and response strategy are both great steps to ensuring your organization is prepared.

For more content like this, follow Samsung on Insights, Twitter, LinkedIn, YouTube and SlideShare.

New ONC Scorecard Tool Grades C-CDA Documents

Posted on August 2, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

The ONC has released a new scorecard tool which helps providers and developers find and resolve interoperability problems with C-CDA documents. According to HealthDataManagement, C-CDA docs that score well are coded with appropriate structure and semantics under HL7, and so have a better chance of being parseable by different systems.

The scorecard tool, which can be found here, actually offers two different types of scores for C-CDA documents, which must be uploaded to the site to be analyzed. One score diagnoses whether the document meets the requirements of the 2015 Edition Health IT Certification for Transitions of Care, granting a pass/fail grade. The other score, which is awarded as a letter grade ranging from A+ to D, is based on a set of enhanced interoperability rules developed by HL7.

The C-CDA scorecard takes advantage of the work done to develop SMART (Substitutable Medical Apps Reusable Technologies). SMART leverages FHIR, which is intended to make it simpler for app developers to access data and for EMR vendors to develop an API for this purpose. The scorecard, which leverages open-source technology, focuses on C-CDA 2.1 documents.

The SMART C-CDA scorecard was designed to promote best practices in C-CDA implementation by helping creators figure out how well and how often they follow best practices. The idea is also to highlight improvements that can be made right away (a welcome approach in a world where improvement can be elusive and even hard to define).

As SMART backers note, existing C-CDA validation tools, like the Transport Testing Tool provided by NIST and Model-Driven Health Tools, offer a comprehensive analysis of syntactic conformance to C-CDA specs, but don’t promote higher-level best practices. The new scorecard is intended to close this gap.

In case developers and providers have HIPAA concerns, the ONC makes a point of letting users know that the scorecard tool doesn’t retain submitted C-CDA files, and actually deletes them from the server after the files have been processed. That being said, ONC leaders still suggest that submitters not include any PHI or personally-identifiable information in the scorecards they have analyzed.
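For teams that want to fold scoring into a build or QA pipeline, the workflow is essentially “upload a de-identified sample, read back the grade.” The endpoint URL and form field name in this sketch are hypothetical placeholders — check the scorecard site itself for the actual upload API and parameters:

```python
import requests

# Sketch of submitting a de-identified C-CDA sample for scoring over HTTPS.
# SCORECARD_URL and the "ccdaFile" field name are hypothetical placeholders.
SCORECARD_URL = "https://example.org/ccda-scorecard/api/score"

def score_ccda(path: str) -> dict:
    """Upload a C-CDA XML document and return the parsed JSON score response."""
    with open(path, "rb") as doc:
        response = requests.post(
            SCORECARD_URL,
            files={"ccdaFile": ("sample_ccda.xml", doc, "application/xml")},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Per ONC's guidance, strip PHI/PII from the document before submitting it.
    results = score_ccda("sample_ccda.xml")
    print(results)
```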

Checking up on C-CDA validity is becoming increasingly important, as this format is being used far more often than one might expect. For example, according to a story appearing last year in Modern Healthcare:

  • Epic customers shared 10.2 million C-CDA documents in March 2015, including 1.3 million outside the Epic ecosystem (non-Epic EMRs, HIEs and the health systems for the Defense and Veterans Affairs Departments).
  • Cerner customers sent 7.3 million C-CDA docs that month, more than half of which were consumed by non-Cerner systems.
  • Athenahealth customers sent about 117,000 C-CDA documents directly to other doctors during the first quarter of 2015.

Critics note that it’s still not clear how useful C-CDA information is to care, nor how often these documents are shared relative to the absolute number of patient visits. Still, even if the jury is still out on their benefits, it certainly makes sense to get C-CDA docs right if they’re going to be transmitted this often.

Practice Fusion Settles FTC Charges Over “Deceptive” Consumer Marketing

Posted on June 20, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

In what may be a first for the EMR industry, ambulatory EMR vendor Practice Fusion has settled Federal Trade Commission charges that it misled consumers as part of a campaign to gather reviews for its doctors.

Under the terms of the settlement, Practice Fusion agreed to refrain from making deceptive statements about the privacy and confidentiality of the information it collects from consumers. It also promised that if it planned to make any consumer information publicly available, it would offer a clear and conspicuous notice of its plans before it went ahead, and get affirmative consent from those consumers before using their information.

Prior to getting entangled in these issues, Practice Fusion had launched Patient Fusion, a portal allowing patients whose providers used its EMR to download their health information, transmit that information to another provider or send and receive messages from their providers.

The problem targeted by the FTC began in 2012, when Practice Fusion was preparing to expand Patient Fusion to include a public directory allowing enrollees to search for doctors, read reviews and request appointments. To support the rollout, the company began sending emails to patients of providers who used Practice Fusion’s EMR, asking patients to review their provider. In theory, this was probably a clever move, as the reviews would have given Practice Fusion-using practices greater social credibility.

The problem, the FTC found, was that the request was marketed deceptively. Rather than admitting that this was an EMR marketing effort, Practice Fusion’s email messages appeared to come from patients’ doctors. Patients were never informed that the information would be made public. Worse, a pre-checked “Keep this review anonymous” option only withheld the patient’s name, leaving any information entered in the text box visible.

So patients, who thought they were communicating privately with their physicians, shared a great deal of private and personal health information. Many entered their full name or phone number in a text box provided as part of the survey. Others shared intimate health information, including one consumer who asked for dosing information for “my Xanax prescription,” and another who asked for help with a suicidally depressed child.

The highly sensitive nature of some patient comments didn’t get much attention until a year later, when EMR and HIPAA broke the story and then Forbes published a follow up article on the subject. After the articles appeared, Practice Fusion put automated procedures in place to block the publication of reviews in which consumers entered personal information.

In the future, Practice Fusion is barred from misrepresenting the extent to which it uses, maintains and protects the privacy or confidentiality of data it collects. Also, it may not publicly display the reviews it collected from consumers during the time period covered by the complaint.

There are many lessons to be gleaned from this case, but the most obvious seems to be that misleading communications that impact patients are a complete no-no. According to an FTC blog item on the case, the lessons also include that health IT companies should never bury key facts in a dense privacy policy, and that disclosures should use the same eye-catching methods they use for marketing, such as striking graphics, bold colors, big print and prominent placement.

Could Blockchain Tech Tackle Health Data Security Problems?

Posted on March 25, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

While you might not own any of them, you’ve probably heard of bitcoins, a floating currency backed by no government entity. You may also be aware that these coins are backed by blockchain technology, a decentralized system in which all participants track everyone’s holdings on their own individual systems. In this world, buyers and sellers can exchange bitcoins untraceably, making bitcoins perfect for criminal use.

In fact, some readers may have first heard about bitcoins when a Hollywood, CA hospital recently had all its data assets frozen by malware hackers, who demanded a ransom of $3.4 million in bitcoins before the hospital could have its data back. (The hospital ended up talking the ransomware attackers down to $17K, and once it paid that sum, IT leaders regained control.)

What’s intriguing, however, is that blockchain technology may also be a solution for some of healthcare’s most vexing health data security problems. That, at least, is the view of Peter Nichol, a veteran healthcare business and technology executive consultant. As he sees it, “blockchain addresses the legitimate previous concerns of security, scalability and privacy of electronic medical records.”

In his essay posted on LinkedIn, Nichol describes a way in which the blockchain can be used in healthcare data management:

  1. Patient: The patient is provided a code (a private key or hash) and an address used to unlock their patient data. While the patient data is not stored in the blockchain, the blockchain provides the authentication or required hashes (multi-signatures, also referred to as multi-sigs) used to enable access to the data (identification and authentication).
  2. Provider: Contributors to the patient’s medical record (e.g. providers) are provided a separate universal signature (codes, hashes or multi-sigs). These hashes, when combined with the patient’s hash, establish the required authentication to unlock the patient’s data.
  3. Profile: The patient then defines in their profile the access rules required to unlock their medical record.
  4. Access: If the patient defines 2-of-2 codes, then two separate machines (holding the hashes) would have to be compromised to gain unauthorized access to the data. (In this case, gaining unauthorized privileged access becomes very difficult when the machine types differ, the operating systems differ and the machines are hosted with different providers.)

As Nichol rightly notes, blockchain strategies offer some big advantages over existing security, particularly given that keys are distributed and that multiple computers would need to be compromised for attackers to gain illicit access to the data.
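To make the 2-of-2 idea concrete, here is a toy sketch in which unlocking a record requires combining two independently held key shares. It is a teaching illustration of the concept Nichol describes, not how any production blockchain or EMR would actually implement multi-signature access:

```python
import hashlib
import hmac
import secrets

# Toy illustration of the "2-of-2" access rule: unlocking a record requires
# combining the patient's key share and the provider's key share, so
# compromising a single machine is not enough.

def keygen() -> bytes:
    """Issue a random 256-bit key share (the 'hash'/'multi-sig' in the post's terms)."""
    return secrets.token_bytes(32)

def access_digest(patient_key: bytes, provider_key: bytes, record_id: str) -> bytes:
    """Combine both key shares with the record ID into a single unlock digest."""
    return hashlib.sha256(patient_key + provider_key + record_id.encode()).digest()

# 1. Keys are issued separately to the patient and the provider.
patient_key = keygen()
provider_key = keygen()

# 2. The patient's profile registers the 2-of-2 rule for a given record.
record_id = "record-123"
registered_policy = access_digest(patient_key, provider_key, record_id)

# 3. At access time, both parties present their shares; only the correct pair
#    reproduces the registered digest. compare_digest avoids timing leaks.
def unlock(presented_patient_key: bytes, presented_provider_key: bytes) -> bool:
    attempt = access_digest(presented_patient_key, presented_provider_key, record_id)
    return hmac.compare_digest(attempt, registered_policy)

print(unlock(patient_key, provider_key))   # True  -- both shares presented
print(unlock(keygen(), provider_key))      # False -- one compromised machine is not enough
```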

Nichol’s essay also notes that blockchain technology can be used to provide patients with more sophisticated levels of privacy control over their personal health information. As he points out, the patient can use their own blockchain signature, combined with, say, that of a hospital, to provide more secure access when seeking treatment. Meanwhile, when they want to limit access to the data, it’s easy to do so.

And voila, health data maintenance problems are solved, he suggests. “This model lifts the costly burden of maintaining a patient’s medical histories away from the hospitals,” he argues. “Eventually cost savings will make it full cycle back to the patient receiving care.”

What’s even more interesting is that Nichol is clearly not just a voice in the wilderness. For example, Philips Healthcare recently made an early foray into blockchain technology, partnering with blockchain-based record-keeping startup Tierion.

Ultimately, whether Nichol is entirely on target or not, it seems clear that health IT players have much to gain by exploring use of blockchain technology in some form. In fact, I predict that 2016 will be a breakout year for this type of application.

EHR Hosting Demystified – What to Look For (and Look Out For), on Your Way to the Healthcare Cloud

Posted on March 15, 2016 | Written By

The following is a guest blog post by Joe Cernik from eMedApps.

As I write this post I’m trying to reach the cloud. I’m on my third-in-a-row delayed flight segment on this week’s business trip – ARGH!  Ascending to the cloud these days is mostly easy though. My music is there, as are my photos, bank accounts and even my fitness stats collected on my wrist while I’m jogging or while I’m sleeping. Cloud computing has become ubiquitous and healthcare has embraced the transition. Health IT vendors are rapidly migrating EHR, PM and RCM solutions from client-server formats to on-demand, pay-as-you-go cloud hosted solutions.

According to healthcare analyst firm IDC, organizations that use on-site data storage spend 32% more on IT support than organizations that use an outside hosting provider. From infrastructure costs of servers and support staff to application deployment and ongoing maintenance costs, on-premises software can be a high-touch, high-cost model. Most EHRs are either in the cloud today, or claim cloud compatibility. The cloud promises scalability, interoperability and business continuity – but where do you start to evaluate solutions and define your own path to the cloud? Here are a few basics to get you going.

Ready, set, cloud….

Step 1: Understand hosting and cloud approaches and determine which type is right for you.

Insourced Hosting: A model also called managed services, managed client-server, or managed on-site hosting, where the hosting vendor provides end-to-end management of your complete EHR/PM system including the hardware and software systems installed at your facility. In essence, your hosting vendor becomes a member of your team, in-house, and manages the infrastructure that you own – generally in a client-server configuration. You’re not in the cloud yet, but this may be a first step in that direction if you’re ready to get out of the EHR/PM management business.

Outsourced Hosting: Also called remote hosting, hosted off-premise, and cloud hosting, outsourced EHR hosting locates your critical EHR/PM applications in a datacenter facility – outside of your LAN-based practice or clinic. EHR and patient data are stored on remote servers accessed via secure Internet connections. Fully outsourced remote hosting shifts the expense of procuring, managing and maintaining your EHR application and servers from your facility and your IT team to a fully managed datacenter. Servers are owned, managed, and refreshed by the hosting company. Now, you’re in the cloud.

Hybrid Model Hosting: Also called hosted client/server in the cloud and managed hosting, this model allows your organization to place your servers into a secure datacenter. This hybrid model between insourced hosting and outsourced hosting allows your organization to leverage existing capital investments in servers and investments in EHR application licenses, but moves the ongoing management and maintenance of this infrastructure investment to an internet accessible, secure remote site. Rather than installing and managing your application on a server in your office, the installation is managed on your server(s) in a controlled data center environment. Your users log into your remote server through a web browser.

Step 2: Understand Compliance and Regulatory Considerations (HIPAA, PHI, MU) Before You Sign a Contract

Your EHR hosting partner should be an EHR application expert, have demonstrable hosting expertise, and meet all regulatory and security protocols. While this statement may seem obvious, note that no matter which type of hosting solution you consider or eventually adopt, your hosting provider and their facilities must meet all physical, procedural, operational, and technical readiness criteria established for hosting of protected healthcare data. Make certain to evaluate partners for compliance with all HIPAA/HITECH rules and, for outsourced or hybrid solutions, for the use of SOC 2 Type II and SOC 3 audited datacenters with certifications including PCI DSS Level 1 and SSAE 16.

Step 3: Evaluate the Costs

Because there is no upfront cost for the software, and an organization is not required to buy a server, a cloud-based EHR may be less expensive than the onsite client/server setup. If one of your greatest hurdles to adopting an EHR is the initial cost of installation, an outsourced hosting model may be worth considering.

Some practices may also prefer to view their EHR expenses as a recurring operational expense (similar to a utility bill) rather than a capital investment. If your practice or clinic has already invested in on-premises infrastructure but wants to consider a move to an outsourced hosting model, a hybrid approach may be a good first step, with a full transition to an operational expense model on your next hardware refresh cycle.

Models vary among hosting vendors, and some offer contract terms and conditions with hosting packages tailored to your revenue projections, or low introductory pricing that increases over time. Variable models should be evaluated over a five-year cost-of-ownership timeframe to accurately compare costs across vendor plans.
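A simple way to run that comparison is to model both options over the same five-year horizon. Every figure below is a made-up assumption for illustration — plug in the numbers from your own vendor quotes and support contracts:

```python
# Back-of-the-envelope five-year cost-of-ownership comparison. All dollar
# figures and the escalation rate are illustrative assumptions.
YEARS = 5

def on_premises_tco(hardware: float, licenses: float, annual_support: float) -> float:
    """Up-front capital spend plus recurring support/maintenance."""
    return hardware + licenses + annual_support * YEARS

def hosted_tco(intro_monthly: float, annual_increase: float) -> float:
    """Subscription hosting with low introductory pricing that rises each year."""
    total, monthly = 0.0, intro_monthly
    for _ in range(YEARS):
        total += monthly * 12
        monthly *= 1 + annual_increase
    return total

if __name__ == "__main__":
    onsite = on_premises_tco(hardware=60_000, licenses=90_000, annual_support=25_000)
    hosted = hosted_tco(intro_monthly=3_500, annual_increase=0.08)
    print(f"On-premises, 5-year TCO: ${onsite:,.0f}")
    print(f"Hosted,      5-year TCO: ${hosted:,.0f}")
```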

Clear the fog…move to the cloud.

The way organizations procure and deploy IT infrastructure is undergoing a significant transformation. Don’t be confused by the transition – cut through the fog and get to the facts on a hosting solution that will help you meet your business AND patient care goals.  That solution may include ascending to the cloud – there’s a lot of great music already there. Now, let’s see if my plane will make it into another type of cloud today.

mHealth App-makers Must Develop Privacy, Security Standards

Posted on November 30, 2015 | Written By

The following is a guest blog post by Jon Michaeli, Executive Vice President of Medisafe

In recent times, consumers have developed a rapidly-growing interest in mobile health apps. In fact, more than half of the 1,600 mobile phone users surveyed recently by a New York University research team had downloaded at least one such app. And signs suggest that user uptake of mHealth apps could grow dramatically over the next few years.

But consumers’ adoption of mobile health apps is being held back by concerns that their health data isn’t safe.  Nearly half of consumers surveyed told Healthline that they’re afraid hackers may try to steal their personal health data from a wearable, and one-quarter of respondents said that they don’t believe app or health tracking data is secure.

We believe that it’s time for mHealth app developers and vendors to take a stand on mobile health data privacy and security. Consumers have the right to exchange private health data securely, and to be sure that data is never stolen or shared with unauthorized parties.

But until we develop industry-wide standards for protecting mobile health data, it’s unlikely that we’ll be able to do so. To make that happen, we welcome the creation of a broad industry coalition to create these standards.

Security fears justified

Concerns over the security and privacy of mHealth data are well-founded. Less than one-third of the 600 most commonly-used mHealth apps have privacy policies in place, according to recent research published in the Journal of the American Medical Informatics Association. Another study, by HIMSS, suggests that health IT leaders are just beginning to scope out their mobile health security strategies.

Worse, some practices engaged in by app developers pose a clear risk to users’ health data. For example, some health apps use a Social Security number as a “secure” method of validating user identity. Unfortunately, Social Security numbers are often stolen during hacking exploits, and they’re fairly easy to buy online. Thieves have a powerful incentive to steal SSNs, as health data now sells for 10 times the price of credit card numbers.

Once SSNs are obtained by the wrong party, the results can be catastrophic. If I obtain a user’s SSN and download their claims data, I might find out that they, for example, take meds used to treat psychiatric conditions or HIV. Malicious parties could conceivably use this information to blackmail someone, expose them at work or in the community, outflank them during a divorce or worse. There’s a reason that SSNs sell for 10 times the price of a stolen credit card number on the black market.

Not only that, even among app developers who do post privacy policies, few make it clear how they address privacy issues. Developers often fill their policy write-ups with jargon and deceptive language. And few consumers are informed enough to demand plain, straightforward disclosures in areas that may affect them. For example, they may not be aware that their privacy could be compromised if the app pulls data from outside sources without requiring an additional login and password.

Those opaque privacy policies may also conceal questionable data-sharing practices, such as the sale of personal data. If individually-identifiable data gets shared with the insurance industry, insurers might use this data to reject applications for coverage. Pharmaceutical companies could leverage this data to market meds to such consumers. Employers could even buy such data to screen out sick applicants. The possibilities for harm are great.

Time for mHealth security standards

Fortunately, mHealth vendors that want to boost security and privacy protections don’t have to start from scratch. Practices and standards already in place in healthcare IT departments provide a good foundation for mHealth app developers. Certainly, consumers need to play a role in protecting their own health information, by taking a responsible and smart approach to app use, but we have obligations too.

First, we should assume that any mHealth app must meet HIPAA standards for protected health information (PHI). Requirements include making sure users are who they claim to be (authentication), seeing that PHI isn’t altered prior to reaching its destination, and assuring that data is encrypted at rest, in transit and when stored on independently-managed servers.
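As a small, concrete example of just one item on that list — encryption of PHI at rest — here is a sketch using the widely available Python `cryptography` package. Key management, authentication and transport security are separate, equally mandatory layers that this snippet deliberately leaves out:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Minimal sketch of encrypting PHI at rest with an authenticated symmetric
# cipher. In a real app the key would live in a secure keystore (never
# alongside the data it protects).
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"patient": "Jane Doe", "medication": "metformin 500 mg"}'
encrypted = cipher.encrypt(record)     # what gets written to disk or synced
decrypted = cipher.decrypt(encrypted)  # only possible with the key

assert decrypted == record
print(encrypted[:40], b"...")
```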

Also, if PHI is being exchanged, mHealth developers must be sure that any third-party apps integrated into our health app also meet HIPAA requirements. And we need to verify that compliance. If connected third parties are compromised, the app isn’t secure either.

But above all, our industry needs to establish privacy and security standards that meet the unique needs of the mobile health environment, standards which evolve as mHealth changes. I believe it’s high time that mobile health industry leaders collaborate and create these standards. Otherwise, we may fail in our ethical obligations and do lasting damage to consumer trust. We invite other mHealth app vendors and their partners to join us in collaborating to protect consumers.

Jon Michaeli is Executive Vice President of Medisafe (www.medisafe.com), a cloud-synched platform which helps consumers manage their medications.

A Lawyer’s Perspective on EHR Vendors Holding EHR Data Hostage

Posted on October 23, 2015 | Written By

The following is a guest blog post by Bill O’Toole, founder of O’Toole Law Group.
William O'Toole - Healthcare IT and EHR Contracts
The recent post, EHR Data Hostage Wouldn’t Exist if EHR Were Truly Interoperable, on EMR & HIPAA got me thinking, and I wanted to offer a few observations from my experience as an HIT lawyer.

The goal is wonderful. However, it would take years and years to achieve. Data extraction and subsequent import take time, sometimes lots of it. What if there were a standardized specification to which vendors could design extraction tools and programs? Follow that with a contractual commitment that the vendor adheres to those specifications. We did it with HL7; why not data transport?

Thankfully I have not yet represented a vendor that withheld data solely due to the departure of a customer. I have however been involved in very tough situations where the vendor treads a fine line in not releasing data until customers fulfill their obligations (such as paying for use of the software). I like to believe that there is more to the story in the vast majority of data hostage disputes, and in my experience, this has always been the case.

The emergence of the hosted subscription model has resulted in a control shift to the vendor, as opposed to the on-premises model, where the customer is in control and a vendor can be shut out. That said, vendor assistance is usually required to extract data.

“HIPAA vs. vendor rights” is a very hot topic for me. Providers must provide patient data on request. Vendors have a right to be paid. The contractual right of a vendor to suspend customer access to a hosted EHR butts head-on against HIPAA. I have discussed this with ONC and while the problem is recognized, there is no solution at the present time.

Bill O’Toole is the founder of O’Toole Law Group of Duxbury, MA. You may contact him at wfo@otoolelawgroup.com

OpenUMA: New Privacy Tools for Health Care Data

Posted on August 10, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The health care field, becoming more computer-savvy, is starting to take advantage of conveniences and flexibilities that were developed over the past decade for the Web and mobile platforms. A couple weeks ago, a new open source project was announced to increase options for offering data over the Internet with proper controls–options with particular relevance for patient control over health data.

The User-Managed Access (UMA) standard supports privacy through a combination of encryption and network protocols that have a thirty-year history. UMA reached a stable 1.0 release in April of this year. A number of implementations are being developed, some of them open source.

Before I try to navigate the complexities of privacy protocols and standards, let’s look at a few use cases (currently still hypothetical) for UMA:

  • A parent wants to share a child’s records from the doctor’s office with the school just long enough for the school nurse to verify that the child has received the necessary vaccinations.

  • A traveler taking a temporary job in a foreign city wants to grant a local clinic access to the health records stored by her primary care physician for the six months during which the job lasts.

The open source implementation I’ll highlight in this article is OpenUMA from a company named ForgeRock. ForgeRock specializes in identity management online and creates a number of open source projects that can be found on their web page. They are also a leading participant in the non-profit Kantara Initiative, where they helped develop UMA as part of the UMA Developer Resources Work Group.

The advantage of open source libraries and tools for UMA is that the standard involves many different pieces of software run by different parts of the system: anyone with data to share, and anyone who wants access to it. The technology is not aimed at any one field, but health IT experts are among its greatest enthusiasts.

The fundamental technology behind UMA is OAuth, a well-tested means of authorizing people on web sites. When you want to leave a comment on a news article and see a button that says, “Log in using Facebook” or some other popular site, OAuth is in use.

OAuth is an enabling technology, by which I mean that it opens up huge possibilities for more complex and feature-rich tools to be built on top. It provides hooks for such tools through its notion of profiles–new standards that anyone can create to work with it. UMA is one such profile.

What UMA contributes over and above OAuth was described to me by Eve Maler, a leading member of the UMA working group who wrote their work up in the specification I cited earlier, and who currently works for ForgeRock. OAuth lets you manage different services for yourself. When you run an app that posts to Twitter on your behalf, or log in to a new site through your Facebook account, OAuth lets your account on one service do something for your account on another service.

UMA, in contrast, lets you grant access to other people. It’s not your account at a doctor’s office that is accessing data, but the doctor himself.

UMA can take on some nitty-gritty real-life situations that are hard to handle with OAuth alone. OAuth provides a single yes/no decision: is a person authorized or not? It’s UMA that can handle the wide variety of conditions that affect whether you want information released. These vary from field to field, but the conditions of time and credentials mentioned earlier are important examples in health care. I covered one project using UMA in an earlier article.

With OAuth, you can grant access to an account and then revoke it later (with some technical dexterity). But UMA allows you to build a time limit into the original access. Of course, the recipient does not lose the data to which you granted access, but when the time expires he cannot return to get new data.

UMA also allows you to define resource sets to segment data. You could put vaccinations in a resource set that you share with others, withholding other kinds of data.
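In practice, that segmentation happens when the party holding the data registers resource sets with the authorization server. The sketch below shows roughly what that could look like; the endpoint path, token, and JSON shape are simplified placeholders rather than the literal UMA wire format, so consult the Kantara specifications and your authorization server’s documentation for the real API:

```python
import requests

# Hypothetical sketch of UMA-style resource set registration from a resource
# server. AUTHZ_SERVER, the endpoint path, the token, and the JSON fields are
# simplified assumptions, not the exact UMA 1.0 protocol.
AUTHZ_SERVER = "https://as.example.org"   # assumed authorization server
PAT = "protection-api-token"              # token the resource server holds

def register_resource_set(name: str, scopes: list[str]) -> str:
    """Register a segment of the patient's data (e.g. vaccinations) for sharing."""
    response = requests.post(
        f"{AUTHZ_SERVER}/uma/resource_set",
        headers={"Authorization": f"Bearer {PAT}"},
        json={"name": name, "scopes": scopes},
        timeout=15,
    )
    response.raise_for_status()
    return response.json()["_id"]

if __name__ == "__main__":
    # Share only vaccination records, read-only, withholding other data.
    # Time limits ("this week only") would be expressed in the policy the
    # patient attaches to this resource set at the authorization server.
    resource_id = register_resource_set("vaccination-records", ["view"])
    print("Registered resource set:", resource_id)
```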

OpenUMA contains two crucial elements of a UMA implementation:

The authorization server

This server accepts a list of restrictions from the site holding the data and the credentials submitted by the person requesting access to the data. The server is a very generic function: any UMA request can use any authorization server, and the server can run anywhere. Theoretically, you could run your own. But it would be more practical for a site that hosts data–Microsoft HealthVault, for instance, or some general cloud provider–to run an authorization server. In any case, the site publicizes a URL where it can be contacted by people with data or people requesting data.

The resource server

This server submits requests to the authorization server on behalf of the applications and servers that hold people’s data. The resource server handles the complex interactions with the authorization server so that application developers can focus on their core business.

Instead of the OpenUMA resource server, apps can link in libraries that provide the same functions. These libraries are being developed by the Kantara Initiative.

So before we can safely share and withhold data, what’s missing?

The UMA standard doesn’t offer any way to specify a condition, such as “Release my data only this week.” This gap is filled by policy languages, which standards groups will have to develop and code up in a compatible manner. A few exist already.

Maler points out that developers could also benefit from tools for editing and testing code, along with other supporting software that projects build up over time. The UMA resource working group is still at the beginning of their efforts, but we can look forward to a time when fine-grained patient control over access to data becomes as simple as using any of the other RESTful APIs that have filled the programmer’s toolbox.

Live Hack of an Infusion Pump Medical Device

Posted on August 6, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

UPDATE: Looks like the video was taken down. I can imagine the legal issues that went on with such an incredible demonstration.

At the BlackBerry Security Summit, BlackBerry Chief Security Officer David Kleidermacher and Security Expert Graham Murphy showed how easy it is for hackers to take control of a medical device that’s not properly secured. Check out the video below to see the medical device hack:

What a compelling and scary demonstration!

I think most healthcare organizations assume that medical device manufacturers are taking care of securing the medical devices. Or that HIPAA will protect them from all of this. Many take the stance that “ignorance is bliss.” This demo should illustrate to everyone that you can’t leave the security of your medical devices to the manufacturer or HIPAA. It takes both the medical device manufacturer and the healthcare organization to make sure a medical device is properly secured.