
Consumers Take Risk Trading Health Data For Health Insurance Discounts

Posted on August 28, 2015 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

When Progressive Insurance began giving car owners the option of having their driving tracked in exchange for potential auto insurance discounts, nobody seemed to raise a fuss. After all, the program was voluntary, and nobody wants to pay more than they have to for coverage.

Do the same principles apply to healthcare? We may find out. According to a study by digital health research firm Parks Associates, at least some users are willing to make the same tradeoff. HIT Consultant reports that 42% of digital pedometer users would be willing to share their personal data in exchange for a health insurance discount.

Consumer willingness to trade data for discounts varied by device, but didn’t fall to zero. For example, 35% of smart watch owners would trade their health data for health insurance discounts, while 26% of those with sleep-quality monitors would do so.

While the HIT Consultant story doesn’t dig into the profile of users who were prepared to sell their personal health data today — which is how I’d describe a data-for-discount scheme — I’d submit that they are, in short, pretty sharp.

Why do I say this? Because as things stand, at least, health insurers would get less than they were paying for unless the discount was paltry. (As the linked blog item notes, upstart health insurer Oscar Insurance already gives away free Misfit wearables. To date, though, it’s not clear from the write-up whether Oscar can quantify what benefit it gets from the giveaway.)

As wearables and health apps mature, however, consumers may end up compromising themselves if they give up personal health data freely. After all, if health insurance begins to look like car insurance, health plans could push up premiums every time a member makes a health “mistake” (such as overeating at a birthday dinner or staying up all night watching old movies). Moreover, as such data gets absorbed into EMRs, then cross-linked with claims, health plans’ ability to punish you with actuarial tables could skyrocket.

In fact, if consumers permit health plans to keep too close a watch on them, it could effectively allow health plans to engage in post-contract medical underwriting. That is an unwelcome prospect, and one that could lead to court battles, given the ACA’s ban on such practices.

Also, once health plans have the personal data, it’s not clear what they would do with it. I am not a lawyer, but it seems to me that health plans would have significant legal latitude in using freely given data, and might even be able to sell that data in the aggregate to pharma companies. Or they might pass it to their parent company’s life or auto divisions, which could potentially use the data to make coverage decisions.

Ultimately, I’d argue that unless the laws are changed to protect consumers who make that trade, selling personal health data to get lower insurance premiums is a very risky decision. The short-term benefit is unlikely to be enough to offset very real long-term consequences. Once you’ve compromised your privacy, you seldom get it back.

FTC Gingerly Takes On Privacy in Health Devices (Part 2 of 2)

Posted on February 11, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The first part of this series of articles laid out the difficulties of securing devices in the Internet of Things (particularly those used in the human body). Accepting that usability and security have to be traded off against one another sometimes, let’s look at how to make decisions most widely acceptable to the public.

The recent FTC paper on the Internet of Things demonstrates that the commission has developed a firm understanding of the problems in security and privacy. For this paper, they engaged top experts who had seen what happens when technology gets integrated into daily life, and they covered all the issues I know of. As devices grow in sophistication and spread to a wider population, the kinds of discussion the FTC held should be extended to the general public.

For instance, suppose a manufacturer planning a new way of tracking people–or a new use for their data–convened some forums in advance, calling on potential users of the device to discuss the benefits and risks. Collectively, the people most affected by the policies chosen by the manufacturer would determine which trade-offs to adopt.

Can ordinary people off the street muster enough concern for their safety to put in the time necessary to grasp the trade-offs? We should try asking them–we may be pleasantly surprised. Here are some of the issues they need to consider.

  • What can malicious viewers determine from data? We all may feel nervous about our employer learning that we went to a drug treatment program, but how much might the employer learn just by knowing we went to a psychotherapist? We now know that many innocuous bits of data can be combined to show a pattern that exposes something we wished to keep secret.

  • How guarded do people feel about their data? This depends largely on the answer to the previous question–it’s not so much the individual statistics reported, but the patterns that can emerge.

  • What data does the device need to collect to fulfill its function? If the manufacturer, clinician, or other data collector gathers up more than the minimal amount, how are they planning to use that data, and do we approve of that use? This is an ethical issue faced constantly by health care researchers, because most patients would like their data applied to finding a cure, but both the researchers and the patients have trouble articulating what’s kosher and what isn’t. Even collecting data for marketing purposes isn’t necessarily evil. Some patients may be willing to share data in exchange for special deals.

  • How often do people want to be notified about the use of their data, or asked for permission? Several researchers are working on ways to let patients express approval for particular types of uses in advance.

  • How long is data being kept? Most data users, after a certain amount of time, want only aggregate data, which is supposedly anonymized. Are they using well-established techniques for anonymizing the data? (Yes, trustworthy techniques exist. Check out a book I edited for my employer, Anonymizing Health Data.)
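
To make the idea of “well-established techniques” slightly more concrete, here is a minimal, hypothetical sketch of one such approach, a k-anonymity style check: exact values like age and ZIP code are coarsened before release, and a dataset is released only if every combination of the coarsened values covers at least k people. The field names, the band width, and the threshold are invented for illustration; this is not drawn from the book mentioned above, and real anonymization work involves much more than this.

    from collections import Counter

    def generalize_age(age, band=10):
        """Coarsen an exact age into a decade band, e.g. 37 -> '30-39'."""
        low = (age // band) * band
        return f"{low}-{low + band - 1}"

    def is_k_anonymous(records, quasi_identifiers, k=5):
        """Return True if every combination of quasi-identifier values
        appears in at least k records."""
        groups = Counter(
            tuple(rec[q] for q in quasi_identifiers) for rec in records
        )
        return all(count >= k for count in groups.values())

    # Hypothetical wearable export: only a 3-digit ZIP prefix and an age band are kept.
    raw = [
        {"zip": "02139", "age": 34},
        {"zip": "02142", "age": 37},
        {"zip": "02139", "age": 36},
        {"zip": "02144", "age": 31},
        {"zip": "02141", "age": 39},
    ]
    records = [
        {"zip3": rec["zip"][:3], "age_band": generalize_age(rec["age"])}
        for rec in raw
    ]

    print(is_k_anonymous(records, ["zip3", "age_band"], k=5))  # True for this toy set

The point of a check like this is simply that no single person can be picked out of the released data by the combination of attributes that remain; whether the coarsening is aggressive enough is exactly the kind of question a discussion group of users could weigh in on.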

I believe that manufacturers can find a cross-section of users to form discussion groups about the devices they use, and that these users can come to grips with the issues presented here. But even an engaged, educated public is not a perfect solution. For instance, a privacy-risking choice that’s OK for 95% of users may turn out harmful to the other 5%. Still, education for everyone–a goal expressed by the FTC as well–will undoubtedly help us all make safer choices.