
Memorial Day

Posted on May 30, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

We have a lot to be thankful for this Memorial Day. We all make sacrifices, but some people made the ultimate sacrifice. Here are a few images which captured my feelings this Memorial Day. I hope as we each enjoy time with family and friends that we’ll remember the sacrifice of so many. Plus, thank you to all those in healthcare that are working today instead of being with family and friends.

Memorial Day

Memorial Day - Boot

Memorial Day - Graves

Correlations and Research Results: Do They Match Up? (Part 2 of 2)

Posted on May 27, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://radar.oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article described the benefits of big data analysis, along with some of the formal, inherent risks of using it. We’ll go even more into the problems of real-life use now.

More hidden bias

Jeffrey Skopek pointed out that correlations can perpetuate bias as much as they undermine it. Everything in data analysis is affected by bias, ranging from what we choose to examine and what data we collect to who participates, what tests we run, and how we interpret results.

The potential for seemingly objective data analysis to create (or at least perpetuate) discrimination on the basis of race and other criteria was highlighted recently by a Bloomberg article on Amazon Prime deliveries. Nobody thinks that any Amazon.com manager anywhere said, “Let’s not deliver Amazon Prime packages to black neighborhoods.” But that was the natural outcome of depending on data about purchases, incomes, or whatever other data the company crunched to produce decisions about deliveries. (Amazon.com quickly promised to eliminate the disparity.)

At the conference, Sarah Malanga went over the comparable disparities and harms that big data can cause in health care. Think of all the ways modern researchers interact with potential subjects over mobile devices, and how much data is collected from such devices for data analytics. Such data is used to recruit subjects, to design studies, to check compliance with treatment, and for epidemiology and the new Precision Medicine movement.

In all the same ways that the old, the young, the poor, the rural, ethnic minorities, and women can be left out of commerce, they can be left out of health data as well–with even worse impacts on their lives. Malanga reeled out some statistics:

  • 20% of Americans don’t go on the Internet at all.

  • 57% of African-Americans don’t have Internet connections at home.

  • 70% of Americans over 65 don’t have a smart phone.

Those are just examples of ways that collecting data may miss important populations. Often, those populations are sicker than the people we reach with big data, so they need more help while receiving less.

The use of electronic health records, too, is still limited to certain populations in certain regions. Thus, some patients may take a lot of medications but not have “medication histories” available to research. Ameet Sarpatwari said that the exclusion of some populations from research makes post-approval research even more important; there we can find correlations that were missed during trials.

A crucial source of well-balanced health data is the All Payer Claims Databases that 18 states have set up to collect data across the board. But a glitch in employment law, highlighted by Carmel Shachar, exempts self-funding employers from sending their health data to the databases. This will most likely take a fix from Congress. Until that happens, researchers and public health officials will lack the comprehensive data they need to improve health outcomes, and the 12 states that have started their own APCD projects may abandon them.

Other rectifications cited by Malanga include an NIH requirement for studies funded by it to include women and minorities–a requirement Malanga would like other funders to adopt–and the FCC’s Lifeline program, which helps more low-income people get phone and Internet connections.

A recent article at the popular TechCrunch technology site suggests that the inscrutability of big data analytics is intrinsic to artificial intelligence. We must understand where computers outstrip our intuitive ability to understand correlations.

Correlations and Research Results: Do They Match Up? (Part 1 of 2)

Posted on May 26, 2016 | Written By

Andy Oram is an editor at O'Reilly Media. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Eight years ago, a widely discussed issue of WIRED Magazine proclaimed cockily that current methods of scientific inquiry, dating back to Galileo, were becoming obsolete in the age of big data. Running controlled experiments on limited samples just has too many limitations and takes too long. Instead, we will take whatever data we conveniently have at hand–purchasing habits for consumers, cell phone records for everybody, Internet-of-Things data generated in the natural world–and run statistical methods over it to find correlations.

Correlations were spotlighted at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Although the speakers expressed a healthy respect for big data techniques, they pinpointed their limitations and affirmed the need for human intelligence in choosing what to research, as well as how to use the results.

Petrie-Flom annual 2016 conference


A word from our administration

A new White House report also warns that “it is a mistake to assume [that big data techniques] are objective simply because they are data-driven.” The report highlights the risks of inherent discrimination in the use of big data, including:

  • Incomplete and incorrect data (particularly common in credit rating scores)

  • “Unintentional perpetuation and promotion of historical biases”

  • Poorly designed algorithmic matches

  • “Personalization and recommendation services that narrow instead of expand user options”

  • Assuming that correlation means causation

The report recommends “bias mitigation” (page 10) and “algorithmic systems accountability” (page 23) to overcome some of these distortions, and refers to a larger FTC report that lays out the legal terrain.

Like the WIRED article mentioned earlier, this gives us some background for discussions of big data in health care.

Putting the promise of analytical research under the microscope

Conference speaker Tal Zarsky offered both fulsome praise and specific cautions regarding correlations. As the WIRED Magazine issue suggested, modern big data analysis can find new correlations between genetics, disease, cures, and side effects. The analysis can find them far more cheaply and quickly than randomized clinical trials. This can lead to more cures, and has the other salutary effect of opening a way for small, minimally funded start-up companies to enter health care. Jeffrey Senger even suggested that, if analytics such as those used by IBM Watson are good enough, doing diagnoses without them may constitute malpractice.

W. Nicholson Price, II focused on the danger of the FDA placing too many strict limits on the use of big data for developing drugs and other treatments. Instead of making data analysts back up everything with expensive, time-consuming clinical trials, he suggested that the FDA could set up models for the proper use of analytics and check that tools and practices meet requirements.

One of the exciting impacts of correlations is that they bypass our assumptions and can uncover associations we never would have expected. The poster child for this effect is the notorious beer-and-diapers connection found by one retailer. This story has many nuances that tend to get lost in the retelling, but perhaps the most important point to note is that a retailer can depend on a correlation without having to ascertain the cause. In health, we feel much more comfortable knowing the cause of the correlation. Price called this aspect of big data search “black box medicine.” Saying that something works, without knowing why, raises a whole list of ethical concerns.

A correlation between stomach pain and a disease can’t tell us whether the stomach pain led to the disease, the disease caused the stomach pain, or both are symptoms of a third underlying condition. Causation can make a big difference in health care. It can warn us to avoid a treatment that works 90% of the time (we’d like to know who the other 10% of patients are before they get a treatment that fails). It can help uncover side effects and other long-term effects–and perhaps valuable off-label uses as well.
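A toy simulation makes the third-variable case concrete. In this sketch (entirely synthetic data, with hypothetical variable names), an unobserved underlying condition drives both "stomach pain" and "disease," producing a strong correlation even though neither one causes the other:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(42)

# An unobserved condition drives both symptoms; neither symptom
# causes the other.
condition = [random.random() for _ in range(500)]
stomach_pain = [c + random.gauss(0, 0.2) for c in condition]
disease = [c + random.gauss(0, 0.2) for c in condition]

r = pearson(stomach_pain, disease)
print(f"correlation between the two symptoms: {r:.2f}")  # strong
```

No amount of staring at `r` reveals which of the three causal stories is true; only a controlled experiment or domain knowledge can distinguish them.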

Zarsky laid out several reasons why a correlation might be wrong.

  • It may reflect errors in the collected data. Good statisticians control for error through techniques such as discarding outliers, but if the original data contains enough bad apples, the barrel will go rotten.

  • Even if the correlation is accurate for the collected data, it may not be accurate in the larger population. The correlation could be a fluke, or the statistical sample could be unrepresentative of the larger world.

Zarsky suggests using correlations as a starting point for research, but backing them up by further randomized trials or by mathematical proofs that the correlation is correct.
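The second failure mode, the statistical fluke, is easy to demonstrate: comb through enough small samples of pure noise and an impressive-looking correlation will turn up by chance. A minimal sketch:

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(1)

# 500 pairs of completely independent variables, 15 samples each --
# roughly the size of a small pilot cohort.
best = 0.0
for _ in range(500):
    xs = [random.random() for _ in range(15)]
    ys = [random.random() for _ in range(15)]
    best = max(best, abs(pearson(xs, ys)))

print(f"strongest 'correlation' found in pure noise: {best:.2f}")
```

This is exactly why Zarsky's advice matters: a correlation mined from data is a hypothesis to be confirmed, not a result.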

Isaac Kohane described, from the clinical side, some of the pros and cons of using big data. For instance, data collection helps us see that choosing a gender for intersex patients right after birth produces a huge amount of misery, because the doctor guesses wrong half the time. However, he also cited times when data collection can be confusing for the reasons listed by Zarsky and others.

Senger pointed out that after drugs and medical devices are released into the field, data collected on patients can teach developers more about risks and benefits. But this also runs into the classic risks of big data. For instance, if a patient dies, did the drug or device contribute to death? Or did he just succumb to other causes?

We already have enough to make us puzzle over whether we can use big data at all–but there’s still more, as the next part of this article will describe.

Insights from #WEDI25

Posted on May 25, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network. He can be found on Twitter: @techguy and @ehrandhit and on LinkedIn.

This week I’ve been spending time at the WEDI annual conference in Salt Lake City. I’ve never been to a conference with a more diverse set of attendees, and I really enjoyed the range of perspectives represented. I was a little disappointed (but not really surprised) that clinicians weren’t part of the event. I understand why it’s hard to get them to attend an event like this, but it’s unfortunate that the physician voice isn’t part of the discussion.

Here’s a quick list of some insights I tweeted during the conference which could be useful to you:

E-patient Update: Remote Monitoring Leaves Me Out of The Loop

Posted on May 24, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

As some readers may recall, I don’t just write about digital health deployment — I live it. To be specific, my occasional heart arrhythmia (Afib) is being tracked remotely by a device implanted in my chest near my heart. My cardiac electrophysiologist implanted the Medtronic device – a “loop recorder” roughly the size of a cigarette lighter, though flatter – during a cardiac ablation procedure.

The setup works like this:

  • The implanted device tracks my heart rhythm, recording any events that fit criteria programmed into it. (Side note: It’s made entirely of plastic, which means I need not fear MRIs. Neat, huh?)
  • The setup also includes a bedside station, which comes with a removable, mouse-shaped object that I can place on my chest to record any incidents that concern me. I can also record events in real time, when I’m on the road, using a smaller device that fits on my key ring.
  • Whether I record any perceived episodes or not, the bedside station downloads whatever information is stored in the loop recorder at midnight each night, then transmits it to the cardiac electrophysiologist’s office.
  • The next day, a tech reviews the records. If any unusual events show up, the tech notifies the doctor, who reaches out to me if need be.
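The flow above can be sketched in a few lines of code. Everything here is a hypothetical illustration (the actual Medtronic pipeline and its programmed event criteria are proprietary): recordings that match the criteria, or that the patient flagged, get queued for the tech's morning review.

```python
from dataclasses import dataclass

@dataclass
class Recording:
    timestamp: str
    heart_rate_bpm: int
    patient_marked: bool = False  # patient pressed the handheld recorder

def nightly_batch(recordings, high_bpm=150, low_bpm=40):
    """Mimic the midnight download: keep events that match the
    programmed criteria, plus anything the patient flagged, and
    queue them for the tech's review the next morning."""
    return [
        r for r in recordings
        if r.patient_marked
        or r.heart_rate_bpm >= high_bpm
        or r.heart_rate_bpm <= low_bpm
    ]

day = [
    Recording("2016-05-23T09:14", 72),                        # routine
    Recording("2016-05-23T19:02", 132, patient_marked=True),  # felt palpitations
    Recording("2016-05-23T21:40", 164),                       # exceeds threshold
]
for event in nightly_batch(day):
    print(f"flag for tech review: {event.timestamp} @ {event.heart_rate_bpm} bpm")
```

The design point worth noting is that filtering happens centrally, after transmission: the implant records broadly and the review criteria can be tuned without touching the device.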

Now, don’t get me wrong, this is all very cool. And these devices have benefited me already, just a month into their use. For example, one evening last week I was experiencing some uncomfortable palpitations, and wondered whether I had reason for concern. So I called the cardiac electrophysiologist’s after-hours service and got a call back from the on-call physician.

When she and I spoke, her first response was to send me to my local hospital. But once I informed her that the device was tracking my heart rhythms, she accessed them and determined that I was only experiencing mild tachycardia. That was certainly a relief.

No access for patients

That being said, it bugs me that I have no direct access to this information myself. Don’t get me wrong, I understand that interacting with heart rhythm data is complicated. Certainly, I can’t do as much in response to that information as I could if the device were, say, tracking my blood glucose levels.

That being said, my feeling is that I would benefit from knowing more about how my heart is working, or failing to work appropriately in the grand scheme of things, even if I can’t interpret the raw data the device produces. For example, it would be great if I could view a chart that showed, week by week, when events occurred and what time they took place.
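A patient-facing summary like that needn't be complicated. As a sketch (hypothetical timestamps, and assuming the raw event log were exposed to the patient at all), grouping events by ISO week is only a few lines:

```python
from collections import Counter
from datetime import datetime

# Hypothetical event timestamps, as a patient-facing export might list them.
events = [
    "2016-05-02T23:15", "2016-05-04T01:30",  # ISO week 18
    "2016-05-11T22:45",                       # ISO week 19
    "2016-05-18T00:10", "2016-05-19T23:50",  # ISO week 20
]

def weekly_summary(timestamps):
    """Group events by ISO week so a patient can spot clusters at a glance."""
    counts = Counter(datetime.fromisoformat(t).isocalendar()[1] for t in timestamps)
    return dict(sorted(counts.items()))

for week, n in weekly_summary(events).items():
    print(f"week {week}: {'*' * n} ({n} events)")
```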

Of course, I don’t know whether having this data would have any concrete impact on my life. Still, it bothers me that such remote monitoring schemes have at their core an assumption that patients don’t need this information. I’d argue that Medtronic and its peers should be thinking of ways to loop patients in any time their data is being collected in an outpatient setting. Don’t we have an app for that, and if not, why not?

Unfortunately, no matter how patients scream and yell about this, I doubt we’ll make much progress until doctors raise their voices too. So if you’re a physician reading this, I hope you’re willing to get involved since patients deserve to know what’s going on with their bodies. And if you have the means to help them know, make it happen!

The Power Of Presenting Health Data In Context

Posted on May 23, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Today I read an interesting article on the 33 charts blog, written by the thoughtful pediatrician Bryan Vartabedian. In the article, Dr. Vartabedian describes an encounter with data at Texas Children’s Hospital:

When I walked into the patient’s room, I found this: A massive wall-mounted touchscreen at the foot of the bed with all of the patient’s critical data beautifully displayed…All of the patient’s Epic data is right there in real-time. Ins and outs, blood gases and trending art line readings in beautiful graphic display. And what isn’t there is accessible with the poke of a finger.

He goes on to suggest that by displaying the data in this way, the hospital is changing how care is delivered:

The concept of decentralized, contextually-appropriate channeling of information is beginning to disrupt the clinical encounter. As ambient interfaces infiltrate the clinical environment, the right data will increasingly find us and our patients precisely at the point of care where it’s actionable.

I really enjoyed reading this piece, as it bottom-lined something I’ve had difficulty articulating. It made me realize that I’ve been wondering if the data that’s awkward to use on a laptop or PC can be used to greater effect elsewhere. After all, it’s not that doctors dislike access to EMR data — it’s just that they dislike the impact EMRs have on their work habits.

It’s not just workflow

Much of the discussion about fostering EMR adoption by physicians focuses on improving user interfaces and workflow. And that is a legitimate line of inquiry. After all, healthcare organizations will never see the full benefits of their EMR investment unless clinicians can actually use them.

But Dr. Vartabedian makes the useful point that putting such data in the right context is also critical. Sure, making sure clinicians can get to clinical data via smart phone and tablet is a step in the right direction, as it allows them to use it in a more flexible manner. But ultimately, the data is the most useful when it’s presented in the right form, one which also allows patients to consume it.

For some clinical settings, the large touchscreen display he describes may be appropriate. For others, it might be a bedside tablet that the patient and doctor can share. Or perhaps the best approach for presenting healthcare data contextually hasn’t been invented yet. But regardless of what technology works best, organizing health data and presenting it in the right context is a powerful strategy.

Creating context is possible

Of course, talking about providing contextual healthcare data and delivering it are two different things. The presentation that works for Dr. Vartabedian may not work for other clinicians, and developing the unified data set needed to fuel these efforts can be taxing. Not only that, developing the right criteria for displaying contextual data could be a major challenge.

Still, the tools needed to create the right context for EMR data delivery exist now, including interactive health tracking devices, smartphone apps and tablets. Meanwhile, these devices and platforms are delivering an ever-richer data set to clinicians. Toss in data from remote monitoring devices and the options multiply. What’s more, phones with GPS functions can provide location-based data dynamically.

Sure, it may not be practical to tackle this problem while your EMR implementation is young. But it would be smart to at least turn your imagination loose. If Dr. Vartabedian is right, putting data in context will soon be a requirement rather than an option, and it’s best to be prepared.

3 Benefits of Virtual Care Infographic

Posted on May 20, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network. He can be found on Twitter: @techguy and @ehrandhit and on LinkedIn.

The people at Carena have put out an infographic that looks at 3 ways virtual clinics are improving care quality. I’d like to see better sources since most of the sources for the data in this infographic come from virtual care providers. However, it’s also interesting to look at the case virtual care providers are making so we can test if they’re living up to those ideals.

What do you think of these 3 benefits? Are they achievable through virtual care?

3 Ways Virtual Clinics are Improving Care Quality

Telemedicine Rollouts Are Becoming More Mature

Posted on May 19, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

For a long time, telemedicine was a big idea whose time had not come. Initially, the biggest obstacle to providing video consults was consumer bandwidth. Once most consumers had high-speed Internet connections, proponents struggled to get commercial insurers and federal payers to reimburse providers for telemedicine. We also had to deal with medical licensure, which most companies handle by licensing their providers across multiple states (crazy, but workable). Now, with both categories of payers increasingly paying for such services and patients increasingly willing to pay out of pocket, providers need to figure out which telemedicine business models work.

If I had to guess, I would’ve told you that very few providers have reached the stage where they had developed a fairly mature telemedicine service line. But data gathered by researchers increasingly suggests that I am wrong.

In fact, a new study by KPMG found that about 25% of healthcare providers have implemented telehealth and telemedicine programs which have achieved financial stability and improved efficiency. It should be noted that the study only involved 120 participants who reported they work for providers. Still, I think the results are worth a look.

Despite the success enjoyed by some providers with telemedicine programs, a fair number of providers are at a more tentative stage. Thirty-five percent of respondents said they didn’t have a virtual care program in place, and 40% said they had just implemented a program. But what stands out to me is that the majority of respondents had telehealth initiatives underway.

Twenty-nine percent of survey respondents said that one of the key reasons they were in favor of telehealth programs is that they felt it would increase patient volumes and loyalty. Other providers have different priorities. Seventeen percent felt that implementing telehealth would help with care coordination for high-risk patients, another 17% said they wanted to reduce the cost of access to medical specialists, and 13% said they were interested in telemedicine due to consumer demand.

When asked what challenges they faced in implementing telehealth, 19% said they had other tech priorities, 18% were unsure they had a sustainable business model, and 18% said their organization wasn’t ready to roll out a new technology.

As I see it, telemedicine is finally ready to shift out of neutral. We’re probably past the early adopter stage, and as soon as influential players perfect their strategy for telemedicine rollouts, their industry peers are sure to follow.

What remains to be seen is whether providers see telemedicine as integral to the care they deliver, or primarily as a gateway to their brick-and-mortar services. I’d argue that telemedicine services should be positioned as a supplement to live care, a step towards greater continuity of care and the logical next step in going digital. Those who see it as a sideline, or a loyalty builder with no inherent clinical value, are unlikely to benefit as much from a telemedicine rollout.

Admittedly, integrating virtual care poses a host of new technical and administrative problems. But like it or not, telemedicine is important to the future of healthcare. Hold it at arm’s length at your peril.

Healthcare Consent and its Discontents (Part 3 of 3)

Posted on May 18, 2016 | Written By

Andy Oram is an editor at O'Reilly Media. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

The previous section of this article rated the pros and cons of new approaches to patient consent and control over data. Here we’ll look at emerging risks.

Privacy solidarity

Genetics presents new ethical challenges–not just in the opportunity to change genes, but even simply in sequencing them. These risks affect not only the individual: other members of her family and ethnic group can face discrimination thanks to genetic weaknesses revealed. Isaac Kohane said that the average person has 40 genetic markers indicating susceptibility to some disease or other. Furthermore, we sometimes disagree on what we consider a diseased condition.

Big data, particularly with genomic input, can lead to group harms, so Brent Mittelstadt called for moving beyond an individual view of privacy. Groups also have privacy needs (a topic I explored back in 1998). It’s not enough for an individual to consider the effect of releasing data on his own future; he must also weigh the effect on family members, members of his racial group, etc. Similarly, Barbara Evans said we have to move from self-consciousness to social consciousness. But US and European laws consider privacy and data protection only on the basis of the individual.

The re-identification bogey man

A good many references were made at the conference to the increased risk of re-identifying patients from supposedly de-identified data. Headlines are made when some researcher manages to uncover a person who thought himself anonymous (and who database curators thought was anonymous when they released their data sets). In a study conducted by a team that included speaker Catherine M. Hammack, experts admitted that there is eventually a near-100% probability of re-identifying each person’s health data. The culprit in all this is the burgeoning set of data collected from people as they purchase items and services, post seemingly benign news about themselves on social media, and otherwise participate in modern life.

I think the casual predictions of the end of anonymity we hear so often are unnecessarily alarmist. The field of anonymity has progressed a great deal since Latanya Sweeney famously re-identified a patient record for Governor William Weld of Massachusetts. Re-identifications carried out since then, by Sweeney and others, have taken advantage of data that was not anonymized (people just released it with an intuitive assumption that they could not be re-identified) or that was improperly anonymized, not using recommended methods.

Unfortunately, the “safe harbor” in HIPAA (designed precisely for medical sites lacking the skills to de-identify data properly) enshrines bad practices. Still, in a HIPAA challenge cited by Ameet Sarpatwari, only two of 15,000 individuals were re-identified. The mosaic effect is still more of a theoretical weakness than an immediate threat.
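For context, two of Safe Harbor's actual generalizations are easy to state in code: ZIP codes are truncated to their first three digits and ages over 89 are pooled into a single "90+" bucket. The sketch below is a simplified illustration, not the full 18-identifier rule (for instance, it omits the caveat that ZIP3 areas with fewer than 20,000 residents must be reported as 000):

```python
def safe_harbor_generalize(record):
    """Apply two of HIPAA Safe Harbor's generalizations (simplified):
    truncate the ZIP code to its first three digits and pool all
    ages over 89 into a '90+' category."""
    out = dict(record)
    out["zip"] = record["zip"][:3] + "**"
    out["age"] = "90+" if record["age"] > 89 else record["age"]
    return out

print(safe_harbor_generalize({"zip": "02138", "age": 93, "diagnosis": "afib"}))
```

The criticism in the paragraph above is precisely that such mechanical rules can be applied without any analysis of how the remaining fields combine with outside data, which is where proper de-identification methods go further.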

I may be biased, because I edited a book on anonymization, but I would offer two challenges to people who cavalierly dismiss anonymization as a useful protection. First, if we threw up our hands and gave up on anonymization, we couldn’t even carry out a census, which is mandated in the U.S. Constitution.

Second, anonymization is comparable to encryption. We all know that computer speeds are increasing, as is the sophistication of re-identification attacks. The first provides a near-guarantee that, eventually, our current encrypted conversations will be decrypted. The second, similarly, guarantees that anonymized data will eventually be re-identified. But we all still visit encrypted web sites and use encryption for communications. Why can’t we similarly use the best in anonymization?

A new article in the Journal of the American Medical Association exposes a gap between what doctors consider adequate consent and what’s meaningful for patients, blaming “professional indifference” and “organizational inertia” for the problem. In research, the “reasonable-patient standard” is even harder to define and achieve.

Patient consent doesn’t have to go away. But it’s getting harder and harder for patients to anticipate the uses of their data, or even to understand what data is being used to match and measure them. However, precisely because we don’t know how data will be used or how patients can tolerate it, I believe that incremental steps would be most useful in teasing out what will work for future research projects.

Healthcare Consent and its Discontents (Part 2 of 3)

Posted on May 17, 2016 I Written By


The previous section of this article laid out what is wrong with informed consent today. We’ll continue now to look at possible remedies.

Could we benefit from more opportunities for consent?

Donna Gitter said that the Common Rule governing research might be updated to cover de-identified data as well as personally identifiable information. The impact of this on research, of course, would be incalculable. But it might lead to more participation in research, because 72% of patients say they would like to be asked for permission before their data is shared even in de-identified form. Many researchers, such as conference speaker Liza Dawson, would rather preserve the right to share de-identified data without consent, but with protections in place.

To link multiple data sets, according to speaker Barbara Evans, we need an iron-clad method of ensuring that the data for a single individual is accurately linked. This requirement butts up against the American reluctance to assign a single ID to a patient. The reluctance is well-founded, because tracking individuals throughout their lives can lead to all kinds of seamy abuses.
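The linkage problem Evans describes can be approximated without a universal patient ID by deriving an opaque token from demographic fields with a keyed hash, so that only holders of the key can link records across data sets. This is an illustrative sketch of one common technique, not a proposal made at the conference; the key, the field choices, and the normalization are all assumptions.

```python
import hmac, hashlib

# Each data holder derives the same opaque token from demographics it
# already has, using a secret key held only by an authorized linker.
# Records can then be joined on the token without a universal patient ID.
LINK_KEY = b"secret-held-by-an-honest-broker"  # hypothetical

def linkage_token(last_name, birth_date, sex):
    msg = f"{last_name.upper()}|{birth_date}|{sex}".encode()
    return hmac.new(LINK_KEY, msg, hashlib.sha256).hexdigest()

a = linkage_token("Doe", "1970-01-01", "F")
b = linkage_token("doe", "1970-01-01", "F")  # case differences normalize away
print(a == b)  # True: same person, same token
```

The design choice matters: because the hash is keyed, someone who obtains the tokens but not the key cannot recompute them from census-style demographic lists, which blunts the tracking abuses that make a single national ID so worrisome.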

One solution would be to give each individual control over a repository where all of her data would go. That solution implies that the individual would also control each release of the data. A lot of data sets could easily vanish from the world of research, as individuals die and successors lose interest in their data. We must also remember that public health requires the collection of certain types of data even if consent is not given.

Another popular reform envisioned by health care technologists, mentioned by Evans, is a market for health information. This scenario is part of a larger movement known as Vendor Relationship Management, which I covered several years ago. There is no doubt that individuals generate thousands of dollars’ worth of information, in health care records and elsewhere. Speaker Margaret Foster Riley claimed that the data collected from your loyalty card by the grocer is worth more than the money you spend there.

So instead of seeking informed consent, researchers could offer individuals incentives to share their information. Individuals would probably hire brokers to check that the requested uses conform to the individuals’ ethics, and that the price offered is fair.

Giving individuals control and haggling over data makes it harder, unfortunately, for researchers to assemble useful databases. First of all, modern statistical techniques (which fish for correlations) need huge data sets. Even more troubling, partial data sets are likely to be skewed demographically. Perhaps only people who need some extra cash will contribute their data. Or perhaps only highly educated people. Someone gets left out.

These problems exist even today, because our clinical trials and insurance records are skewed by income, race, age, and gender. Theoretically, it could get even worse if we eliminate the waiver that lets researchers release de-identified data without patient consent. Disparities in data sets and research were heavily covered at the Petrie-Flom conference, as I discuss in a companion article.

Privacy, discrimination, and other legal regimes

Several speakers pointed out that informed consent loses much of its significance when multiple data sets can be combined. The mosaic effect adds another layer of uncertainty about what will happen to data and what people are consenting to when they release it.
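A toy illustration of the mosaic effect, in the spirit of Sweeney’s voter-roll re-identification of Governor Weld: two releases that each look harmless alone can pin down an individual when joined on shared quasi-identifiers. The data below are entirely invented.

```python
# Two releases each look harmless alone; joined on quasi-identifiers
# (ZIP, birth year, sex) they single out one individual. Invented data.

health = [  # "de-identified" hospital discharge data
    {"zip": "02138", "byear": 1945, "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "byear": 1980, "sex": "F", "diagnosis": "asthma"},
]
voters = [  # public voter roll with names attached
    {"name": "W. Weld", "zip": "02138", "byear": 1945, "sex": "M"},
    {"name": "J. Smith", "zip": "02139", "byear": 1962, "sex": "F"},
]

def quasi(r):
    return (r["zip"], r["byear"], r["sex"])

matches = [(v["name"], h["diagnosis"])
           for h in health for v in voters if quasi(h) == quasi(v)]
print(matches)  # [('W. Weld', 'hypertension')]
```

The point is that no field in the health release is an identifier on its own; the identification emerges only from the combination with a second, independently released data set, which is exactly what undermines the consent a patient gave to the first release.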

Nicolas Terry pointed out that American law tends to address privacy on a sector-by-sector basis, having one law for health records, another for student records, and so forth. He seemed to indicate that the European data protection regime, which is comprehensive, would be more appropriate now that the boundary between health data and other forms of data is blurring. Sharona Hoffman said that employers and insurers can judge applicants’ health on the basis of such unexpected data sources as purchases at bicycle stores, voting records (healthy people have more energy to get involved in politics), and credit scores.

Mobile apps are notorious for opening new leaks of personal data. Mobile operating systems fastidiously divide up access rights and require apps to request them during installation, but most of us just click Accept for everything, including access the apps have no legitimate need for, such as our contacts and calendar. After all, there’s no way to deny an app one specific access right while still installing it.

And lots of these apps abuse their access to data. So we remain in a contradictory situation where certain types of data (such as data entered by doctors into records) are strongly protected, while other types that are at least as sensitive lack minimal protections. Although the app developers are free to collect and sell our information, they often promise to aggregate and de-identify it, putting them at the same level as traditional researchers. But no one requires the app developers to do so completely or accurately.

To make employers and insurers pause before seeking out personal information, Hoffman suggested requiring data brokers, and those who purchase their data, to publish the rules and techniques they employ to make use of the data. She pointed to the precedent of medical tests for employment and insurance coverage, where such disclosure is necessary. But I’m sure this proposal would be fought so heavily, by those who currently carry out their data spelunking under cover of darkness, that we’d never get it into law unless some overwhelming scandal prompted extreme action. Adrian Gropper called for regulations requiring transparency in every use of health data, and for the use of open source algorithms.

Several speakers pointed out that privacy laws, which tend to cover the distribution of data, can be supplemented by laws regarding the use of data, such as anti-discrimination and consumer protection laws. For instance, Hoffman suggested extending the Americans with Disabilities Act to cover people with heightened risk of suffering from a disability in the future. The Genetic Information Nondiscrimination Act (GINA) of 2008 offers a precedent. Universal health insurance coverage won’t solve the problem, Hoffman said, because businesses may still fear the lost work time and need for workplace accommodations that spring from health problems.

Many researchers are not sure whether their use of big data, such as the “data exhaust” generated by people in everyday activities, would be permitted under the Common Rule. In a particularly wonky presentation (even for this conference), Laura Odwazny suggested that the Common Rule could permit the use of data exhaust because the risks it presents are no greater than “daily life risks,” which are the keystone for applying the Common Rule.

The final section of this article will look toward emerging risks that we are just beginning to understand.