Previous segments of this article explained what makes data sharing difficult in four major areas of Internet data: money, personal data, media content, and government information. Now it’s time to draw some lessons for the health care field.
Personal Health Data
So let’s look now at our health data. It’s clearly sensitive, because disclosure can lead to discrimination by employers and insurers, as well as ostracism by the general public. For instance, an interesting article highlights the prejudice that recovering opiate addicts face from friends and co-workers if they dare to reveal that they are successfully dealing with their addiction.
The value of personal health data is caught up with our very lives. We cannot change our diagnoses or genetic predispositions to disease the way we can change our bank accounts or credit cards. At the same time, whatever information we can provide about ourselves is of immense value to researchers who are trying to solve the health conditions we suffer from.
So we can assume that health data has an enhanced value and requires more protection than other types of personal data.
Currently, we rarely control our data. Anything we tell doctors is owned by them. HIPAA strictly controls the sharing of such data (especially as it was clarified a couple of years ago in the handling of third parties known as “business associates”). But doctors have many ways to deny us access to our own data. One of my family members goes to a doctor who committed the sin of changing practices. We had to pay the old practice to transfer the records to the new one. (I have written about problems with interoperability and data exchange in many other contexts, including blog posts about the Health Datapalooza and the HIT Standards Committee. Data exchange problems hinder research, big data inquiries, and clinical interventions.)
A doctor might well claim, “Why shouldn’t I own that data? Didn’t I do the exam? Didn’t I order the test whose data is now in the record?” Using that logic, the doctor should grant the lab ownership of the test. Now that patients can order their own medical tests (at least in Arizona), how does this dynamic around ownership change? And as more and more patients collect data on themselves using things such as the Apple Watch, network-connected scales, and fitness devices — data that may contain inaccuracies but is still useful for understanding people’s behavior and health status — how does this affect the power balance between a patient and the healthcare provider, or a researcher pursuing a clinical trial?
It’s also interesting to note that although HIPAA covers data collected by people who treat us and insurers who pay for the treatment, it has no impact on data in other settings. In particular, anything we choose to share online joins the enormous stream of public data without restrictions on use.
And it’s disturbing how freely data can be shared with marketers. For instance, when Vermont tried to restrain pharmacies from selling data about prescriptions to marketers, it was overruled by the U.S. Supreme Court. The court took it for granted that pharmacies would adequately de-identify patients, but this is by no means assured.
What are the competing priorities, then, about protection of health data? On the research side — where data can really help patients by finding cures or palliative measures — pressures are increasing to loosen our personal control over data. Laws and regulations are being amended to override the usual restrictions placed on researchers for the reuse of patient data.
The argument for reform is that researchers often find new uses for old data, and that the effort of contacting patients and getting permission to reuse the data imposes prohibitive expenses on researchers.
Certainly, I would get annoyed to be asked every week to approve the particular reuse of my personal data. But I’d rather be asked than have my preferences overridden. In the Internet age, I find it ridiculous to argue that researchers would be overly burdened to request access to data for new uses.
A number of efforts have been launched to give researchers a general, transferable consent to patient data. Supposedly, the patient would grant a general release of data for some class of research at the beginning of data collection. But these efforts have all come to naught. Remember that a patient is often asked for consent to release data at a very tense moment — just after being diagnosed with a serious disease or while on the verge of starting a difficult treatment regimen. Furthermore, the task of designing a general class of research is a major semantic issue. It would require formalizing in software what the patient does and does not allow — and no one has solved that problem.
How, then, do I suggest resolving the question of how we should handle patient data? First, patients need to control all data about themselves. All clinicians, pharmacies, labs, and other institutions exist to serve patients and support their health. They can certainly validate data — for instance, by providing digital signatures indicating the diagnoses, test results, and other information are accurate — but they do not own the data.
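To make the validation idea concrete, here is a minimal sketch of a provider attesting to a record it does not own. It uses Python's standard-library HMAC as a stand-in for a real public-key digital signature scheme (which a production system would use instead); the record fields and key are hypothetical.

```python
import hashlib
import hmac
import json

def sign_record(record: dict, provider_key: bytes) -> str:
    """The provider attests that the record's contents are accurate.

    Signing validates the data without implying ownership of it.
    """
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(provider_key, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str, provider_key: bytes) -> bool:
    """Anyone holding the record can check the provider's attestation."""
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(provider_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

# A hypothetical lab result, held by the patient but signed by the clinic.
record = {"patient": "p-001", "test": "A1C", "result": 5.6}
key = b"clinic-signing-key"
sig = sign_record(record, key)
assert verify_record(record, sig, key)

# Tampering with the result invalidates the attestation.
tampered = dict(record, result=4.9)
assert not verify_record(tampered, sig, key)
```

The point of the design is that the signature travels with the data wherever the patient takes it, so the clinic's validation survives even after the patient moves the record to a new repository.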
A look at how we’re protecting money on the Internet may help us understand the urgency of protecting health data: storing it securely, encrypting it, and making outside organizations jump through hoops to access it.
Ownership of patient data is currently as murky as ownership of any other type of personal data, HIPAA notwithstanding. We can use many of the same arguments and concepts for health data that we’ve seen for other personal data. As with government data, we can hold interesting discussions about how much difference anonymization makes to ownership — do you have no right to restrict the use of your health or government data once it is supposedly anonymized?
Dr. Adrian Gropper, CTO of Patient Privacy Rights, says that the concept of “ownership” is not helpful for patient data. It is better in terms of both law and computer science to speak of authorization: who can look at the data and who grants the right to look at it. Gropper works on the open source HEART WG project, which is creating an OAuth-based system to support patient control, and which he and I have written about on the Radar site.
The corollary of this principle is that patients need repositories for their data that are easy to manage. HEART WG can tie together data in different repositories — the patient’s, the clinicians’, and others — and control the flow from one repository to another.
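Gropper's authorization framing can be illustrated with a toy model. This is not the HEART or OAuth protocol itself — just a sketch, with hypothetical names, of the shift it implies: the patient, not the provider, is the party that grants and revokes the right to look at each category of data.

```python
class PatientAuthorization:
    """Toy authorization model: access is granted, not owned.

    The patient holds the grant table; providers and researchers
    appear only as requesters who may be permitted to read.
    """

    def __init__(self, patient_id: str):
        self.patient_id = patient_id
        # requester id -> set of record categories the patient has opened
        self.grants: dict[str, set[str]] = {}

    def grant(self, requester: str, categories: set[str]) -> None:
        self.grants.setdefault(requester, set()).update(categories)

    def revoke(self, requester: str) -> None:
        # Revocation is the patient's prerogative, not the provider's.
        self.grants.pop(requester, None)

    def may_read(self, requester: str, category: str) -> bool:
        return category in self.grants.get(requester, set())

# Hypothetical usage: a patient opens labs and diagnoses to one doctor.
auth = PatientAuthorization("p-001")
auth.grant("dr-smith", {"labs", "diagnoses"})
assert auth.may_read("dr-smith", "labs")
assert not auth.may_read("dr-smith", "genomics")

# Changing doctors no longer means paying to move "their" records;
# the patient simply revokes one grant and issues another.
auth.revoke("dr-smith")
assert not auth.may_read("dr-smith", "labs")
```

In a real OAuth-based deployment, the grant table would live in an authorization server under the patient's control, and the repositories holding the data would consult it before releasing anything.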
Finally, researchers must contact patients to explain how their data will be used and to request permission. With Internet tools, this should not be onerous for the researcher or the patient. Hey, everybody in medicine nowadays touts “patient engagement.” One is likely to get better data if one engages. So, let’s do it. And that way we can avoid the uncertain protection of anonymization or de-identification, which degrades patient data in order to render it harder to track back to an individual.
Researchers worry about request fatigue if individuals have to respond to every request manually, although I see this as a great opportunity for research projects to explain their goals and drum up public support. A number of organizations are trying to design systems to let individuals approve use of their data in advance, and I wish them the best, but all such attempts have shipwrecked on two unforgiving shoals. First is the impossibility of anticipating new research and the radically different directions it can take. Second is the trap of ontologies: who can define a useful concept such as “non-profit research” in terms strict enough to be written into computer programs? And how will the health care world agree on representations of the ontologies and produce perfectly interoperable computer programs to automate consent?
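The ontology trap is easy to see once you try to write such a consent class down. Here is a deliberately naive sketch (the purpose strings are hypothetical) of encoding "non-profit research" as a machine-checkable rule — and how quickly real requests fall outside it.

```python
# A naive attempt to formalize a patient's advance consent in software.
ALLOWED_PURPOSES = {"non-profit research"}

def consent_permits(purpose: str) -> bool:
    """Return True if the stated purpose matches the consent class."""
    return purpose in ALLOWED_PURPOSES

# The happy path works...
assert consent_permits("non-profit research")

# ...but the rigid category immediately misses cases the patient
# might well have wanted to allow, or wanted to refuse:
assert not consent_permits("nonprofit research")            # spelling variant
assert not consent_permits("academic-industry partnership") # hybrid funding
assert not consent_permits("non-profit research, resold")   # downstream reuse
```

Every fix — synonym lists, funding taxonomies, downstream-use clauses — adds another layer of ontology that every institution's software would have to implement identically, which is exactly the interoperability problem the article describes.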
Value, ownership, and protection are difficult questions on an Internet that was designed in the 1960s and 1970s as a loose, open platform. We can fill the gaps through policy measures and technical protections based on well-grounded principles. Patients care about their data and its privacy. We can give them the control they crave and deserve.