
Collaborating With Patients On Visit Agendas Improves Communication

Posted on April 26, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Maybe it’s because I spent many years as a reporter, but when I meet with a doctor I get all of my questions out, even if I don’t plan things out in advance. I realize that this barrage may be unnerving for some doctors, but if I need to fire off a bunch of questions to understand my care, I’m going to do it.

That being said, I realize most people are more like my family members. Both my husband and my mother feel overwhelmed at medical visits, and often fail to ask the questions they want answered. I don’t know whether they feel pressured by the rapid pace of the typical medical visit, are afraid to offend their doctor or have trouble figuring out what information would help them most, but clearly, they don’t feel in control of the situation.

Given their concerns, I wasn’t surprised to learn that letting patients create and share an agenda for their medical visit – before they see their provider – seems to improve physician-patient communication substantially. New research suggests that when patients set the agenda for their visit, both the patient and their doctor like the results.

Study details

The paper, which appeared in the Annals of Family Medicine, said that researchers conducted their study at Harborview Medical Center, a safety-net county hospital in Seattle. The researchers recruited patients and clinicians for the study between June 9 and July 22, 2015 at the HMC Adult Medicine Clinic. The 67-clinician primary care clinic serves about 5,000 patients per year.

When participating patients came in for a visit, a research assistant met them in the waiting room and gave them a laptop computer with the EMR interface displayed. The participating patients then typed their agenda for the visit in the progress notes section of their medical record. Clinicians then reviewed that agenda, either before entering the exam room or upon entering.

After the visit, patients were given a survey asking them for demographic information, self-reported health status and perceptions of the agenda-driven visit. Meanwhile, clinicians filled out a separate survey asking them for their gender, age, role in the clinic and their own perceptions of the patient agenda.

After reviewing the survey data, the researchers concluded that using a collaborative visit agenda is probably a good idea. Seventy-nine percent of patients and 74 percent of clinicians felt the agendas improved patient-clinician communication, and both groups wanted to keep using visit agendas (73 percent of patients and 82 percent of clinicians).

Flawed but still valuable

In closing, the authors admitted that the study had its technical limits, including the use of a small convenience sample at a single clinic with no comparison group. It’s also worth noting that the study drew from a vulnerable population which might not be representative of most healthcare consumers.

Nonetheless, the researchers feel these data point to a broader trend, in which patients have become increasingly comfortable with electronic health data. “The patient cogeneration of visit notes, facilitated by new EMR functionality, reflects a shift in the authorship and ‘ownership’ of [their data],” the study points out. (I can’t help but agree that this is the case, and moreover, that patients’ response to programs like OpenNotes supports their conclusion.)

I’m not sure if my mom or hubby would buy into this approach, but I imagine that if they did, they might find it helpful. Let’s hope the idea catches fire, and helps ordinary consumers take more control of their clinical relationships.

EMR Data Use For Medical Research Sparks Demand For Intermediaries

Posted on February 7, 2017 | Written By Anne Zieger

Over the last couple of years, it’s become increasingly common for clinical studies to draw on data gathered from EMRs — so common, in fact, that last year the FDA issued official guidance on how researchers should use such data.

Intermingling research observations and EMR-based clinical data poses different problems than provider-to-provider data exchanges. Specifically, the FDA recommends that when studies use EMR data in clinical investigations, researchers make sure that the source data are attributable, legible, contemporaneous, original and accurate, a formulation known as ALCOA by the feds.

It seems unlikely that most EMR data could meet the ALCOA standard at present. However, apparently the pharmas are working to solve this problem, according to a nice note I got from PR rep Jamie Adler-Palter of Bayleaf Communications.
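To make the ALCOA idea concrete, here’s a minimal sketch of what an automated screen for ALCOA-style source-data metadata might look like. The field names and their mapping to ALCOA attributes are my own illustrative assumptions, not anything drawn from the FDA guidance or a real EMR schema:

```python
# Hypothetical mapping of record metadata fields to ALCOA attributes.
# These field names are illustrative assumptions, not a real EMR schema.
REQUIRED_FIELDS = {
    "author_id": "attributable",       # who recorded the observation
    "recorded_at": "contemporaneous",  # when it was recorded
    "source_system": "original",       # where the first capture lives
}

def alcoa_gaps(record: dict) -> list:
    """Return the ALCOA attributes a record cannot demonstrate."""
    return [attr for field, attr in REQUIRED_FIELDS.items()
            if not record.get(field)]

# A record missing its source system fails the "original" check.
record = {"author_id": "dr_smith", "recorded_at": "2016-03-04T10:22:00Z"}
print(alcoa_gaps(record))  # ['original']
```

A real screen would of course also have to address legibility and accuracy, which can’t be checked from metadata alone.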

For a number of reasons, clinical research has been somewhat paper-bound in the past. But that’s changing. In fact, a consortium of leading pharma companies known as TransCelerate Biopharma has been driving an initiative promoting “eSourcing,” the practice of using appropriate electronic sources of data for clinical trials.

eSourcing certainly sounds sensible, as it must speed up what has traditionally been the very long process of biopharma innovation. Also, I have to agree with my source that working with an electronic source beats paper any day (or as she notes, “paper does not have interactive features such as pop-up help.”) More importantly, I doubt pharmas will meet ALCOA objectives any other way.

According to Adler-Palter, thirteen companies have been launched to provide eSource solutions since 2014, including Clinical Research IO (presumably a Bayleaf client). I couldn’t find a neat and tidy list of these companies, as such solutions seem to overlap with other technologies. (But my sense is that this is a growing area for companies like Veeva, which offers cloud-based life science solutions.)

For its part, CRIO, which has signed up 50 research sites in North America to date, offers some of the tools EMR users have come to expect. These include pre-configured templates which let researchers build in rules, alerts and calculations to prevent deviations from the standards they set.

CRIO also offers remote monitoring, allowing the monitor to view a research visit as soon as it’s finished and post virtual “sticky notes” for review by the research coordinator. Of course, remote monitoring is nothing new to readers, but my sense is that pharmas are just getting the hang of it, so this was interesting.

I’m not sure yet what the growth of this technology means for providers. But overall, anything that makes biopharma research more efficient is probably a net gain for patients, no?

Practice Management Market To Hit $17.6B Within Seven Years

Posted on February 1, 2017 | Written By Anne Zieger

A new research report has concluded that the global practice management systems market should hit $17.6 billion by 2024, fueled in part by the growth of value-adds like integration with other healthcare IT solutions.

The report, by London-based Grand View Research, includes a list of what it regards as key players in this industry. These include Henry Schein MicroMD, Allscripts Healthcare Solutions, AdvantEdge Healthcare Solutions, athenahealth, MediTouch, GE Healthcare, Practice Fusion, Greenway Medical, McKesson Corp, Accumedic Computer Systems and NextGen Healthcare.

The report argues that as PM systems are integrated with external systems such as EMRs, CPOE and laboratory information systems, practice management tools will increase in popularity. It says this is happening because the complexity of medical billing and payment has grown over the last several years.

This is particularly the case in North America, where fast economic development, plus the presence of advanced research centers, hospitals, universities and medical device manufacturers keep up the flow of new product development and commercialization, researchers suggest.

In addition, the researchers concluded that while PM software accounted for the larger share of the market a couple of years ago, that’s changing. They predict that the services side of the business should grow substantially as practices demand training, support and system upgrades.

The report also says that cloud-based delivery of PM technology should grow rapidly in coming years. As Grand View reminds us, most PM systems historically have been deployed on-premises, but cloud-based solutions are the future. This trend took off in 2015, researchers said.

This report, while worthwhile, probably doesn’t tell the whole story. Along with growing demand for PM systems, I’d contend that vendor sales strategies are playing a role here. After all, integration of PM systems with EMRs is part of a successful effort by many vendors to capture this parallel market along with their initial sale.

This may or may not be good for providers. I don’t have any information on how the various integrated practice management systems compare, but my sense is that generally, they’re a bit underpowered compared with their standalone competitors.

Grand View doesn’t take a stand on the comparative benefits of these two models, but it does concede that emerging integrated practice management systems linking EMRs, e-prescribing, patient engagement and other software with billing are actually different than standalone systems, which focus solely on scheduling, billing and administration. That does leave room to consider the possibility that the two models aren’t equal.

Meanwhile, one thing the report doesn’t – and probably can’t – address is how these systems will evolve under value-based care in the US. While appointment scheduling and administration will probably be much the same, it’s not clear to me how billing will evolve in such models. But we’ll need to wait and see on that. The question of how PM systems will work under value-based care probably won’t be critically important for a few years yet.

(Side note:  You may want to check out John’s post from a few years ago on practice management systems trends. It seems that the industry goes back and forth as to whether independent PM systems serve groups better than integrated ones.)

Diagnosis And Treatment Of “Epic Finger”

Posted on January 20, 2017 | Written By Anne Zieger

The following is a summary of an “academic” paper written by Andrew P. Ross, M.D., an emergency physician practicing in Savannah, GA. In the paper, Dr. Ross vents about the state of physician EMR issues and repetitive EMR clicking (in quite witty fashion!). Rather than try and elaborate on what he’s said so well, I leave you with his thoughts.

At long last, medical science has identified a subtle but dangerous condition which could harm generations of clinicians. A paper appearing in the Annals of Emergency Medicine this month has described and listed treatment options for “Epic finger,” an occupational injury similar to black lung, phossy jaw and miner’s nystagmus.

Article author Andrew Ross, MD, describes Epic finger, otherwise known as “Ross’s finger” or “the furious finger of clerical rage,” as a progressive repetitive use injury. Symptoms of Epic finger can include chronic-appearing tender and raised deformities, which may be followed by crepitus and locking of the finger. The joint may become enlarged and erythematous, resembling “a boa constrictor after it has eaten a small woodland mammal.”

Patients with Epic finger may experience severe psychiatric and comorbid conditions. Physical complications may include the inability to hail a cab with one’s finger extended, play a musical instrument or hold a pen due to intractable pain.  Meanwhile, job performance may suffer due to the inability to conduct standard tests such as the digital rectal exam and percussion of the abdomen, leading in turn to depression, unhappiness and increased physician burnout.

Dr. Ross notes that plain film imaging may show findings consistent with osteoarthritic changes of the joint space, and that blood work may show a mild leukocytosis and increased nonspecific markers of inflammation. Ultimately, however, this elusive yet disabling condition must be identified by the treating professional.

To treat Epic finger, Dr. Ross recommends anti-inflammatory medication, aluminum finger splinting and massage, as well as “an unwavering faith in the decency of humanity.”  But ultimately, to reverse this condition more is called for, including a sabbatical “in some magnificent locale with terrible wi-fi and a manageable patient load.”

Having identified the syndrome, Dr. Ross calls for recognition of this condition in the ICD-10 manual. Such recognition would help clinicians win acceptance of such a sabbatical by employers and obtain health and disability insurance coverage for treatment, he notes. In his view, the code for Epic finger would fit well in between “sucked into jet engine, subsequent encounter,” “burn due to water skis on fire” and “dependence on enabling machines and devices, not elsewhere classified.”

Meanwhile, hospitals can do their part by training patients to recognize when their healthcare providers are suffering from Epic finger. Patients can “provide appropriate and timely warnings to hospital administrators through critical Press Ganey patient satisfaction scorecards.”

Unfortunately, the prognosis for patients with Epic finger can be poor if it remains untreated. However, if the condition is recognized promptly, treated early, and bundled with time spent in actual patient care, the author believes that this condition can be reversed and perhaps even cured.

To accomplish this result, clinicians need to stand up for themselves, he suggests: “We as a profession need to recognize this condition as an occult manifestation of our own professional malaise,” he writes. “We must heal ourselves to heal others.”

Patients Frustrated By Lack Of Health Data Access

Posted on January 3, 2017 | Written By Anne Zieger

A new survey by Surescripts has concluded that patients are unhappy with their access to their healthcare data, and that they’d like to see the way in which their data is stored and shared change substantially.  Due to Surescripts’ focus on medication information management, many of the questions focus on meds, but the responses clearly reflect broader trends in health data sharing.

According to the 2016 Connected Care and the Patient Experience report, which drew on a survey of 1,000 Americans, most patients believe that their medical information should be stored electronically and shared in one central location. This, of course, flies in the face of current industry interoperability models, which largely focus on uniting countless distributed information sources.

Ninety-eight percent of respondents said that they felt that someone should have complete access to their medical records, though they don’t seem to have specified whom they’d prefer to play this role. They’re so concerned about having a complete medical record that 58% have attempted to compile their own medical history, Surescripts found.

Part of the reason they’re eager to see someone have full access to their health records is that it would make their care more efficient. For example, 93% said they felt doctors would save time if their patients’ medication history was in one location.

They’re also sick of retelling stories that could be found easily in a complete medical record, which is not too surprising given that they spend an average of 8 minutes on paperwork plus 8 minutes verbally sharing their medical history per doctor’s visit. To put this in perspective, 54% said that renewing a driver’s license takes less work, 37% said opening a bank account was easier, and 32% said applying for a marriage license was simpler.

The respondents seemed very aware that improved data access would protect them, as well. Nine out of ten patients felt that their doctor would be less likely to prescribe the wrong medication if they had a more complete set of information. In fact, 90% of respondents said that they felt their lives could be endangered if their doctors don’t have access to their complete medication history.

Meanwhile, patients also seem more willing than ever to share their medical history. Researchers found that 77% will share physical health information, 69% will share insurance information and 51% will share mental health information. I don’t have a comparable set of numbers to back this up, but my guess is that these are much higher levels than we’ve seen in the past.

On a separate note, the study noted that 52% of patients expect doctors to offer remote visits, and 36% believe that most doctor’s appointments will be remote in the next 10 years. Clearly, patients are demanding not just data access, but convenience.

News Flash: Physicians Still Very Dissatisfied With EMRs

Posted on October 18, 2016 | Written By Anne Zieger

Anyone who reads this blog knows that many physicians still aren’t convinced that the big industry-wide EMR rollout was a good idea. But nonetheless, I was still surprised to learn — as you might be as well — that in the aggregate, physicians thoroughly dislike pretty much all of the ambulatory EMRs commonly used in medical practices today.

This conclusion, along with several other interesting factoids, comes from a new report from healthcare research firm peer60. The report is based on a survey from the firm conducted in August of this year, reaching out to 1,053 doctors in various specialties.

Generally speaking, the peer60 study found that the EMR market for acute care facilities is consolidating quickly, and that Epic continues to add market share in the ambulatory EMR market (though it’s possible that partly reflects survey bias). In fact, 50% of respondents reported using an Epic system, followed by 21% Cerner, 9% Allscripts and 4% the military EMR VistA. Not surprisingly, respondents reporting Epic use accounted for 55% of hospitals with 751+ beds, but less predictably, a full 59% of hospitals of up to 300 beds were Epic shops as well. (For an alternate look at acute care EMR market share, check out the stats on systems with the highest number of certified users.)

When it came to which EMR physicians used in their own practices, however, the market looks a lot tighter. While 18% of respondents said they used Epic, 7% reported using Allscripts, 6% eClinicalWorks, 5% Cerner, 4% each for athenahealth, e-MDs and NextGen, 3% each for Greenway and Practice Fusion, and 2% GE Healthcare. Clearly, practices have remained open to a far greater set of choices than hospitals. And that competition is likely to remain robust, as few practices seem to be willing to change to competitor systems — in fact, only 9% said they were interested in switching at present.

To me, where the report got particularly interesting was when peer60 offered data on the “net promoter scores” for some of the top vendors. The net promoter score method it uses is simple: it subtracts the percent of physicians who wouldn’t recommend an EMR from the percent who would recommend it, yielding a number from 100 to -100. And obviously, if lots of physicians reported that they wouldn’t recommend a product, the NPS fell into the negative.
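As a quick illustration of the arithmetic (the numbers below are made up, not peer60’s data), the score can be computed like this:

```python
def net_promoter_score(recommend: int, not_recommend: int, total: int) -> float:
    """Percent who would recommend minus percent who wouldn't, from 100 to -100."""
    return 100.0 * (recommend - not_recommend) / total

# Made-up example: of 200 physicians, 50 would recommend their EMR and 120 would not.
score = net_promoter_score(50, 120, 200)
print(score)  # -35.0
```

Note that physicians who are merely neutral still drag the score toward zero, since they count in the total without boosting either side.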

While the report declines to name which NPS is associated with which vendor, it’s clear that virtually none have anything to write home about here. All but one of the NPS ratings were below zero, and one was rated at a nasty -73. The best NPS among the ambulatory care vendors was a 5, which as I read it suggests that either physicians feel they can tolerate it or simply believe the rest of the crop of competitors are even worse.

Clearly, something is out of order across the entire ambulatory EMR industry if a study like this — which drew on a fairly large number of respondents cutting across most hospital sizes and specialties — suggests that doctors are so unhappy with what they have. According to the report, the biggest physician frustrations are poor EMR usability and a lack of desired functionality, so what are we waiting for? Let’s get this right! The EMR revolution will never bear fruit if so many doctors are so frustrated with the tools they have.

Correlations and Research Results: Do They Match Up? (Part 2 of 2)

Posted on May 27, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article described the benefits of big data analysis, along with some of the formal, inherent risks of using it. We’ll go even more into the problems of real-life use now.

More hidden bias

Jeffrey Skopek pointed out that correlations can perpetuate bias as much as they undermine it. Everything in data analysis is affected by bias, ranging from what we choose to examine and what data we collect to who participates, what tests we run, and how we interpret results.

The potential for seemingly objective data analysis to create (or at least perpetuate) discrimination on the basis of race and other criteria was highlighted recently by a Bloomberg article on Amazon Prime deliveries. Nobody thinks that any Amazon.com manager anywhere said, “Let’s not deliver Amazon Prime packages to black neighborhoods.” But that was the natural outcome of depending on data about purchases, incomes, or whatever other data was crunched by the company to produce decisions about deliveries. (Amazon.com quickly promised to eliminate the disparity.)

At the conference, Sarah Malanga went over the comparable disparities and harms that big data can cause in health care. Think of all the ways modern researchers interact with potential subjects over mobile devices, and how much data is collected from such devices for data analytics. Such data is used to recruit subjects, to design studies, to check compliance with treatment, and for epidemiology and the new Precision Medicine movement.

In all the same ways that the old, the young, the poor, the rural, ethnic minorities, and women can be left out of commerce, they can be left out of health data as well–with even worse impacts on their lives. Malanga reeled out some statistics:

  • 20% of Americans don’t go on the Internet at all.

  • 57% of African-Americans don’t have Internet connections at home.

  • 70% of Americans over 65 don’t have a smart phone.

Those are just examples of ways that collecting data may miss important populations. Often, those populations are sicker than the people we reach with big data, so they need more help while receiving less.

The use of electronic health records, too, is still limited to certain populations in certain regions. Thus, some patients may take a lot of medications but not have “medication histories” available to research. Ameet Sarpatwari said that the exclusion of some populations from research makes post-approval research even more important; there we can find correlations that were missed during trials.

A crucial source of well-balanced health data is the All Payer Claims Databases that 18 states have set up to collect data across the board. But a glitch in employment law, highlighted by Carmel Shachar, exempts self-funded employers from sending their health data to the databases. This will most likely take a fix from Congress. Unless it acts, researchers and public health agencies will lack the comprehensive data they need to improve health outcomes, and the 12 states that have started their own APCD projects may abandon them.

Other rectifications cited by Malanga include an NIH requirement for studies funded by it to include women and minorities–a requirement Malanga would like other funders to adopt–and the FCC’s Lifeline program, which helps more low-income people get phone and Internet connections.

A recent article at the popular TechCrunch technology site suggests that the inscrutability of big data analytics is intrinsic to artificial intelligence. We must understand where computers outstrip our intuitive ability to understand correlations.

Correlations and Research Results: Do They Match Up? (Part 1 of 2)

Posted on May 26, 2016 | Written By Andy Oram

Eight years ago, a widely discussed issue of WIRED Magazine proclaimed cockily that current methods of scientific inquiry, dating back to Galileo, were becoming obsolete in the age of big data. Running controlled experiments on limited samples just has too many limitations and takes too long. Instead, we will take any data we have conveniently at hand–purchasing habits for consumers, cell phone records for everybody, Internet-of-Things data generated in the natural world–and run statistical methods over them to find correlations.

Correlations were spotlighted at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Although the speakers expressed a healthy respect for big data techniques, they pinpointed their limitations and affirmed the need for human intelligence in choosing what to research, as well as how to use the results.

Petrie-Flom annual 2016 conference

A word from our administration

A new White House report also warns that “it is a mistake to assume [that big data techniques] are objective simply because they are data-driven.” The report highlights the risks of inherent discrimination in the use of big data, including:

  • Incomplete and incorrect data (particularly common in credit rating scores)

  • “Unintentional perpetuation and promotion of historical biases”

  • Poorly designed algorithmic matches

  • “Personalization and recommendation services that narrow instead of expand user options”

  • Assuming that correlation means causation

The report recommends “bias mitigation” (page 10) and “algorithmic systems accountability” (page 23) to overcome some of these distortions, and refers to a larger FTC report that lays out the legal terrain.

Like the WIRED articles mentioned earlier, this gives us some background for discussions of big data in health care.

Putting the promise of analytical research under the microscope

Conference speaker Tal Zarsky offered both fulsome praise and specific cautions regarding correlations. As the WIRED Magazine issue suggested, modern big data analysis can find new correlations between genetics, disease, cures, and side effects. The analysis can find them much more cheaply and quickly than randomized clinical trials. This can lead to more cures, and has the other salutary effect of opening a way for small, minimally funded start-up companies to enter health care. Jeffrey Senger even suggested that, if analytics such as those used by IBM Watson are good enough, doing diagnoses without them may constitute malpractice.

W. Nicholson Price, II focused on the danger of the FDA placing too many strict limits on the use of big data for developing drugs and other treatments. Instead of making data analysts back up everything with expensive, time-consuming clinical trials, he suggested that the FDA could set up models for the proper use of analytics and check that tools and practices meet requirements.

One of the exciting impacts of correlations is that they bypass our assumptions and can uncover associations we never would have expected. The poster child for this effect is the notorious beer-and-diapers connection found by one retailer. This story has many nuances that tend to get lost in the retelling, but perhaps the most important point to note is that a retailer can depend on a correlation without having to ascertain the cause. In health, we feel much more comfortable knowing the cause of the correlation. Price called this aspect of big data search “black box” medicine. Saying that something works, without knowing why, raises a whole list of ethical concerns.

A correlation between stomach pain and a disease can’t tell us whether the stomach pain led to the disease, the disease caused the stomach pain, or both are symptoms of a third underlying condition. Causation can make a big difference in health care. It can warn us to avoid a treatment that works only 90% of the time (we’d like to know who the other 10% of patients are before they get a treatment that fails). It can help uncover side effects and other long-term effects–and perhaps valuable off-label uses as well.
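The "third underlying condition" scenario is easy to demonstrate. The following sketch uses invented numbers purely for illustration: a hidden condition raises the probability of both the stomach pain and the disease, so the two are strongly correlated even though neither causes the other.

```python
import random

random.seed(0)

# Hypothetical illustration: an unobserved underlying condition drives
# both stomach pain and the disease, producing a correlation with no
# direct causal link between the two. All probabilities are invented.
n = 10_000
pain, disease = [], []
for _ in range(n):
    underlying = random.random() < 0.2           # hidden condition
    pain.append(underlying and random.random() < 0.8)
    disease.append(underlying and random.random() < 0.7)

def rate(events, given):
    """Fraction of records where `events` holds, among those where
    `given` holds (an estimate of a conditional probability)."""
    hits = [e for e, g in zip(events, given) if g]
    return sum(hits) / len(hits)

# The disease is far more common among patients with stomach pain...
print(rate(disease, pain))                       # high
# ...than among those without it, yet neither causes the other.
print(rate(disease, [not p for p in pain]))      # low
```

A correlation-only analysis would see the first number and might conclude that treating the stomach pain prevents the disease; only knowledge of the hidden condition reveals why that intervention would fail.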

Zarsky laid out several reasons why a correlation might be wrong.

  • It may reflect errors in the collected data. Good statisticians control for error through techniques such as discarding outliers, but if the original data contains enough bad apples, the barrel will go rotten.

  • Even if the correlation is accurate for the collected data, it may not be accurate in the larger population. The correlation could be a fluke, or the statistical sample could be unrepresentative of the larger world.
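The first of Zarsky's points, errors in the collected data, is easy to see in miniature. This sketch (with made-up numbers) measures the Pearson correlation of two unrelated variables, then shows how just two erroneous records, such as data-entry mistakes, can manufacture a strong correlation out of nothing.

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)

# Fifty records of two independent variables: no real relationship.
xs = [random.random() for _ in range(50)]
ys = [random.random() for _ in range(50)]

# The same records plus two erroneous outliers (e.g. a decimal point
# slipped two places during data entry).
bad_x, bad_y = xs + [50.0, 60.0], ys + [50.0, 60.0]

print(pearson(xs, ys))        # near zero
print(pearson(bad_x, bad_y))  # strongly positive, driven by two bad rows
```

Discarding the outliers restores the true picture, which is why Zarsky's second caution matters too: even a clean correlation in the sample may not hold in the larger population.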

Zarsky suggests using correlations as a starting point for research, but backing them up by further randomized trials or by mathematical proofs that the correlation is correct.

Isaac Kohane described, from the clinical side, some of the pros and cons of using big data. For instance, data collection helps us see that choosing a gender for intersex patients right after birth produces a huge amount of misery, because the doctor guesses wrong half the time. However, he also cited times when data collection can be confusing for the reasons listed by Zarsky and others.

Senger pointed out that after drugs and medical devices are released into the field, data collected on patients can teach developers more about risks and benefits. But this also runs into the classic risks of big data. For instance, if a patient dies, did the drug or device contribute to death? Or did he just succumb to other causes?

We already have enough to make us puzzle over whether we can use big data at all–but there’s still more, as the next part of this article will describe.

Healthcare Consent and its Discontents (Part 3 of 3)

Posted on May 18, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article rated the pros and cons of new approaches to patient consent and control over data. Here we’ll look at emerging risks.

Privacy solidarity

Genetics present new ethical challenges–not just in the opportunity to change genes, but even just when sequencing them. These risks affect not only the individual: other members of her family and ethnic group can face discrimination thanks to genetic weaknesses revealed. Isaac Kohane said that the average person has 40 genetic markers indicating susceptibility to some disease or other. Furthermore, we sometimes disagree on what we consider a diseased condition.

Big data, particularly with genomic input, can lead to group harms, so Brent Mittelstadt called for moving beyond an individual view of privacy. Groups also have privacy needs (a topic I explored back in 1998). An individual must consider the effect of releasing data not only on his own future, but on the future of family members, members of his racial group, and so on. Similarly, Barbara Evans said we have to move from self-consciousness to social consciousness. But US and European laws consider privacy and data protection only on the basis of the individual.

The re-identification bogey man

A good many references were made at the conference to the increased risk of re-identifying patients from supposedly de-identified data. Headlines are made when some researcher manages to uncover a person who thought himself anonymous (and who database curators thought was anonymous when they released their data sets). In a study conducted by a team that included speaker Catherine M. Hammack, experts admitted that there is eventually a near 100% probability of re-identifying each person’s health data. The culprit in all this is the burgeoning set of data collected from people as they purchase items and services, post seemingly benign news about themselves on social media, and otherwise participate in modern life.

I think the casual predictions of the end of anonymity we hear so often are unnecessarily alarmist. The field of anonymity has progressed a great deal since Latanya Sweeney famously re-identified a patient record for Governor William Weld of Massachusetts. Re-identifications carried out since then, by Sweeney and others, have taken advantage of data that was not anonymized (people just released it with an intuitive assumption that they could not be re-identified) or that was improperly anonymized, not using recommended methods.

Unfortunately, the “safe harbor” in HIPAA (designed precisely for medical sites lacking the skills to de-identify data properly) enshrines bad practices. Still, in a HIPAA challenge cited by Ameet Sarpatwari, only two of 15,000 individuals were re-identified. The mosaic effect is still more of a theoretical weakness, not an immediate threat.
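To make the "recommended methods" of de-identification concrete, here is a toy sketch (records and thresholds invented for illustration) of one standard idea, k-anonymity: generalize quasi-identifiers such as ZIP code and age, in the spirit of safe harbor's truncated ZIP codes, until every combination is shared by at least k records, so no row points at a single person.

```python
from collections import Counter

# Hypothetical toy records: (zip_code, age, diagnosis). The diagnosis
# is the sensitive value; ZIP and age are quasi-identifiers that,
# combined with outside data, could re-identify someone.
records = [
    ("02139", 34, "flu"),
    ("02139", 37, "asthma"),
    ("02141", 35, "flu"),
    ("02142", 62, "diabetes"),
    ("02144", 66, "flu"),
    ("02146", 61, "asthma"),
]

def generalize(zip_code, age):
    """Safe-harbor-style generalization: keep only the first three
    digits of the ZIP code and bucket age into decades."""
    return (zip_code[:3] + "**", f"{(age // 10) * 10}s")

def k_anonymity(rows):
    """Smallest number of records sharing any one generalized
    quasi-identifier combination; higher k means re-identification
    is harder."""
    groups = Counter(generalize(z, a) for z, a, _ in rows)
    return min(groups.values())

print(k_anonymity(records))  # 3: each generalized group holds 3 records
```

Real anonymization goes well beyond this sketch (k-anonymity alone does not protect the sensitive column itself), but the example shows why careless releases, which skip even this generalization step, are the ones that make re-identification headlines.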

I may be biased, because I edited a book on anonymization, but I would offer two challenges to people who cavalierly dismiss anonymization as a useful protection. First, if we threw up our hands and gave up on anonymization, we couldn’t even carry out a census, which is mandated in the U.S. Constitution.

Second, anonymization is comparable to encryption. Computer speeds keep increasing, as does the sophistication of re-identification attacks. The first provides a near-guarantee that, eventually, our current encrypted conversations will be decrypted. The second, similarly, guarantees that anonymized data will eventually be re-identified. But we all still visit encrypted web sites and use encryption for communications. Why can’t we similarly use the best in anonymization?

A new article in the Journal of the American Medical Association exposes a gap between what doctors consider adequate consent and what’s meaningful for patients, blaming “professional indifference” and “organizational inertia” for the problem. In research, the “reasonable-patient standard” is even harder to define and achieve.

Patient consent doesn’t have to go away. But it’s getting harder and harder for patients to anticipate the uses of their data, or even to understand what data is being used to match and measure them. However, precisely because we don’t know how data will be used or how patients can tolerate it, I believe that incremental steps would be most useful in teasing out what will work for future research projects.

Healthcare Consent and its Discontents (Part 2 of 3)

Posted on May 17, 2016 | Written By


The previous section of this article laid out what is wrong with informed consent today. We’ll continue now to look at possible remedies.

Could we benefit from more opportunities for consent?

Donna Gitter said that the Common Rule governing research might be updated to cover de-identified data as well as personally identifiable information. The impact of this on research, of course, would be incalculable. But it might lead to more participation in research, because 72% of patients say they would like to be asked for permission before their data is shared even in de-identified form. Many researchers, such as conference speaker Liza Dawson, would rather give researchers the right to share de-identified data without consent, but put protections in place.

To link multiple data sets, according to speaker Barbara Evans, we need an iron-clad method of ensuring that the data for a single individual is accurately linked. This requirement butts up against the American reluctance to assign a single ID to a patient. The reluctance is well-founded, because tracking individuals throughout their lives can lead to all kinds of seamy abuses.

One solution would be to give each individual control over a repository where all of her data would go. That solution implies that the individual would also control each release of the data. A lot of data sets could easily vanish from the world of research, as individuals die and successors lose interest in their data. We must also remember that public health requires the collection of certain types of data even if consent is not given.

Another popular reform envisioned by health care technologists, mentioned by Evans, is a market for health information. This scenario is part of a larger movement known as Vendor Relationship Management, which I covered several years ago. There is no doubt that individuals generate thousands of dollars’ worth of information, in health care records and elsewhere. Speaker Margaret Foster Riley claimed that the data collected from your loyalty card by the grocer is worth more than the money you spend there.

So researchers could offer incentives to share information instead of informed consent. Individuals would probably hire brokers to check that the requested uses conform to the individuals’ ethics, and that the price offered is fair.

Giving individuals control and haggling over data makes it harder, unfortunately, for researchers to assemble useful databases. First of all, modern statistical techniques (which fish for correlations) need huge data sets. Even more troubling is that partial data sets are likely to be skewed demographically. Perhaps only people who need some extra cash will contribute their data. Or perhaps only highly-educated people. Someone can get left out.

These problems exist even today, because our clinical trials and insurance records are skewed by income, race, age, and gender. Theoretically, it could get even worse if we eliminate the waiver that lets researchers release de-identified data without patient consent. Disparities in data sets and research were heavily covered at the Petrie-Flom conference, as I discuss in a companion article.

Privacy, discrimination, and other legal regimes

Several speakers pointed out that informed consent loses much of its significance when multiple data sets can be combined. The mosaic effect adds another layer of uncertainty about what will happen to data and what people are consenting to when they release it.

Nicolas Terry pointed out that American law tends to address privacy on a sector-by-sector basis, having one law for health records, another for student records, and so forth. He seemed to indicate that the European data protection regime, which is comprehensive, would be more appropriate nowadays where the boundary between health data and other forms of data is getting blurred. Sharona Hoffman said that employers and insurers can judge applicants’ health on the basis of such unexpected data sources as purchases at bicycle stores, voting records (healthy people have more energy to get involved in politics), and credit scores.

Mobile apps notoriously open new leaks of personal data. Mobile operating systems fastidiously divide up access rights and require apps to request these rights during installation, but most of us just click Accept for everything, including data the apps have no plausible need for, such as our contacts and calendars. After all, there’s no way to deny an app one specific access right while still installing it.

And lots of these apps abuse their access to data. So we remain in a contradictory situation where certain types of data (such as data entered by doctors into records) are strongly protected, while other types that are at least as sensitive lack minimal protections. Although the app developers are free to collect and sell our information, they often promise to aggregate and de-identify it, putting them at the same level as traditional researchers. But no one holds the app developers to those promises.

To make employers and insurers pause before seeking out personal information, Hoffman suggested requiring data brokers, and those who purchase their data, to publish the rules and techniques they employ to make use of the data. She pointed to the precedent of medical tests for employment and insurance coverage, where such disclosure is necessary. But I’m sure this proposal would be fought so heavily, by those who currently carry out their data spelunking under cover of darkness, that we’d never get it into law unless some overwhelming scandal prompted extreme action. Adrian Gropper called for regulations requiring transparency in every use of health data, and for the use of open source algorithms.

Several speakers pointed out that privacy laws, which tend to cover the distribution of data, can be supplemented by laws regarding the use of data, such as anti-discrimination and consumer protection laws. For instance, Hoffman suggested extending the Americans with Disabilities Act to cover people with heightened risk of suffering from a disability in the future. The Genetic Information Nondiscrimination Act (GINA) of 2008 offers a precedent. Universal health insurance coverage won’t solve the problem, Hoffman said, because businesses may still fear the lost work time and need for workplace accommodations that spring from health problems.

Many researchers are not sure whether their use of big data–such as “data exhaust” generated by people in everyday activities–would be permitted under the Common Rule. In a particularly wonky presentation (even for this conference) Laura Odwazny suggested that the Common Rule could permit the use of data exhaust because the risks it presents are no greater than “daily life risks,” which are the keystone for applying the Common Rule.

The final section of this article will look toward emerging risks that we are just beginning to understand.