
The Healthcare AI Future, From Google’s DeepMind

Posted on February 22, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

While much of its promise is still emerging, it’s hard to deny that AI has arrived in the health IT world. As I’ve written in a previous article, AI can already be used to mine EMR data in a sophisticated way, at least if you understand its limitations. It also seems poised to help providers predict the incidence and progress of diseases like congestive heart failure. And of course, there are scores of companies working on other AI-based healthcare projects. It’s all heady stuff.

Given AI’s potential, I was excited – though not surprised – to see that world-spanning Google has a dog in this fight. Google, which acquired British AI firm DeepMind Technologies a few years ago, is working on its own AI-based healthcare solutions. And while there’s no assurance that DeepMind knows things that its competitors don’t, its status as part of the world’s biggest data collector certainly comes with some advantages.

According to the New Scientist, DeepMind has begun working with the Royal Free London NHS Foundation Trust, which oversees three hospitals. DeepMind has announced a five-year agreement with the trust, under which the trust will give DeepMind access to patient data. The Google-owned tech firm is using that data to develop and roll out its healthcare app, which is called Streams.

Streams is designed to push alerts about a patient’s condition to the cellphone of the doctor or nurse caring for them, in the form of a news-style notification. At the outset, Streams will be used to find patients at risk of kidney problems, but over the term of the five-year agreement, the developers are likely to add other functions to the app, such as patient care coordination and detection of blood poisoning.

Streams will deliver its news to iPhones via push notifications, reminders or alerts. At present, given its initial focus on acute kidney injury, it will process information from key sources like blood tests, patient observations and histories, then shoot a notice about any anomalies it finds to a clinician.
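For the technically curious, here is a minimal sketch (in Python) of the kind of rule-based check an app like this might run against incoming blood-test results. To be clear, this is not DeepMind’s published algorithm: the thresholds below are simply the widely used KDIGO criteria for flagging acute kidney injury, and every name in the code is invented for illustration.

```python
# A simplified, hypothetical AKI check -- not DeepMind's actual logic.
# Thresholds follow the standard KDIGO criteria: a creatinine rise of
# >= 0.3 mg/dL within 48 hours, or >= 1.5x the patient's baseline.
from datetime import datetime, timedelta

def aki_alert(readings, baseline):
    """readings: list of (timestamp, creatinine in mg/dL), oldest first."""
    if not readings:
        return None
    latest_time, latest_value = readings[-1]
    # Rule 1: absolute rise versus any reading in the prior 48 hours.
    window = [v for t, v in readings if latest_time - t <= timedelta(hours=48)]
    if window and latest_value - min(window) >= 0.3:
        return "AKI warning: creatinine rose >= 0.3 mg/dL within 48 hours"
    # Rule 2: relative rise versus the patient's own baseline.
    if latest_value >= 1.5 * baseline:
        return "AKI warning: creatinine at >= 1.5x baseline"
    return None  # nothing anomalous; no notification is pushed

readings = [
    (datetime(2017, 2, 20, 8, 0), 1.0),
    (datetime(2017, 2, 21, 8, 0), 1.4),  # +0.4 mg/dL in 24 hours
]
print(aki_alert(readings, baseline=0.9))
```

In a production system a check like this would sit on a live feed of lab results, with the alert string becoming the push notification a clinician sees.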

This is all part of an ongoing success story for DeepMind, which made quite a splash in 2016. For example, last year its AlphaGo program beat the world champion at Go, a 2,500-year-old strategy game invented in China and still played today. DeepMind also achieved what it terms “the world’s most life-like speech synthesis” by generating raw audio waveforms. And those are just a couple of examples of its prowess.

Oh, and did I mention – in an achievement that puts it in the “super-smart kid you love to hate” category – that DeepMind has seen three papers appear in the prestigious journal Nature in less than two years? It’s nothing you wouldn’t expect from the brilliant minds at Google, which can afford the world’s biggest talents. But it’s still a bit intimidating.

In any event, if you haven’t heard of the company yet (and I admit I hadn’t) I’m confident you will soon. While the DeepMind team isn’t the only group of geniuses working on AI in healthcare, it can’t help but benefit immensely from being part of Google, which has not only unimaginable data sources but world-beating computing power at hand. If it can be done, they’re going to do it.

Switching Out EMRs For Broad-Based HIT Platforms

Posted on February 8, 2017 | Written By

Anne Zieger

I’ve always enjoyed reading HISTalk, and today was no exception. This time, I came across a piece by a vendor-affiliated physician arguing that it’s time for providers to shift from isolated EMRs to broader, componentized health IT platforms. The piece, by Excelicare chief medical officer Toby Samo, MD, clearly serves his employer’s interests, but I still found the points he made to be worth discussing.

In his column, he notes that broad technical platforms, like those managed by Uber and Airbnb, have played a unique role in the industries they serve. And he contends that healthcare players would benefit from this approach. He envisions a kind of exchange allowing the use of multiple components by varied healthcare organizations, which could bring new relationships and possibilities.

“A platform is not just a technology,” he writes, “but also ‘a new business model that uses technology to connect people, organizations and resources in an interactive ecosystem.’”

He offers a long list of characteristics such a platform might have, including that it:

* Relies on apps and modules which can be reused to support varied projects and workflows
* Allows users to access workflows on smartphones and tablets as well as traditional PCs
* Presents the results of big data analytics processes in an accessible manner
* Includes an engine which allows clients to change workflows easily
* Lets users with proper security authorization change templates and workflows on the fly
* Helps users identify, prioritize and address tasks
* Offers access to high-end clinical decision support tools, including artificial intelligence
* Provides a clean, easy-to-use interface validated by user experience experts
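As a purely hypothetical illustration of the first item on that list, reusable modules that different organizations compose into their own workflows, here is a toy sketch in Python. Nothing here reflects Excelicare’s actual design; every name is invented.

```python
# Toy component platform: modules register once, and each client
# organization assembles them into its own workflow.
class Platform:
    def __init__(self):
        self.modules = {}

    def register(self, name, func):
        self.modules[name] = func

    def run_workflow(self, steps, record):
        # Each step transforms the patient record and hands it on.
        for step in steps:
            record = self.modules[step](record)
        return record

platform = Platform()
platform.register("normalize_units",
                  lambda r: {**r, "weight_kg": r["weight_lb"] * 0.4536})
platform.register("flag_overdue_labs",
                  lambda r: {**r, "labs_overdue": r["days_since_labs"] > 180})

# Two organizations could run different step lists over the same modules.
print(platform.run_workflow(["normalize_units", "flag_overdue_labs"],
                            {"weight_lb": 180, "days_since_labs": 200}))
```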

Now, the idea of shared, component-friendly platforms is not new. One example comes from the Healthcare Services Platform Consortium, which as of last August was working on a services-oriented architecture platform intended to support a marketplace for interoperable healthcare applications. The HSPC offering will allow multiple providers to deliver different parts of a solution set rather than each having to develop their own complete solution. This is just one of what seem like scores of similar initiatives.

Excelicare, for its part, offers a cloud-based platform housing a clinical data repository. The company says its platform lets providers construct a patient-specific longitudinal health record on the fly by mining existing EHRs, claims repositories and other data sources. This certainly seems like an interesting idea.

In all candor, my instinct is that these platforms need to be created by a neutral third party – such as travel information network SABRE – rather than connecting providers via a proprietary platform created by companies like Excelicare. Admittedly, I don’t have a deep understanding of how Excelicare’s technology works, or how open its platform is, but I doubt it would be financially viable if it didn’t attempt to lock providers into its proprietary technology.

On the other hand, with no one interoperability approach having gained an unbeatable lead, one never knows what’s possible. Kudos to Samo and his colleagues for making an effort to advance the conversation around data sharing and collaboration.

A Look At The Role Of EMRs In Personalized Medicine

Posted on January 19, 2017 | Written By

Anne Zieger

NPR recently published an interesting piece on how some researchers are developing ways to leverage day-to-day medical information as a means of personalizing medical care. This is obviously an important approach – whether or not you take the full-on big data approach that drug researchers do – and I found the case studies it cited to be quite interesting.

In one instance cited by the article, researchers at Kaiser Permanente have begun building a dashboard, organized by condition type, which both pulls together past data and provides real-life context.

“Patients are always saying, don’t just give me the averages, tell me what happened to others who look like me and made the same treatment decisions I did,” said Dr. Tracy Lieu, head of Kaiser’s research division, who spoke to NPR. “And tell me not only did they live or die, but tell me what their quality of life was about.”

Dr. Lieu and her fellow researchers can search a database on a term like “pancreatic cancer” and pull up data not only from an individual patient, but also broad information on other patients who were diagnosed with the condition. According to NPR, the search function also lets them sort data by cancer type, stage, patient age and treatment options, which helps researchers like Lieu spot trends and compare outcomes.
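To make the idea concrete, here is a hypothetical sketch in Python (using pandas) of the kind of “patients like me” query the article describes. The table, column names and numbers are all invented; this is not Kaiser’s actual schema.

```python
# Invented cohort table: each row is a de-identified past patient.
import pandas as pd

patients = pd.DataFrame({
    "diagnosis":   ["pancreatic cancer"] * 4 + ["lung cancer"],
    "stage":       ["II", "II", "III", "II", "I"],
    "age":         [61, 64, 58, 66, 70],
    "treatment":   ["surgery+chemo", "chemo", "chemo", "surgery+chemo", "surgery"],
    "survived_5y": [True, False, False, True, True],
    "qol_score":   [7, 4, 3, 8, 6],   # patient-reported quality of life, 0-10
})

# "Others who look like me": same diagnosis and stage, similar age.
like_me = patients[(patients.diagnosis == "pancreatic cancer")
                   & (patients.stage == "II")
                   & (patients.age.between(55, 70))]

# Compare outcomes, including quality of life, by treatment decision.
print(like_me.groupby("treatment")[["survived_5y", "qol_score"]].mean())
```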

Kaiser has also supplemented the traditional clinical data with the results of a nine-question survey, which patients routinely fill out, looking at their perception of their health and emotional status. As the article notes, the ideal situation would be if patients were comfortable filling out longer surveys on a routine basis, but the information Kaiser already collects offers at least some context on how patients reacted to specific treatments, which might help future patients know what to expect from their care.

Another approach cited in the article has been implemented by Geisinger Health System, which is adding genetic data to EMRs. Geisinger has already compiled 50,000 genetic scans, and has set a current goal of 125,000 scans.

According to Dr. David Ledbetter, Geisinger’s chief scientific officer, the project has implications for current patients. “Even though this is primarily a research project, we’re identifying genomic variants that are actually important to people’s health and healthcare today,” he told the broadcaster.

Geisinger is using a form of genetic testing known as exome sequencing, which currently costs a few thousand dollars per patient. But prices for such tests are falling so quickly that they could hit the $300 level this year, which would make it more likely that patients would be willing to pay for their own tests to research their genetic proclivities, which in turn would help enrich databases like Geisinger’s.

“We think as the cost comes down it will be possible to sequence all of the genes of individual patients, store that information in the electronic medical record, and it will guide and individualize and optimize patient care,” Ledbetter told NPR.

As the story points out, we might be getting ahead of ourselves if we all got analyses of our genetic information, as doctors don’t know how to interpret many of the results. But it’s good to see institutions like these getting prepared, and making use of what information they do have in the meantime.

E-Patient Update: The Smart Medication Management Portal

Posted on December 16, 2016 | Written By

Anne Zieger

As I work to stay on top of my mix of chronic conditions, one thing that stands out to me is that providers expect me to do most of my own medication tracking and management. What I mean by this is that their relationship to my med regimen is fairly static, with important pieces of the puzzle scattered across multiple providers. Ultimately, there’s little coordination between prescribers unless I make it happen.

I’ve actually had to warn doctors about interactions between my medications, even when those interactions are fairly well-known and just a Google search away. And in other cases, specialists have asked only about medications relevant to their treatment plan and gotten impatient when I tried to provide the entire list of prescriptions.

Sure, my primary care provider has collected the complete list of my meds, and even gets updates when I’ve been prescribed a new drug elsewhere. But given the complexity of my medical needs, I would prefer to talk with her about how all of the various medications are working for me and why I need them, something that rarely if ever fits into our short meeting time.

Regardless of who’s responsible, this is a huge problem. Patients like me are being sent home with some general drug information and a pat on the back, and if we experience side effects or are taking meds incorrectly we may not even know it.

So at this point you’re thinking, “Okay, genius, what would YOU do differently?” And that’s a fair question. So here’s what I’d like to see happen when doctors prescribe medications.

First, let’s skip over the issue of what it might take to integrate medication records across all providers’ HIT systems. Instead, let’s create a portal — aggregating all the medication records for all the pharmacies in a given ZIP code — and allow anyone with a valid provider number and password to log in and review it. The same site could run basic analytics examining interactions between drugs from all providers. (By the way, I’m familiar with Surescripts, which is addressing some of these gaps, but I’m envisioning a non-proprietary shared resource.)

Rather than serving strictly as a database, the site would include a rules engine which runs predictive analyses on what a patient’s next steps should be, given their entire regimen, then generates recommendations specific to that patient. If any of these were particularly important, the recommendations could be pushed to the provider (or, if administrative, to staff members) by email or text.

These recommendations could range from reminding the patient to refill a critical drug to warning the clinician when an outside prescription interacts with the existing regimen. Smart analytics tools might even be able to predict whether a patient is doing well or poorly by what drugs have been added to their regimen, given the drug family and dosage.
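Here is a rough sketch, in Python, of what the simplest version of such a rules engine might look like. The interaction table, drug names and thresholds are invented for illustration; a real system would draw on a curated interaction database and far richer logic.

```python
# Hypothetical medication rules engine over pharmacy fills aggregated
# from every prescriber -- invented data, not a real interaction list.
from datetime import date, timedelta

INTERACTIONS = {frozenset(["warfarin", "ibuprofen"]): "bleeding risk"}
CRITICAL_DRUGS = {"warfarin"}

def recommendations(regimen, today):
    """regimen: list of dicts with keys drug, last_fill (date), days_supply."""
    recs = []
    drugs = {r["drug"] for r in regimen}
    # Rule 1: flag known interactions across ALL prescribers' drugs.
    for pair, risk in INTERACTIONS.items():
        if pair <= drugs:
            recs.append(("clinician",
                         "Interaction (%s): %s" % (risk, " + ".join(sorted(pair)))))
    # Rule 2: remind the patient to refill critical drugs before they run out.
    for r in regimen:
        runs_out = r["last_fill"] + timedelta(days=r["days_supply"])
        if r["drug"] in CRITICAL_DRUGS and runs_out - today <= timedelta(days=5):
            recs.append(("patient",
                         "Refill %s soon (runs out %s)" % (r["drug"], runs_out)))
    return recs

regimen = [
    {"drug": "warfarin",  "last_fill": date(2016, 12, 1),  "days_supply": 30},
    {"drug": "ibuprofen", "last_fill": date(2016, 12, 10), "days_supply": 14},
]
for audience, message in recommendations(regimen, today=date(2016, 12, 28)):
    print(audience, "->", message)
```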

Of course, these functions should ultimately be integrated into the physicians’ EMRs, but at first, hospitals and clinics could start by creating an interface to the portal and linking it to their EMR. Eventually, if this approach worked, one would hope that EMR vendors would start to integrate such capabilities into their platform.

Now I imagine there could be holes in these ideas and I realize how challenging it is to get disparate health systems and providers to work together. But what I do know is that patients like myself get far too little guidance on how to manage meds effectively, when to complain about problems and how to best advocate for ourselves when doctors whip out the prescription pad. And while I don’t think my overworked PCP can solve the problem on her own, I believe it may be possible to improve med management outcomes using smart automation.

Bottom line, I doubt anything will change here unless we create an HIT solution to the problem. After all, given how little time they have already, I don’t see clinicians spending a lot more time on meds. Until then, I’m stuck relying on obsessive research via Dr. Google, brief chats with my frantic retail pharmacist and instincts honed over time. So wish me luck!

Healthcare Needs Clinician Data Experts

Posted on November 2, 2016 | Written By

Anne Zieger

This week I read an interesting article by a physician about the huge challenges clinicians face coping with unthinkably large clinical data sets — and what we should do about it. The doctor who wrote the article argues for the creation of a next-gen clinician/health IT hybrid expert that will bridge the gaps between technology and medicine.

In the article, the doctor noted that while he could conceivably answer any question he had about his patients using big data, he would have to tame literally billions of data rows to do so.

Right now, logs of all EHR activity are dumped into large databases every day, notes Alvin Rajkomar, MD. In theory, clinicians can access the data, but in reality most of the analysis and taming of data is done by report writers. The problem is, the HIT staff compiling reports don’t have the clinical context they need to sort such data adequately, he says:

“Clinical data is complex and contextual,” he writes. “[For example,] a heart rate may be listed under the formal vital sign table or under nursing documentation, where it is listed as a pulse. A report writer without clinical background may not appreciate that a request for heart rate should actually include data from both tables.”
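A tiny, contrived example makes the pitfall concrete. In the sketch below (Python with an in-memory SQLite database; all table and column names are invented), a report writer who only knows about the vital-signs table silently drops the pulses that nurses charted elsewhere.

```python
# Demonstrates the two-table "heart rate" trap Rajkomar describes.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE vital_signs (patient_id INT, heart_rate INT);
    CREATE TABLE nursing_doc (patient_id INT, pulse INT);
    INSERT INTO vital_signs VALUES (1, 88);
    INSERT INTO nursing_doc VALUES (1, 132);  -- charted as "pulse"
""")

# Naive report: only the formal vital-signs table.
naive = db.execute(
    "SELECT heart_rate FROM vital_signs WHERE patient_id = 1").fetchall()

# Clinically informed report: "heart rate" means the union of both sources.
informed = db.execute("""
    SELECT heart_rate FROM vital_signs WHERE patient_id = 1
    UNION ALL
    SELECT pulse FROM nursing_doc WHERE patient_id = 1
""").fetchall()

print("naive:", naive)        # misses the tachycardic reading of 132
print("informed:", informed)
```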

Frustrated with the limitations of this process, Rajkomar decided to take the EHR database problem on. He went through an intense training process, including 24 hours of in-person classes, a four-hour project and four hours of supervised training, to obtain the skills needed to work with large clinical databases. In other words, he jumped right into the middle of the game.

Even having a trained physician in the mix isn’t enough, he argues. Ultimately, understanding such data calls for developing a multidisciplinary team. Clinicians need each other’s perspectives on the masses of data coming in, which include not only EHR data but also sensor data, app data and patient-reported outcomes. Moreover, a clinician data analyst is likely to be more comfortable than traditional IT staffers when working with nurses, pharmacists or laboratory technicians, he suggests.

Still, having even a single clinician in the mix can have a major impact, Rajkomar argues. He contends that the healthcare industry needs to create more people like him, a role he calls “clinician-data translator.” The skills needed by this translator would include expertise in clinical systems, the ability to extract data from large warehouses and deep understanding of how to rigorously analyze large data sets.

Not only would such a specialist help with data analysis, and help to determine where to apply novel algorithms, they could also help other clinicians decide which questions are worth investigating in the first place. What’s more, clinician data scientists would be well-equipped to integrate data-gathering activities into workflows, he points out.

The thing is, there aren’t any well-marked pathways to becoming a clinician data scientist, with most data science degrees offering training that doesn’t focus on a particular domain. But if you believe Rajkomar – and I do – finding clinicians who want to become data scientists makes a lot of sense for health systems and clinics. While there will always be a role for health IT experts with purely technical training, we need clinicians who will work alongside them and guide their decisions.

Artificial Intelligence Can Improve Healthcare

Posted on July 20, 2016 | Written By

Anne Zieger

In recent times, there has been a lot of discussion of artificial intelligence in public forums, some generated by thought leaders like Bill Gates and Stephen Hawking. Late last year Hawking actually argued that artificial intelligence “could spell the end of the human race.”

But most scientists and researchers don’t seem to be as worried as Gates and Hawking. They contend that while machines and software may do an increasingly better job of imitating human intelligence, there’s no foreseeable way in which they could become a self-conscious threat to humanity.

In fact, it seems far more likely that AI will work to serve human needs, including healthcare improvement. Here are five examples of how AI could help bring us smarter medicine (courtesy of Fast Company):

  1. Diagnosing disease:

Want to improve diagnostic accuracy? Companies like Enlitic may help. Enlitic is studying massive numbers of medical images to help radiologists pick up small details like tiny fractures and tumors.

  2. Medication management:

Here’s a twist on traditional med management strategies. The AiCure app is leveraging a smartphone webcam, in tandem with AI technology, to learn whether patients are adhering to their prescription regimen.

  3. Virtual clinicians:

Though it may sound daring, a few healthcare leaders are considering giving no-humans-involved health advice a try. Some are turning to startup Sense.ly, which offers a virtual nurse, Molly. The Sense.ly interface uses machine learning to help care for chronically-ill patients between doctor’s visits.

  4. Drug creation:

AI may soon speed up the development of pharmaceutical drugs. Vendors in this field include Atomwise, whose technology leverages supercomputers to dig up therapies from a database of molecular structures, and Berg Health, which studies data on why some people survive diseases.

  5. Precision medicine:

Working as part of a broader effort seeking targeted diagnoses and treatments for individuals, startup Deep Genomics is wrangling huge data sets of genetic information in an effort to find mutations and linkages to disease.

In addition to all of these clinically-oriented efforts, which seem quite promising in and of themselves, it seems clear that there are endless ways in which computing firepower, big data and AI could come together to help healthcare business operations.

Just to name the first applications that popped into my head, consider the impact AI could have on patient scheduling, particularly in high-volume hospital environments. What about using such technology to do a better job of predicting what approaches work best for collecting patient balances, and even to execute those efforts in a sophisticated way?

And of course, there are countless other ways in which AI could help providers leverage clinical data in real time. Sure, EMR vendors are already rolling out technology attempting to help hospitals target emergent conditions (such as sepsis), but what if AI logic could go beyond condition-specific modules to proactively predicting a much broader range of problems?

The truth is, I don’t claim to have any specific expertise in AI, so my guesses on which applications make sense are no better than any other observer’s. On the other hand, though, if anyone reading this has cool stories to tell about what they’re doing with AI technology I’d love to hear them.

When Did A Doctor Last Worry About Social Determinants of Health (SDOH)?

Posted on June 16, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I’ve heard over and over the importance of social determinants of health (SDOH) and their impact on healthcare costs. The concept is fascinating and challenging. There are thousands of examples. A simple one to illustrate the challenge is the patient who arrives at the emergency room with a fever. The doctor treats the fever and then sends them back to their home where they have no heat and are likely to get sick again.

I ask all the doctors who read this blog: when was the last time you worried about these various social determinants of health (SDOH) in the care you provided a patient?

I’ll be interested to hear people’s responses to this question. I’m sure it would create some incredible stories from doctors who really care about their patients and go above and beyond their job duties. In fact, it would be amazing to hear and share some of these stories. We could learn a lot from them. However, I’m also quite sure that almost all of those stories would end with the doctor saying “I wasn’t paid to help the patient this way but it was the right thing to do.”

Let me be clear. I’m not blaming doctors for not doing more for their patients. If I were a doctor, I’m sure I’d have made similar decisions to most of the doctors out there. They do what they’re paid to do.

As I’ve been sitting through the AHIP Institute conference, I’m pondering whether this will change. Will value-based reimbursement force doctors to understand SDOH, or will they just leave that to their health system or their various software systems to figure out for them?

I’m torn on the answer to that question. Part of me thinks that most doctors won’t want to dive into that area of health. Their training wasn’t designed for that type of thinking, and it would be a tough transition of mindset for many. On the other hand, I think there’s a really important human component that’s going to be required in SDOH. Doctors enjoy an inherent level of trust with patients that is extremely valuable.

What do you think of SDOH? Will doctors need to learn about it? Will the systems just take care of it for them?

Correlations and Research Results: Do They Match Up? (Part 2 of 2)

Posted on May 27, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article described the benefits of big data analysis, along with some of the formal, inherent risks of using it. We’ll go even more into the problems of real-life use now.

More hidden bias

Jeffrey Skopek pointed out that correlations can perpetuate bias as much as they undermine it. Everything in data analysis is affected by bias, ranging from what we choose to examine and what data we collect to who participates, what tests we run, and how we interpret results.

The potential for seemingly objective data analysis to create (or at least perpetuate) discrimination on the basis of race and other criteria was highlighted recently by a Bloomberg article on Amazon Prime deliveries. Nobody thinks that any Amazon.com manager anywhere said, “Let’s not deliver Amazon Prime packages to black neighborhoods.” But that was the natural outcome of depending on data about purchases, incomes, or whatever other data was crunched by the company to produce decisions about deliveries. (Amazon.com quickly promised to eliminate the disparity.)

At the conference, Sarah Malanga went over the comparable disparities and harms that big data can cause in health care. Think of all the ways modern researchers interact with potential subjects over mobile devices, and how much data is collected from such devices for data analytics. Such data is used to recruit subjects, to design studies, to check compliance with treatment, and for epidemiology and the new Precision Medicine movement.

In all the same ways that the old, the young, the poor, the rural, ethnic minorities, and women can be left out of commerce, they can be left out of health data as well–with even worse impacts on their lives. Malanga reeled out some statistics:

  • 20% of Americans don’t go on the Internet at all.

  • 57% of African-Americans don’t have Internet connections at home.

  • 70% of Americans over 65 don’t have a smart phone.

Those are just examples of ways that collecting data may miss important populations. Often, those populations are sicker than the people we reach with big data, so they need more help while receiving less.

The use of electronic health records, too, is still limited to certain populations in certain regions. Thus, some patients may take a lot of medications but not have “medication histories” available to research. Ameet Sarpatwari said that the exclusion of some populations from research makes post-approval research even more important; there we can find correlations that were missed during trials.

A crucial source of well-balanced health data is the All Payer Claims Databases that 18 states have set up to collect data across the board. But a glitch in employment law, highlighted by Carmel Shachar, exempts self-funded employers from sending their health data to the databases. This will most likely take a fix from Congress. Unless that happens, researchers and public health authorities will lack the comprehensive data they need to improve health outcomes, and the 12 states that have started their own APCD projects may abandon them.

Other rectifications cited by Malanga include an NIH requirement for studies funded by it to include women and minorities–a requirement Malanga would like other funders to adopt–and the FCC’s Lifeline program, which helps more low-income people get phone and Internet connections.

A recent article at the popular TechCrunch technology site suggests that the inscrutability of big data analytics is intrinsic to artificial intelligence. We must understand where computers outstrip our intuitive ability to understand correlations.

Correlations and Research Results: Do They Match Up? (Part 1 of 2)

Posted on May 26, 2016 | Written By

Andy Oram

Eight years ago, a widely discussed issue of WIRED Magazine proclaimed cockily that current methods of scientific inquiry, dating back to Galileo, were becoming obsolete in the age of big data. Running controlled experiments on limited samples just has too many limitations and takes too long. Instead, we will take any data we have conveniently at hand–purchasing habits for consumers, cell phone records for everybody, Internet-of-Things data generated in the natural world–and run statistical methods over them to find correlations.

Correlations were spotlighted at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Although the speakers expressed a healthy respect for big data techniques, they pinpointed their limitations and affirmed the need for human intelligence in choosing what to research, as well as how to use the results.

Petrie-Flom annual 2016 conference

A word from our administration

A new White House report also warns that “it is a mistake to assume [that big data techniques] are objective simply because they are data-driven.” The report highlights the risks of inherent discrimination in the use of big data, including:

  • Incomplete and incorrect data (particularly common in credit rating scores)

  • “Unintentional perpetuation and promotion of historical biases,”

  • Poorly designed algorithmic matches

  • “Personalization and recommendation services that narrow instead of expand user options”

  • Assuming that correlation means causation

The report recommends “bias mitigation” (page 10) and “algorithmic systems accountability” (page 23) to overcome some of these distortions, and refers to a larger FTC report that lays out the legal terrain.

Like the WIRED articles mentioned earlier, this gives us some background for discussions of big data in health care.

Putting the promise of analytical research under the microscope

Conference speaker Tal Zarsky offered both fulsome praise and specific cautions regarding correlations. As the WIRED Magazine issue suggested, modern big data analysis can find new correlations between genetics, disease, cures, and side effects. The analysis can find them much cheaper and faster than randomized clinical trials. This can lead to more cures, and has the other salutary effect of opening a way for small, minimally funded start-up companies to enter health care. Jeffrey Senger even suggested that, if analytics such as those used by IBM Watson are good enough, doing diagnoses without them may constitute malpractice.

W. Nicholson Price, II focused on the danger of the FDA placing too many strict limits on the use of big data for developing drugs and other treatments. Instead of making data analysts back up everything with expensive, time-consuming clinical trials, he suggested that the FDA could set up models for the proper use of analytics and check that tools and practices meet requirements.

One of the exciting impacts of correlations is that they bypass our assumptions and can uncover associations we never would have expected. The poster child for this effect is the notorious beer-and-diapers connection found by one retailer. This story has many nuances that tend to get lost in the retelling, but perhaps the most important point to note is that a retailer can depend on a correlation without having to ascertain the cause. In health, we feel much more comfortable knowing the cause of the correlation. Price called this aspect of big data search “black box medicine.” Saying that something works, without knowing why, raises a whole list of ethical concerns.

A correlation between stomach pain and disease can’t tell us whether the stomach pain led to the disease, the disease caused the stomach pain, or both are symptoms of a third underlying condition. Causation can make a big difference in health care. It can warn us to avoid a treatment that works 90% of the time (we’d like to know who the other 10% of patients are before they get a treatment that fails). It can help uncover side effects and other long-term effects–and perhaps valuable off-label uses as well.

Zarsky laid out several reasons why a correlation might be wrong.

  • It may reflect errors in the collected data. Good statisticians control for error through techniques such as discarding outliers, but if the original data contains enough bad apples, the barrel will go rotten.

  • Even if the correlation is accurate for the collected data, it may not be accurate in the larger population. The correlation could be a fluke, or the statistical sample could be unrepresentative of the larger world.

Zarsky suggests using correlations as a starting point for research, but backing them up by further randomized trials or by mathematical proofs that the correlation is correct.
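A small simulation shows why that backup step matters. In the hypothetical Python sketch below, we screen hundreds of pure-noise variables against an outcome in a small sample; some variable always “correlates” respectably by sheer chance, and the effect vanishes on fresh data.

```python
# Fluke correlations from multiple comparisons -- pure noise throughout.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_candidates = 30, 500
outcome = rng.normal(size=n_patients)
candidates = rng.normal(size=(n_candidates, n_patients))  # unrelated noise

corrs = [abs(np.corrcoef(c, outcome)[0, 1]) for c in candidates]
print("strongest 'correlation' found: %.2f" % max(corrs))  # often > 0.5

# Re-test the winner against fresh data and it collapses toward zero,
# which is why correlations should seed trials, not replace them.
best = candidates[int(np.argmax(corrs))]
new_outcome = rng.normal(size=n_patients)
print("same variable on new data: %.2f" % abs(np.corrcoef(best, new_outcome)[0, 1]))
```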

Isaac Kohane described, from the clinical side, some of the pros and cons of using big data. For instance, data collection helps us see that choosing a gender for intersex patients right after birth produces a huge amount of misery, because the doctor guesses wrong half the time. However, he also cited times when data collection can be confusing for the reasons listed by Zarsky and others.

Senger pointed out that after drugs and medical devices are released into the field, data collected on patients can teach developers more about risks and benefits. But this also runs into the classic risks of big data. For instance, if a patient dies, did the drug or device contribute to death? Or did he just succumb to other causes?

We already have enough to make us puzzle over whether we can use big data at all–but there’s still more, as the next part of this article will describe.

Healthcare Consent and its Discontents (Part 3 of 3)

Posted on May 18, 2016 | Written By

Andy Oram

The previous section of this article rated the pros and cons of new approaches to patient consent and control over data. Here we’ll look at emerging risks.

Privacy solidarity

Genetics present new ethical challenges–not just in the opportunity to change genes, but even just when sequencing them. These risks affect not only the individual: other members of her family and ethnic group can face discrimination thanks to genetic weaknesses revealed. Isaac Kohane said that the average person has 40 genetic markers indicating susceptibility to some disease or other. Furthermore, we sometimes disagree on what we consider a diseased condition.

Big data, particularly with genomic input, can lead to group harms, so Brent Mittelstadt called for moving beyond an individual view of privacy. Groups also have privacy needs (a topic I explored back in 1998). It’s not enough for an individual to consider the effect of releasing data on his own future; he must also weigh the effect on family members, members of his racial group, and so on. Similarly, Barbara Evans said we have to move from self-consciousness to social consciousness. But US and European laws consider privacy and data protection only on the basis of the individual.

The re-identification bogey man

A good many references were made at the conference to the increased risk of re-identifying patients from supposedly de-identified data. Headlines are made when some researcher manages to uncover a person who thought himself anonymous (and who database curators thought was anonymous when they released their data sets). In a study conducted by a team that included speaker Catherine M. Hammack, experts admitted that there is eventually a near 100% probability of re-identifying each person’s health data. The culprit in all this is the burgeoning set of data collected from people as they purchase items and services, post seemingly benign news about themselves on social media, and otherwise participate in modern life.

I think the casual predictions of the end of anonymity we hear so often are unnecessarily alarmist. The field of anonymity has progressed a great deal since Latanya Sweeney famously re-identified a patient record for Governor William Weld of Massachusetts. Re-identifications carried out since then, by Sweeney and others, have taken advantage of data that was not anonymized (people just released it with an intuitive assumption that they could not be re-identified) or that was improperly anonymized, not using recommended methods.

Unfortunately, the “safe harbor” in HIPAA (designed precisely for medical sites lacking the skills to de-identify data properly) enshrines bad practices. Still, in a HIPAA challenge cited by Ameet Sarpatwari, only two of 15,000 individuals were re-identified. The mosaic effect is still more of a theoretical weakness than an immediate threat.

I may be biased, because I edited a book on anonymization, but I would offer two challenges to people who cavalierly dismiss anonymization as a useful protection. First, if we threw up our hands and gave up on anonymization, we couldn’t even carry out a census, which is mandated in the U.S. Constitution.

Second, anonymization is comparable to encryption. We all know that computer speeds are increasing, just as are the sophistication of re-identification attacks. The first provides a near-guarantee that, eventually, our current encrypted conversations will be decrypted. The second, similarly, guarantees that anonymized data will eventually be re-identified. But we all still visit encrypted web sites and use encryption for communications. Why can’t we similarly use the best in anonymization?
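For readers wondering what “recommended methods” involve, here is a minimal Python sketch of one foundational check, k-anonymity, the property Sweeney’s re-identification work helped motivate: before release, every combination of quasi-identifiers should be shared by at least k records. The records and field names here are invented.

```python
# Minimal k-anonymity check over invented records.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the dataset's k: the size of its smallest identifier group."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"zip": "021*", "age_band": "60-69", "sex": "F", "dx": "flu"},
    {"zip": "021*", "age_band": "60-69", "sex": "F", "dx": "asthma"},
    {"zip": "021*", "age_band": "30-39", "sex": "M", "dx": "flu"},  # unique!
]
print(k_anonymity(records, ["zip", "age_band", "sex"]))  # 1 -> re-identifiable
```

A dataset that scores 1 here is exactly the kind released under intuitive assumptions of anonymity; proper de-identification generalizes or suppresses fields until k is acceptably large.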

A new article in the Journal of the American Medical Association exposes a gap between what doctors consider adequate consent and what’s meaningful for patients, blaming “professional indifference” and “organizational inertia” for the problem. In research, the “reasonable-patient standard” is even harder to define and achieve.

Patient consent doesn’t have to go away. But it’s getting harder and harder for patients to anticipate the uses of their data, or even to understand what data is being used to match and measure them. However, precisely because we don’t know how data will be used or how patients can tolerate it, I believe that incremental steps would be most useful in teasing out what will work for future research projects.