
Current Security Approaches May Encourage EMR Password Sharing

Posted on October 19, 2017 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

In theory, you want everyone who accesses a patient’s health data to leave a clear footprint. As a result, it’s standard practice to assign every clinician using EMR data a unique user ID and password. Most healthcare organizations assume that this is a robust way to document who is using the system and what they do when they’re online.
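
To make that "clear footprint" concrete, here is a minimal sketch of the kind of audit logging this model assumes. It's illustrative Python only, not any particular EMR's implementation; the table and field names are my own invention:

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative audit log: every chart access is tied to the authenticated
# user's unique ID, so reviewers can later reconstruct who viewed which
# record and when.
conn = sqlite3.connect("audit.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS access_log (
        user_id     TEXT NOT NULL,   -- the clinician's unique credential
        patient_id  TEXT NOT NULL,   -- the record that was opened
        action      TEXT NOT NULL,   -- e.g. 'view', 'edit', 'print'
        accessed_at TEXT NOT NULL
    )
""")

def record_access(user_id: str, patient_id: str, action: str) -> None:
    """Append one audit entry for a single EMR access event."""
    conn.execute(
        "INSERT INTO access_log VALUES (?, ?, ?, ?)",
        (user_id, patient_id, action, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

record_access("dr_smith", "patient_0042", "view")
```

The trail is only as trustworthy as the credential behind it: if "dr_smith" is actually a borrowed login, the log still looks clean while saying nothing about who really opened the chart.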

Unfortunately, this may not be the case, which in turn means that providers may know far less about health data users than they think. In fact, this approach may actually undermine efforts to track health data access, according to a new study appearing in the journal Healthcare Informatics Research.

The researchers behind the study created a Google Forms-based survey asking medical and para-medical personnel whether they’d ever obtained another medical staff member’s password, and if so, how many times and what their reasons were.

They gathered a total of 299 responses to their questions. Of that total, 220 respondents (just under 74%) had “borrowed” another staff member’s password. Only 57% answered the question of how many times this had happened, but among those who did respond the average rate was 4.75 episodes. All of the residents taking part had obtained another medical staff member’s password, compared with 57.5 percent of nurses.

The reasons medical staffers gave for sharing passwords included that “I was not given a user account despite having to use the system to fulfill my duties.” This response was particularly prevalent among students. Researchers got similar results for the reason “the permissions granted to me did not allow me to fulfill my duties.”

Given their working conditions, it may be hard for medical staff members to avoid bending the rules. For example, the authors suggest that doctors will at times feel compelled to share password information, as their duties are wide-ranging and may involve performing unplanned services. Also, during on-call hours, interns and residents may need to perform activities that require them to use others’ EMR account information.

The bottom line, researchers said, is that the existing approach to health data security is deeply flawed. The current password-based approach used by most healthcare organizations is “doomed” by how often clinicians share passwords, they argue.

In other words, putting these particular safeguards in place may actually have a paradoxical effect. Though organizations might be tempted to strengthen the authentication process, doing so can actually worsen the situation by encouraging system workarounds.

To address this problem over the long term, widely-accepted standards for information security may need to be rethought, they wrote. Specifically, while the ISO standard bases information security on the principles of confidentiality, integrity and availability, organizations must add usability to the list. Otherwise, it will be difficult to get end users to cooperate voluntarily, the article concludes.

USAA Tapping EHR To Gather Data From Life Insurance Applicants

Posted on August 10, 2017 | Written By Anne Zieger

I can’t believe I missed this. Apparently, financial giant USAA announced earlier this year that it’s collecting health data from life insurance applicants by interfacing with patient portals. While it may not be the first life insurer to do so, I haven’t been able to find any others, which makes this pretty interesting.

Usually, when someone applies for life insurance, they have to produce medical records which support their application. (We wouldn’t want someone to buy a policy and pop off the next day, would we?) In the past, applicants have had to push their providers to send medical records to the insurer. As anyone who’s tried to get health records for themselves knows, getting this done can be challenging and is likely to slow down policy approvals.

Thanks to USAA’s new technology implementation, however, the process is much simpler. The new offering, which is available to applicants who receive care through the Department of Veterans Affairs and the Department of Defense, allows consumers to deliver their health data directly to the insurer via their patient portal.

To make this possible, USAA worked with Cerner on EHR retrieval technology. The technology, known as HealtheHistory, supports health data collection, encrypts data transmission and limits access to EHR data to approved persons. No word yet as to whether Cerner has struck similar deals elsewhere but it wouldn’t surprise me.

USAA’s new EHR-based approach has paid off nicely. The life insurer has seen an average 30-day reduction in the time it takes to acquire health records for applicants, and though it doesn’t say what the average was back in the days of paper records, I assume that this is a big improvement.

And now on to the less attractive aspects of this deal. I don’t know about you, but I see a couple of red flags here.

First, while life insurers may know how to capture health data, I doubt they’re cognizant of HIPAA’s nuances. Even if they hire a truckload of HIPAA experts, they don’t have much context for maintaining HIPAA compliance. What’s more, they rarely if ever have to look a patient in the face, something that serves as a natural deterrent to data carelessness among providers.

Also, given the industry’s track record, is it really a good idea to give a life insurer that much data? For example, consider the case of a healthy 36-year-old woman with no current medical issues who was denied coverage because she had the BRCA 1 gene. That gene, as some readers may know, is associated with an increased risk of breast and ovarian cancer.

The life insurer apparently found out about the woman’s genetic makeup as part of the application process, which included queries about genetic information. Since the woman had had such testing, she had to disclose it or risk being accused of fraud.

While the insurer in question may have the right, legally, to make such decisions, their doing so falls into a gray area ethically. What’s more, things would get foggier if, say, it decided to share such information with a sister health insurance division. Doing so may not be legal but I can easily see it happening.

Should someone’s genes be used to exclude them from life or health insurance? Bar them from being approved for a mortgage from another sister company? Can insurers be trusted to meet HIPAA standards for use of PHI? It’ll be important to address such questions before we throw our weight behind open health data sharing with companies like USAA.

Google’s DeepMind Runs Afoul Of UK Regulators Over Patient Data Access

Posted on July 20, 2017 | Written By Anne Zieger

Back in February, I recounted the tale of DeepMind, a standout AI startup acquired by Google a few years ago. In the story, I noted that DeepMind had announced that it would be working with the Royal Free London NHS Foundation Trust, which oversees three hospitals, to test out its healthcare app.

DeepMind’s healthcare app, Streams, is designed to help providers kick out patient status updates to physicians and nurses working with them. Under the terms of the deal, which was to span five years, DeepMind was supposed to gain access to 1.6 million patient records managed by the hospitals.

Now, the agreement seems to have collapsed under regulatory scrutiny. The UK’s data protection watchdog has ruled that DeepMind’s deal with the Trust “failed to comply with data protection law,” according to a story in Business Insider. The watchdog, known as the Information Commissioner’s Office (ICO), has spent a year investigating the deal, BI reports.

As it turns out, the agreement empowered the Trust hospitals to share the data without the patients’ prior knowledge, something that presumably wouldn’t fly in the U.S. either. The data, intended for use in developing the Streams app’s kidney monitoring technology, included information on whether people were HIV-positive, along with details of drug overdoses and abortions.

In its defense, DeepMind and the Royal Free Trust argued that patients had provided “implied consent” for such data sharing, given that the app was delivering “direct care” to patients using it. (Nice try. Got any other bridges you wanna sell?) Not surprisingly, that didn’t satisfy the ICO, which found several other shortcomings in how the data was handled as well.

While the ICO has concluded that the DeepMind/Royal Free Trust deal was illegal, it doesn’t plan to sanction either party, despite having the power to hand out fines of up to £500,000, BI said. But DeepMind, which last year set up its own independent review panel to oversee its data sharing agreements, privacy and security measures and product roadmaps, is taking a closer look at this deal. Way to self-police, guys! (Or maybe not.)

Not to be provincial, but what worries me about this is less the politics of UK patient protection laws and more the potential for Google subsidiaries to engage in other questionable data sharing activities. DeepMind has always said that it does not share patient data with its corporate parent, but while this might be true now, Google could do incalculable harm to patient privacy if it doesn’t maintain this firewall.

Hey, just consider that even for an entity the size of Google, healthcare data is an incredibly valuable asset. Reportedly, even street-level data thieves pay 10 times as much for healthcare data as they do for, say, credit card numbers. It’s hard to even imagine what an entity the size of Google could do with such data if it were crunched in incredibly advanced ways. Let’s just say I don’t want to find out.

Unfortunately, as far as I know U.S. law hasn’t caught up with the idea of crime-by-analytics, which could be an issue even if an entity has legal possession of healthcare data. But I hope it does soon. The amount of harm this kind of data manipulation could do is immense.

Dogged By Privacy Concerns, Consumers Wonder If Using HIT Is Worthwhile

Posted on May 17, 2017 | Written By Anne Zieger

I just came across a survey suggesting that while we in the health IT world see a world of possibilities in emerging technologies, consumers aren’t so sure. The researchers found that consumers question the value of many tech platforms popular with health execs, apparently because they don’t trust providers to keep their personal health data secure.

The study was conducted between September and December 2016 by technology research firm Black Book, which reached out to 12,090 adult consumers across the United States.

The topline conclusion from the study was that 57 percent of consumers who had been exposed to HIT through physicians, hospitals or ancillary providers doubted its benefits. Their concerns extended not only to EHRs, but also to many commonly-deployed solutions such as patient portals and mobile apps. The survey also concluded that 70 percent of Americans distrusted HIT, up sharply from just 10 percent in 2014.

Black Book researchers tied consumers’ skepticism to their very substantial privacy concerns. Survey data indicated that 87 percent of respondents weren’t willing to divulge all of their personal health data, even if it improved their care.

Some categories of health information were especially sensitive for consumers. Ninety-nine percent were worried about providers sharing their mental health data with anyone but payers, 90 percent didn’t want their prescription data shared and 81 percent didn’t want information on their chronic conditions shared.

And their data security worries go beyond clinical data. A full 93 percent of respondents said they were concerned about the security of their personal financial information, particularly as banking and credit card data are increasingly shared among providers.

As a result, at least some consumers said they weren’t disclosing all of their health information. Also, 69 percent of patients admitted that they were holding back information from their current primary care physicians because they doubted the PCPs knew enough about technology to protect patient data effectively.

One of the reasons patients are so protective of their data is that many don’t understand health IT, the survey suggested. For example, Black Book found that 92 percent of nurse leaders in hospitals under 200 beds said they had no time during the discharge process to improve patient tech literacy. (In contrast, only 55 percent of nurse leaders working in large hospitals had this complaint, one of the few bright spots in Black Book’s data.)

When it comes to tech training, medical practices aren’t much help either. A whopping 96 percent of patients said that physicians and staff didn’t do a good job of explaining how to use the patient portal. About 40 percent of patients tried to use their medical practice’s portal, but 83 percent said they had trouble using it when they were at home.

All that being said, consumers seemed to feel much differently about data they generate on their own. In fact, 91 percent of consumers with wearables reported that they’d like to see their physician practice’s medical record system store any health data they ask it to. Likewise, 91 percent of patients who felt that their apps and devices were important to improving their health were disappointed when providers wouldn’t store their personal data.

OIG Says HHS Needs To Play Health IT Catch-Up

Posted on December 1, 2016 | Written By Anne Zieger

A new analysis by the HHS Office of the Inspector General suggests that the agency still has work to do in appropriately managing health information technology and making sure it performs, according to Health Data Management. And unfortunately, the problems it highlights don’t seem likely to go away anytime soon.

The critique of HHS’s HIT capabilities came as part of an annual report from the OIG, in which the oversight body lists what it sees as the department’s top 10 management and performance issues. The OIG ranked HIT third on its list.

In that critique, auditors from the OIG pointed out that there are still major concerns over the future of health data sharing in the US, not just for HHS but also in the US healthcare system at large. Specifically, the OIG notes that while HHS has spent a great deal on health IT, it hasn’t gotten too far in enabling and supporting the flow of health data between various stakeholders.

In this analysis, the OIG cites several factors which auditors see as challenges for HHS, including the lack of interoperability between health data sources, barriers imposed by federal and state privacy and security laws, the cost of health IT infrastructure and environmental issues such as information blocking by vendors. Of course, the problems it outlines are the same old pains in the patoot that we’ve been facing for several years, though it doesn’t hurt to point them out again.

In particular, the OIG’s report argues, it’s essential for HHS to improve the flow of up-to-date, accurate and complete electronic information between the agency and providers it serves. After all, it notes, having that data is important to processing Medicare and Medicaid payments, quality improvement efforts and even HHS’s internal program integrity and operations efforts. Given the importance of these activities, the report says, HHS leaders must find ways to better streamline and speed up internal data exchange as well as share that data with Medicare and Medicaid systems.

The OIG also critiqued HHS security and privacy efforts, particularly as the number of healthcare data breaches and potential cybersecurity threats like ransomware continues to expand. As things stand, HHS cybersecurity shortfalls abound, including inadequacies in access controls, patch management, encryption of data and website security. These vulnerabilities, it noted, extend not only to HHS but also to the states and other entities that do business with the agency, as well as healthcare providers.

Of course, the OIG is doing its job in drawing attention to these issues, which are stubborn and long-lasting. Unfortunately, hammering away at these issues over and over again isn’t likely to get us anywhere. I’m not sure the OIG should have wasted the pixels to remind us of challenges that seem intractable without offering some really nifty solutions, or at least new ideas.

AMA Approves List Of Best Principles For Mobile Health App Design

Posted on November 29, 2016 | Written By Anne Zieger

The American Medical Association has effectively thrown its weight behind the use of mobile health applications, at least if those apps meet the criteria members agreed on at a recent AMA meeting. That being said, the group also argues that the industry needs to expand the evidence base demonstrating that apps are accurate, effective, safe and secure. The principles, which were approved at its recent Interim Meeting, are intended to guide coverage and payment policies supporting the use of mHealth apps.

The AMA attendees agreed on the following principles, which are intended to guide the use of not only mobile health apps but also associated devices, trackers and sensors by patients, physicians and others. They require that mobile apps and devices meet the following somewhat predictable criteria:

  • Supporting the establishment or continuation of a valid patient-physician relationship
  • Having a clinical evidence base to support their use in order to ensure mHealth app safety and effectiveness
  • Following evidence-based practice guidelines, to the degree they are available, to ensure patient safety, quality of care and positive health outcomes
  • Supporting data portability and interoperability in order to promote care coordination through medical home and accountable care models
  • Abiding by state licensure laws and state medical practice laws and requirements in the state in which the patient receives services facilitated by the app
  • Requiring that physicians and other health practitioners delivering services through the app be licensed in the state where the patient receives services, or be otherwise authorized by that state’s medical board to provide these services
  • Ensuring that the delivery of any service via the app is consistent with the state scope of practice laws

In addition to laying out these principles, the AMA also looked at legal issues physicians might face in using mHealth apps. And that’s where things got interesting.

For one thing, the AMA argues that it’s at least partially on a physician’s head to school patients on how secure and private a given app may be (or fail to be). That implies that your average physician will probably have to become more aware of how well a range of apps handle such issues, something I doubt most have studied to date.

The AMA also charges physicians to become aware of whether mHealth apps and associated devices, trackers and sensors are abiding by all applicable privacy and security laws. In fact, according to the new policy, doctors are supposed to consult with an attorney if they don’t know whether mobile health apps meet federal or state privacy and security laws. That warning, while doubtless prudent, must not be helping members sleep at night.

Finally, the AMA notes that there are still questions remaining as to what risks physicians face who use, recommend or prescribe mobile apps. I have little doubt that they are right about this.

Just think of the malpractice lawsuit possibilities. Is the doctor liable because they relied on inaccurate app results collected by the patient? If the app they recommended presented inaccurate results? How about if the app was created by the practice or health system for which they work? What about if the physician relied on inaccurate data generated by a sensor or wearable — is a physician liable or the device manufacturer? If I can come up with these questions, you know a plaintiff’s attorney can do a lot better.

A 2 Prong Strategy for Healthcare Security – Going Beyond Compliance

Posted on November 7, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

This post is sponsored by Samsung Business. All thoughts and opinions are my own.

As if our security senses weren’t on heightened alert enough, I think all of us were hit by the recent distributed denial of service attacks that took down a number of major sites on the internet. The unique part of this attack was that it used a “botnet” of internet of things (IoT) devices. It’s amazing how creative these security attacks have become and healthcare is often the target.

The problem for healthcare is that too many organizations have spent their time and money on compliance versus security. Certainly, compliance is important (HIPAA Audits are real and expensive if you fail), but just because you’re compliant doesn’t mean you’re secure. Healthcare organizations need to move beyond compliance and make efforts to make their organizations more secure.

Here’s a 2 prong strategy that organizations should consider when it comes to securing their organization’s data and technology:

Build Enough Barriers
The first piece of every healthcare organization’s security strategy should be to ensure that you’ve created enough barriers to protect your organization’s health data. While we’ve seen an increase in targeted hacks, the most common attacks on healthcare organizations still come from hackers who randomly find a weakness in your technology infrastructure. Once they find that weakness, they exploit it and do whatever damage they can.

The reality is that you’ll never make your health IT 100% secure. That’s impossible. However, if you create enough barriers to entry, you’ll keep out the majority of hackers that are just scouring the internet for opportunities. Building the right barriers to entry means that most hackers will move on to a more vulnerable target and leave you alone. Some of these barriers might be a high quality firewall, AI security, integrated mobile device security, user training, encryption (device and in transit), and much more.
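
To make one of those barriers concrete, here’s a minimal Python sketch of enforcing encryption in transit: it refuses anything weaker than TLS 1.2 and verifies the server’s certificate. The endpoint URL and payload are placeholders of my own, not a real service or vendor tool:

```python
import json
import ssl
import urllib.request

# Illustrative only: enforce TLS 1.2+ and certificate verification
# before any health data leaves the building.
context = ssl.create_default_context()            # verifies certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse older protocol versions

payload = json.dumps({"device_id": "cart-07", "status": "ok"}).encode()

req = urllib.request.Request(
    "https://ehr.example.org/api/status",         # placeholder endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req, context=context) as resp:
    print(resp.status)
```

The same principle applies to data at rest: make encryption the default path, and treat any unencrypted path as an exception that requires explicit sign-off.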

Building these barriers has to be ingrained into your culture. You can’t just change to a secure organization overnight. It needs to be deeply embedded into everything you do as a company and all the decisions you make.

Create a Mitigation and Response Strategy
While we’d like to dream that a breach will never occur to us, hacks are becoming more a question of when and not if they will happen. This is why it’s absolutely essential that healthcare organizations create a proper mitigation and response strategy.

I recently heard about a piece of ransomware that hit a healthcare organization. Within 60 seconds of the ransomware hitting the organization, 6 devices were infected before the staff could mitigate any further spread. That’s incredible. Imagine if they didn’t have a mitigation strategy in place. The ransomware would have spread like wildfire across the organization. Do you have a mitigation strategy that will identify breaches so you can stop them before they spread?
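
To illustrate what “identify breaches so you can stop them before they spread” might look like at its simplest, here’s a rough Python sketch that flags a sudden burst of file modifications on a shared drive, one crude ransomware signal. The path and threshold are assumptions to be tuned for your environment, not recommendations or a vendor product:

```python
import os
import time

WATCH_DIR = "/shared/clinical_docs"   # placeholder path
WINDOW_SECONDS = 60                   # how far back to look
ALERT_THRESHOLD = 200                 # files changed per window before alerting

def files_modified_since(root: str, cutoff: float) -> int:
    """Count files under root whose modification time is newer than cutoff."""
    count = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                if os.path.getmtime(os.path.join(dirpath, name)) >= cutoff:
                    count += 1
            except OSError:
                continue  # file vanished mid-scan; skip it
    return count

while True:
    changed = files_modified_since(WATCH_DIR, time.time() - WINDOW_SECONDS)
    if changed > ALERT_THRESHOLD:
        # In a real playbook: isolate the host, disable the share, page the on-call team.
        print(f"ALERT: {changed} files modified in the last {WINDOW_SECONDS} seconds")
    time.sleep(WINDOW_SECONDS)
```

A real detection stack would be far more sophisticated, but even a crude tripwire like this beats learning about an infection hours after it has finished encrypting the file server.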

Creating an appropriate response to breaches, infections, and hacks is also just as important. While no incident of this nature is fun, it is much better to be ahead of the incident versus learning about it when the news story, patient, or government organization comes to you with the information. Make sure you have a well thought out strategy on how you’ll handle a breach. They’re quickly becoming a reality for every organization.

As healthcare moves beyond compliance and focuses more on security, we’ll be much better positioned to protect patients’ data. Not only is this the right thing to do for our patients, it’s also the right thing to do for our businesses. Creating a good security plan which prevents incidents and then backing that up with a mitigation and response strategy are both great steps to ensuring your organization is prepared.

For more content like this, follow Samsung on Insights, Twitter, LinkedIn, YouTube and SlideShare.

Practice Fusion Settles FTC Charges Over “Deceptive” Consumer Marketing

Posted on June 20, 2016 | Written By Anne Zieger

In what may be a first for the EMR industry, ambulatory EMR vendor Practice Fusion has settled Federal Trade Commission charges that it misled consumers as part of a campaign to gather reviews for its doctors.

Under the terms of the settlement, Practice Fusion agreed to refrain from making deceptive statements about the privacy and confidentiality of the information it collects from consumers. It also promised that if it planned to make any consumer information publicly available, it would offer a clear and conspicuous notice of its plans before it went ahead, and get affirmative consent from those consumers before using their information.

Prior to getting entangled in these issues, Practice Fusion had launched Patient Fusion, a portal allowing patients whose providers used its EMR to download their health information, transmit that information to another provider or send and receive messages from their providers.

The problem targeted by the FTC began in 2012, when Practice Fusion was preparing to expand Patient Fusion to include a public directory allowing enrollees to search for doctors, read reviews and request appointments. To support the rollout, the company began sending emails to patients of providers who used Practice Fusion’s EMR, asking patients to review their provider. In theory, this was probably a clever move, as the reviews would have given Practice Fusion-using practices greater social credibility.

The problem, however, was that the request was marketed deceptively, the FTC found. Rather than admitting that this was an EMR vendor’s marketing effort, Practice Fusion’s email messages appeared to come from patients’ doctors. The patients were never informed that the information would be made public. Worse, a pre-checked “Keep this review anonymous” box only withheld the patient’s name, leaving any information entered in the text box visible.

So patients, who thought they were communicating privately with their physicians, shared a great deal of private and personal health information. Many entered their full name or phone number in a text box provided as part of the survey. Others shared intimate health information, including one consumer who asked for dosing information for “my Xanax prescription,” and another who asked for help with a suicidally depressed child.
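
The mechanism of that failure is easy to reproduce. Here’s a hypothetical Python sketch (the review fields and text are invented for illustration, not taken from Practice Fusion’s system) showing why suppressing only the name field does nothing about identifying details typed into the free-text box:

```python
# Illustrative only: "anonymizing" a review by dropping the name field
# leaves whatever the patient typed into the comment untouched.
review = {
    "patient_name": "Jane Doe",
    "provider": "Dr. Example",
    "comment": "This is Jane Doe, please call me at 555-0100 about my prescription.",
}

def anonymize_name_only(r: dict) -> dict:
    """Mimics a checkbox that only suppresses the structured name field."""
    public = dict(r)
    public.pop("patient_name", None)
    return public

print(anonymize_name_only(review))
# The name field is gone, but the free text still identifies the patient:
# {'provider': 'Dr. Example', 'comment': 'This is Jane Doe, please call me at 555-0100 ...'}
```

Genuine de-identification has to treat free text as sensitive by default, which is essentially what Practice Fusion’s later automated screening tried to do.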

The highly sensitive nature of some patient comments didn’t get much attention until a year later, when EMR and HIPAA broke the story and then Forbes published a follow up article on the subject. After the articles appeared, Practice Fusion put automated procedures in place to block the publication of reviews in which consumers entered personal information.

In the future, Practice Fusion is barred from misrepresenting the extent to which it uses, maintains and protects the privacy or confidentiality of data it collects. Also, it may not publicly display the reviews it collected from consumers during the time period covered by the complaint.

There are many lessons to be gleaned from this case, but the most obvious seems to be that misleading communications that impact patients are a complete no-no. According to an FTC blog item on the case, other lessons include that health IT companies should never bury key facts in a dense privacy policy, and that disclosures should use the same eye-catching methods companies use for marketing, such as striking graphics, bold colors, big print and prominent placement.

Correlations and Research Results: Do They Match Up? (Part 2 of 2)

Posted on May 27, 2016 | Written By

Andy Oram is an editor at O’Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space.

Andy also writes often for O’Reilly’s Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O’Reilly’s Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article described the benefits of big data analysis, along with some of the formal, inherent risks of using it. We’ll go even more into the problems of real-life use now.

More hidden bias

Jeffrey Skopek pointed out that correlations can perpetuate bias as much as they undermine it. Everything in data analysis is affected by bias, ranging from what we choose to examine and what data we collect to who participates, what tests we run, and how we interpret results.

The potential for seemingly objective data analysis to create (or at least perpetuate) discrimination on the basis of race and other criteria was highlighted recently by a Bloomberg article on Amazon Prime deliveries. Nobody thinks that any Amazon.com manager anywhere said, “Let’s not deliver Amazon Prime packages to black neighborhoods.” But that was the natural outcome of depending on data about purchases, incomes, or whatever other data was crunched by the company to produce decisions about deliveries. (Amazon.com quickly promised to eliminate the disparity.)

At the conference, Sarah Malanga went over the comparable disparities and harms that big data can cause in health care. Think of all the ways modern researchers interact with potential subjects over mobile devices, and how much data is collected from such devices for data analytics. Such data is used to recruit subjects, to design studies, to check compliance with treatment, and for epidemiology and the new Precision Medicine movement.

In all the same ways that the old, the young, the poor, the rural, ethnic minorities, and women can be left out of commerce, they can be left out of health data as well–with even worse impacts on their lives. Malanga reeled off some statistics:

  • 20% of Americans don’t go on the Internet at all.

  • 57% of African-Americans don’t have Internet connections at home.

  • 70% of Americans over 65 don’t have a smart phone.

Those are just examples of ways that collecting data may miss important populations. Often, those populations are sicker than the people we reach with big data, so they need more help while receiving less.

The use of electronic health records, too, is still limited to certain populations in certain regions. Thus, some patients may take a lot of medications but not have “medication histories” available to research. Ameet Sarpatwari said that the exclusion of some populations from research makes post-approval research even more important; there we can find correlations that were missed during trials.

A crucial source of well-balanced health data is the All Payer Claims Databases that 18 states have set up to collect data across the board. But a glitch in employment law, highlighted by Carmel Shachar, releases self-funded employers from sending their health data to the databases. This will most likely take a fix from Congress. Unless that happens, researchers and public health officials will lack the comprehensive data they need to improve health outcomes, and the 12 states that have started their own APCD projects may abandon them.

Other rectifications cited by Malanga include an NIH requirement for studies funded by it to include women and minorities–a requirement Malanga would like other funders to adopt–and the FCC’s Lifeline program, which helps more low-income people get phone and Internet connections.

A recent article at the popular TechCrunch technology site suggests that the inscrutability of big data analytics is intrinsic to artificial intelligence. We must understand where computers outstrip our intuitive ability to understand correlations.

Correlations and Research Results: Do They Match Up? (Part 1 of 2)

Posted on May 26, 2016 | Written By Andy Oram

Eight years ago, a widely discussed issue of WIRED Magazine proclaimed cockily that current methods of scientific inquiry, dating back to Galileo, were becoming obsolete in the age of big data. Running controlled experiments on limited samples just has too many limitations and takes too long. Instead, we will take any data we have conveniently at hand–purchasing habits for consumers, cell phone records for everybody, Internet-of-Things data generated in the natural world–and run statistical methods over them to find correlations.

Correlations were spotlighted at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Although the speakers expressed a healthy respect for big data techniques, they pinpointed their limitations and affirmed the need for human intelligence in choosing what to research, as well as how to use the results.

Petrie-Flom annual 2016 conference

A word from our administration

A new White House report also warns that “it is a mistake to assume [that big data techniques] are objective simply because they are data-driven.” The report highlights the risks of inherent discrimination in the use of big data, including:

  • Incomplete and incorrect data (particularly common in credit rating scores)

  • “Unintentional perpetuation and promotion of historical biases,”

  • Poorly designed algorithmic matches

  • “Personalization and recommendation services that narrow instead of expand user options”

  • Assuming that correlation means causation

The report recommends “bias mitigation” (page 10) and “algorithmic systems accountability” (page 23) to overcome some of these distortions, and refers to a larger FTC report that lays out the legal terrain.

Like the WIRED articles mentioned earlier, this gives us some background for discussions of big data in health care.

Putting the promise of analytical research under the microscope

Conference speaker Tal Zarsky offered both fulsome praise and specific cautions regarding correlations. As the WIRED Magazine issue suggested, modern big data analysis can find new correlations between genetics, disease, cures, and side effects. The analysis can find them much more cheaply and quickly than randomized clinical trials. This can lead to more cures, and has the other salutary effect of opening a way for small, minimally funded start-up companies to enter health care. Jeffrey Senger even suggested that, if analytics such as those used by IBM Watson are good enough, doing diagnoses without them may constitute malpractice.

W. Nicholson Price, II focused on the danger of the FDA placing too many strict limits on the use of big data for developing drugs and other treatments. Instead of making data analysts back up everything with expensive, time-consuming clinical trials, he suggested that the FDA could set up models for the proper use of analytics and check that tools and practices meet requirements.

One of the exciting impacts of correlations is that they bypass our assumptions and can uncover associations we never would have expected. The poster child for this effect is the notorious beer-and-diapers connection found by one retailer. This story has many nuances that tend to get lost in the retelling, but perhaps the most important point is that a retailer can depend on a correlation without having to ascertain the cause. In health care, we feel much more comfortable knowing the cause of a correlation. Price called this aspect of big data search “black box medicine.” Saying that something works, without knowing why, raises a whole list of ethical concerns.

A correlation between stomach pain and disease can’t tell us whether the stomach pain led to the disease, the disease caused the stomach pain, or both are symptoms of a third underlying condition. Causation can make a big difference in health care. It can warn us to avoid a treatment that works only 90% of the time (we’d like to know who the other 10% of patients are before they get a treatment that fails). It can help uncover side effects and other long-term effects–and perhaps valuable off-label uses as well.

Zarsky laid out several reasons why a correlation might be wrong.

  • It may reflect errors in the collected data. Good statisticians control for error through techniques such as discarding outliers, but if the original data contains enough bad apples, the whole barrel will go rotten.

  • Even if the correlation is accurate for the collected data, it may not be accurate in the larger population. The correlation could be a fluke, or the statistical sample could be unrepresentative of the larger world.

Zarsky suggests using correlations as a starting point for research, but backing them up by further randomized trials or by mathematical proofs that the correlation is correct.
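
Zarsky’s second point, that a correlation can simply be a fluke of the sample, is easy to demonstrate. The following sketch uses synthetic data purely for illustration: it correlates two genuinely unrelated variables and shows how a small sample can look convincing while a large one does not:

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(7)

def correlation_of_unrelated_variables(n):
    """Draw two independent variables of size n and correlate them."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]
    return pearson(xs, ys)

print("n = 10:    r =", round(correlation_of_unrelated_variables(10), 2))     # often sizable by chance
print("n = 10000: r =", round(correlation_of_unrelated_variables(10000), 2))  # collapses toward zero
```

With only a handful of observations, chance alone routinely produces correlations that look meaningful; as the sample grows, a spurious association shrinks toward zero, which is one reason replication on larger or independent data matters.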

Isaac Kohane described, from the clinical side, some of the pros and cons of using big data. For instance, data collection helps us see that choosing a gender for intersex patients right after birth produces a huge amount of misery, because the doctor guesses wrong half the time. However, he also cited times when data collection can be confusing for the reasons listed by Zarsky and others.

Senger pointed out that after drugs and medical devices are released into the field, data collected on patients can teach developers more about risks and benefits. But this also runs into the classic risks of big data. For instance, if a patient dies, did the drug or device contribute to death? Or did he just succumb to other causes?

We already have enough to make us puzzle over whether we can use big data at all–but there’s still more, as the next part of this article will describe.