
OIG Says HHS Needs To Play Health IT Catch-Up

Posted on December 1, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

A new analysis by the HHS Office of the Inspector General suggests that the agency still has work to do in appropriately managing health information technology and making sure it performs, according to Health Data Management. And unfortunately, the problems it highlights don’t seem likely to go away anytime soon.

The critique of HHS’s HIT capabilities came as part of an annual report from the OIG, in which the oversight body lists what it sees as the department’s top 10 management and performance issues. The OIG ranked HIT third on its list.

In that critique, auditors from the OIG pointed out that there are still major concerns over the future of health data sharing in the US, not just for HHS but also in the US healthcare system at large. Specifically, the OIG notes that while HHS has spent a great deal on health IT, it hasn’t gotten very far in enabling and supporting the flow of health data between various stakeholders.

In this analysis, the OIG cites several factors which auditors see as a challenge to HHS, including the lack of interoperability between health data sources, barriers imposed by federal and state privacy and security laws, the cost of health IT infrastructure and environmental issues such as information blocking by vendors. Of course, the problems it outlines are the same old pains in the patoot that we’ve been facing for several years, though it doesn’t hurt to point them out again.

In particular, the OIG’s report argues, it’s essential for HHS to improve the flow of up-to-date, accurate and complete electronic information between the agency and providers it serves. After all, it notes, having that data is important to processing Medicare and Medicaid payments, quality improvement efforts and even HHS’s internal program integrity and operations efforts. Given the importance of these activities, the report says, HHS leaders must find ways to better streamline and speed up internal data exchange as well as share that data with Medicare and Medicaid systems.

The OIG also critiqued HHS security and privacy efforts, particularly as the number of healthcare data breaches and potential cyber security threats like ransomware continue to expand. As things stand, HHS cybersecurity shortfalls abound, including inadequate access controls, patch management, encryption of data and website security vulnerabilities. These vulnerabilities, it noted, affect not only HHS, but also the states and other entities that do business with the agency, as well as healthcare providers.

Of course, the OIG is doing its job in drawing attention to these issues, which are stubborn and long-lasting. Unfortunately, hammering away at these issues over and over again isn’t likely to get us anywhere. I’m not sure the OIG should have wasted the pixels to remind us of challenges that seem intractable without offering some really nifty solutions, or at least new ideas.

AMA Approves List Of Best Principles For Mobile Health App Design

Posted on November 29, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

The American Medical Association has effectively thrown its weight behind the use of mobile health applications, at least if those apps meet the criteria members agreed on at a recent AMA meeting. That being said, the group also argues that the industry needs to expand the evidence base demonstrating that apps are accurate, effective, safe and secure. The principles, which were approved at its recent Interim Meeting, are intended to guide coverage and payment policies supporting the use of mHealth apps.

The AMA attendees agreed on the following principles, which are intended to guide the use of not only mobile health apps but also associated devices, trackers and sensors by patients, physicians and others. They require that mobile apps and devices meet the following somewhat predictable criteria:

  • Supporting the establishment or continuation of a valid patient-physician relationship
  • Having a clinical evidence base to support their use in order to ensure mHealth app safety and effectiveness
  • Following evidence-based practice guidelines, to the degree they are available, to ensure patient safety, quality of care and positive health outcomes
  • Supporting data portability and interoperability in order to promote care coordination through medical home and accountable care models
  • Abiding by state licensure laws and state medical practice laws and requirements in the state in which the patient receives services facilitated by the app
  • Requiring that physicians and other health practitioners delivering services through the app be licensed in the state where the patient receives services, or be providing these services as otherwise authorized by that state’s medical board
  • Ensuring that the delivery of any service via the app is consistent with the state scope of practice laws

In addition to laying out these principles, the AMA also looked at legal issues physicians might face in using mHealth apps. And that’s where things got interesting.

For one thing, the AMA argues that it’s at least partially on a physician’s head to school patients on how secure and private a given app may be (or fail to be). That implies that your average physician will probably have to become more aware of how well a range of apps handle such issues, something I doubt most have studied to date.

The AMA also charges physicians to become aware of whether mHealth apps and associated devices, trackers and sensors are abiding by all applicable privacy and security laws. In fact, according to the new policy, doctors are supposed to consult with an attorney if they don’t know whether mobile health apps meet federal or state privacy and security laws. That warning, while doubtless prudent, must not be helping members sleep at night.

Finally, the AMA notes that there are still open questions as to what risks physicians face when they use, recommend or prescribe mobile apps. I have little doubt that they are right about this.

Just think of the malpractice lawsuit possibilities. Is the doctor liable because they relied on inaccurate app results collected by the patient? If the app they recommended presented inaccurate results? How about if the app was created by the practice or health system for which they work? What about if the physician relied on inaccurate data generated by a sensor or wearable — is a physician liable or the device manufacturer? If I can come up with these questions, you know a plaintiff’s attorney can do a lot better.

A 2 Prong Strategy for Healthcare Security – Going Beyond Compliance

Posted on November 7, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

This post is sponsored by Samsung Business. All thoughts and opinions are my own.

As if our security senses weren’t on heightened alert enough, I think all of us were hit by the recent distributed denial of service attacks that took down a number of major sites on the internet. The unique part of this attack was that it used a “botnet” of internet of things (IoT) devices. It’s amazing how creative these security attacks have become and healthcare is often the target.

The problem for healthcare is that too many organizations have spent their time and money on compliance versus security. Certainly, compliance is important (HIPAA Audits are real and expensive if you fail), but just because you’re compliant doesn’t mean you’re secure. Healthcare organizations need to move beyond compliance and make efforts to make their organizations more secure.

Here’s a 2 prong strategy that organizations should consider when it comes to securing their organization’s data and technology:

Build Enough Barriers
The first piece of every healthcare organization’s security strategy should be to ensure that you’ve created enough barriers to protect your organization’s health data. While we’ve seen an increase in targeted hacks, the most common attacks on healthcare organizations still come from hackers who randomly find a weakness in your technology infrastructure. Once they find that weakness, they exploit it and do their damage.

The reality is that you’ll never make your health IT 100% secure. That’s impossible. However, if you create enough barriers to entry, you’ll keep out the majority of hackers that are just scouring the internet for opportunities. Building the right barriers to entry means that most hackers will move on to a more vulnerable target and leave you alone. Some of these barriers might be a high quality firewall, AI security, integrated mobile device security, user training, encryption (device and in transit), and much more.
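
To make one of those barriers a little more concrete, here’s a minimal sketch of encrypting a record at rest using the third-party Python cryptography package. It’s illustrative only; a real deployment would add proper key management, TLS for data in transit, and the other layers mentioned above.

```python
# Minimal sketch: encrypting a patient record at rest with a symmetric key.
# Assumes the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, held in a key-management system, not in code
fernet = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "hypertension"}'
ciphertext = fernet.encrypt(record)  # what actually lands on disk or in a backup

# Only holders of the key can recover the plaintext.
assert fernet.decrypt(ciphertext) == record
```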

Building these barriers has to be ingrained into your culture. You can’t just change to a secure organization overnight. It needs to be deeply embedded into everything you do as a company and all the decisions you make.

Create a Mitigation and Response Strategy
While we’d like to dream that a breach will never occur to us, hacks are becoming more a question of when and not if they will happen. This is why it’s absolutely essential that healthcare organizations create a proper mitigation and response strategy.

I recently heard about a piece of ransomware that hit a healthcare organization. Within 60 seconds of the ransomware hitting the organization, 6 devices were infected before the organization could mitigate any further spread. That’s incredible. Imagine if they didn’t have a mitigation strategy in place. The ransomware would have spread like wildfire across the organization. Do you have a mitigation strategy that will identify breaches so you can stop them before they spread?
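
Here’s a rough illustration of what one small piece of such a mitigation strategy might look like: an agent that flags ransomware-like bursts of file changes so a host can be isolated before the infection spreads. The thresholds, the event feed, and the isolate_host hook are all hypothetical placeholders, not a reference to any particular product.

```python
# Illustrative sketch only: flag ransomware-like behavior by watching for bursts
# of file modifications on a host within a short sliding window.
import time
from collections import deque

WINDOW_SECONDS = 10
MAX_MODIFICATIONS = 50   # assumed threshold; tune to the host's normal workload

recent_events = deque()

def on_file_modified(path, isolate_host):
    """Called by a (hypothetical) file-monitoring agent each time a file changes."""
    now = time.time()
    recent_events.append(now)
    # Drop events that fall outside the sliding window.
    while recent_events and now - recent_events[0] > WINDOW_SECONDS:
        recent_events.popleft()
    if len(recent_events) > MAX_MODIFICATIONS:
        isolate_host(reason=f"{len(recent_events)} file writes in {WINDOW_SECONDS}s (last: {path})")
```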

Creating an appropriate response to breaches, infections, and hacks is also just as important. While no incident of this nature is fun, it is much better to be ahead of the incident versus learning about it when the news story, patient, or government organization comes to you with the information. Make sure you have a well thought out strategy on how you’ll handle a breach. They’re quickly becoming a reality for every organization.

As healthcare moves beyond compliance and focuses more on security, we’ll be much better positioned to protect patients’ data. Not only is this the right thing to do for our patients, it’s also the right thing to do for our businesses. Creating a good security plan which prevents incidents and then backing that up with a mitigation and response strategy are both great steps to ensuring your organization is prepared.

For more content like this, follow Samsung on Insights, Twitter, LinkedIn, YouTube and SlideShare.

Practice Fusion Settles FTC Charges Over “Deceptive” Consumer Marketing

Posted on June 20, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

In what may be a first for the EMR industry, ambulatory EMR vendor Practice Fusion has settled Federal Trade Commission charges that it misled consumers as part of a campaign to gather reviews for its doctors.

Under the terms of the settlement, Practice Fusion agreed to refrain from making deceptive statements about the privacy and confidentiality of the information it collects from consumers. It also promised that if it planned to make any consumer information publicly available, it would offer a clear and conspicuous notice of its plans before it went ahead, and get affirmative consent from those consumers before using their information.

Prior to getting entangled in these issues, Practice Fusion had launched Patient Fusion, a portal allowing patients whose providers used its EMR to download their health information, transmit that information to another provider or send and receive messages from their providers.

The problem targeted by the FTC began in 2012, when Practice Fusion was preparing to expand Patient Fusion to include a public directory allowing enrollees to search for doctors, read reviews and request appointments. To support the rollout, the company began sending emails to patients of providers who used Practice Fusion’s EMR, asking patients to review their provider. In theory, this was probably a clever move, as the reviews would have given Practice Fusion-using practices greater social credibility.

The problem was, however, that the request was marketed deceptively, the FTC found. Rather than admitting that this was an EMR marketing effort, Practice Fusion’s email messages appeared to come from patients’ doctors. And the patients were never informed that the information would be made public. Worse, a pre-checked “Keep this review anonymous” option only withheld the patient’s name, leaving the information entered in the text box visible.

So patients, who thought they were communicating privately with their physicians, shared a great deal of private and personal health information. Many entered their full name or phone number in a text box provided as part of the survey. Others shared intimate health information, including one consumer who asked for dosing information for “my Xanax prescription,” and another who asked for help with a suicidally depressed child.

The highly sensitive nature of some patient comments didn’t get much attention until a year later, when EMR and HIPAA broke the story and then Forbes published a follow up article on the subject. After the articles appeared, Practice Fusion put automated procedures in place to block the publication of reviews in which consumers entered personal information.
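
We don’t know exactly what Practice Fusion’s automated procedures look like, but a screening step of this general shape — hold back any review whose free text appears to contain personal or clinical details — is easy to imagine. The patterns below are purely illustrative, not the company’s actual rules.

```python
# Hedged sketch of automated review screening: refuse to publish reviews that
# look like they contain personal or clinical detail. Patterns are illustrative only.
import re

PHONE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
DRUG_HINTS = re.compile(r"\b(prescription|refill|dosage|xanax)\b", re.IGNORECASE)

def safe_to_publish(review_text: str) -> bool:
    """Return False if the review appears to contain personal or clinical detail."""
    return not (PHONE.search(review_text)
                or EMAIL.search(review_text)
                or DRUG_HINTS.search(review_text))

print(safe_to_publish("Great doctor, short wait times."))                           # True
print(safe_to_publish("Please call me at 555-123-4567 about my Xanax refill."))     # False
```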

In the future, Practice Fusion is barred from misrepresenting the extent to which it uses, maintains and protects the privacy or confidentiality of data it collects. Also, it may not publicly display the reviews it collected from consumers during the time period covered by the complaint.

There are many lessons to be gleaned from this case, but the most obvious seems to be that misleading communications that impact patients are a complete no-no. According to an FTC blog item on the case, other lessons include that health IT companies should never bury key facts in a dense privacy policy, and that disclosures should use the same eye-catching methods they use for marketing, such as striking graphics, bold colors, big print and prominent placement.

Correlations and Research Results: Do They Match Up? (Part 2 of 2)

Posted on May 27, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous part of this article described the benefits of big data analysis, along with some of the formal, inherent risks of using it. We’ll go even more into the problems of real-life use now.

More hidden bias

Jeffrey Skopek pointed out that correlations can perpetuate bias as much as they undermine it. Everything in data analysis is affected by bias, ranging from what we choose to examine and what data we collect to who participates, what tests we run, and how we interpret results.

The potential for seemingly objective data analysis to create (or at least perpetuate) discrimination on the basis of race and other criteria was highlighted recently by a Bloomberg article on Amazon Prime deliveries. Nobody thinks that any Amazon.com manager anywhere said, “Let’s not deliver Amazon Prime packages to black neighborhoods.” But that was the natural outcome of depending on data about purchases, incomes, or whatever other data was crunched by the company to produce decisions about deliveries. (Amazon.com quickly promised to eliminate the disparity.)

At the conference, Sarah Malanga went over the comparable disparities and harms that big data can cause in health care. Think of all the ways modern researchers interact with potential subjects over mobile devices, and how much data is collected from such devices for data analytics. Such data is used to recruit subjects, to design studies, to check compliance with treatment, and for epidemiology and the new Precision Medicine movement.

In all the same ways that the old, the young, the poor, the rural, ethnic minorities, and women can be left out of commerce, they can be left out of health data as well–with even worse impacts on their lives. Malanga reeled out some statistics:

  • 20% of Americans don’t go on the Internet at all.

  • 57% of African-Americans don’t have Internet connections at home.

  • 70% of Americans over 65 don’t have a smart phone.

Those are just examples of ways that collecting data may miss important populations. Often, those populations are sicker than the people we reach with big data, so they need more help while receiving less.

The use of electronic health records, too, is still limited to certain populations in certain regions. Thus, some patients may take a lot of medications but not have “medication histories” available to research. Ameet Sarpatwari said that the exclusion of some populations from research makes post-approval research even more important; there we can find correlations that were missed during trials.

A crucial source of well-balanced health data is the All Payer Claims Databases that 18 states have set up to collect data across the board. But a glitch in employment law, highlighted by Carmel Shachar, releases self-funding employers from sending their health data to the databases. This will most likely take a fix from Congress. Unless they do so, researchers and public health will lack the comprehensive data they need to improve health outcomes, and the 12 states that have started their own APCD projects may abandon them.

Other rectifications cited by Malanga include an NIH requirement for studies funded by it to include women and minorities–a requirement Malanga would like other funders to adopt–and the FCC’s Lifeline program, which helps more low-income people get phone and Internet connections.

A recent article at the popular TechCrunch technology site suggests that the inscrutability of big data analytics is intrinsic to artificial intelligence. We must understand where computers outstrip our intuitive ability to understand correlations.

Correlations and Research Results: Do They Match Up? (Part 1 of 2)

Posted on May 26, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Eight years ago, a widely discussed issue of WIRED Magazine proclaimed cockily that current methods of scientific inquiry, dating back to Galileo, were becoming obsolete in the age of big data. Running controlled experiments on limited samples just has too many limitations and takes too long. Instead, we will take any data we have conveniently at hand–purchasing habits for consumers, cell phone records for everybody, Internet-of-Things data generated in the natural world–and run statistical methods over them to find correlations.

Correlations were spotlighted at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. Although the speakers expressed a healthy respect for big data techniques, they pinpointed their limitations and affirmed the need for human intelligence in choosing what to research, as well as how to use the results.

Petrie-Flom annual 2016 conference

A word from our administration

A new White House report also warns that “it is a mistake to assume [that big data techniques] are objective simply because they are data-driven.” The report highlights the risks of inherent discrimination in the use of big data, including:

  • Incomplete and incorrect data (particularly common in credit rating scores)

  • “Unintentional perpetuation and promotion of historical biases,”

  • Poorly designed algorithmic matches

  • “Personalization and recommendation services that narrow instead of expand user options”

  • Assuming that correlation means causation

The report recommends “bias mitigation” (page 10) and “algorithmic systems accountability” (page 23) to overcome some of these distortions, and refers to a larger FTC report that lays out the legal terrain.

Like the WIRED articles mentioned earlier, this gives us some background for discussions of big data in health care.

Putting the promise of analytical research under the microscope

Conference speaker Tal Zarsky offered both fulsome praise and specific cautions regarding correlations. As the WIRED Magazine issue suggested, modern big data analysis can find new correlations between genetics, disease, cures, and side effects. The analysis can find them much cheaper and faster than randomized clinical trials. This can lead to more cures, and has the other salutary effect of opening a way for small, minimally funded start-up companies to enter health care. Jeffrey Senger even suggested that, if analytics such as those used by IBM Watson are good enough, doing diagnoses without them may constitute malpractice.

W. Nicholson Price, II focused on the danger of the FDA placing too many strict limits on the use of big data for developing drugs and other treatments. Instead of making data analysts back up everything with expensive, time-consuming clinical trials, he suggested that the FDA could set up models for the proper use of analytics and check that tools and practices meet requirements.

One of the exciting impacts of correlations is that they bypass our assumptions and can uncover associations we never would have expected. The poster child for this effect is the notorious beer-and-diapers connection found by one retailer. This story has many nuances that tend to get lost in the retelling, but perhaps the most important point to note is that a retailer can depend on a correlation without having to ascertain the cause. In health, we feel much more comfortable knowing the cause of the correlation. Price called this aspect of big data search “black box medicine.” Saying that something works, without knowing why, raises a whole list of ethical concerns.

A correlation between stomach pain and a disease can’t tell us whether the stomach pain led to the disease, the disease caused the stomach pain, or both are symptoms of a third underlying condition. Causation can make a big difference in health care. It can warn us to avoid a treatment that works 90% of the time (we’d like to know who the other 10% of patients are before they get a treatment that fails). It can help uncover side effects and other long-term effects–and perhaps valuable off-label uses as well.

Zarsky laid out several reasons why a correlation might be wrong.

  • It may reflect errors in the collected data. Good statisticians control for error through techniques such as discarding outliers, but if the original data contains enough bad apples, the barrel will go rotten.

  • Even if the correlation is accurate for the collected data, it may not be accurate in the larger population. The correlation could be a fluke, or the statistical sample could be unrepresentative of the larger world.

Zarsky suggests using correlations as a starting point for research, but backing them up by further randomized trials or by mathematical proofs that the correlation is correct.
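
A toy example makes Zarsky’s first point vivid: a couple of corrupted records can manufacture a strong correlation out of pure noise, and a crude outlier screen makes it vanish again. The numbers here are synthetic, purely to show the mechanics.

```python
# Synthetic illustration: bad records can manufacture a correlation that
# disappears once outliers are handled.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.normal(size=200)          # truly unrelated to x

print(round(np.corrcoef(x, y)[0, 1], 3))            # near 0

# Two corrupted records (say, unit errors during data entry) dominate the statistic.
x_bad = np.append(x, [60, 65])
y_bad = np.append(y, [60, 65])
print(round(np.corrcoef(x_bad, y_bad)[0, 1], 3))     # spuriously close to 1

# A crude screen (drop points beyond 4 standard deviations) restores the picture.
keep = (np.abs(x_bad) < 4 * x_bad.std()) & (np.abs(y_bad) < 4 * y_bad.std())
print(round(np.corrcoef(x_bad[keep], y_bad[keep])[0, 1], 3))   # back near 0
```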

Isaac Kohane described, from the clinical side, some of the pros and cons of using big data. For instance, data collection helps us see that choosing a gender for intersex patients right after birth produces a huge amount of misery, because the doctor guesses wrong half the time. However, he also cited times when data collection can be confusing for the reasons listed by Zarsky and others.

Senger pointed out that after drugs and medical devices are released into the field, data collected on patients can teach developers more about risks and benefits. But this also runs into the classic risks of big data. For instance, if a patient dies, did the drug or device contribute to death? Or did he just succumb to other causes?

We already have enough to make us puzzle over whether we can use big data at all–but there’s still more, as the next part of this article will describe.

Healthcare Consent and its Discontents (Part 3 of 3)

Posted on May 18, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article rated the pros and cons of new approaches to patient consent and control over data. Here we’ll look at emerging risks.

Privacy solidarity

Genetics present new ethical challenges–not just in the opportunity to change genes, but even just when sequencing them. These risks affect not only the individual: other members of her family and ethnic group can face discrimination thanks to genetic weaknesses revealed. Isaac Kohane said that the average person has 40 genetic markers indicating susceptibility to some disease or other. Furthermore, we sometimes disagree on what we consider a diseased condition.

Big data, particularly with genomic input, can lead to group harms, so Brent Mittelstadt called for moving beyond an individual view of privacy. Groups also have privacy needs (a topic I explored back in 1998). It’s not enough for an individual to consider the effect of releasing data on his own future; he must also weigh the effect on family members, members of his racial group, and others. Similarly, Barbara Evans said we have to move from self-consciousness to social consciousness. But US and European laws consider privacy and data protection only on the basis of the individual.

The re-identification bogey man

A good many references were made at the conference to the increased risk of re-identifying patients from supposedly de-identified data. Headlines are made when some researcher manages to uncover a person who thought himself anonymous (and who database curators thought was anonymous when they released their data sets). In a study conducted by a team that included speaker Catherine M. Hammack, experts admitted that there is eventually a near 100% probability of re-identifying each person’s health data. The culprit in all this is the burgeoning set of data collected from people as they purchase items and services, post seemingly benign news about themselves on social media, and otherwise participate in modern life.

I think the casual predictions of the end of anonymity we hear so often are unnecessarily alarmist. The field of anonymity has progressed a great deal since Latanya Sweeney famously re-identified a patient record for Governor William Weld of Massachusetts. Re-identifications carried out since then, by Sweeney and others, have taken advantage of data that was not anonymized (people just released it with an intuitive assumption that they could not be re-identified) or that was improperly anonymized, not using recommended methods.

Unfortunately, the “safe harbor” in HIPAA (designed precisely for medical sites lacking the skills to de-identify data properly) enshrines bad practices. Still, in a HIPAA challenge cited by Ameet Sarpatwari, only two of 15,000 individuals were re-identified. The mosaic effect is still more of a theoretical weakness, not an immediate threat.

I may be biased, because I edited a book on anonymization, but I would offer two challenges to people who cavalierly dismiss anonymization as a useful protection. First, if we threw up our hands and gave up on anonymization, we couldn’t even carry out a census, which is mandated in the U.S. Constitution.

Second, anonymization is comparable to encryption. We all know that computer speeds are increasing, just as are the sophistication of re-identification attacks. The first provides a near-guarantee that, eventually, our current encrypted conversations will be decrypted. The second, similarly, guarantees that anonymized data will eventually be re-identified. But we all still visit encrypted web sites and use encryption for communications. Why can’t we similarly use the best in anonymization?
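
For readers who want a feel for what those recommended methods involve, here is a deliberately tiny sketch of one widely discussed building block, k-anonymity: generalize quasi-identifiers and check that every group is big enough to hide in. Real de-identification (HIPAA expert determination, for example) goes much further; the records and thresholds here are invented.

```python
# Minimal k-anonymity sketch: generalize quasi-identifiers, then verify that
# every generalized group contains at least k people. Illustrative only.
from collections import Counter

def generalize(record):
    zip3 = record["zip"][:3] + "**"                          # truncate ZIP code
    decade = (record["age"] // 10) * 10
    return (zip3, f"{decade}-{decade + 9}")                  # bucket age by decade

def is_k_anonymous(records, k=5):
    groups = Counter(generalize(r) for r in records)
    return all(count >= k for count in groups.values())

sample = [{"zip": "02139", "age": 34}, {"zip": "02138", "age": 37}] * 10
print(is_k_anonymous(sample, k=5))   # True: every generalized group has at least 5 members
```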

A new article in the Journal of the American Medical Association exposes a gap between what doctors consider adequate consent and what’s meaningful for patients, blaming “professional indifference” and “organizational inertia” for the problem. In research, the “reasonable-patient standard” is even harder to define and achieve.

Patient consent doesn’t have to go away. But it’s getting harder and harder for patients to anticipate the uses of their data, or even to understand what data is being used to match and measure them. However, precisely because we don’t know how data will be used or how patients can tolerate it, I believe that incremental steps would be most useful in teasing out what will work for future research projects.

Healthcare Consent and its Discontents (Part 2 of 3)

Posted on May 17, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article laid out what is wrong with informed consent today. We’ll continue now to look at possible remedies.

Could we benefit from more opportunities for consent?

Donna Gitter said that the Common Rule governing research might be updated to cover de-identified data as well as personally identifiable information. The impact of this on research, of course, would be incalculable. But it might lead to more participation in research, because 72% of patients say they would like to be asked for permission before their data is shared even in de-identified form. Many researchers, such as conference speaker Liza Dawson, would rather give researchers the right to share de-identified data without consent, but put protections in place.

To link multiple data sets, according to speaker Barbara Evans, we need an iron-clad method of ensuring that the data for a single individual is accurately linked. This requirement butts up against the American reluctance to assign a single ID to a patient. The reluctance is well-founded, because tracking individuals throughout their lives can lead to all kinds of seamy abuses.

One solution would be to give each individual control over a repository where all of her data would go. That solution implies that the individual would also control each release of the data. A lot of data sets could easily vanish from the world of research, as individuals die and successors lose interest in their data. We must also remember that public health requires the collection of certain types of data even if consent is not given.

Another popular reform envisioned by health care technologists, mentioned by Evans, is a market for health information. This scenario is part of a larger movement known as Vendor Relationship Management, which I covered several years ago. There is no doubt that individuals generate thousands of dollars worth of information, in health care records and elsewhere. Speaker Margaret Foster Riley claimed that the data collected from your loyalty card by the grocer is worth more than the money you spend there.

So researchers could offer incentives to share information instead of informed consent. Individuals would probably hire brokers to check that the requested uses conform to the individuals’ ethics, and that the price offered is fair.

Giving individuals control and haggling over data makes it harder, unfortunately, for researchers to assemble useful databases. First of all, modern statistical techniques (which fish for correlations) need huge data sets. Even more troubling is that partial data sets are likely to be skewed demographically. Perhaps only people who need some extra cash will contribute their data. Or perhaps only highly-educated people. Someone can get left out.

These problems exist even today, because our clinical trials and insurance records are skewed by income, race, age, and gender. Theoretically, it could get even worse if we eliminate the waiver that lets researchers release de-identified data without patient consent. Disparities in data sets and research were heavily covered at the Petrie-Flom conference, as I discuss in a companion article.

Privacy, discrimination, and other legal regimes

Several speakers pointed out that informed consent loses much of its significance when multiple data sets can be combined. The mosaic effect adds another layer of uncertainty about what will happen to data and what people are consenting to when they release it.

Nicolas Terry pointed out that American law tends to address privacy on a sector-by-sector basis, having one law for health records, another for student records, and so forth. He seemed to indicate that the European data protection regime, which is comprehensive, would be more appropriate nowadays where the boundary between health data and other forms of data is getting blurred. Sharona Hoffman said that employers and insurers can judge applicants’ health on the basis of such unexpected data sources as purchases at bicycle stores, voting records (healthy people have more energy to get involved in politics), and credit scores.

Mobile apps notoriously open new leaks of personal data. Mobile operating systems fastidiously divide up access rights and require apps to request these rights during installation, but most of us just click Accept for everything, including things the apps have no legitimate need for, such as our contacts and calendar. After all, there’s no way to deny an app one specific access right while still installing it.

And lots of these apps abuse their access to data. So we remain in a contradictory situation where certain types of data (such as data entered by doctors into records) are strongly protected, and other types that are at least as sensitive lack minimal protections. Although the app developers are free to collect and sell our information, they often promise to aggregate and de-identify it, putting them at the same level as traditional researchers. But no one requires the app developers to be complete and accurate.

To make employers and insurers pause before seeking out personal information, Hoffman suggested requiring data brokers, and those who purchase their data, to publish the rules and techniques they employ to make use of the data. She pointed to the precedent of medical tests for employment and insurance coverage, where such disclosure is necessary. But I’m sure this proposal would be fought so heavily, by those who currently carry out their data spelunking under cover of darkness, that we’d never get it into law unless some overwhelming scandal prompted extreme action. Adrian Gropper called for regulations requiring transparency in every use of health data, and for the use of open source algorithms.

Several speakers pointed out that privacy laws, which tend to cover the distribution of data, can be supplemented by laws regarding the use of data, such as anti-discrimination and consumer protection laws. For instance, Hoffman suggested extending the Americans with Disabilities Act to cover people with heightened risk of suffering from a disability in the future. The Genetic Information Nondiscrimination Act (GINA) of 2008 offers a precedent. Universal health insurance coverage won’t solve the problem, Hoffman said, because businesses may still fear the lost work time and need for workplace accommodations that spring from health problems.

Many researchers are not sure whether their use of big data–such as “data exhaust” generated by people in everyday activities–would be permitted under the Common Rule. In a particularly wonky presentation (even for this conference) Laura Odwazny suggested that the Common Rule could permit the use of data exhaust because the risks it presents are no greater than “daily life risks,” which are the keystone for applying the Common Rule.

The final section of this article will look toward emerging risks that we are just beginning to understand.

Healthcare Consent and its Discontents (Part 1 of 3)

Posted on May 16, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Not only is informed consent a joke flippantly perpetrated on patients; I expect that it has inspired numerous other institutions to shield themselves from the legal consequences of misbehavior by offering similar click-through “terms of service.” We now have a society where powerful forces can wring from the rest of us the few rights we have with a click. So it’s great to see informed consent reconsidered from the ground up at the annual conference of the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School.

Petrie-Flom annual 2016 conference

By no means did the speakers and audience at this conference agree on what should be done to fix informed consent (only that it needs fixing). The question of informed consent opens up a rich dialog about the goals of medical research, the relationship between researchers and patients, and what doctors have a right to do. It also raises questions for developers and users of electronic health records, such as:

  • Is it ethical to save all available data on a person?

  • If consent practices get more complex, how are the person’s wishes represented in the record?

  • If preferences for the data released get more complex, can we segment and isolate different types of data?

  • Can we find and notify patients of research results that might affect them, if they choose to be notified?

  • Can we make patient matching and identification more robust?

  • Can we make anonymization more robust?

A few of these topics came up at the conference. The rest of this article summarizes the legal and ethical topics discussed there.

The end of an era: IRBs under attack

The annoying and opaque informed consent forms we all have to sign go back to the 1970s and the creation of Institutional Review Boards (IRBs). Before that lay the wild-west era of patient relationships documented in Rebecca Skloot’s famous The Immortal Life of Henrietta Lacks.

IRBs were launched in a very different age, based on assumptions that are already being frayed and will probably no longer hold at all a few years from now:

  • Assumption: Research and treatment are two different activities. Challenge: Now they are being combined in many institutions, and the ideal of a “learning health system” will make them inextricable.

  • Assumption: Each research project takes place within the walls of a single institution, governed by its IRB. Challenge: Modern research increasingly involves multiple institutions with different governance, as I have reported before.

  • Assumption: A research project is a time-limited activity, lasting generally only about a year. Challenge: Modern research can be longitudinal and combine data sets that go back decades.

  • Assumption: The purpose for which data is collected can be specified by the research project. Challenge: Big data generally runs off of data collected for other purposes, and often has unclear goals.

  • Assumption: Inclusion criteria for each project are narrow. Challenge: Big data ranges over widely different sets of people, often included arbitrarily in data sets.

  • Assumption: Rules are based on phenotypic data: diagnoses, behavior, etc. Challenge: Genetics introduces a whole new set of risks and requirements, including the “right not to know” if testing turns up an individual’s predisposition to disease.

  • Assumption: The risks of research are limited to the individuals who participate. Challenge: As we shall see, big data affects groups as well as individuals.

  • Assumption: Properly de-identified data has an acceptably low risk of being re-identified. Challenge: Privacy researchers are increasingly discovering new risks from combining multiple data sources, a trend called the “mosaic effect.” I will dissect the immediacy of this risk later in the article.

Now that we have a cornucopia of problems, let’s look at possible ways forward.

Chinese menu consent

In the Internet age, many hope, we can provide individuals with a wider range of ethical decisions than the binary, thumbs-up-thumbs-down choice thrust before them by an informed consent form.

What if you could let your specimens or test results be used only for cancer research, or stipulate that they not be used for stem cell research, or even ask for your contributions to be withdrawn from experiments that could lead to discrimination on the basis of race? The appeal of such fine-grained consent springs from our growing realization that (as in the Henrietta Lacks case) our specimens and data may travel far. What if a future government decides to genetically erase certain racial or gender traits? Eugenics is not a theoretical risk; it has been pursued before, and not just by Nazis.

As Catherine M. Hammack said, we cannot anticipate future uses for medical research–especially in the fast-evolving area of genetics, whose possibilities alternate between exciting and terrifying–so a lot of individuals would like to draw their own lines in the sand.

I don’t personally believe we could implement such personalized ethical statements. It’s a problem of ontology. Someone has to list all the potential restrictions individuals may want to impose–and the list has to be updated globally at all research sites when someone adds a new restriction. Then we need to explain the list and how to use it to patients signing up for research. Researchers must finally be trained in the ontology so they can gauge whether a particular use meets the requirements laid down by the patient, possibly decades earlier. This is not a technological problem and isn’t amenable to a technological solution.
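
To make the ontology problem concrete, here is a rough sketch of what a machine-readable consent preference might look like. The purpose vocabulary below is invented, and that is exactly the point: every research site would need the same, perpetually updated list for any of this to work at all.

```python
# Rough sketch of a machine-readable consent preference. The purpose vocabulary
# is invented; keeping such a list consistent across every research site,
# forever, is the hard part described in the text.
ALLOWED_PURPOSES = {"cancer_research", "cardiology_research", "public_health_surveillance"}

patient_consent = {
    "patient_id": "example-001",
    "permitted": {"cancer_research"},
    "forbidden": {"stem_cell_research", "race_linked_genetic_selection"},
}

def use_is_permitted(consent, purpose):
    """A use is allowed only if it is explicitly permitted, not forbidden,
    and the purpose even exists in the shared vocabulary."""
    if purpose not in ALLOWED_PURPOSES:
        raise ValueError(f"Unknown purpose {purpose!r}: the ontology has no entry for it")
    return purpose in consent["permitted"] and purpose not in consent["forbidden"]

print(use_is_permitted(patient_consent, "cancer_research"))   # True
```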

More options for consent and control over data will appear in the next part of this article.

Smart Home Healthcare Tech Setting Up to Do Great Things

Posted on March 31, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Today, I read a report suggesting that technologies allowing frail elderly patients to age in place are really coming into their own. The new study by P & S Market Research is predicting that the global smart home healthcare market will expand at a compound annual growth rate of 38% between now and the year 2022.
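
As a quick sanity check on what a 38% compound annual growth rate implies, here is the back-of-the-envelope arithmetic, assuming the projection runs from 2016 through 2022 (the report’s exact base year isn’t stated here).

```python
# Back-of-the-envelope: how much a market grows at a 38% CAGR over six years.
cagr = 0.38
years = 2022 - 2016
growth_factor = (1 + cagr) ** years
print(f"{growth_factor:.1f}x")   # roughly 6.9x the starting market size
```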

This surge in demand, not surprisingly, is emerging as three powerful technical trends — the use of smart home technologies, the rapid emergence of mobile health apps and expanding remote monitoring of patients — converge and enhance each other. The growing use of IoT devices in home healthcare is also in the mix.

The researchers found that fall prevention and detection applications will see the biggest increase in demand between now and 2022. But many other applications combining smart home technology with healthcare IT are likely to catch fire as well, particularly when such applications can help avoid costly nursing home placements for frail older adults, researchers said. And everybody wants to get into the game:

  • According to P&S, important players operating in this market globally include AT&T, ABB Ltd, Siemens AG, Schneider Electric SE, GE, Honeywell Life Care Solutions, Smart Solutions, Essence Group and Koninklijke Philips N.V.
  • Also, we can’t forget that smart home technology players like Nest and Ecobee will stake out a place in this territory, as will health monitoring players like Fitbit and consumer tech giants like Apple and Microsoft.
  • Then, of course, it’s a no-brainer for mobile ecosystem behemoths like Samsung to stake out their place in this market as well.
  • What’s more, VC dollars will be poured into startups in this space over the next several years. It seems likely that with $1.1 billion in venture capital funding flowing into mHealth last year, VCs will continue to back mobile health in coming years, and some of it seems likely to creep into this sector.

Now, despite its enthusiasm for this sector, the research firm does note that there are challenges holding this market back from even greater growth. These include the need for large capital investments to play this game, and the reality that some privacy and security issues around smart home healthcare haven’t been resolved yet.

That being said, even a casual glimpse at this market makes it blazingly clear that growth here is good. Off the top of my head, I can think of few trends that could save the healthcare system money more effectively than keeping frail elderly folks safe and out of the hospital.

Add to that the fact that when these technologies are smart enough, they could very well spare caregivers a lot of anxiety and preserve older people’s dignity, and you have a great thing in the works. Expect to see a lot of innovation here over the next few years.