
Smart Home Healthcare Tech Setting Up to Do Great Things

Posted on March 31, 2016 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Today, I read a report suggesting that technologies allowing frail elderly patients to age in place are really coming into their own. The new study by P & S Market Research predicts that the global smart home healthcare market will expand at a compound annual growth rate of 38% between now and 2022.

This surge in demand, not surprisingly, is emerging as three powerful technical trends — the use of smart home technologies, the rapid emergence of mobile health apps and expanding remote monitoring of patients — converge and enhance each other. The growing use of IoT devices in home healthcare is also in the mix.

The researchers found that fall prevention and detection applications will see the biggest increase in demand between now and 2022. But many other applications combining smart home technology with healthcare IT are likely to catch fire as well, particularly when such applications can help avoid costly nursing home placements for frail older adults, researchers said. And everybody wants to get into the game:

  • According to P&S, important players operating in this market globally include AT&T, ABB Ltd, Siemens AG, Schneider Electric SE, GE, Honeywell Life Care Solutions, Smart Solutions, Essence Group and Koninklijke Philips N.V.
  • Also, we can’t forget that smart home technology players like Nest and Ecobee will stake out a place in this territory, as will health monitoring players like Fitbit and consumer tech giants like Apple and Microsoft.
  • Then, of course, it’s a no-brainer for mobile ecosystem behemoths like Samsung to stake out their place in this market as well.
  • What’s more, VC dollars will be poured into startups in this space over the next several years. It seems likely that with $1.1 billion in venture capital funding flowing into mHealth last year, VCs will continue to back mobile health in coming years, and some of it seems likely to creep into this sector.

Now, despite its enthusiasm for this sector, the research firm does note that there are challenges holding this market back from even greater growth. These include the need for large capital investments to play this game, and the reality that some privacy and security issues around smart home healthcare haven’t been resolved yet.

That being said, even a casual glimpse at this market makes it blazingly clear that growth here is good. Off the top of my head, I can think of few trends that could save the healthcare system money more effectively than keeping frail elderly folks safe and out of the hospital.

Add to that the fact that when these technologies are smart enough, they could very well spare caregivers a lot of anxiety and preserve older people’s dignity, and you have a great thing in the works. Expect to see a lot of innovation here over the next few years.

Could Blockchain Tech Tackle Health Data Security Problems?

Posted on March 25, 2016 | Written By

Anne Zieger

While you might not own any of them, you’ve probably heard of bitcoins, a floating currency backed by no government entity. You may also be aware that these coins are backed by blockchain technology, a decentralized system in which all participants track everyone’s holdings on their own individual systems. In this world, buyers and sellers can exchange bitcoins untraceably, making bitcoins perfect for criminal use.

In fact, some readers may have first heard about bitcoins when a Hollywood, CA hospital recently had all its data assets frozen by malware hackers, who demanded a ransom of $3.4 million in bitcoins before the hospital could have its data back. (The hospital ended up talking the ransomware attackers down to $17K, and once it paid that sum, IT leaders regained control of the data.)

What’s intriguing, however, is that blockchain technology may also be a solution for some of healthcare’s most vexing health data security problems. That, at least, is the view of Peter Nichol, a veteran healthcare business and technology executive consultant. As he sees it, “blockchain addresses the legitimate previous concerns of security, scalability and privacy of electronic medical records.”

In his essay, posted on LinkedIn, Nichol describes a way in which blockchain can be used in healthcare data management:

  1. Patient: The patient is provided a code (a private key or hash) and an address that supplies the codes to unlock their patient data. While the patient data itself is not stored in the blockchain, the blockchain provides the authentication, via the required hashes (multi-signatures, also referred to as multi-sigs), used to enable access to the data (identification and authentication).
  2. Provider: Contributors to the patient’s medical record (e.g. providers) are given a separate universal signature (codes, hashes or multi-sigs). These hashes, when combined with the patient’s hash, establish the authentication required to unlock the patient’s data.
  3. Profile: The patient then defines, in their profile, the access rules required to unlock their medical record.
  4. Access: If the patient defines 2-of-2 codes, then two separate machines (holding the hashes) would have to be compromised to gain unauthorized access to the data. (In this case, establishing unauthorized privileged access becomes very difficult when the machine types differ, the operating systems differ, and the machines are hosted with different providers.)
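
To make the 2-of-2 idea concrete, here is a minimal sketch in Python. It is not Nichol's actual scheme or a real blockchain or multi-signature API; the function names, key handling and token derivation are illustrative assumptions standing in for proper multi-signature cryptography.

```python
import hashlib
import secrets

# Minimal 2-of-2 sketch: record access is derived from BOTH the patient's and
# the provider's secrets, so compromising one machine alone is not enough.
# Illustration only, not Nichol's scheme or a production blockchain API.

def new_secret() -> str:
    """Each party keeps a private secret; only its hash would be published."""
    return secrets.token_hex(32)

def public_hash(secret: str) -> str:
    """The hash a blockchain could store for authentication purposes."""
    return hashlib.sha256(secret.encode()).hexdigest()

def access_token(patient_secret: str, provider_secret: str) -> str:
    """Both secrets are needed to derive the token that unlocks a record."""
    return hashlib.sha256((patient_secret + provider_secret).encode()).hexdigest()

def unlock(presented: str, expected: str) -> bool:
    """Grant access only when the presented 2-of-2 token matches."""
    return secrets.compare_digest(presented, expected)

# Usage: register the expected token once, then require both parties to unlock.
patient_key, provider_key = new_secret(), new_secret()
expected = access_token(patient_key, provider_key)
print(unlock(access_token(patient_key, provider_key), expected))  # True
print(unlock(access_token(patient_key, new_secret()), expected))  # False
```

The point of the sketch is simply that no single stolen secret is sufficient, which is the property Nichol highlights: an attacker would have to compromise two different machines to gain access.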

As Nichol rightly notes, blockchain strategies offer some big advantages over existing security approaches, particularly given that keys are distributed and that multiple computers would need to be compromised for attackers to gain illicit access to data.

Nichol’s essay also notes that blockchain technology can be used to provide patients with more sophisticated levels of privacy control over their personal health information. As he points out, the patient can use their own blockchain signature, combined with, say, that of a hospital, to provide more secure access when seeking treatment. Meanwhile, when they want to limit access to the data, it’s easy to do so.

And voila, health data maintenance problems are solved, he suggests. “This model lifts the costly burden of maintaining a patient’s medical histories away from the hospitals,” he argues. “Eventually cost savings will make it full cycle back to the patient receiving care.”

What’s even more interesting is that Nichol is clearly not just a voice in the wilderness. For example, Philips Healthcare recently made an early foray into blockchain technology, partnering with blockchain-based record-keeping startup Tierion.

Ultimately, whether Nichol is entirely on target or not, it seems clear that health IT players have much to gain by exploring use of blockchain technology in some form. In fact, I predict that 2016 will be a breakout year for this type of application.

Significant Articles in the Health IT Community in 2015

Posted on December 15, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://radar.oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

Have you kept current with changes in device connectivity, Meaningful Use, analytics in healthcare, and other health IT topics during 2015? Here are some of the articles I find significant that came out over the past year.

The year kicked off with an ominous poll about Stage 2 Meaningful Use, with implications that came to a head later with the release of Stage 3 requirements. Out of 1800 physicians polled around the beginning of the year, more than half were throwing in the towel–they were not even going to try to qualify for Stage 2 payments. Negotiations over Stage 3 of Meaningful Use were intense and fierce. A January 2015 letter from medical associations to ONC asked for more certainty around testing and certification, and mentioned the need for better data exchange (which the health field likes to call interoperability) in the C-CDA, the most popular document exchange format.

A number of expert panels asked ONC to cut back on some requirements, including public health measures and patient view-download-transmit. One major industry group asked for a delay of Stage 3 till 2019, essentially tolerating a lack of communication among EHRs. The final rules, absurdly described as a simplification, backed down on nothing from patient data access to quality measure reporting. Beth Israel CIO John Halamka–who has shuttled back and forth between his Massachusetts home and Washington, DC to advise ONC on how to achieve health IT reform–took aim at Meaningful Use and several other federal initiatives.

Another harbinger of emerging issues in health IT came in January with a speech about privacy risks in connected devices by the head of the Federal Trade Commission (not an organization we hear from often in the health IT space). The FTC is concerned about the security of recent trends in what industry analysts like to call the Internet of Things, and medical devices rank high in these risks. The speech was a lead-up to a major report issued by the FTC on protecting devices in the Internet of Things. Articles in WIRED and Bloomberg described serious security flaws. In August, John Halamka wrote his own warning about medical devices, which have not yet started taking security really seriously. Smart watches are just as vulnerable as other devices.

Because so much medical innovation is happening in fast-moving software, and low-budget developers are hankering for quick and cheap ways to release their applications, in February the FDA started to chip away at its bureaucratic gauntlet by releasing guidelines that exempt from FDA regulation medical apps with no impact on treatment, as well as apps used just to transfer data or perform similarly non-transformative operations. The FDA also released a rule for unique IDs on medical devices, a long-overdue measure that helps hospitals and researchers integrate devices into monitoring systems. Without clear and unambiguous IDs, one cannot trace which safety problems are associated with which devices. Other forms of automation may also now become possible. In September, the FDA announced a public advisory committee on devices.

Another FDA decision with a potential long-range impact was allowing 23andMe to market its genetic testing to consumers.

The Department of Health and Human Services has taken on exceedingly ambitious goals during 2015. In addition to the daunting Stage 3 of Meaningful Use, they announced a substantial increase in the use of fee-for-value, although they would still leave half of providers on the old system of doling out individual payments for individual procedures. In December, National Coordinator Karen DeSalvo announced that Health Information Exchanges (which limit themselves only to a small geographic area, or sometimes one state) would be able to exchange data throughout the country within one year. Observers immediately pointed out that the state of interoperability is not ready for this transition (and they could well have added the need for better analytics as well). HHS’s five-year plan includes the use of patient-generated and non-clinical data.

The poor state of interoperability was highlighted in an article about fees charged by EHR vendors just for setting up a connection and for each data transfer.

In the perennial search for why doctors are not exchanging patient information, attention has turned to rumors of deliberate information blocking. It’s a difficult accusation to pin down. Is information blocked by health care providers or by vendors? Does charging a fee, refusing to support a particular form of information exchange, or using a unique data format constitute information blocking? On the positive side, unnecessary imaging procedures can be reduced through information exchange.

Accountable Care Organizations are also having trouble, both because they are information-poor and because the CMS version of fee-for-value is too timid; they also face other financial blows and perhaps an inability to retain patients. An August article analyzed the positives and negatives in a CMS announcement. On a large scale, fee-for-value may work. But a key component of improvement in chronic conditions is behavioral health, for which EHRs are also unsuited.

Pricing and consumer choice have become a major battleground in the current health insurance business. The steep rise in health insurance deductibles and copays has been justified (somewhat retroactively) by claiming that patients should have more responsibility to control health care costs. But the reality of health care shopping points in the other direction. A report card on state price transparency laws found the situation “bleak.” Another article shows that efforts to list prices are hampered by interoperability and other problems. One personal account of a billing disaster shows the state of price transparency today, and may be dangerous to read because it could trigger traumatic memories of your own interactions with health providers and insurers. Narrow and confusing insurance networks as well as fragmented delivery of services hamper doctor shopping. You may go to a doctor who your insurance plan assures you is in their network, only to be charged outrageous out-of-network costs. Tools are often out of date or overly simplistic.

In regard to the quality ratings that are supposed to allow patients to make intelligent choices, a study found that four hospital rating sites have very different ratings for the same hospitals. The criteria used to rate them are inconsistent. Quality measures provided by government databases are marred by incorrect data. The American Medical Association, always disturbed by public ratings of doctors for obvious reasons, recently complained of incorrect numbers from the Centers for Medicare & Medicaid Services. In July, the ProPublica site offered a search service called the Surgeon Scorecard. One article summarized the many positive and negative reactions. The New England Journal of Medicine has called ratings of surgeons unreliable.

2015 was the year of the intensely watched Department of Defense upgrade to its health care system. One long article offered an in-depth examination of DoD options and their implications for the evolution of health care. Another article promoted the advantages of open-source VistA, an argument that was not persuasive enough for the DoD. Still, openness was one of the criteria sought by the DoD.

The remote delivery of information, monitoring, and treatment (which goes by the quaint term “telemedicine”) has been the subject of much discussion. Those concerned with this development can follow the links in a summary article to see the various positions of major industry players. One advocate of patient empowerment interviewed doctors to find that, contrary to common fears, they can offer email access to patients without becoming overwhelmed. In fact, they think it leads to better outcomes. (However, it still isn’t reimbursed.)

Laws permitting reimbursement for telemedicine continued to spread among the states. But a major battle shaped up around a ruling in Texas requiring doctors to have a pre-existing face-to-face meeting with any patient whom they want to treat remotely. The spread of telemedicine depends also on reform of state licensing laws to permit practices across state lines.

Much wailing and tears welled up over the required transition from ICD-9 to ICD-10. The AMA, with some good arguments, suggested just waiting for ICD-11. But the transition cost much less than anticipated, making ICD-10 much less of a hot button, although it may be harmful to diagnosis.

Formal studies of EHR strengths and weaknesses are rare, so I’ll mention this survey finding that EHRs aid with public health but are ungainly for the sophisticated uses required for long-term, accountable patient care. Meanwhile, half of hospitals surveyed are unhappy with their EHRs’ usability and functionality and doctors are increasingly frustrated with EHRs. Nurses complained about the technology’s time demands and the eternal lack of interoperability. A HIMSS survey turned up somewhat more positive feelings.

EHRs are also expensive enough to hurt hospital balance sheets and force them to forgo other important expenditures.

Electronic health records also took a hit from the Joint Commission’s Sentinel Events program. To err, it seems, is not only human but now computer-aided. A Sentinel Event Alert indicated that more errors in health IT products should be reported, claiming that many go unreported because patient harm was avoided. The FDA started checking self-reported problems on PatientsLikeMe for adverse drug events.

The ONC reported gains in patient ability to view, download, and transmit their health information online, but found patient portals still limited. Although one article praised patient portals by Epic, Allscripts, and NextGen, an overview of studies found that patient portals are disappointing, partly because elderly patients have trouble with them. A literature review highlighted where patient portals fall short. In contrast, giving patients full access to doctors’ notes increases compliance and reduces errors. HHS’s Office of Civil Rights released rules underlining patients’ rights to access their data.

While we’re wallowing in downers, review a study questioning the value of patient-centered medical homes.

Reuters published a warning about employee wellness programs, which are nowhere near as fair or accurate as they claim to be. They are turning into just another expression of unequal power between employer and employee, with tendencies to punish sick people.

An interesting article questioned the industry narrative about the medical device tax in the Affordable Care Act, saying that the industry is expanding robustly in the face of the tax. However, this tax is still a hot political issue.

Does anyone remember that Republican congressmen published an alternative health care reform plan to replace the ACA? An analysis finds both good and bad points in its approach to mandates, malpractice, and insurance coverage.

Early reports on use of Apple’s open ResearchKit suggested problems with selection bias and diversity.

An in-depth look at the use of devices to enhance mental activity examined where they might be useful or harmful.

A major genetic data mining effort by pharma companies and Britain’s National Health Service was announced. The FDA announced a site called precisionFDA for sharing resources related to genetic testing. A recent site invites people to upload health and fitness data to support research.

As data becomes more liquid and is collected by more entities, patient privacy suffers. An analysis of web sites turned up shocking practices, even at supposedly reputable sites like WebMD. Lax security in health care networks was addressed in a Forbes article.

Of minor interest to health IT workers, but eagerly awaited by doctors, was Congress’s “doc fix” to Medicare’s sustainable growth rate formula. The bill did contain additional clauses that were called significant by a number of observers, including former National Coordinator Farzad Mostashari no less, for opening up new initiatives in interoperability, telehealth, patient monitoring, and especially fee-for-value.

Connected health took a step forward when CMS issued reimbursement guidelines for patient monitoring in the community.

A wonky but important dispute concerned whether self-insured employers should be required to report public health measures, because public health by definition needs to draw information from as wide a population as possible.

Data breaches always make lurid news, sometimes under surprising circumstances, and not always caused by health care providers. The 2015 security news was dominated by a massive breach at the Anthem health insurer.

Along with great fanfare in Scientific American for “precision medicine,” another Scientific American article covered its privacy risks.

A blog posting promoted early and intensive interactions with end users during app design.

A study found that HIT implementations hamper clinicians, but could not identify the reasons.

Natural language processing was praised for its potential for simplifying data entry and for discovering useful side effects and treatment issues.

CVS’s refusal to stock tobacco products was called “a major sea-change for public health” and part of a general trend of pharmacies toward whole care of the patient.

A long interview with FHIR leader Grahame Grieve described the progress of the project and the need for clinicians to take data exchange seriously. A quiet milestone was reached in October with a production version from Cerner.

Given the frequent invocation of Uber (even more than the Cheesecake Factory) as a model for health IT innovation, it’s worth seeing the reasons that model is inapplicable.

A number of hot new sensors and devices were announced, including a tiny sensor from Intel, a device from Google to measure blood sugar and another for multiple vital signs, enhancements to Microsoft products, a temperature monitor for babies, a headset for detecting epilepsy, cheap cameras from New Zealand and MIT for doing retinal scans, a smart phone app for recognizing respiratory illnesses, a smart-phone connected device for detecting brain injuries and one for detecting cancer, a sleep-tracking ring, bed sensors, ultrasound-guided needle placement, a device for detecting pneumonia, and a pill that can track heartbeats.

While the medical field isn’t yet making extensive use of data collection and analysis–or uses analytics for financial gain rather than patient care–the potential is demonstrated by many isolated success stories, including one from a Johns Hopkins study using 25 patient measures to study sepsis and another from an Ontario hospital. In an intriguing peek at our possible future, IBM Watson has started to integrate patient data with its base of clinical research studies.

Frustrated enough with 2015? To end on an upbeat note, envision a future made bright by predictive analytics.

Consumers Take Risk Trading Health Data For Health Insurance Discounts

Posted on August 28, 2015 | Written By

Anne Zieger

When Progressive Insurance began giving car owners the option of having their driving tracked in exchange for potential auto insurance discounts, nobody seemed to raise a fuss. After all, the program was voluntary, and nobody wants to pay more than they have to for coverage.

Do the same principles apply to healthcare? We may find out. According to a study by digital health research firm Parks Associates, at least some users are willing to make the same tradeoff. HIT Consultant reports that nearly half (42%) of digital pedometer users would be willing to share their personal data in exchange for a health insurance discount.

Consumer willingness to trade data for discounts varied by device, but didn’t fall to zero. For example, 35% of smart watch owners would trade their health data for health insurance discounts, while 26% of those with sleep-quality monitors would do so.

While the HIT Consultant story doesn’t dig into the profile of users who were prepared to sell their personal health data today — which is how I’d describe a data-for-discount scheme — I’d submit that they are, in short, pretty sharp.

Why do I say this? Because as things stand, at least, health insurers would get less than they were paying for unless the discount was paltry. (As the linked blog item notes, upstart health insurer Oscar Insurance already gives away free Misfit wearables. To date, though, it’s not clear from the write-up whether Oscar can quantify what benefit it gets from the giveaway.)

As wearables and health apps mature, however, consumers may end up compromising themselves if they give up personal health data freely. After all, if health insurance begins to look like car insurance, health plans could push up premiums every time a member makes a health “mistake” (such as overeating at a birthday dinner or staying up all night watching old movies). Moreover, as such data gets absorbed into EMRs, then cross-linked with claims, health plans’ ability to punish you with actuarial tables could skyrocket.

In fact, if consumers permit health plans to keep too close a watch on them, it could give the health plans the ability to effectively engage in post-contract medical underwriting. This is an unwelcome prospect which could lead to court battles given the ACA’s ban on such activities.

Also, once health plans have the personal data, it’s not clear what they would do with it. I am not a lawyer, but it seems to me that health plans would have significant legal latitude in using freely given data, and might even be free to sell that data in the aggregate to pharmas. Or they might pass it to their parent company’s life or auto divisions, which could potentially use the data to make coverage decisions.

Ultimately, I’d argue that unless the laws are changed to protect consumers who do so, selling personal health data to get lower insurance premiums is a very risky decision. The short-term benefit is unlikely to be enough to offset very real long-term consequences. Once you’ve compromised your privacy, you seldom get it back.

Health IT Security: What Can the Association for Computing Machinery (ACM) Contribute?

Posted on February 24, 2015 | Written By

Andy Oram

A dazed awareness of security risks in health IT has bubbled up from shop-floor administrators and conformance directors (who have always worried about them) to C-suite offices and the general public, thanks to a series of oversized data breaches that recently peaked in the Anthem Health Insurance break-in. Now the US Senate Health Committee is taking up security, explicitly referring to Anthem. The inquiry is extremely broad, though, promising to address “electronic health records, hospital networks, insurance records, and network-connected medical devices.”

FTC Gingerly Takes On Privacy in Health Devices (Part 2 of 2)

Posted on February 11, 2015 | Written By

Andy Oram

The first part of this series of articles laid out the difficulties of securing devices in the Internet of Things (particularly those used in the human body). Accepting that usability and security have to be traded off against one another sometimes, let’s look at how to make decisions most widely acceptable to the public.

The recent FTC paper on the Internet of Things demonstrates that they have developed a firm understanding of the problems in security and privacy. For this paper, they engaged top experts who had seen what happens when technology gets integrated into daily life, and they covered all the issues I know of. As devices grow in sophistication and spread to a wider population, the kinds of discussion the FTC held should be extended to the general public.

For instance, suppose a manufacturer planning a new way of tracking people–or a new use for their data–convened some forums in advance, calling on potential users of the device to discuss the benefits and risks. Collectively, the people most affected by the policies chosen by the manufacturer would determine which trade-offs to adopt.

Can ordinary people off the street become concerned enough about their safety to put in the time necessary to grasp the trade-offs? We should try asking them–we may be pleasantly surprised. Here are some of the issues they need to consider.

  • What can malicious viewers determine from data? We all may feel nervous about our employer learning that we went to a drug treatment program, but how much might the employer learn just by knowing we went to a psychotherapist? We now know that many innocuous bits of data can be combined to show a pattern that exposes something we wished to keep secret.

  • How guarded do people feel about their data? This depends largely on the answer to the previous question–it’s not so much the individual statistics reported, but the patterns that can emerge.

  • What data does the device need to collect to fulfill its function? If the manufacturer, clinician, or other data collector gathers up more than the minimal amount, how are they planning to use that data, and do we approve of that use? This is an ethical issue faced constantly by health care researchers, because most patients would like their data applied to finding a cure, but both the researchers and the patients have trouble articulating what’s kosher and what isn’t. Even collecting data for marketing purposes isn’t necessarily evil. Some patients may be willing to share data in exchange for special deals.

  • How often do people want to be notified about the use of their data, or asked for permission? Several researchers are working on ways to let patients express approval for particular types of uses in advance.

  • How long is data being kept? Most data users, after a certain amount of time, want only aggregate data, which is supposedly anonymized. Are they using well-established techniques for anonymizing the data? (Yes, trustworthy techniques exist. Check out a book I edited for my employer, Anonymizing Health Data.)
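
As one concrete illustration of what such established techniques involve, here is a minimal k-anonymity check in Python. The data and field names are invented, and this sketch is not the approach recommended in the book mentioned above; it simply shows the idea that every combination of quasi-identifiers should cover several people.

```python
from collections import Counter

def is_k_anonymous(records: list[dict], quasi_identifiers: list[str], k: int) -> bool:
    """True if every quasi-identifier combination appears at least k times."""
    groups = Counter(
        tuple(record[field] for field in quasi_identifiers) for record in records
    )
    return all(count >= k for count in groups.values())

# Hypothetical records with generalized ZIP codes and birth decades, so each
# quasi-identifier combination covers more than one patient.
records = [
    {"zip": "021**", "birth_decade": "1950s", "diagnosis": "asthma"},
    {"zip": "021**", "birth_decade": "1950s", "diagnosis": "diabetes"},
    {"zip": "946**", "birth_decade": "1980s", "diagnosis": "flu"},
    {"zip": "946**", "birth_decade": "1980s", "diagnosis": "asthma"},
]
print(is_k_anonymous(records, ["zip", "birth_decade"], k=2))  # True
```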

I believe that manufacturers can find a cross-section of users to form discussion groups about the devices they use, and that these users can come to grips with the issues presented here. But even an engaged, educated public is not a perfect solution. For instance, a privacy-risking choice that’s OK for 95% of users may turn out harmful to the other 5%. Still, education for everyone–a goal expressed by the FTC as well–will undoubtedly help us all make safer choices.

FTC Gingerly Takes On Privacy in Health Devices (Part 1 of 2)

Posted on February 10, 2015 | Written By

Andy Oram

Are you confused about risks to privacy when everything from keystrokes to footsteps is being monitored? The Federal Trade Commission is confused too. In January they released a 55-page paper summarizing results of discussions with privacy experts about the Internet of Things, plus some recommendations. After a big build-up citing all sorts of technological and business threats, the report kind of fizzles out. Legislation specific to the IoT was rejected, but the report offers several suggestions for “general privacy legislation,” such as requiring security on devices.

Sensors and controls are certainly popping up everywhere, so the FTC investigation comes at an appropriate time. My senator, Ed Markey, who has been a leader in telecom and technology for decades in Congress, recently released a report focused on automobiles. But the same concerns show up everywhere in various configurations. In this article I’ll focus on health care, and on the dilemma of security in that area.

No doubt about it, pacemakers and other critical devices can be hacked. It could be a movie: in Scene 1, a nondescript individual moves through a crowded city street, thumbing over a common notepad. In Scene 2, later, numerous people fall to the ground as their pacemakers fail. They just had the bad luck to be in the vicinity of the individual with the notepad, who had infected their implants with malicious code that took effect later.

But here are the problems with requiring more security. First, security in computers almost always rests on encryption, which leads to an increase in the size of the data being protected. The best-known FTC case regarding device security, where they forced changes for cameras used in baby monitors, was appropriate for these external devices that could absorb the extra overhead. But increased data size leads to an increase in memory use, which in turn requires more storage and computing power on a small embedded device, as well as more transmission time over the network. In the end, devices may have to be heavier and more costly, serious barriers to adoption.

Furthermore, software always has bugs. Some lie dormant for years, like the notorious Heartbleed bug in the very software that web sites around the world depend on for encrypted communications. To provide security fixes, a manufacturer has to make it easy for embedded devices to download updated software–and any bug in that procedure leaves a channel for attack.

Perhaps there is a middle ground, where devices could be designed to accept updates only from particular computers in particular geographic locations. A patient would then be notified through email or a text message to hike it down to the doctor, where the fix could be installed. And the movie scene where malicious code gets downloaded from the street would be less likely to happen.
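
A rough sketch of that middle ground in Python might look like the following: the device applies an update only if it can verify the package came from an authorized updater. The shared-secret HMAC here is a stand-in assumption for real public-key code signing, and all the names are hypothetical.

```python
import hmac
import hashlib

CLINIC_UPDATE_KEY = b"provisioned-at-manufacture"  # placeholder shared secret

def sign_update(firmware: bytes, key: bytes = CLINIC_UPDATE_KEY) -> str:
    """The authorized updater (e.g. a clinic workstation) tags the firmware."""
    return hmac.new(key, firmware, hashlib.sha256).hexdigest()

def apply_update(firmware: bytes, tag: str, key: bytes = CLINIC_UPDATE_KEY) -> bool:
    """The device installs the update only if the tag verifies."""
    expected = hmac.new(key, firmware, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False  # an update from an unauthorized source is ignored
    # ...flash the new firmware here...
    return True

firmware_image = b"\x01\x02new-firmware"
print(apply_update(firmware_image, sign_update(firmware_image)))  # True
print(apply_update(firmware_image, "forged-tag"))                 # False
```

Restricting which parties can produce a valid tag is what would keep the street-corner attacker in the movie scene from pushing code onto nearby implants.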

In the next part of this article I’ll suggest how the FTC and device manufacturers can engage the public to make appropriate privacy and security decisions.

The Future of Privacy

Posted on December 24, 2014 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

I realize that it’s Christmas Eve, but this tweet caught my eye and so I thought I’d share.

This is the season of oversharing. With that in mind, I think the discussion of privacy is an important one to have.

Fitbit Data Being Used In Personal Injury Case

Posted on December 8, 2014 | Written By

Anne Zieger

Lately, there’s been a lot of debate over whether data from wearable health bands is useful to clinicians or only benefits the consumer user. On the one hand, there are those that say that a patient’s medical care could be improved if doctors had data on their activity levels, heart rate, respirations and other standard metrics. Others, meanwhile, suggest that unless it can be integrated into an EMR and made usable, such data is just a distraction from other more important health indicators.

What hasn’t come up in these debates, but might come up far more frequently in the future, is the idea that health band data can be used in personal injury cases to show the effects of an accident on a plaintiff. According to Forbes, a law firm in Calgary is working on what may be the first personal injury case to leverage smart band data, in this case activity data from a Fitbit.

The plaintiff, a young woman, was injured in an accident four years ago. While Fitbit hadn’t yet entered the market at the time, her lawyers at McLeod Law believe they can establish that she led an active lifestyle prior to her accident. They’ve now started processing data from her Fitbit to show that her activity levels have fallen below the baseline for someone of her age and profession.

It’s worth noting that rather than using Fitbit data directly, they’re processing it using analytics platform Vivametrica, which uses public research to compare people’s activity data with that of the general population. (Its core business is to analyze data from wearable sensor devices for the assessment of health and wellness.) The plaintiff will share her Fitbit data with Vivametrica for several months to present a rich picture of her activities.
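
Conceptually, that kind of comparison can be as simple as putting an individual's averages next to a cohort baseline. The short sketch below is a made-up illustration of the idea, not Vivametrica's method; the baseline figure and step counts are invented.

```python
from statistics import mean

POPULATION_BASELINE_STEPS = 7500  # assumed daily average for a comparable cohort

def activity_vs_baseline(daily_steps: list[int], baseline: int = POPULATION_BASELINE_STEPS) -> float:
    """Return average daily steps as a fraction of the cohort baseline."""
    return mean(daily_steps) / baseline

plaintiff_steps = [3200, 4100, 2800, 3900, 3500]  # hypothetical post-accident data
print(f"Activity is {activity_vs_baseline(plaintiff_steps):.0%} of the baseline")
```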

Using even analyzed, processed data generated by a smart band is “unique,” according to her attorneys. “Till now we’ve always had to rely on clinical interpretation,” says Simon Muller of McLeod Law. “Now we’re looking at longer periods of time to the course of the day, and we have hard data.”

But even if the woman wins her case, there could be a downside to this trend. As Forbes notes, insurers will want wearable device data as much as plaintiffs will, and while they can’t force claimants to wear health bands, they can request a court order demanding the data from whoever holds the data. Dr. Rick Hu, co-founder and CEO of Vivametrica, tells Forbes that his company wouldn’t release such data, but doesn’t explain how he will be able to refuse to honor a court-ordered disclosure.

In fact, wearable devices could become a “black box” for the human body, according to Matthew Pearn, an associate lawyer with Canadian claims processing firm Foster & Company. In a piece for an insurance magazine, Pearn points out that it’s not clear, at least in his country, what privacy rights the wearers of health bands maintain over the data they generate once they file a personal injury suit.

Meanwhile, it’s still not clear how HIPAA protections apply to such data in the US. When FierceHealthIT recently spoke with Deven McGraw, a partner in the healthcare practice of Manatt, Phelps & Phillips, she pointed out that HIPAA only regulates data “in the hands of, with the control of, or within the purview of a medical provider, a health plan or other covered entity under the law.”  In other words, once the wearable data makes it into the doctor’s record, HIPAA protections are in force, but until then they are not.

All told, it’s pretty sobering to consider that millions of consumers are generating wearables data without knowing how vulnerable it is.

OCR Didn’t Meet HIPAA Security Requirements

Posted on December 27, 2013 | Written By

Anne Zieger

Oops — this doesn’t sound good. According to a report from the HHS OIG, the agency’s Office for Civil Rights has failed to meet the requirements for oversight and enforcement of the HIPAA security rule.

The 26-page report spells out several problems with OCR’s enforcement of the security rule, which was expanded by the HITECH Act of 2009 to demand regular audits of covered healthcare organizations and their business associates. The vulnerabilities found leave procedural holes which could harm OCR’s ability to do its job regarding the security rule, the OIG said.

What was OCR failing to do? For one thing, the report contends, OCR had not assessed the risks, established priorities or implemented controls for the audits it is required to conduct. Another example: OCR’s investigation files didn’t contain the required documentation supporting key decisions made by staff, because staff didn’t consistently follow the office’s procedures for reviewing case documentation.

What’s more, the OCR apparently hasn’t been implementing sufficient controls, including supervisory review and documentation retention, to make sure investigators follow policies and procedures for properly managing security rule investigations.

The OIG also found that OCR wasn’t complying with federal cybersecurity requirements for its own information systems used to process and store data on investigations. Requirements it was neglecting included getting HHS authorization to operate the system used to oversee and enforce the security rule. OCR also failed to complete privacy impact assessments, risk analyses or system security plans for two of its three systems, the OIG concluded.

All told, it seems that if the OCR is going to oversee the privacy rule properly, it had better get its own act together.