
OpenUMA: New Privacy Tools for Health Care Data

Posted on August 10, 2015 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://radar.oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The health care field, becoming more computer-savvy, is starting to take advantage of conveniences and flexibilities that were developed over the past decade for the Web and mobile platforms. A couple weeks ago, a new open source project was announced to increase options for offering data over the Internet with proper controls–options with particular relevance for patient control over health data.

The User-Managed Access (UMA) standard supports privacy through a combination of encryption and network protocols that have a thirty-year history. UMA reached its stable 1.0 release in April of this year. A number of implementations are being developed, some of them open source.

Before I try to navigate the complexities of privacy protocols and standards, let’s look at a few use cases (currently still hypothetical) for UMA:

  • A parent wants to share the child’s records from the doctor’s office with the school just long enough for the school nurse to verify that the child has received the necessary vaccinations.

  • A traveler taking a temporary job in a foreign city wants to grant a local clinic access to the health records stored by her primary care physician for the six months during which the job lasts.

The open source implementation I’ll highlight in this article is OpenUMA from a company named ForgeRock. ForgeRock specializes in identity management online and creates a number of open source projects that can be found on their web page. They are also a leading participant in the non-profit Kantara Initiative, where they helped develop UMA as part of the UMA Developer Resources Work Group.

The advantage of open source libraries and tools for UMA is that the standard involves many different pieces of software run by different parties: anyone with data to share, and anyone who wants access to it. The technology is not aimed at any one field, but health IT experts are among its greatest enthusiasts.

The fundamental technology behind UMA is OAuth, a well-tested means of authorizing people on web sites. When you want to leave a comment on a news article and see a button that says, “Log in using Facebook” or some other popular site, OAuth is in use.
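
To make that flow a bit more concrete, here is a minimal sketch, in Python with the requests library, of the final step of an OAuth 2.0 authorization code flow: the commenting site trades the one-time code it received for an access token. The token endpoint, client credentials, and redirect URI are placeholders, not any real provider’s values.

    import requests  # third-party: pip install requests

    # Minimal sketch of the last step of an OAuth 2.0 authorization code flow:
    # the commenting site exchanges the one-time code it received for an access
    # token. The endpoint, client credentials, and redirect URI are placeholders.
    TOKEN_ENDPOINT = "https://provider.example.com/oauth2/token"

    def exchange_code_for_token(auth_code):
        """Trade the one-time authorization code for an access token."""
        response = requests.post(
            TOKEN_ENDPOINT,
            data={
                "grant_type": "authorization_code",
                "code": auth_code,
                "redirect_uri": "https://news-site.example.com/callback",
                "client_id": "news-site-client-id",
                "client_secret": "news-site-client-secret",
            },
            timeout=10,
        )
        response.raise_for_status()
        return response.json()   # typically access_token, token_type, expires_in

    token = exchange_code_for_token("code-returned-by-the-provider")
    print(token.get("access_token"))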

OAuth is an enabling technology, by which I mean that it opens up huge possibilities for more complex and feature-rich tools to be built on top. It provides hooks for such tools through its notion of profiles–new standards that anyone can create to work with it. UMA is one such profile.

What UMA contributes over and above OAuth was described to me by Eve Maler, a leading member of the UMA working group who wrote their work up in the specification I cited earlier, and who currently works for ForgeRock. OAuth lets you manage different services for yourself. When you run an app that posts to Twitter on your behalf, or log in to a new site through your Facebook account, OAuth lets your account on one service do something for your account on another service.

UMA, in contrast, lets you grant access to other people. It’s not your account at a doctor’s office that is accessing data, but the doctor himself.

UMA can take on some nitty-gritty real-life situations that are hard to handle with OAuth alone. OAuth provides a single yes/no decision: is a person authorized or not? It’s UMA that can handle the wide variety of conditions that affect whether you want information released. These vary from field to field, but the conditions of time and credentials mentioned earlier are important examples in health care. I covered one project using UMA in an earlier article.

With OAuth, you can grant access to an account and then revoke it later (with some technical dexterity). But UMA allows you to build a time limit into the original access. Of course, the recipient does not lose the data to which you granted access, but when the time expires he cannot return to get new data.

UMA also allows you to define resource sets to segment data. You could put vaccinations in a resource set that you share with others, withholding other kinds of data.
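
As a rough illustration of resource sets, here is a sketch of a resource server registering a “vaccinations” resource set with a UMA authorization server so it can be shared on its own. The endpoint URL, protection API token, and field names are modeled loosely on the UMA 1.0 resource set registration step and are placeholders rather than any particular product’s API.

    import requests

    # Rough sketch: a resource server registering a "vaccinations" resource set
    # with a UMA authorization server so it can be shared separately from the
    # rest of the record. Endpoint, token, and field names are placeholders
    # modeled loosely on the UMA 1.0 resource set registration step.
    AS_RESOURCE_SET_ENDPOINT = "https://as.example.com/uma/resource_set"
    PROTECTION_API_TOKEN = "pat-issued-to-the-resource-server"

    vaccination_resource = {
        "name": "Vaccination records for patient 12345",
        "scopes": ["view"],   # what a requesting party may ask to do with it
    }

    response = requests.post(
        AS_RESOURCE_SET_ENDPOINT,
        json=vaccination_resource,
        headers={"Authorization": f"Bearer {PROTECTION_API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    resource_set_id = response.json()["_id"]   # referenced later when permissions are requested
    print("Registered resource set:", resource_set_id)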

OpenUMA contains two crucial elements of a UMA implementation:

The authorization server

This server accepts a list of restrictions from the site holding the data and the credentials submitted by the person requesting access to the data. The server performs a very generic function: any UMA request can use any authorization server, and the server can run anywhere. Theoretically, you could run your own. But it would be more practical for a site that hosts data–Microsoft HealthVault, for instance, or some general cloud provider–to run an authorization server. In any case, the site publicizes a URL where it can be contacted by people with data or people requesting data.

The resource server

The resource server submits requests to the authorization server on behalf of the applications and servers that hold people’s data, handling the complex interactions with the authorization server so that application developers can focus on their core business.
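
To show roughly where the resource server fits, here is a sketch of the step where it asks the authorization server for a permission ticket when someone tries to access data without authorization. The endpoint and field names are patterned on the UMA 1.0 flow but are placeholders, not OpenUMA’s actual interface.

    import requests

    # Sketch of the resource server's role when a request arrives without valid
    # authorization: register the attempted access with the authorization server,
    # get back a permission ticket, and hand that ticket to the requesting party.
    # The endpoint and token below are placeholders patterned on UMA 1.0.
    AS_PERMISSION_ENDPOINT = "https://as.example.com/uma/permission"
    PROTECTION_API_TOKEN = "pat-issued-to-the-resource-server"

    def request_permission_ticket(resource_set_id, scopes):
        """Register the attempted access and return the ticket the requester must use."""
        response = requests.post(
            AS_PERMISSION_ENDPOINT,
            json={"resource_set_id": resource_set_id, "scopes": scopes},
            headers={"Authorization": f"Bearer {PROTECTION_API_TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["ticket"]

    # The resource server would then answer the original request with a 403 that
    # includes the ticket, so the requester can go to the authorization server,
    # satisfy the owner's policies, and retry with a proper token.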

Instead of the OpenUMA resource server, apps can link in libraries that provide the same functions. These libraries are being developed by the Kantara Initiative.

So before we can safely share and withhold data, what’s missing?

The UMA standard doesn’t offer any way to specify a condition, such as “Release my data only this week.” This gap is filled by policy languages, which standards groups will have to develop and code up in a compatible manner. A few exist already.
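
To give a feel for what such a policy language has to express, here is a purely hypothetical sketch of a time-limited, role-based condition along the lines of “release my vaccination data only this week, and only to a school nurse.” UMA itself does not define this vocabulary; the structure and field names are invented for illustration.

    from datetime import datetime, timezone

    # Hypothetical policy: share the "vaccinations" resource set only with a
    # school nurse, and only during one specific week. The structure and field
    # names are invented; a real policy language would standardize them.
    policy = {
        "resource_set": "vaccinations",
        "allowed_roles": ["school_nurse"],
        "not_before": datetime(2015, 8, 10, tzinfo=timezone.utc),
        "not_after": datetime(2015, 8, 17, tzinfo=timezone.utc),
    }

    def access_permitted(requester_role, when):
        """Evaluate the policy for a given requester and moment in time."""
        return (
            requester_role in policy["allowed_roles"]
            and policy["not_before"] <= when <= policy["not_after"]
        )

    print(access_permitted("school_nurse", datetime.now(timezone.utc)))
    print(access_permitted("employer", datetime.now(timezone.utc)))   # always False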

Maler points out that developers could also benefit from tools for editing and testing code, along with other supporting software that projects build up over time. The UMA resource working group is still at the beginning of their efforts, but we can look forward to a time when fine-grained patient control over access to data becomes as simple as using any of the other RESTful APIs that have filled the programmer’s toolbox.

Live Hack of an Infusion Pump Medical Device

Posted on August 6, 2015 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network which currently consists of 10 blogs containing over 8000 articles with John having written over 4000 of the articles himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter: @techguy and @ehrandhit and LinkedIn.

At the BlackBerry Security Summit, BlackBerry Chief Security Officer David Kleidermacher and Security Expert Graham Murphy showed how easy it is for hackers to take control of a medical device that’s not properly secured. Check out the video below to see the medical device hack:

What a compelling and scary demonstration!

I think most healthcare organizations assume that medical device manufacturers are taking care of securing their medical devices, or that HIPAA will protect them from all of this. Many take the stance that “ignorance is bliss.” This demo should illustrate to everyone that you can’t leave the security of your medical devices to the manufacturer or to HIPAA. It takes both the medical device manufacturer and the healthcare organization to make sure a medical device is properly secured.

What Are You Doing To Protect Your Organization Against Your Biggest Security Threat? People

Posted on July 28, 2015 | Written By John Lynn

This was a great tweet coming out of the HIM Summit that’s run by HealthPort. I agree with the comment 100%. Sure, we see lots of large HIPAA breaches that make all the news. However, I bet that if we looked at the total number of breaches (as opposed to the number of patient records breached), the top cause would be the people in an organization. Plus, those are the breaches that are often hardest to track.

What’s the key to solving the people risk when it comes to privacy and security in your organization? I’d start with making security a priority in your organization. Many healthcare organizations I’ve seen only pay lip service to privacy and security. I call it the “just enough” approach to HIPAA compliance. The antithesis of that is a healthcare organization that’s created a culture of compliance and security.

Once you have this desire for security and privacy in your organization, you then need to promote that culture across every member of your organization. It’s not enough to put that on your chief security officer, chief privacy officer, or HIPAA compliance officer. Certainly those people should be advocating for strong security and privacy policies and procedures, but one voice can’t be a culture of compliance and security. Everyone needs to participate in making sure that healthcare data is protected. You’re only as strong as your weakest link.

One of the attendees at the session commented that she’d emailed her chief security officer about some possible security and compliance issues, and the chief security officer replied by politely asking why this HIM manager cared and suggesting that the HIM manager just let her do her job. Obviously I’m summarizing, but this response is not a surprise. People are often protective of their jobs and afraid of comments that might be considered a black mark on the work they’re doing. While understandable, this illustrates an organization that hasn’t created a culture of security and compliance across the organization.

The better response to these questions would be for the chief security officer to reply with what they’ve done and to outline ways that they could do better or the reasons that their organization doesn’t have the ability to do more. The HIM manager should be thanked for taking an interest in security and compliance as opposed to being shot down when the questions are raised. It takes everyone on board to ensure compliance and security in a healthcare organization. Burning bridges with people who take an interest in the topic is a great way to poison the culture.

Those are a few suggestions about where to start. It’s not easy work. Changing a culture never is, but it’s a worthwhile endeavor. Plus, this work is a lot better than dealing with the damaged reputation after a security breach.

Industry Tries To Steamroll Physician Complaints About EMR Impact On Patient Face Time

Posted on June 9, 2015 | Written By

Anne Zieger is a veteran healthcare consultant and analyst with 20 years of industry experience. Zieger formerly served as editor-in-chief of FierceHealthcare.com and her commentaries have appeared in dozens of international business publications, including Forbes, Business Week and Information Week. She has also contributed content to hundreds of healthcare and health IT organizations, including several Fortune 500 companies. Contact her at @ziegerhealth on Twitter or visit her site at Zieger Healthcare.

Some doctors — and a goodly number of consumers, too — argue that the use of EMRs inevitably impairs the relationship between doctors and patients. After all, it’s just common sense that forcing a doctor to glue herself to the keyboard during an encounter undercuts that doctor’s ability to assess the patient, critics say.

Of course, EMR vendors don’t necessarily agree. And some researchers don’t share that view either. But having reviewed some comments by a firm studying physician EMR use, and the argument an EMR vendor made that screen-itis doesn’t worry docs, it seems to me that the “lack of face time” complaint remains an important one.

Consider how some analysts are approaching the issue. James Avallone, Director of Physician Research, who spoke with EHRIntelligence.com, admits that one-third to one-half of the time doctors spend with patients goes to using an EMR, and that physicians have been complaining about this extensively over the past several years, but he reports that doctors are at least using these systems more efficiently.

What’s important is that doctors are getting adjusted to using EMRs, Avallone suggests:

Whether [time spent with EMRs] is too much or too little, it’s difficult for us to say from our perspective…It’s certainly something that physicians are getting used to as it becomes more ingrained in their day-to-day behaviors. They’ve had more time to streamline workflow and that’s something that we’re seeing in terms of how these devices are being used at the point of care.

Another attempt to minimize the impact of EMRs on patient encounters comes from ambulatory EMR vendor NueMD. In a recent blog post, the editor quoted a study suggesting that other issues were far more important to doctors:

According to a 2013 study published in Health Affairs, only 25.8 percent of physicians reported that EHRs were threatening the doctor-patient relationship. Administrative burdens like the ICD-10 transition and HIPAA compliance regulations, on the other hand, were noted by more than 41 percent of those surveyed.

It’s certainly true that doctors worry about HIPAA and ICD-10 compliance, and that those burdens could threaten the patient relationship, but only to the extent that they affect the practice overall. Meanwhile, if one in four respondents to the Health Affairs study said that EMRs were a threat to patient relationships, that should be taken quite seriously.

Of course, both of the entities quoted in this story are entitled to their perspective. And yes, there are clearly benefits to physician use of EMRs, especially once they become adjusted to the interface and workflow.

But if this quick sample of opinions is any indication, the healthcare industry as a whole seems to be blowing past physicians’ (and patients’) well-grounded concerns about the role EMR documentation plays in patient visits.

Someday, a new form factor for EMRs will arise — maybe augmented or virtual reality encounters, for example — which will alleviate the eyes-on-the-screen problem. Until then, I’d submit, it’s best to tackle the issue head on, not brush it off.

Epic Belatedly Accepts Reality And Drops Interoperability Fees

Posted on April 21, 2015 | Written By Anne Zieger

Unbeknownst to me, and perhaps some of you as well, Epic has been charging customers data usage fees for quite some time. The EMR giant has been quietly dunning users 20 cents for each clinical message sent to a health information exchange and $2.35 for each inbound message from non-Epic users, fees which could surely mount into the millions across a substantial health system. (The messages were delivered through an EMR module known as Care Everywhere.)
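
To see how per-message fees of that size could reach into the millions, here is a back-of-the-envelope calculation. The message volumes are hypothetical, chosen only to illustrate the scale; the per-message rates are the ones reported above.

    # Back-of-the-envelope estimate of how per-message fees add up. The message
    # volumes below are hypothetical, chosen only to illustrate the scale; the
    # per-message rates are the ones reported in this post.
    OUTBOUND_FEE = 0.20   # dollars per clinical message sent to an HIE
    INBOUND_FEE = 2.35    # dollars per inbound message from a non-Epic system

    outbound_per_year = 2_000_000   # assumed outbound messages for a large system
    inbound_per_year = 500_000      # assumed inbound messages from non-Epic sources

    annual_cost = outbound_per_year * OUTBOUND_FEE + inbound_per_year * INBOUND_FEE
    print(f"Estimated annual interoperability fees: ${annual_cost:,.0f}")
    # -> Estimated annual interoperability fees: $1,575,000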

And now, Epic chose #HIMSS15 to announce grandly that it was no longer charging users any fees to share clinical data with organizations that don’t use its technology, at least until 2020, according to CEO Judy Faulkner.  In doing so, it has glossed over the fact that these questionable charges existed in the first place, apparently with some success. For an organization which has historically ducked the press routinely, Epic seems to have its eye on the PR ball.

To me, this announcement is troubling in several ways, including the following:

  • Charging fees of this kind smacks of a shakedown. If a hospital or health system buys Epic, it can’t exactly back out of a hundreds-of-millions-of-dollars investment just to ensure it can share data with outside organizations.
  • Forcing providers to pay fees to share data with non-Epic customers penalizes the customers for interoperability problems for which Epic itself is responsible. It may be legal but it sure ain’t kosher.
  • In a world where even existing Epic customers can’t share freely with other Epic customers, the vendor ought to be reinvesting these interoperability fees in making that happen. I see no signs that this is happening.
  • If Epic consciously makes it costly for health systems to share data, it can impact patient care both within and outside its customer base, arguably raising costs and increasing the odds of care mistakes. Doing so seems less than ethical. After all, Epic has a 15% to 20% market share in both the hospital and ambulatory enterprise EMR sectors, and any move it makes affects millions of patients.

But Epic’s leadership is unrepentant. In fact, it seems that Epic feels it’s being tremendously generous in letting the fees go.  Here’s Eric Helsher, Epic’s vice president of client success, as told to Becker’s Hospital Review: “We felt the fee was small and, in our opinion, fair and one of the least expensive…but it was confusing to our customers.”

Mr. Helsher, I submit that your customers understood the fees just fine, but balked at paying them — and for good reason. At this point in the history of clinical data networking, pay-as-you-go models make no sense, as they impose a large fluctuating expense on organizations already struggling to manage development and implementation costs.

But those of us who stand amazed at the degree to which Epic blithely powers through criticism may see the giant challenged someday. Members of Congress are beginning to “get it” about interoperability, and Epic is in their sights.

The Evolving Security and Privacy Discussion

Posted on April 1, 2015 | Written By John Lynn

HIMSS put out the great tweet above. The image itself is worthy of a laugh. Although, it only earns a partial laugh, since many people in healthcare don’t understand that password protection doesn’t mean the data is encrypted. Plus, that confusion is emblematic of how elementary the implementation of security is in most healthcare organizations.
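
To make the distinction concrete, here is a minimal sketch contrasting an application-level password check, which leaves the stored bytes readable, with actually encrypting data at rest using the third-party cryptography package. The record, password, and key handling are simplified placeholders, not a recommended implementation.

    from cryptography.fernet import Fernet  # third-party: pip install cryptography

    # "Password protected" often just means the application refuses to display
    # the data without a password; the bytes on disk are still plaintext.
    record = b"Patient: Jane Doe, allergies: penicillin"

    def password_protected_read(password):
        # Gatekeeping only: anyone who reads the file directly bypasses this check.
        if password != "letmein":
            raise PermissionError("wrong password")
        return record

    # Encryption at rest is different: without the key, the stored bytes are useless.
    key = Fernet.generate_key()             # the key itself must be managed securely
    ciphertext = Fernet(key).encrypt(record)

    print(ciphertext[:40])                  # unreadable without the key
    print(Fernet(key).decrypt(ciphertext))  # recoverable only with the key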

Yes, there are the outlier organizations, and there are even the outlier security and privacy individuals within a large organization. However, on the whole healthcare is not secure. The hard thing is that it’s not because of bad intentions. Almost everyone I’ve met in healthcare really wants to ensure the privacy and security of health information. However, there’s a general lack of understanding of what’s needed.

With that said, I have seen a greater focus on privacy and security in healthcare than I’ve ever seen before. HIMSS featuring so many sessions on the topic is just one indicator of that increased interest. It’s hard to ignore when every other day some major corporation inside or outside of healthcare is getting breached.

One of the biggest security holes in healthcare is business associates. Most don’t have a real understanding of how to be HIPAA compliant and that’s a massive risk for the healthcare organization and the business associate. That’s why I’m excited that people who get it like Mike Semel are offering HIPAA Compliance training for business associates. Doing HIPAA compliance right is not cheap, but it’s cheaper than getting caught in a breach.

Personally, I’ve seen a whole wave of HIPAA compliance products and services coming out. In fact, I’m looking at creating a feature on EMR and HIPAA which lists all of the various companies involved in the space. I’m sure I’ll hear a lot of discussion around this topic at HIMSS.

Health IT Security: What Can the Association for Computing Machinery (ACM) Contribute?

Posted on February 24, 2015 | Written By Andy Oram

A dazed awareness of security risks in health IT has bubbled up from the shop floor administrators and conformance directors (who have always worried about them) to C-suite offices and the general public, thanks to a series of oversized data breaches that recently peaked in the Anthem Health Insurance break-in. Now the US Senate Health Committee is taking up security, explicitly referring to Anthem. The inquiry is extremely broad, though, promising to address “electronic health records, hospital networks, insurance records, and network-connected medical devices.”

The New Congressional Rider: Unique Patient ID Lemonade?

Posted on January 8, 2015 | Written By

When Carl Bergman isn't rooting for the Washington Nationals or searching for a Steeler bar, he’s Managing Partner of EHRSelector.com, a free service for matching users and EHRs. For the last dozen years, he’s concentrated on EHR consulting and writing. He spent the 80s and 90s as an itinerant project manager doing his small part for the dot com bubble. Prior to that, Bergman served a ten year stretch in the District of Columbia government as a policy and fiscal analyst.

Note: Previous versions referred to Rand Paul as the author of the first congressional rider. That was in error. The first rider was authored by then Representative Ron Paul. I regret the error. CB

Last month, I posted that Ron Paul’s gag rule on a national patient identifier was gone. Shortly thereafter, Brian Ahier noted that the gag rule wasn’t dead; it just used different words. Now, it looks as if we were both right and both wrong. Here’s why: Paul’s rider is gone, but its replacement, though daunting, isn’t as restrictive.

The gag rules are appropriation bill riders. Paul’s, which began in 1998, was aimed at a HIPAA provision, which called for identifiers for:

…. [E]ach individual, employer, health plan, and health care provider for use in the health care system. 42 US Code Sec. 1320d-2(b)

It prohibited “[P]lanning, testing, piloting, or developing a national identification card.” This was interpreted to prohibit a national patient ID.

As I noted in my post, Paul’s language was dropped from the CRomnibus appropriation act. Brian, however, found new, restrictive language in CRomnibus, which says:

Sec. 510. None of the funds made available in this Act may be used to promulgate or adopt any final standard under section 1173(b) of the Social Security Act providing for, or providing for the assignment of, a unique health identifier for an individual (except in an individual’s capacity as an employer or a health care provider), until legislation is enacted specifically approving the standard.

Gag Rule’s Replacement Language

Unlike Paul’s absolutist text, the new rider makes Congress the last, biggest step in a formal ID process. The new language lets ID development go ahead, but if HHS wants to adopt a standard, Congress must approve it.

This change creates two potential adoption paths. Along the first, and most obvious, HHS develops a mandatory, national patient ID through Medicare, the Meaningful Use program, etc., and asks for Congress’s approval. This would be a long, hard, uphill fight.

The second is voluntary adoption. For example, NIST could develop a voluntary, industry standard. Until now, Paul’s rider stopped this approach.

NIST’s a Consensus-Building, Not a Rulemaking, Agency

NIST’s potential ID role is well within its non-regulatory, consensus standards development mandate. It could lead a patient ID building effort with EHR stakeholders. Given the high cost of current patient matching techniques, stakeholders may well welcome a uniform, voluntary standard. That would not solve all interoperability problems, but it would go a long way toward that end.

Congress has loosened its grip on a patient ID; now it’s up to ONC, NIST, etc., to use this new freedom.

Fitbit Data Being Used In Personal Injury Case

Posted on December 8, 2014 | Written By Anne Zieger

Lately, there’s been a lot of debate over whether data from wearable health bands is useful to clinicians or only benefits the consumer user. On the one hand, there are those who say that a patient’s medical care could be improved if doctors had data on their activity levels, heart rate, respirations and other standard metrics. Others, meanwhile, suggest that unless it can be integrated into an EMR and made usable, such data is just a distraction from other more important health indicators.

What hasn’t come up in these debates, but might come up far more frequently in the future, is the idea that health band data can be used in personal injury cases to show the effects of an accident on a plaintiff. According to Forbes, a law firm in Calgary is working on what may be the first personal injury case to leverage smart band data, in this case activity data from a Fitbit.

The plaintiff, a young woman, was injured in an accident four years ago. While the Fitbit hadn’t entered the market yet at the time, her lawyers at McLeod Law believe they can establish that she led an active lifestyle prior to her accident. They’ve now started processing data from her Fitbit to show that her activity levels have fallen below the baseline for someone of her age and profession.

It’s worth noting that rather than using Fitbit data directly, they’re processing it using analytics platform Vivametrica, which uses public research to compare people’s activity data with that of the general population. (Its core business is to analyze data from wearable sensor devices for the assessment of health and wellness.) The plaintiff will share her Fitbit data with Vivametrica for several months to present a rich picture of her activities.
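
For a sense of what that kind of analysis might look like, here is a hypothetical sketch comparing a claimant’s daily step counts against an assumed population baseline. The baseline figures and step counts are invented for illustration and do not reflect Vivametrica’s actual methods or data.

    from statistics import mean

    # Hypothetical comparison of a claimant's activity to a population baseline,
    # the general idea behind running wearable data through an analytics layer.
    # The baseline figures and step counts are invented for illustration.
    POPULATION_MEAN_STEPS = 7500    # assumed daily steps for the same age/profession
    POPULATION_STD_STEPS = 2000

    claimant_daily_steps = [3100, 2800, 3500, 2600, 3300, 2900, 3000]

    avg = mean(claimant_daily_steps)
    z_score = (avg - POPULATION_MEAN_STEPS) / POPULATION_STD_STEPS

    print(f"Average daily steps: {avg:.0f}, z-score vs. baseline: {z_score:.2f}")
    # A strongly negative z-score would support the claim that activity has
    # fallen well below what is typical for the claimant's demographic.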

Using even analyzed, processed data generated by a smart band is “unique,” according to her attorneys. “Till now we’ve always had to rely on clinical interpretation,” says Simon Muller of McLeod Law. “Now we’re looking at longer periods of time through the course of the day, and we have hard data.”

But even if the woman wins her case, there could be a downside to this trend. As Forbes notes, insurers will want wearable device data as much as plaintiffs will, and while they can’t force claimants to wear health bands, they can request a court order demanding the data from whoever holds it. Dr. Rick Hu, co-founder and CEO of Vivametrica, tells Forbes that his company wouldn’t release such data, but he doesn’t explain how he would be able to refuse to honor a court-ordered disclosure.

In fact, wearable devices could become a “black box” for the human body, according to Matthew Pearn, an associate lawyer with Canadian claims processing firm Foster & Company. In a piece for an insurance magazine, Pearn points out that it’s not clear, at least in his country, what privacy rights the wearers of health bands maintain over the data they generate once they file a personal injury suit.

Meanwhile, it’s still not clear how HIPAA protections apply to such data in the US. When FierceHealthIT recently spoke with Deven McGraw, a partner in the healthcare practice of Manatt, Phelps & Phillips, she pointed out that HIPAA only regulates data “in the hands of, with the control of, or within the purview of a medical provider, a health plan or other covered entity under the law.”  In other words, once the wearable data makes it into the doctor’s record, HIPAA protections are in force, but until then they are not.

All told, it’s pretty sobering to consider that millions of consumers are generating wearables data without knowing how vulnerable it is.

Apple’s Security Issues and Their Move into Healthcare

Posted on September 3, 2014 | Written By John Lynn

I’m on the record as being skeptical of Apple’s entrance into healthcare with Apple Health and HealthKit. I just don’t think they’ll dive deep enough into the intricacies of healthcare to really make a difference. They underestimate the complexity.

With that disclosure, I found a number of recent tweets about Apple and healthcare quite interesting. We’ll start with this tweet, which ties together Apple’s HealthKit release and the recent nude celebrity photos that were made public after someone hacked the celebrities’ iCloud accounts.

For those who don’t follow Apple, they have a big announcement planned for September 9, 2014. Rumors have it that new sizes of the iPhone 6 will be announced and that the new iWatch (or whatever they finally call it) will be announced alongside the iPhone 6. We’ll see if the announcement also brings more details on Apple Health and HealthKit, which have been short on concrete details.

Even if Apple Health and HealthKit aren’t involved in the announcement, every smartwatch I’ve seen has had some health element to it. Plus, we shouldn’t be surprised if the iPhone 6 incorporates health and wellness elements as well. Samsung has already embedded health sensors in the S5. I imagine the iPhone will follow suit.

With Apple doing more and more in healthcare, it does bring up some new security and privacy issues for them. In fact, this next tweet highlights one healthcare reaction by Apple that is likely connected with the iCloud security issues mentioned above.

This reminds me of a recent business associate policy I saw from a backup software vendor. They were willing to sign a business associate agreement with a healthcare organization, but only if it was their most expensive product and only if it was used to back up your data to your own cloud or devices. Basically, they just wanted to provide the software and not have to be responsible for the storage and security of the data. Apple is taking a similar approach by not allowing private health data to be stored in iCloud. Makes you wonder if Apple will sign a business associate agreement.

We’ll continue to keep an eye on Apple’s entrance into healthcare. They have a lot to learn about healthcare if they want their work in healthcare to be a success. Security and privacy is just one of those areas.