
Should Clinical Research Options Be Integrated Into Every EHR?

Posted on August 19, 2016 | Written By

John Lynn is the Founder of the HealthcareScene.com blog network, which currently consists of 10 blogs containing over 8,000 articles, more than 4,000 of which John has written himself. These EMR and Healthcare IT related articles have been viewed over 16 million times. John also manages Healthcare IT Central and Healthcare IT Today, the leading career Health IT job board and blog. John is co-founder of InfluentialNetworks.com and Physia.com. John is highly involved in social media, and in addition to his blogs can also be found on Twitter (@techguy and @ehrandhit) and LinkedIn.

One of the amazing things about the internet and technology is the democratization of information. I recently heard that it’s not that the world is getting worse, but that our information is getting better (i.e., we now hear about all the bad things happening in the world). That really resonated with me. Still, it annoys me when information that could be useful isn’t making it to the right people at the right place and the right time. The point is that our information could still be better.

This tweet and infographic illustrated how this is true in the world of clinical trials and research:

[Infographic: Clinical Research and Doctor Referrals]

How many research studies don’t get done because they can’t find the right patients? Far too many. How many patients miss out on clinical trials that could save their lives because they don’t know those trials exist? Far too many.

All of this happens because there’s a disconnect in the information that’s available. As someone who’s spent so much time in the EHR world, the question for me is: should every clinical trial option be integrated into every EHR? Should we casually alert doctors to potential clinical trials that could benefit the patient? The EHR could pre-qualify patients in many ways, so that the doctor would see only trials for which the patient could likely qualify. How many more studies would get done, and how many patients’ lives would be saved?
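To make the idea concrete, here is a minimal sketch of how an EHR might pre-qualify patients against structured trial eligibility criteria. It is purely illustrative (no vendor’s actual API); the field names, criteria, and matching rules are all hypothetical:

```python
# Sketch of EHR-side pre-qualification: match a patient's chart data
# against structured trial eligibility criteria. All names and criteria
# here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Trial:
    name: str
    condition: str                     # diagnosis the trial targets
    min_age: int = 0
    max_age: int = 120
    excluded_conditions: set = field(default_factory=set)

@dataclass
class Patient:
    age: int
    conditions: set                    # problem-list entries from the chart

def prequalify(patient, trials):
    """Return trials the patient plausibly qualifies for."""
    return [t for t in trials
            if t.condition in patient.conditions
            and t.min_age <= patient.age <= t.max_age
            and not (t.excluded_conditions & patient.conditions)]

# Example: a 58-year-old with type 2 diabetes and hypertension
patient = Patient(age=58, conditions={"type2_diabetes", "hypertension"})
trials = [
    Trial("A1C-Lowering Study", "type2_diabetes", min_age=40, max_age=75),
    Trial("Pediatric Asthma Study", "asthma", max_age=17),
    Trial("BP Drug Trial", "hypertension",
          excluded_conditions={"type2_diabetes"}),
]
for t in prequalify(patient, trials):
    print("Possible match:", t.name)   # -> A1C-Lowering Study
```

Real eligibility criteria are messier than this, of course, but the matching itself is mechanical; the doctor would still make the final call on whether to mention a trial.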

The lack of clinical trial information in the EHR is, I think, why the above infographic shows such a disconnect over whether doctors present patients with clinical trial options. Technology and EHRs are how we can bridge the gap between patients’ expectations and reality. This is why I believe that EHR software can be an incredible foundation for innovation. We’re sadly just not there yet. We should be when it comes to clinical trials.

Randomized Controlled Trials and Longitudinal Analysis for Health Apps at Twine Health (Part 2 of 2)

Posted on February 18, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, a highly respected book publisher and technology information provider. An employee of the company since 1992, Andy currently specializes in open source, software engineering, and health IT, but his editorial output has ranged from a legal guide covering intellectual property to a graphic novel about teenage hackers. His articles have appeared often on EMR & EHR and other blogs in the health IT space. Andy also writes often for O'Reilly's Radar site (http://oreilly.com/) and other publications on policy issues related to the Internet and on trends affecting technical innovation and its effects on society. Print publications where his work has appeared include The Economist, Communications of the ACM, Copyright World, the Journal of Information Technology & Politics, Vanguardia Dossier, and Internet Law and Business. Conferences where he has presented talks include O'Reilly's Open Source Convention, FISL (Brazil), FOSDEM, and DebConf.

The previous section of this article described the efforts of Dr. John Moore of Twine Health to rigorously demonstrate the effectiveness of a digital health treatment platform. As Moore puts it, Twine Health sought out two of the most effective treatment programs in the country (both Harvard’s diabetes program and MGH’s hypertension program are much more effective than the standard care found elsewhere) and used those programs for the control group of patients. The control group kept in touch with their coaches and discussed their care plans through face-to-face visits, phone calls, and text messages.

The CollaboRhythm treatment worked markedly better than these exemplary programs. In the diabetes trial, patients using the app achieved a 3.2% reduction in A1C levels over three months (the control group achieved 2.0%). In the hypertension trial, 100% of the app users reached a controlled blood pressure of less than 140/90, with an average reduction of 26 mmHg (the control group averaged a 16 mmHg reduction, and fewer than one-third of its patients got below 140/90).

What clinical studies can and cannot ensure

I see a few limitations with these clinical studies:

  • The digital program being tested combines several different interventions, as described before: reminders, messaging, virtual interactions, reports, and so on. The experiments show that all these things work together, but one can’t help wondering: what if you took out some time-consuming interaction? Could the platform be just as successful? Testing all the options, however, would lead to a combinatorial explosion of tests.

    It’s worth noting that interventions by coaches started out daily but decreased over the course of the study as the patients became more familiar and comfortable with the behavior called for in their care plans. The decrease in support required from the human coach suggests that the benefits are sustainable, because the subjects demonstrated they could do more and more for themselves.

  • Outcomes were measured over short time frames. This is a perennial problem with clinical studies, and was noted as a limitation in the papers. The researchers will contact subjects in about a year to see whether the benefits found in the studies were sustained. Even one year, although a good period to watch to see whether people bounce back to old behaviors, isn’t long enough to really tell the course of a chronic illness. On the other hand, so many other life events intrude over time that it’s unfair to blame one intervention for what happens after a year.

  • Despite the short time frame for outcomes, the studies took years to set up, complete, and publish. This is another property of research practice that adds to its costs and slows the dissemination of best practices through the medical field. The time frames involved explain why the researchers’ original Media Lab app was used for the studies, even though they are now running a company on a totally different platform built on the same principles.

  • These studies also harbor the well-known questions of external validity that face all studies on human subjects. What if the populations at these Boston hospitals are unrepresentative of other areas? What if an element of self-selection skewed the results?

Bonnie Feldman, DDS, MBA, who went from dentistry to Wall Street and then to consulting in digital health, comments, “Creating an evidence base requires a delicate balancing act, as you describe, when technology is changing rapidly. Right now, chronic disease, especially autoimmune disease is affecting more young adults than ever before. These patients are in desperate need of new tools to support their self-care efforts. Twine’s early studies validate these important advances.”

Later research at Twine Health

Dr. Moore and his colleagues took stock of the tech landscape since the development of CollaboRhythm–for instance, the iPhone and its imitators had come out in the meantime–and developed a whole new platform on the principles of CollaboRhythm. Twine Health, of which Moore is co-founder and CEO, offers a platform based on these principles to more than 1,000 patients. The company expects to expand this number ten-fold in 2016. In addition to diabetes and hypertension, Twine Health’s platform is used for a wide range of conditions, such as depression, cholesterol control, fitness, and diet.

With a large cohort of patients to draw on, Twine Health can do more of the “big data” analysis that’s popular in the health care field. They don’t sponsor randomized trials like the two studies cited earlier, but they can compare patients’ progress to what they were doing before using Twine Health, as well as to patients who don’t use Twine Health. Moore says that results are positive and lasting, and that costs of treatment drop by one-half to two-thirds.

Clinical studies bring the best scientific methods we know to validating health care apps, and they are being taken up by a small but growing number of app developers. We still don’t know what the relation will be between randomized trials and the longitudinal analysis currently conducted by Twine Health; both seem of vital importance, and they will probably complement each other. This is the path developers have to take if they are to make a difference in health care.

Randomized Controlled Trials and Longitudinal Analysis for Health Apps at Twine Health (Part 1 of 2)

Posted on February 17, 2016 | Written By

Andy Oram is an editor at O'Reilly Media, specializing in open source, software engineering, and health IT; his full bio appears with Part 2 of this article, above.

Walking into a restaurant or onto a bus is enough to see that any experience delivered through a mobile device is likely to get enthusiastic uptake. In health care, the challenge is to find experiences that make a positive difference in people’s lives–and to prove it.

Of course, science has a time-tested method for demonstrating the truth of a proposition: the randomized trial. Reproducibility is a big problem, admittedly, and science has been shaken by a string of errors and outright frauds perpetrated through scientific journals. Still, knowledge advances bit by bit through this process, and the goal of every responsible app developer in the health care space is the blessing conferred by a successful trial.

Consumer apps versus clinical apps

Most of the 165,000 health apps will probably always be labeled “consumer” apps and be sold without the expense of testing. They occupy the same place in the health care field as the thousands of untested dietary supplements and stem cell injection therapies whose promise is purely anecdotal. Consumer anger over ill-considered claims has already led to lawsuits against the device manufacturer Fitbit and the Lumosity mental fitness app, raising questions about the suitability of digital fitness apps for medical care plans.

The impenetrability of consumer apps to objective judgment comes through in a recent study in the Journal of Medical Internet Research (JMIR) that asked mHealth experts to review a number of apps. The authors found very little agreement about what makes a good app, suggesting that quality cannot be judged reliably–a theme of another recent article of mine. One might easily anticipate that subjective measures would produce wide variations in judgment. But in fact, many subjective measures produced more agreement (although not really strong agreement) than more “objective” measures such as effectiveness. If I am reading the data right, one of the measures found to be least reliable was one of the most “objective”: whether an app has been tested for effectiveness.
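As a side note on method: reviewer agreement in studies like this is usually quantified with a chance-corrected statistic such as Cohen’s kappa. A minimal sketch with invented ratings (not the JMIR study’s data) shows the idea:

```python
# Cohen's kappa: agreement between two raters, corrected for the agreement
# expected by chance. The ratings below are invented for illustration.
from collections import Counter

rater_a = ["good", "good", "bad", "good", "bad", "bad", "good", "bad"]
rater_b = ["good", "bad", "bad", "good", "good", "bad", "good", "bad"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement from each rater's marginal frequencies
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n)
               for c in set(rater_a) | set(rater_b))

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f} chance={expected:.2f} kappa={kappa:.2f}")
# -> observed=0.75 chance=0.50 kappa=0.50 (moderate agreement)
```

A kappa near zero means the raters agree no more than chance would predict, which is roughly the situation the JMIR authors describe for several measures.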

Designing studies for these apps is an uncertain art. Sometimes a study shows only that you didn’t know what to measure or didn’t run the study long enough. These possible explanations–gentler than the obvious concern that maybe fitness devices simply don’t achieve their goals–swirl around the failure of the Scripps “Wired for Health” study.

The Twine Health randomized controlled trials

I won’t talk any more about consumer apps here, though–instead I’ll concentrate on apps meant for serious clinical use. What can randomized testing do for these?

Twine Health and MIT’s Media Lab took the leap into rigorous testing with two leading Boston-area partners in the health care field: a diabetes study with the Joslin Diabetes Center and a hypertension study with Massachusetts General Hospital. Both studies compared a digital platform for monitoring and guiding patients against pre-existing tools such as face-to-face visits and email. Both demonstrated better results through the digital platform–but certain built-in limitations of randomized studies leave open questions.

When Dr. John Moore decided to switch fields and concentrate on the user experience, he obtained a PhD at the Media Lab and helped develop an app called CollaboRhythm. He then used it for the two studies described in the papers, while founding and becoming CEO of Twine Health. CollaboRhythm is a pretty comprehensive platform, offering:

  • The ability to store a care plan and make it clear to the user through visualizations.

  • Patient self-tracking to report taking medications and resulting changes in vital signs, such as glycemic levels.

  • Visualizations showing the patient her medication adherence.

  • Reminders of when to take medications and perform other aspects of treatment, such as checking blood pressure.

  • Inferences about diet and exercise patterns based on reported data, shown to the patient.

  • Support from a human coach through secure text messages and virtual visits using audio, video, and shared screen control.

  • Decision support based on reported vital statistics and behaviors. For instance, when diabetic patients reported following their regimen but their glycemic levels were getting out of control, the app could suggest medication changes to the care team (a rule of this kind is sketched in code just after this list).
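To make that last item concrete, here is a minimal sketch of the kind of rule such decision support might encode. This is my own illustration of the idea, not CollaboRhythm’s actual logic; the thresholds and names are hypothetical:

```python
# Hypothetical decision-support rule: the patient reports good adherence,
# yet recent glucose readings remain out of range, so the app flags the
# care team to consider a medication change. Thresholds are illustrative.

def flag_medication_review(adherence_rate, glucose_readings,
                           adherence_ok=0.8, glucose_limit=180):
    """Return a suggestion for the care team, or None.

    adherence_rate   -- fraction of doses the patient reported taking
    glucose_readings -- recent self-reported glucose values (mg/dL)
    """
    if not glucose_readings:
        return None
    avg_glucose = sum(glucose_readings) / len(glucose_readings)
    if adherence_rate >= adherence_ok and avg_glucose > glucose_limit:
        return (f"Adherence {adherence_rate:.0%} but average glucose "
                f"{avg_glucose:.0f} mg/dL: consider a medication change.")
    return None

# A patient reports taking 92% of doses, yet readings remain high
print(flag_medication_review(0.92, [210, 195, 188, 224]))
```

The real platform presumably layers such rules with clinical review; the sketch only shows why self-reported adherence plus vital signs are enough data to drive a useful suggestion.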

The collection of tools is not haphazard, but closely follows the modern model of digital health laid out by the head of Partners Connected Health, Joseph Kvedar, in his book The Internet of Healthy Things (which I reviewed at length). As in Kvedar’s model, the CollaboRhythm interventions rested on convenient digital technologies, put patients’ care into their own hands, and offered positive encouragement backed up by clinical staff.

As an example of this patient empowerment, the app designers deliberately chose not to send the patient an alarm if she forgets her medication. Instead, the patient is expected to learn and adopt responsibility over time by seeing the results of her actions in the visualizations. In exit interviews, some patients expressed appreciation for being asked to take responsibility for their own health.

The papers talk of situated learning, a classic education philosophy that teaches behavior in the context where the person has to practice the behavior, instead of an artificial classroom or lab setting. Technology can bring learning into the home, making it stick.

The papers also discuss in some detail the relative costs and time commitments of the digital interventions versus the traditional ones. One important finding is that app users expressed significantly better feelings about the digital intervention. They became more conscious of their health and appreciated being able to take part in decisions such as changing insulin levels.

So how well does this treatment work? I’ll explore that tomorrow in the next section of this article, along with strengths and weaknesses of the studies.

The Random Results of Clinical Trials

Posted on June 23, 2014 | Written By

The following is a guest blog post by Andy Oram, writer and editor at O’Reilly Media.

For more than a century, doctors have put their faith in randomized, double-blind clinical trials. But this temple is being shaken to its foundations while radical sects of “big data” analysts challenge its orthodoxy. The schism came to a head earlier this month at the Health Datapalooza, the main conference covering the use of data in health care.

The themes of the conference–open data sets, statistical analysis, data sharing, and patient control over research–represent an implicit challenge to double-blind trials at every step of the way. Whereas trials recruit individuals using stringent criteria, ensuring proper matches, big data slurps in characteristics from everybody. Whereas trials march through rigid stages with niggling oversight, big data shoots files through a Hadoop computing cluster and spits out claims. Whereas trials scrupulously separate patients, big data analysis often draws on communities of people sharing ideas freely.

This year, the tension between clinical trials and big data was unmistakable. One session was even called “Is the Randomized Clinical Trial (RCT) Dead?”

The background to the session is just as important as the points raised during it. Basically, randomized trials have taken it on the chin for the past few years. Most have been shown to be unreproducible. Others have been suppressed because they don’t show the results that their funders (usually pharmaceutical companies) would like to see. Scandals sometimes reach heights of absurdity that even a satirical novelist would have trouble matching.

We know that the subjects recruited into RCTs are unrepresentative of most people who receive treatments based on the results. The subjects tend to be healthier (no comorbidities), younger, whiter, and more male than the general population. At the Datapalooza session, Robert Kaplan of NIH pointed out that a large number of clinical trials recruit patients from academic settings, even though only 1 in 100 people suffering from a condition is treated in such settings. He also pointed out that, since the federal government began requiring clinical trials to register a few years ago, it has become clear that most don’t produce statistically significant results.

Two speakers from the Oak Ridge National Laboratory pushed the benefits of big data even further. Georgia Tourassi claimed that so far as data is concerned, “bigger can be better” even if the data is “unusual, noisy, or sparse.” She suggested, however, that data analysis has roles to play before and after RCTs–on the one side, for instance, to generate hypotheses, and on the other to conduct longitudinal studies. Mallikarjun Shankar pointed out that we use big data successfully in areas where randomized trials aren’t available, notably in enforcing test ban treaties and modeling climate change.

Robert Temple of the FDA came to the podium to defend RCTs. He opined that trials are required to establish clinical effectiveness–although I thought one of his examples undermined his claim–and pointed out that big data can have trouble finding important but small differences in populations. For example, an analysis of widely varying patients might miss the difference between two drugs that cause adverse effects in only 3% versus 4% of the population, respectively. But for the people who suffer the adverse effects, that’s a 25% relative difference–something they’d like to know about.
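To see why small absolute differences are hard to detect, consider how large a trial must be to distinguish a 3% adverse-event rate from a 4% one. Here is a back-of-the-envelope calculation with the standard two-proportion sample-size formula (my own illustration, not from Temple’s talk):

```python
# Approximate subjects needed per arm to distinguish adverse-event rates
# of 3% vs 4% (two-sided alpha = 0.05, power = 0.80), using the standard
# normal-approximation formula for comparing two proportions.
from scipy.stats import norm

p1, p2 = 0.03, 0.04
z_alpha = norm.ppf(1 - 0.05 / 2)   # ~1.96 for a two-sided 5% test
z_beta = norm.ppf(0.80)            # ~0.84 for 80% power

n_per_arm = ((z_alpha + z_beta) ** 2
             * (p1 * (1 - p1) + p2 * (1 - p2))
             / (p1 - p2) ** 2)
print(f"~{n_per_arm:,.0f} subjects per arm")   # -> ~5,298
```

Thousands of subjects per arm is feasible for a purpose-built trial; in a heterogeneous observational data set, that one-percentage-point signal can easily drown in confounding.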

RCTs received a battering in other parts of the Datapalooza as well, particularly in the keynote by Vinod Khosla, who has famously suggested that computing can replace doctors. While repeating the familiar statistics about the failures of RCTs, he waxed enthusiastic about the potential of big data to fix our ills. In his scenario, we will all collect large data sets about ourselves and compare them to other people to self-diagnose. Kathleen Sebelius, keynoting at the Datapalooza in one of her last acts as Secretary of Health and Human Services, said “We’ve been making health policy in this country for years based on anecdote, not information.”

Less present at the Datapalooza was the idea that there are ways to improve clinical trials. I have reported extensively on efforts at reform, which include getting patients involved in the goals and planning of trials, sharing raw data sets as well as published results, and creating teams that cross multiple organizations. The NIH is rightly proud of its open access policy, which requires publicly funded research to be published for free download at PubMed Central. But this policy doesn’t go far enough: it allows a one-year gap after publication, which may itself take place a year after the paper was written, and the policy says nothing about the data used by the researchers.

I believe data analysis has many of the universe’s secrets to unlock, but its effectiveness in many areas is unproven. One may find a correlation between a certain gene and an effective treatment, but we still don’t know what other elements of the body have an impact. RCTs also have well-tested rules for protecting patients that we need to explore and adapt to statistical analysis. It will be a long time before we know who is right, and I hope for a reconciliation along the way.