In recent years, developers have released a staggering number of mobile health applications, with nearly 150,000 available as of 2015. And the demand for such apps is rising, with the mHealth services market projected to reach $26 billion globally this year, according to analyst firm Research 2 Guidance.
Unfortunately, given the sheer volume of apps available, it’s tricky to separate the good from the bad. We haven’t even agreed on common standards by which to evaluate such apps, and neither regulatory agencies nor professional associations have taken a firm position on the subject.
For example, while groups like the American Medical Association have endorsed the use of mobile health applications, that acceptance came with several caveats. The organization conceded that such apps might be OK, but only if the industry develops an evidence base demonstrating that the apps are accurate, effective, safe and secure. And beyond broad practice guidelines, the association didn’t spell out how its members could evaluate app quality.
However, at least one researcher has made an attempt at developing standards that identify the best e-Health software apps and computer programs. Amit Baumel, PhD, an assistant professor at the Feinstein Institute for Medical Research, recently led a team that created a tool to evaluate the quality and therapeutic potential of such applications.
For the research, a write-up of which was published in the Journal of Medical Internet Research, Baumel developed an app-rating tool named Enlight. Rather than relying on automated analytics, Enlight was designed as a manual scale to be filled out by trained raters.
To create the foundation for Enlight, researchers reviewed existing literature to decide which criteria were relevant to determine app quality. The team identified a total of 476 criteria from 99 sources to build the tool. Later, the researchers tested Enlight on 42 mobile apps and 42 web-based programs targeting modifiable behaviors related to medical illness or mental health.
Once the tool was tested, researchers rolled it out. Enlight asks raters to score 11 different aspects of app quality, including usability, visual design, therapeutic persuasiveness and privacy. When the researchers evaluated the responses, they found that Enlight raters reached substantially similar results when rating a given app. They also found that all of the eHealth apps rated “fair” or above received the same range of scores for user engagement and content, which suggests that consumer app users have more consistent expectations than we might have expected.
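To make the mechanics concrete, here is a minimal sketch of how a manual rating scale of this kind can be aggregated and checked for rater agreement. This is purely illustrative, not the actual Enlight instrument: the dimension names, the 1–5 scale, and the two-rater setup are assumptions for the example, and the agreement measure here is a simple mean absolute difference rather than the reliability statistics a study would use.

```python
# Illustrative sketch (NOT the real Enlight scale): combine two trained
# raters' 1-5 scores across quality dimensions into one per-app score,
# and compute a crude rater-agreement check.
DIMENSIONS = ["usability", "visual_design", "therapeutic_persuasiveness", "privacy"]

def combined_score(rater_a: dict, rater_b: dict) -> float:
    """Average both raters' scores across all dimensions."""
    total = sum(rater_a[d] + rater_b[d] for d in DIMENSIONS)
    return total / (2 * len(DIMENSIONS))

def mean_abs_disagreement(rater_a: dict, rater_b: dict) -> float:
    """Mean absolute difference between raters; 0.0 means perfect agreement."""
    return sum(abs(rater_a[d] - rater_b[d]) for d in DIMENSIONS) / len(DIMENSIONS)

a = {"usability": 4, "visual_design": 3, "therapeutic_persuasiveness": 4, "privacy": 5}
b = {"usability": 4, "visual_design": 4, "therapeutic_persuasiveness": 4, "privacy": 5}

print(combined_score(a, b))         # 4.125
print(mean_abs_disagreement(a, b))  # 0.25
```

In a real study, agreement between raters would typically be assessed with statistics such as intraclass correlation rather than this toy metric, but the basic shape, per-dimension scores rolled up into an overall rating, is the same.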
That being said, Baumel’s team noted that even if raters liked an app’s content and found its design engaging, that didn’t necessarily mean the app would change people’s behaviors. The researchers concluded that patients need not only a persuasive app design, but also qualities that support a therapeutic alliance.
In the future, the research team plans to study which aspects of app quality best predict user behaviors. The team is also testing the feasibility of rolling out an Enlight-based recommendation system for clinicians and end users. If they succeed, they’ll be addressing a real need: we can’t meaningfully integrate patient-generated app data into care until we can sort great apps from useless, inaccurate products.