There is a dearth of published literature regarding the design, implementation and evaluation of mobile apps for sleep disturbance, as well as little indication of any standardized best practices adopted by sleep app developers or evaluators, according to a systematic review published yesterday in JMIR.
The shortage of available data within this space is concerning in light of the 500-plus sleep apps available for users to download through the App Store and Google Play Store, the researchers wrote. Meanwhile, the absence of a clear framework for sleep app design and evaluation stands in contrast to other areas such as PTSD, bipolar disorder and hypertension, for which research groups have established and circulated clear guidance for app-makers.
“Despite the potential and ubiquity of mHealth apps, most apps lack evidence for their clinical efficacy among end users,” the researchers wrote. “Compounding this problem is a lack of framework to inform and standardize the process and reporting of design, development, and evaluation of mHealth apps. This may lead to clinical inefficacy, lack of medical-condition–specific content, poor patient engagement, or even harmful apps.”
From an initial body of 6,015 results, the researchers excluded papers that fell short of their inclusion criteria and ended up with 15 full-text papers for their review. These papers contained 27 studies regarding the design, implementation or evaluation of eight sleep disturbance apps: CBT-I Coach, Somnometer, Interactive Resilience Enhancing Sleep Tactics (iREST), ShutEye, Sleepcare, Sleep Bunny, SleepFix and a single unnamed app. Five of these were considered to be prototypes, and only CBT-I Coach was available for commercial download.
The characteristics of these apps and their corresponding papers varied greatly, the researchers wrote.
In regard to the apps, four delivered cognitive behavioral therapy for insomnia (CBT-I), one delivered sleep-restriction therapy, one was a social alarm clock and one was a wallpaper display. All of the apps provided users with personalized feedback on their sleep, while most included features such as sleep diaries, reminders and full automation.
Reporting of the apps' design approaches was less consistent across the papers, and no design information was outlined for four of the digital tools. Two-thirds of the papers detailed at least one metric of the apps' implementations – acceptability, usability, adherence or engagement.
Only three apps had literature available regarding at least two of these metrics, and one app had evidence regarding all four.
Quantitative evaluations of treatment outcomes were only available for six of the eight apps, with self-reported sleep questionnaires being the most commonly used outcome measure. And of the 27 studies contained in the papers, the researchers wrote that only one was an adequately powered randomized controlled trial.
User and data privacy concerns within the apps were reported in only three of the papers, and none of the papers referenced regulation.
The researchers noted that their review was conducted only among published literature written in English, and that their review did not take into account information that could be communicated through the downloaded app itself. In addition, several of the researchers are named on provisional patents for the SleepFix app identified as part of the review.
HOW IT WAS DONE
To conduct their review, the researchers queried five electronic databases for literature with keywords related to sleep and “mHealth.”
Among other criteria, they included papers focused on apps that measured, tracked or improved sleep, and described either the design engineering, clinical implementation or clinical evaluation of an app. They excluded review papers and studies that: focused on disorders other than insomnia; described multimodal interventions targeting non-sleep health; or outlined interventions based on the Internet, phone messages or text messages.
Once relevant papers were identified, the researchers coded the data within them in regard to their descriptions of design engineering, clinical implementation and clinical evaluation.
In addition, the team used their findings to build a high-level framework for future development of evidence-based apps for sleep disturbance.
“The framework aims to address the need for (1) increased application and reporting of best-practice design approaches – for example, user-centered and multidisciplinary teams; (2) comprehensive implementation assessments involving multiple metrics, tools validated for sleep, and privacy and regulatory considerations; and (3) rigorous evaluations of clinical efficacy,” the researchers wrote.
THE LARGER TREND
Much of the mobile sleep app space is dominated by commercial products, the researchers wrote. The speed with which they are developed and released has generally outpaced academia-driven research regarding outcomes and design, which others have noted can raise issues regarding transparency and trust.
Apps and other connected products focused on sleep health and insomnia have long held a spot within the digital health sector. In more recent years, however, the conversation among app-makers has shifted toward evidence of outcomes as payers and other industry stakeholders gauge whether to support products branding themselves as digital therapeutics.
This has culminated in major names like Big Health funding and publishing investigations regarding their app’s outcomes and cost-effectiveness, and others like Pear Therapeutics pursuing FDA authorization for their prescription apps for insomnia therapy.
“Collaboration between academia and the industry may facilitate the development of evidence-based apps in the fast-paced mHealth technology environment,” the researchers concluded.