2017 Mar 21;19(3):e82. doi: 10.2196/jmir.7270.

Enlight: A Comprehensive Quality and Therapeutic Potential Evaluation Tool for Mobile and Web-Based eHealth Interventions


Amit Baumel et al. J Med Internet Res.

Abstract

Background: Studies of criteria-based assessment tools have demonstrated the feasibility of objectively evaluating eHealth interventions independent of empirical testing. However, current tools have not included some quality constructs associated with intervention outcome, such as persuasive design, behavior change, or therapeutic alliance. In addition, the generalizability of such tools has not been explicitly examined.

Objective: The aim was to introduce the development and further analysis of the Enlight suite of measures, designed to incorporate the aforementioned quality constructs and to address generalizability.

Methods: As a first step, a comprehensive systematic review was performed to identify relevant quality rating criteria in line with the PRISMA statement. These criteria were then categorized to create Enlight. The second step involved testing Enlight on 42 mobile apps and 42 Web-based programs (delivery mediums) targeting modifiable behaviors related to medical illness or mental health (clinical aims).

Results: A total of 476 criteria from 99 identified sources were used to build Enlight. The rating measures were divided into two sections: quality assessments and checklists. Quality assessments included usability, visual design, user engagement, content, therapeutic persuasiveness, therapeutic alliance, and general subjective evaluation. The checklists included credibility, privacy explanation, basic security, and evidence-based program ranking. The quality constructs exhibited excellent interrater reliability (intraclass correlations=.77-.98, median .91) and internal consistency (Cronbach alphas=.83-.90, median .88), with similar results when separated into delivery mediums or clinical aims. Conditional probability analysis revealed that 100% of the programs that received a score of fair or above (≥3.0) in therapeutic persuasiveness or therapeutic alliance received the same range of scores in user engagement and content, a pattern that did not appear in the opposite direction. Preliminary concurrent validity analysis pointed to positive correlations of combined quality scores with selected variables. The combined score that did not include therapeutic persuasiveness and therapeutic alliance descriptively underperformed the other combined scores.

Conclusions: This paper provides empirical evidence supporting the importance of persuasive design and therapeutic alliance within the context of a program's evaluation. Reliability metrics and preliminary concurrent validity analysis indicate the potential of Enlight in examining eHealth programs regardless of delivery mediums and clinical aims.

Keywords: assessment; behavior change; eHealth; evaluation; mHealth; persuasive design; quality; therapeutic alliance.

Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1
Percentages of eHealth intervention programs with a fair or above score (≥3.0) in quality constructs (columns), out of the sample of programs that received a score of fair or above (≥3.0) in another construct (rows). Within this study sample, higher percentages indicate that having the examined range of scores in one construct (row) improves the chances of receiving the same range of scores in the other construct (column). Fields are colored from highest to lowest percentage in the following order: red (highest), orange, yellow, and green (lowest).
