Evaluating evaluation frameworks: a scoping review of frameworks for assessing health apps

Sarah Lagan et al. BMJ Open 2021 Mar 19;11(3):e047001. doi: 10.1136/bmjopen-2020-047001.

Abstract

Objectives: Despite an estimated 300 000 mobile health apps on the market, there remains no consensus on how to help patients and clinicians select safe and effective apps. In 2018, our team drew on existing evaluation frameworks to identify salient categories and create a new framework endorsed by the American Psychiatric Association (APA). We have since created a more expanded and operational framework, the Mhealth Index and Navigation Database (MIND), which aligns with the APA categories but includes 105 objective and auditable questions. We sought to survey the existing space by conducting a review of all mobile health app evaluation frameworks published since 2018, and to demonstrate the comprehensiveness of this new model by comparing it to existing and emerging frameworks.

Design: We conducted a scoping review of mobile health app evaluation frameworks.

Data sources: References were identified through searches of PubMed, EMBASE and PsycINFO, with publication dates between January 2018 and October 2020.

Eligibility criteria: Papers were selected for inclusion if they met the predetermined eligibility criteria: presenting an evaluation framework for mobile health apps with patient-, clinician- or end user-facing questions.

Data extraction and synthesis: Two reviewers screened the literature separately and applied the inclusion criteria. The data extracted from the papers included: authors and dates of publication, source affiliation, country of origin, name of framework, study design, description of framework, intended audience/user and framework scoring system. We then compiled a collection of more than 1701 questions across 79 frameworks. We compared and grouped these questions using the MIND framework as a reference. We sought to identify the most common domains of evaluation while assessing the comprehensiveness and flexibility of MIND, as well as any potential gaps.

Results: New app evaluation frameworks continue to emerge and expand. Since our 2019 review of the app evaluation framework space, more frameworks include questions around privacy (43 frameworks) and clinical foundation (57 frameworks), reflecting an increased focus on app security and evidence base. The majority of mapped frameworks overlapped with at least half of the MIND categories. The results of this search have informed a database (apps.digitalpsych.org) that users can access today.

Conclusion: As the number of app evaluation frameworks continues to rise, it is becoming difficult for users both to select an appropriate evaluation tool and to find an appropriate health app. This review compares what different app evaluation frameworks offer, identifies where the field is converging, and highlights new priorities for improving clinical guidance.

Keywords: information management; psychiatry; telemedicine.


Conflict of interest statement

Competing interests: None declared.

Figures

Figure 1. A screenshot of MIND highlighting several of the app evaluation questions (green boxes) and the ability to access more. MIND, Mhealth Index and Navigation Database.

Figure 2. Framework identification through database searches (PubMed, EMBASE, PsycINFO) and other sources (reviews since 2018, grey literature, government websites).

Figure 3. The most commonly addressed questions, grouped within the categories of MIND. The blue triangle constitutes MIND and its six main categories, while the green trapezoid represents questions pertaining to usability or ease of use, which are not covered by MIND. MIND, Mhealth Index and Navigation Database.

