JMIR Mhealth Uhealth. 2018 Jun 29;6(6):e148. doi: 10.2196/mhealth.9826.

Learnability of a Configurator Empowering End Users to Create Mobile Data Collection Instruments: Usability Study


Johannes Schobel et al.

Abstract

Background: Many research domains still heavily rely on paper-based data collection procedures, despite numerous associated drawbacks. The QuestionSys framework is intended to empower researchers as well as clinicians without programming skills to develop their own smart mobile apps in order to collect data for their specific scenarios.

Objective: In order to validate the feasibility of this model-driven, end-user programming approach, we conducted a study with 80 participants.

Methods: Across 2 sessions (7 days between Session 1 and Session 2), participants had to model 10 data collection instruments (5 at each session) with the developed configurator component of the framework. In this context, performance measures like the time and operations needed as well as the resulting errors were evaluated. Participants were separated into two groups (ie, novices vs experts) based on prior knowledge in process modeling, which is one fundamental pillar of the QuestionSys framework.

Results: Statistical analysis (t tests) revealed that novices showed significant learning effects for errors (P=.04), operations (P<.001), and time (P<.001) from the first to the last use of the configurator. Experts showed significant learning effects for operations (P=.001) and time (P<.001), but not for errors, as their error rates were already very low when modeling the first data collection instrument. Moreover, regarding time and operations, novices performed significantly better at the third modeling task than experts did at the first one (t tests; P<.001 for time and P=.002 for operations). Regarding errors, novices never became significantly better at any of the 10 data collection instruments than experts were at the first modeling task, but at Session 2 novices' error rates for all 5 data collection instruments no longer differed significantly from those of experts at the first modeling task. After 7 days without using the configurator (between Session 1 and Session 2), the experts' learning effect from the end of Session 1 remained stable at the beginning of Session 2, whereas the novices' learning effect from the end of Session 1 showed a significant decay at the beginning of Session 2 with regard to time and operations (t tests; P<.001 for time and P=.03 for operations).
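To make the reported comparisons concrete, the following Python sketch illustrates how within-group (paired) and between-group (independent-samples) t tests of this kind can be computed with SciPy. This is purely illustrative: the data values, group sizes, and variable names are hypothetical assumptions, and the study does not publish its analysis code.

    # Illustrative sketch only; all data below are made up for demonstration.
    import numpy as np
    from scipy import stats

    # Hypothetical per-participant completion times (seconds):
    # novices at their first and third modeling tasks, experts at their first task.
    novice_task1 = np.array([410.0, 395.5, 512.3, 448.0, 430.6])
    novice_task3 = np.array([300.2, 288.4, 350.1, 310.9, 295.3])
    expert_task1 = np.array([320.5, 305.7, 341.2, 298.8, 312.4])

    # Within-group learning effect: paired t test across repeated tasks.
    t_within, p_within = stats.ttest_rel(novice_task1, novice_task3)

    # Between-group comparison (novices' task 3 vs experts' task 1):
    # independent-samples t test.
    t_between, p_between = stats.ttest_ind(novice_task3, expert_task1)

    print(f"novice learning (task 1 vs 3): t={t_within:.2f}, p={p_within:.3f}")
    print(f"novice task 3 vs expert task 1: t={t_between:.2f}, p={p_between:.3f}")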

Conclusions: Novices were able to use the configurator properly and showed fast (but unstable) learning effects; after only little experience with the configurator, their performance matched that of experts, which was already good from the start. Researchers and clinicians can therefore use the QuestionSys configurator to develop data collection apps for smart mobile devices on their own.

Keywords: data collection; mHealth; mobile apps.


Conflict of interest statement

Conflicts of Interest: None declared.

Figures

Figure 1. A data collection instrument represented as BPMN 2.0 model.
Figure 2. The QuestionSys configurator: combining elements to pages.
Figure 3. The QuestionSys configurator: modeling a data collection instrument.
Figure 4. Study design.
