Incorporating public values into evaluative criteria: Using crowdsourcing to identify criteria and standards
- PMID: 30165260
- DOI: 10.1016/j.evalprogplan.2018.08.004
Abstract
At its core, evaluation involves the generation of value judgments. These evaluative judgments are made by comparing an evaluand's performance against what the evaluand is supposed to do (criteria) and how well it is supposed to do it (standards). The aim of this four-phase study was to test whether criteria and standards can be set via crowdsourcing, a potentially cost- and time-effective approach to collecting public opinion data. In the first three phases, participants were presented with a program description and then asked to complete a task: identifying criteria (phase one), weighting criteria (phase two), or setting standards (phase three). Phase four found that the crowd-generated criteria were of high quality; more specifically, they were clear and concise, complete, non-overlapping, and realistic. Overall, the study concludes that crowdsourcing has the potential to be used in evaluation for setting stable, high-quality criteria and standards.
Keywords: Criteria; Crowdsourcing; Evaluation-Specific Methodology; MTurk; Mechanical Turk; Stability; Standards; Value Judgements.
Copyright © 2018 Elsevier Ltd. All rights reserved.