Initial investigation into computer scoring of candidate essays for personnel selection

Michael C Campion et al. J Appl Psychol. 2016 Jul;101(7):958-75. doi: 10.1037/apl0000108. Epub 2016 Apr 14.

Abstract

[Correction Notice: An Erratum for this article was reported in Vol 101(7) of Journal of Applied Psychology (see record 2016-32115-001). In the article the affiliations for Emily D. Campion and Matthew H. Reider were originally incorrect. All versions of this article have been corrected.]

Emerging advancements including the exponentially growing availability of computer-collected data and increasingly sophisticated statistical software have led to a "Big Data Movement" wherein organizations have begun attempting to use large-scale data analysis to improve their effectiveness. Yet, little is known regarding how organizations can leverage these advancements to develop more effective personnel selection procedures, especially when the data are unstructured (text-based). Drawing on literature on natural language processing, we critically examine the possibility of leveraging advances in text mining and predictive modeling computer software programs as a surrogate for human raters in a selection context. We explain how to "train" a computer program to emulate a human rater when scoring accomplishment records. We then examine the reliability of the computer's scores, provide preliminary evidence of their construct validity, demonstrate that this practice does not produce scores that disadvantage minority groups, illustrate the positive financial impact of adopting this practice in an organization (N ∼ 46,000 candidates), and discuss implementation issues. Finally, we discuss the potential implications of using computer scoring to address the adverse impact-validity dilemma. We suggest that it may provide a cost-effective means of using predictors that have comparable validity but have previously been too expensive for large-scale screening.
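
The abstract does not specify which text-mining or predictive-modeling software the authors used, so the following is only a minimal sketch of the general approach it describes: extracting features from candidate essay text, fitting a model to reproduce human rater scores, and then scoring unseen essays. The scikit-learn pipeline, the placeholder essays and ratings, and the choice of TF-IDF features with ridge regression are all illustrative assumptions, not the authors' actual procedure.

# Illustrative sketch only: trains a text-based model to emulate human rater
# scores for accomplishment-record essays. The data are placeholders, and the
# feature extractor (TF-IDF) and model (ridge regression) are assumptions,
# not the software or algorithm used in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
import numpy as np

# Placeholder data: candidate essays and the scores human raters assigned them.
essays = [
    "Led a five-person team that cut order processing time by 30 percent.",
    "Organized the annual office picnic and ordered supplies.",
    "Designed and deployed a new customer intake workflow across three sites.",
    "Answered phones and filed paperwork as assigned.",
]
human_scores = np.array([4.5, 2.0, 4.0, 1.5])  # e.g., ratings on a 1-5 scale

# Hold out some essays to check how well machine scores track human scores.
X_train, X_test, y_train, y_test = train_test_split(
    essays, human_scores, test_size=0.5, random_state=0
)

# "Train" the computer to emulate the human rater: essay text -> numeric score.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram/bigram term weights
    Ridge(alpha=1.0),                     # regularized linear scoring model
)
model.fit(X_train, y_train)

# Machine scores for unseen essays; in practice one would report their
# correlation with human ratings as evidence of reliability and validity.
print(model.predict(X_test))

In a real selection setting the training set would contain thousands of essays already scored by trained raters, and the machine scores would be evaluated against held-out human ratings before being used to screen candidates.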
