PeerJ. 2014 Oct 16;2:e589. doi: 10.7717/peerj.589. eCollection 2014.

A randomized trial in a massive online open course shows people don't know what a statistically significant relationship looks like, but they can learn

Aaron Fisher et al. PeerJ.

Abstract

Scatterplots are the most common way for statisticians, scientists, and the public to visually detect relationships between measured variables. At the same time, and despite widely publicized controversy, P-values remain the most commonly used measure to statistically justify relationships identified between variables. Here we measure the ability to detect statistically significant relationships from scatterplots in a randomized trial of 2,039 students in a statistics massive open online course (MOOC). Each subject was shown a random set of scatterplots and asked to visually determine if the underlying relationships were statistically significant at the P < 0.05 level. Subjects correctly classified only 47.4% (95% CI [45.1%-49.7%]) of statistically significant relationships, and 74.6% (95% CI [72.5%-76.6%]) of non-significant relationships. Adding visual aids such as a best fit line or scatterplot smooth increased the probability a relationship was called significant, regardless of whether the relationship was actually significant. Classification of statistically significant relationships improved on repeat attempts of the survey, although classification of non-significant relationships did not. Our results suggest: (1) that evidence-based data analysis can be used to identify weaknesses in theoretical procedures in the hands of average users, (2) data analysts can be trained to improve detection of statistically significant results with practice, but (3) data analysts have incorrect intuition about what statistically significant relationships look like, particularly for small effects. We have built a web tool for people to compare scatterplots with their corresponding p-values which is available here: http://glimmer.rstudio.com/afisher/EDA/.
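The study's stimuli pair a scatterplot with a hidden significance test on the underlying linear relationship. As a minimal sketch (not the authors' code), one such stimulus could be simulated by drawing points with a chosen effect size and computing the slope p-value that subjects are asked to guess by eye; `make_stimulus` and all of its parameter values are illustrative assumptions.

```python
# Sketch of generating one survey-style stimulus: (x, y) points with a
# chosen linear effect, plus the P-value of the slope test (H0: slope = 0)
# that subjects try to judge visually at the P < 0.05 threshold.
import numpy as np
from scipy import stats


def make_stimulus(n=128, slope=0.2, seed=0):
    """Simulate one scatterplot and its underlying significance test."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n)
    y = slope * x + rng.normal(size=n)  # linear signal plus unit noise
    fit = stats.linregress(x, y)        # least-squares fit with slope t-test
    return x, y, fit.pvalue


x, y, p = make_stimulus()
significant = p < 0.05  # the ground-truth label a subject must guess
```

Varying `n` and `slope` here mirrors the "Smaller n" and small-effect conditions the study manipulates; adding a fitted line to the plot corresponds to the "Best Fit" visual-aid condition.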

Keywords: Data visualization; Education; Evidence-based data analysis; MOOC; Randomized trial; Statistical significance; Statistics; p-values.


Figures

Figure 1
Figure 1. Examples of plots shown to users.
Figure 2
Figure 2. Accuracy of significance classifications under different conditions.
Point estimates and confidence intervals for classification accuracy for each presentation style (Table 1). Accuracy rates for plots with truly significant underlying relationships (sensitivity) are shown in blue, and accuracy rates for plots with non-significant underlying relationships (specificity) are shown in red.
Figure 3
Figure 3. Classification accuracy on repeat attempts of the survey.
Each plot shows point estimates and confidence intervals for accuracy rates of human visual classifications of statistical significance on the first and second attempt of the survey. For the truly significant underlying P-values, users showed a significant increase in accuracy (sensitivity) on the second attempt of the survey for the “Reference,” “Smaller n,” and “Best Fit” presentation styles. For non-significant underlying P-values, accuracy (specificity) decreased significantly for the “Smaller n” category. Because these accuracy rates were estimated only based on the data from students who submitted more than one response to the survey, the confidence intervals here are wider than those in Fig. 2.
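The accuracy rates and intervals in these figures are binomial proportions. A minimal sketch of the standard computation (not the authors' code; the counts below are made-up numbers chosen only so the point estimate echoes the abstract's 47.4% sensitivity):

```python
# Proportion correct with a normal-approximation (Wald) 95% CI,
# the usual form for sensitivity/specificity point estimates.
import math


def accuracy_ci(correct, total, z=1.96):
    """Return the accuracy rate and its Wald 95% confidence interval."""
    p = correct / total
    se = math.sqrt(p * (1 - p) / total)  # binomial standard error
    return p, (p - z * se, p + z * se)


# Hypothetical counts: 843 of 1,779 truly significant plots called significant.
p, (lo, hi) = accuracy_ci(correct=843, total=1779)
```

The wider intervals in Fig. 3 follow directly from the smaller `total` available once the sample is restricted to students with repeat attempts.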

