Nat Methods. 2018 Aug;15(8):587-590. doi: 10.1038/s41592-018-0069-0. Epub 2018 Jul 31.

Quanti.us: a tool for rapid, flexible, crowd-based annotation of images

Alex J Hughes et al. Nat Methods. 2018 Aug.

Abstract

We describe Quanti.us, a crowd-based image-annotation platform that provides an accurate alternative to computational algorithms for difficult image-analysis problems. We used Quanti.us for a variety of medium-throughput image-analysis tasks and achieved 10-50× savings in analysis time compared with that required for the same task by a single expert annotator. We show equivalent deep learning performance for Quanti.us-derived and expert-derived annotations, which should allow scalable integration with tailored machine learning algorithms.

Conflict of interest statement

J.D.M. holds an equity interest in Quanti.us LLC. Quanti.us passes payments from users to Amazon Mechanical Turk, which then distributes these payments to workers.

Figures

Fig. 1 | Leveraging the wisdom of crowds for scientific image analysis with Quanti.us.
a, Scientists designate a tool that human Turkers then use to annotate uploaded images according to a set of brief instructions. The resulting annotations can be interpreted in raw form and used as input to conventional algorithms, or be used as training data for machine learning algorithms. b, Left, raw example image of cell nuclei (true positives) and autofluorescent pores (true negatives). Right, corresponding overlay of expert, Turker, and clustered Turker crosshair annotations. False positive and false negative annotations were scored against those provided by a trained expert for individual Turkers, or for spatially clustered annotations from all Turkers (Methods). Each of 300 images was annotated by ten Turkers (a subset of 20 images was used to determine Turker performance). The scale bar applies to the higher-magnification (bottom) images, which represent the regions outlined by dashed squares in the corresponding images above; high magnification is 3× that in the lower-magnification image. c, Precision and recall metrics for individual Turkers (n = 46), for the clustered annotations from ten Turkers completing each image (“Turker collective”), for other experts not involved in ground truth annotation, and for a conventional FIJI object-detection pipeline over a range of particle-size thresholds. An inherent Turker quality score is shown. The gray dashed box indicates the portion of the graph highlighted in the inset to the right. Inset: the arrow indicates the effect of filtering out the bottom one-third of workers, assessed in terms of their performance, on the basis of this score. d, Annotations from every combination of a representative set of one to six ‘good’ Turkers and one ‘bad’ Turker who completed the same five image tasks were clustered and used to determine the indicated performance metrics. 
e, False positive errors contributed over the first k annotations submitted by a Turker (in chronological order), fit by a quadratic function (n = 29 Turkers). f, Spatial error of annotations versus the time between annotations, with Fitts’s law tradeoff (n = 129 Turkers). Fit envelopes are 95% confidence intervals. Data are representative of two experimental replicates.
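The clustering-and-scoring scheme described in the Fig. 1 caption — merging nearby Turker crosshairs into consensus points, then scoring them against expert annotations for precision and recall — can be sketched roughly as follows. This is a minimal illustration, not the paper's pipeline: the greedy distance-threshold clustering and the `radius`, `min_votes`, and `tol` parameters are all assumptions made for the example.

```python
import math

def cluster_points(points, radius=5.0, min_votes=2):
    """Greedy spatial clustering of crowd annotations: a point within
    `radius` pixels of an existing cluster centroid joins that cluster;
    clusters with fewer than `min_votes` members are discarded as
    likely single-worker errors."""
    clusters = []  # each cluster stored as [sum_x, sum_y, count]
    for x, y in points:
        for c in clusters:
            cx, cy = c[0] / c[2], c[1] / c[2]
            if math.hypot(x - cx, y - cy) <= radius:
                c[0] += x; c[1] += y; c[2] += 1
                break
        else:
            clusters.append([x, y, 1])
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters if c[2] >= min_votes]

def score(pred, truth, tol=5.0):
    """Greedily match predicted points one-to-one to expert ground-truth
    points within `tol` pixels; return (precision, recall)."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        for t in unmatched:
            if math.hypot(p[0] - t[0], p[1] - t[1]) <= tol:
                unmatched.remove(t)
                tp += 1
                break
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall
```

For example, three Turker clicks near one nucleus collapse into a single consensus point, while an isolated click elsewhere is dropped by the `min_votes` filter — mirroring how clustered annotations outperform individual workers in Fig. 1c.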
Fig. 2 | Case studies and machine learning integration of Quanti.us.
a, Left, raw example image (top) and corresponding overlay of Turker annotations and clustered annotations (bottom) of fluorescent microtubules in a gliding assay, annotated with a polyline tool. Each of 50 images was annotated by ten Turkers. Right, FIESTA output (top). Plots (bottom) show microtubule speed and length distributions. The scale bar applies to the higher-magnification images, which represent the regions outlined by dashed squares in the corresponding images; high magnification is 2.75× that in the larger, lower-magnification images. b, Top, raw image frames of a 3D z-stack spanning an organoid. Middle, raw Turker outlines of nuclei, Turker consensus outlines, expert outlines, and MINS algorithm outlines associated with one frame of the stack (outlined by a dashed rectangle). Ten Turkers annotated 30 frames. The plot in the lower right shows performance metrics (prec., precision; rec., recall) for MINS for 18 runs spanning a range of parameter settings (Methods), and for the Turker collective, relative to results from an expert. c, Left and top, raw example images and corresponding overlays of Turker annotations and clustered annotations of the nose, digits, and tail of a walking mouse (images adapted with permission from ref. , Springer Nature). Each of 29 images was annotated by 20 Turkers. We input expert or spatially clustered Turker annotations into FIJI’s TrackMate to construct gait plots (bottom) and also compared them to results of a conventional segmentation (seg.) pipeline in FIJI. “Hind” and “fore” refer to limbs. d, Left, raw example images (top) and corresponding overlays (bottom) of Turker annotations and clustered annotations for 2 of 48 frames from a movie of mammary epithelial cell spreading (ten Turkers per frame). 
Right, F-score plotted for five experts; the Turker collective; automated Otsu segmentation; and convolutional neural networks trained on annotations from five randomly chosen Turkers, clustered Turker annotations, or expert annotations. Data are shown as mean ± s.d. and are representative of at least two experimental replicates.
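The F-score used in Fig. 2d to compare experts, the Turker collective, Otsu segmentation, and the trained networks is the harmonic mean of precision and recall. A minimal sketch, computed from match counts (the function name and count-based signature are assumptions for illustration, not the paper's code):

```python
def f_score(tp, fp, fn, beta=1.0):
    """F-beta score from true-positive, false-positive, and
    false-negative counts; beta=1 gives the balanced F1 score."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

Because the harmonic mean penalizes imbalance, a segmenter with many false positives cannot compensate with high recall alone, which is why F-score is a common single-number summary for annotation quality.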

Comment in

  • The crowd storms the ivory tower.
    Jones ML, Spiers H. Nat Methods. 2018 Aug;15(8):579-580. doi: 10.1038/s41592-018-0077-0. PMID: 30065367. No abstract available.

References

    1. Kim JS et al. Nature 509, 331–336 (2014).
    2. Chen F et al. Nat. Methods 13, 679–684 (2016).
    3. Lou X, Kang M, Xenopoulos P, Muñoz-Descalzo S & Hadjantonakis A-K Stem Cell Rep. 2, 382–397 (2014).
    4. Ruhnow F, Zwicker D & Diez S Biophys. J. 100, 2820–2828 (2011).
    5. Esteva A et al. Nature 542, 115–118 (2017).