Abstract
We describe Quanti.us, a crowd-based image-annotation platform that provides an accurate alternative to computational algorithms for difficult image-analysis problems. We used Quanti.us for a variety of medium-throughput image-analysis tasks and achieved 10–50× savings in analysis time compared with that required for the same task by a single expert annotator. We show equivalent deep learning performance for Quanti.us-derived and expert-derived annotations, which should allow scalable integration with tailored machine learning algorithms.
References
Kim, J. S. et al. Nature 509, 331–336 (2014).
Chen, F. et al. Nat. Methods 13, 679–684 (2016).
Lou, X., Kang, M., Xenopoulos, P., Muñoz-Descalzo, S. & Hadjantonakis, A.-K. Stem Cell Rep. 2, 382–397 (2014).
Ruhnow, F., Zwicker, D. & Diez, S. Biophys. J. 100, 2820–2828 (2011).
Esteva, A. et al. Nature 542, 115–118 (2017).
Goodfellow, I. J. et al. in Proc. Advances in Neural Information Processing Systems 27 (eds. Ghahramani, Z. et al.) 2672–2680 (NIPS, La Jolla, CA, 2014).
Salimans, T. et al. arXiv Preprint at https://arxiv.org/abs/1606.03498 (2016).
Thul, P. J. et al. Science 356, eaal3321 (2017).
Simpson, R., Page, K. R. & De Roure, D. in Proc. 23rd International Conference on World Wide Web 1049–1054 (ACM, New York, 2014).
Sauermann, H. & Franzoni, C. Proc. Natl. Acad. Sci. USA 112, 679–684 (2015).
Hitlin, P. Research in the Crowdsourcing Age, A Case Study (Pew Research Center, Washington, DC, 2016).
Bruggemann, J., Lander, G. C. & Su, A. I. bioRxiv Preprint at https://www.biorxiv.org/content/early/2017/11/15/220145 (2017).
Galton, F. Nature 75, 450–451 (1907).
Ipeirotis, P. G., Provost, F. & Wang, J. in Proc. ACM SIGKDD Workshop on Human Computation 64–67 (ACM, New York, 2010).
Zhai, S., Kong, J. & Ren, X. Int. J. Hum. Comput. Stud. 61, 823–856 (2004).
Ipeirotis, P. G. XRDS 17, 16–21 (2010).
Scharrel, L., Ma, R., Schneider, R., Jülicher, F. & Diez, S. Biophys. J. 107, 365–372 (2014).
Sadanandan, S. K., Ranefall, P., Le Guyader, S. & Wählby, C. Sci. Rep. 7, 7860 (2017).
Xie, W., Noble, J. A. & Zisserman, A. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 6, 283–292 (2016).
Vedaldi, A. & Lenc, K. in Proc. 23rd ACM International Conference on Multimedia 689–692 (ACM, New York, 2015).
Talpalar, A. E. et al. Nature 500, 85–88 (2013).
Marquardt, D. W. J. Soc. Ind. Appl. Math. 11, 431–441 (1963).
Tinevez, J. Y. et al. Methods 115, 80–90 (2017).
Adcock, R. J. Anal. (Lond.) 5, 53 (1878).
Arteta, C., Lempitsky, V., Noble, J. A. & Zisserman, A. Med. Image Anal. 27, 3–16 (2016).
LeCun, Y., Bengio, Y. & Hinton, G. Nature 521, 436–444 (2015).
Vedaldi, A. & Fulkerson, B. in Proc. 18th ACM International Conference on Multimedia 1469–1472 (ACM, New York, 2010).
Glorot, X. & Bengio, Y. in Proc. 13th International Conference on Artificial Intelligence and Statistics (eds. Teh, Y. W. & Titterington, M.) 249–256 (MLR Press/Microtome Publishing, Brookline, MA, 2010).
Acknowledgements
We thank A. Paulson, O. Kiehn, C. Krishnamurthy, C. Nelson, and D. Gordon for contributing microscopy images analyzed in Fig. 1, Fig. 2c, Fig. 2d, Supplementary Fig. 3, and Supplementary Fig. 4, respectively. We thank instructors and students of the 2014 Marine Biological Laboratory Physiology course at Woods Hole for assistance with microtubule gliding assays and image collection (especially R. Fischer). We recognize L. Bugaj, L. Sanders, C. Nilson, and B. Kawas for critical discussion. This work was funded by the Jane Coffin Childs Memorial Fund (postdoctoral fellowship to A.J.H.), the Department of Defense Breast Cancer Research Program (grants W81XWH-10-1-1023 and W81XWH-13-1-0221 to Z.J.G.), the NIH Common Fund (grant DP2 HD080351-01 to Z.J.G.), the NSF (grant MCB-1330864 to Z.J.G.), the UCSF Program in Breakthrough Biomedical Research, and the UCSF Center for Cellular Construction (grant DBI-1548297 to S.K.B., S.B., and Z.J.G.), an NSF Science and Technology Center. Z.J.G. is a Chan Zuckerberg Biohub Investigator.
Author information
Contributions
A.J.H., J.D.M., L.E.B., A.R., and Z.J.G. designed Quanti.us concepts and features. J.D.M. developed and implemented the website and platform. A.J.H., L.E.B., and D.P.B. developed pre- and post-processing code and analyzed raw Quanti.us data. S.K.B. and S.B. developed and implemented the machine learning analysis. All authors wrote and edited the manuscript.
Ethics declarations
Competing interests
J.D.M. holds an equity interest in Quanti.us LLC. Quanti.us passes payments from users to Amazon Mechanical Turk, which then distributes these payments to workers.
Additional information
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Integrated supplementary information
Supplementary Figure 2 Effects of image complexity and pay on Turker performance.
(a) Turkers were tasked with annotating calibration images containing 2D Gaussian objects with known “ground truth” locations. (b) Cumulative number of annotations contributed over time by turkers for jobs consisting of 1,000 replicate images of the indicated complexity (number of objects per image). Note the sharply reduced annotation rates as jobs near completion. Pay per image was 2¢ (USD). (c) Recall for individual turkers; n = 65, 63, 63, 118, and 158 turkers for image complexities of 18, 33, 60, 110, and 200 objects per image. Marker size indicates the number of images annotated by the corresponding turker (or, for the average-recall line, the average number of images per turker). Note that recall and annotation persistence drop significantly above ~60 objects per image. (d) Time between annotations vs. spatial error, fit at each complexity level by Fitts’ law of human perceptual-motor function (Zhai et al.). (e–g) Similar plots for 1,000-image jobs at the 110-object-per-image complexity level in which turkers were paid the indicated number of cents per image annotated (USD); n = 82, 86, 103, 88, 90, and 79 turkers for pay per image of 1, 2, 3, 4, 5, and 6¢. Zhai, S., Kong, J. & Ren, X. Speed–accuracy tradeoff in Fitts’ law tasks—on the equivalency of actual and nominal pointing precision. Int. J. Hum. Comput. Stud. 61, 823–856 (2004).
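The Fitts’ law fit in (d) relates movement time to an index of difficulty, ID = log2(D/W + 1), built from movement distance D and effective target width W. The following is a minimal numpy sketch of this kind of fit on synthetic data; it is an illustration of the technique, not the authors’ analysis code, and all function and variable names are assumptions.

```python
import numpy as np

def fitts_index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' law difficulty: ID = log2(D / W + 1)
    return np.log2(np.asarray(distance) / np.asarray(width) + 1.0)

def fit_fitts_law(movement_times, distances, widths):
    # Least-squares fit of MT = a + b * ID; returns (a, b)
    ids = fitts_index_of_difficulty(distances, widths)
    b, a = np.polyfit(ids, np.asarray(movement_times), 1)
    return a, b

# Synthetic demo: inter-click distances D (px), effective target widths W
# (px, tied to spatial error), and movement times generated from known
# coefficients a = 0.3 s and b = 0.15 s/bit plus small noise.
rng = np.random.default_rng(0)
D = rng.uniform(50, 400, 200)
W = rng.uniform(5, 20, 200)
mt = 0.3 + 0.15 * fitts_index_of_difficulty(D, W) + rng.normal(0, 0.01, 200)
a_hat, b_hat = fit_fitts_law(mt, D, W)
```

The recovered coefficients should closely match the generating values, confirming the fit behaves as expected before applying it to real inter-annotation timing data.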
Supplementary Figure 3 Dynamic background discrimination in bright-field organoid outlining.
(a) Turkers were tasked with outlining the smooth epithelial layer of a lung explant (images adapted with permission from Varner et al., 2015 (see below), PNAS). Each of 17 images was annotated by 20 turkers. (b) A consensus outline was determined after spatial clustering of outline centroids (see Online Methods). (c) Tissue area estimates from the consensus outline and from outlines collected by a trained expert, relative to those from the conventional segmentation algorithm of Ambühl et al. (2011). Ambühl, M.E., Brepsant, C., Meister, J.-J., Verkhovsky, A.B. & Sbalzarini, I.F. High-resolution cell outline segmentation and tracking from phase-contrast microscopy images. J. Microsc. 245, 161–170 (2011). Varner, V.D., Gleghorn, J.P., Miller, E., Radisky, D.C. & Nelson, C.M. Mechanically patterning the embryonic airway epithelium. PNAS 112, 9230–9235 (2015).
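The consensus-outline step in (b) can be sketched as: cluster outline centroids spatially, then summarize the outlines in the majority cluster. The snippet below is a simplified stand-in using greedy threshold clustering; the actual procedure in the Online Methods may differ, and the radius and all names here are illustrative assumptions.

```python
import numpy as np

def cluster_centroids(centroids, radius=20.0):
    # Greedy clustering: join the first cluster whose running mean is
    # within `radius` of the point, otherwise start a new cluster.
    clusters, centers = [], []
    for i, c in enumerate(centroids):
        for k, m in enumerate(centers):
            if np.linalg.norm(c - m) < radius:
                clusters[k].append(i)
                centers[k] = centroids[clusters[k]].mean(axis=0)
                break
        else:
            clusters.append([i])
            centers.append(np.array(c, dtype=float))
    return clusters

def consensus_area(areas, clusters):
    # Median area over the largest (majority) centroid cluster.
    majority = max(clusters, key=len)
    return float(np.median(np.asarray(areas)[majority]))

# Demo: 18 annotators outline the target tissue, 2 outline a distractor.
rng = np.random.default_rng(1)
cents = np.vstack([rng.normal([100.0, 100.0], 3.0, (18, 2)),
                   rng.normal([300.0, 40.0], 3.0, (2, 2))])
areas = np.concatenate([rng.normal(5000.0, 100.0, 18), [900.0, 950.0]])
clusters = cluster_centroids(cents)
area = consensus_area(areas, clusters)
```

Taking the median over the majority cluster makes the consensus robust both to annotators who outlined the wrong structure and to outliers in outline size.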
Supplementary Figure 4 Turker performance in ant tracking approaches that of an expert when visual context is given.
(a) A single frame from a 10-frame stack showing ants moving on a low-contrast background near the mouth of a terrestrial nest. (b) A maximum-projection difference image, temporally color-coded by frame number and overlaid with expert, individual-turker, and turker-collective annotations for a task in which all 10 frames were provided to annotators through a frame “slider” interface. Each image was annotated by 10 turkers. (c) Annotation precision (left) and recall (right) for the expert, the turker collective, and an automated FIJI segmentation routine consisting of maximum projection of image differences, Gaussian blur, thresholding, and particle analysis. Each metric is plotted against the number of frames provided via the slider.
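The automated baseline in (c) can be approximated in Python with scipy.ndimage. This sketch follows the described pipeline (frame differencing, max projection, Gaussian blur, threshold, particle analysis), but the parameter values are chosen for the synthetic demo rather than taken from the paper’s FIJI routine.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_moving_objects(stack, sigma=1.5, thresh=0.2, min_area=4):
    # 1) absolute frame-to-frame differences
    diff = np.abs(np.diff(np.asarray(stack, dtype=float), axis=0))
    # 2) maximum projection over time
    proj = diff.max(axis=0)
    # 3) Gaussian blur, then 4) fixed threshold
    mask = ndi.gaussian_filter(proj, sigma) > thresh
    # 5) "particle analysis": connected components filtered by area
    labels, n = ndi.label(mask)
    areas = np.bincount(labels.ravel())[1:]
    keep = [k + 1 for k in range(n) if areas[k] >= min_area]
    return ndi.center_of_mass(mask, labels, keep)

# Synthetic 10-frame stack with two moving 3x3 "ants".
frames = np.zeros((10, 64, 64))
for t in range(10):
    frames[t, 10 + t:13 + t, 10:13] = 1.0          # ant 1 drifts down
    frames[t, 39:42, 20 + 2 * t:23 + 2 * t] = 1.0  # ant 2 drifts right
detections = detect_moving_objects(frames)
```

Each moving object leaves a connected trail in the max-projected difference image, so one detection per object is returned as the trail’s center of mass.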
Supplementary Figure 5 Machine learning performance.
(a) Performance of traditional cell detection by Otsu segmentation, and of machine-learning algorithms trained on annotations from individual turkers (a representative group of 5), a collective of 10 turkers, or a trained expert. Note that the “wisdom of the crowd” benefit is mainly accounted for by the increased recall of the algorithm trained on the turker collective’s annotations. (b) Representative performance of the traditional Otsu and machine-learning cell detection methods.
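Otsu segmentation, the traditional baseline in (a), picks the intensity threshold that maximizes between-class variance of the image histogram. A self-contained numpy sketch of the method follows; it illustrates the technique generically and is not the authors’ implementation.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    # Histogram the intensities, then scan all candidate cuts for the
    # one maximizing between-class variance (Otsu's criterion).
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # weight of class 0 at each cut
    mu = np.cumsum(p * centers)       # cumulative mean at each cut
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# Demo: dim background with two bright "cells"; the chosen threshold
# should land between the two intensity modes.
rng = np.random.default_rng(2)
img = rng.normal(0.1, 0.03, (64, 64))
img[10:20, 10:20] = rng.normal(0.8, 0.03, (10, 10))
img[40:50, 30:40] = rng.normal(0.8, 0.03, (10, 10))
t = otsu_threshold(img)
cell_mask = img > t
```

Because Otsu's method relies only on the global histogram, it struggles with touching cells and uneven illumination, which is where the annotation-trained detectors in (a) gain their advantage.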
Supplementary Figure 6 Example Quanti.us instruction sets and raw data output.
(a) Quanti.us instructions given to turkers for the cell/pore discrimination job described in Fig. 1b–f. Note the use of lay language, brevity, and the inclusion of a clear example image with positive and negative reinforcement. (b) Instructions for the gliding assay job in Fig. 2a. (c) Raw data fields returned to the user for the gliding assay in (b). The researcher is provided a spreadsheet of all raw annotations collected in a Quanti.us job, including fields for the de-anonymized worker ID, the selected annotation type, the time at which the turker completed each image, the image filename, Cartesian coordinates organized by object number (for polylines and other outlining tools), and x,y coordinate pairs with the time in ms at which each click was made after the turker started the image task. (d) Instructions for the kidney organoid nucleus outlining job in Fig. 2b. (e) Instructions for the mouse annotation job in Fig. 2c. Note that an example image was not necessary here. (f) Instructions for the mammary epithelial spreading assay in Fig. 2d.
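The spreadsheet format described in (c) lends itself to simple per-image, per-worker grouping before downstream clustering or consensus analysis. Below is a sketch using Python's csv module; the column names are hypothetical placeholders for the fields described above, not Quanti.us's actual headers.

```python
import csv
import io
from collections import defaultdict

# Hypothetical raw export; the real Quanti.us column names may differ.
RAW = """worker_id,annotation_type,image_name,x,y,time_ms
W1,click,frame01.png,102,240,850
W1,click,frame01.png,330,75,1710
W2,click,frame01.png,100,238,920
"""

def load_annotations(text):
    # Group click coordinates (with per-click timing) by (image, worker),
    # preserving the order in which clicks were made.
    grouped = defaultdict(list)
    for row in csv.DictReader(io.StringIO(text)):
        key = (row["image_name"], row["worker_id"])
        grouped[key].append(
            (float(row["x"]), float(row["y"]), int(row["time_ms"])))
    return dict(grouped)

annotations = load_annotations(RAW)
```

Keeping the per-click timestamps alongside the coordinates is what enables analyses like the Fitts' law fits in Supplementary Fig. 2d.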
Supplementary information
Supplementary Text and Figures
Supplementary Figs. 1–6 and Supplementary Notes 1–4
Supplementary Table 1
Data post-processing and analysis listed by main figure experiment
About this article
Cite this article
Hughes, A.J., Mornin, J.D., Biswas, S.K. et al. Quanti.us: a tool for rapid, flexible, crowd-based annotation of images. Nat Methods 15, 587–590 (2018). https://doi.org/10.1038/s41592-018-0069-0
This article is cited by
- Online citizen science with the Zooniverse for analysis of biological volumetric data. Histochemistry and Cell Biology (2023)
- Novel transfer learning schemes based on Siamese networks and synthetic data. Neural Computing and Applications (2023)
- FathomNet: A global image database for enabling artificial intelligence in the ocean. Scientific Reports (2022)
- Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nature Biotechnology (2022)
- Spatial proteomics: a powerful discovery tool for cell biology. Nature Reviews Molecular Cell Biology (2019)