Brief Communication

Quanti.us: a tool for rapid, flexible, crowd-based annotation of images

Abstract

We describe Quanti.us, a crowd-based image-annotation platform that provides an accurate alternative to computational algorithms for difficult image-analysis problems. We used Quanti.us for a variety of medium-throughput image-analysis tasks and achieved 10–50× savings in analysis time compared with that required for the same task by a single expert annotator. We show equivalent deep learning performance for Quanti.us-derived and expert-derived annotations, which should allow scalable integration with tailored machine learning algorithms.


Fig. 1: Leveraging the wisdom of crowds for scientific image analysis with Quanti.us.
Fig. 2: Case studies and machine learning integration of Quanti.us.

References

  1. Kim, J. S. et al. Nature 509, 331–336 (2014).

  2. Chen, F. et al. Nat. Methods 13, 679–684 (2016).

  3. Lou, X., Kang, M., Xenopoulos, P., Muñoz-Descalzo, S. & Hadjantonakis, A.-K. Stem Cell Rep. 2, 382–397 (2014).

  4. Ruhnow, F., Zwicker, D. & Diez, S. Biophys. J. 100, 2820–2828 (2011).

  5. Esteva, A. et al. Nature 542, 115–118 (2017).

  6. Goodfellow, I. J. et al. in Proc. Advances in Neural Information Processing Systems 27 (eds. Ghahramani, Z. et al.) 2672–2680 (NIPS, La Jolla, CA, 2014).

  7. Salimans, T. et al. arXiv Preprint at https://arxiv.org/abs/1606.03498 (2016).

  8. Thul, P. J. et al. Science 356, eaal3321 (2017).

  9. Simpson, R., Page, K. R. & De Roure, D. in Proc. 23rd International Conference on World Wide Web 1049–1054 (ACM, New York, 2014).

  10. Sauermann, H. & Franzoni, C. Proc. Natl. Acad. Sci. USA 112, 679–684 (2015).

  11. Hitlin, P. Research in the Crowdsourcing Age, A Case Study (Pew Research Center, Washington, DC, 2016).

  12. Bruggemann, J., Lander, G. C. & Su, A. I. bioRxiv Preprint at https://www.biorxiv.org/content/early/2017/11/15/220145 (2017).

  13. Galton, F. Nature 75, 450–451 (1907).

  14. Ipeirotis, P. G., Provost, F. & Wang, J. in Proc. ACM SIGKDD Workshop on Human Computation 64–67 (ACM, New York, 2010).

  15. Zhai, S., Kong, J. & Ren, X. Int. J. Hum. Comput. Stud. 61, 823–856 (2004).

  16. Ipeirotis, P. G. XRDS 17, 16–21 (2010).

  17. Scharrel, L., Ma, R., Schneider, R., Jülicher, F. & Diez, S. Biophys. J. 107, 365–372 (2014).

  18. Sadanandan, S. K., Ranefall, P., Le Guyader, S. & Wählby, C. Sci. Rep. 7, 7860 (2017).

  19. Xie, W., Noble, J. A. & Zisserman, A. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 6, 283–292 (2016).

  20. Vedaldi, A. & Lenc, K. in Proc. 23rd ACM International Conference on Multimedia 689–692 (ACM, New York, 2015).

  21. Talpalar, A. E. et al. Nature 500, 85–88 (2013).

  22. Marquardt, D. W. J. Soc. Ind. Appl. Math. 11, 431–441 (1963).

  23. Tinevez, J. Y. et al. Methods 115, 80–90 (2017).

  24. Adcock, R. J. Anal. (Lond.) 5, 53 (1878).

  25. Arteta, C., Lempitsky, V., Noble, J. A. & Zisserman, A. Med. Image Anal. 27, 3–16 (2016).

  26. LeCun, Y., Bengio, Y. & Hinton, G. Nature 521, 436–444 (2015).

  27. Vedaldi, A. & Fulkerson, B. in Proc. 18th ACM International Conference on Multimedia 1469–1472 (ACM, New York, 2010).

  28. Glorot, X. & Bengio, Y. in Proc. 13th International Conference on Artificial Intelligence and Statistics (eds. Teh, Y. W. & Titterington, M.) 249–256 (MLR Press/Microtome Publishing, Brookline, MA, 2010).

Acknowledgements

We thank A. Paulson, O. Kiehn, C. Krishnamurthy, C. Nelson, and D. Gordon for contributing microscopy images analyzed in Fig. 1, Fig. 2c, Fig. 2d, Supplementary Fig. 3, and Supplementary Fig. 4, respectively. We thank the instructors and students of the 2014 Marine Biological Laboratory Physiology course at Woods Hole for assistance with microtubule gliding assays and image collection (especially R. Fischer). We recognize L. Bugaj, L. Sanders, C. Nilson, and B. Kawas for critical discussion. This work was funded by the Jane Coffin Childs Memorial Fund (postdoctoral fellowship to A.J.H.), the Department of Defense Breast Cancer Research Program (grants W81XWH-10-1-1023 and W81XWH-13-1-0221 to Z.J.G.), the NIH Common Fund (grant DP2 HD080351-01 to Z.J.G.), the NSF (grant MCB-1330864 to Z.J.G.), and the UCSF Program in Breakthrough Biomedical Research and the UCSF Center for Cellular Construction (an NSF Science and Technology Center; grant DBI-1548297 to S.K.B., S.B., and Z.J.G.). Z.J.G. is a Chan Zuckerberg Biohub Investigator.

Author information

Authors and Affiliations

Authors

Contributions

A.J.H., J.D.M., L.E.B., A.R., and Z.J.G. designed Quanti.us concepts and features. J.D.M. developed and implemented the website and platform. A.J.H., L.E.B., and D.P.B. developed pre- and post-processing code and analyzed raw Quanti.us data. S.K.B. and S.B. developed and implemented the machine learning analysis. All authors wrote and edited the manuscript.

Corresponding author

Correspondence to Zev J. Gartner.

Ethics declarations

Competing interests

J.D.M. holds an equity interest in Quanti.us LLC. Quanti.us passes payments from users to Amazon Mechanical Turk, which then distributes these payments to workers.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Integrated supplementary information

Supplementary Figure 1 Unequal Turker contributions to Quanti.us jobs.

Lorenz curves for jobs described in Fig. 1 (“discrim.”), Supplementary Fig. 2b–d (“vs. # objects”), and Supplementary Fig. 2e–g (“vs. pay (¢)”). The curves plot cumulative annotation numbers contributed by turkers ranked by decreasing productivity along the abscissa.
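For readers who want to reproduce this plot from their own Quanti.us exports, a Lorenz curve of this type can be constructed directly from per-turker annotation counts. The following is a minimal sketch in Python; the counts array is hypothetical example data, not values from the jobs above.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical annotation counts, one entry per turker (not real Quanti.us data).
counts = np.array([412, 230, 160, 95, 60, 41, 30, 22, 15, 9, 5, 3])

# Rank turkers by decreasing productivity, then accumulate their contributions.
ranked = np.sort(counts)[::-1]
cum_fraction = np.cumsum(ranked) / ranked.sum()
worker_fraction = np.arange(1, len(ranked) + 1) / len(ranked)

plt.plot(worker_fraction, cum_fraction, marker="o", label="observed")
plt.plot([0, 1], [0, 1], "k--", label="perfectly equal contributions")
plt.xlabel("Fraction of turkers (most to least productive)")
plt.ylabel("Cumulative fraction of annotations")
plt.legend()
plt.show()
```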

Supplementary Figure 2 Effects of image complexity and pay on Turker performance.

(a) Turkers were tasked with annotating calibration images containing 2D Gaussian objects with known “ground truth” locations. (b) Cumulative number of annotations contributed over time by turkers for jobs consisting of 1,000 replicate images of the indicated complexity (number of objects per image). Note the heavily reduced annotation rates as jobs near completion. Pay per image was 2¢ (USD). (c) Recall for individual turkers; n = 65, 63, 63, 118, and 158 turkers for image complexities of 18, 33, 60, 110, and 200 objects per image. Marker size indicates the number of images annotated by the corresponding turker or, for the average-recall line, the average number of images annotated. Note that recall and annotation persistence drop significantly above ~60 objects per image. (d) Time between annotations vs. spatial error, fit for each complexity level by Fitts’ law of human perceptual-motor function (Zhai et al.). (e–g) Similar plots for 1,000-image jobs at the 110-object-per-image complexity level in which turkers were paid the indicated number of cents per image annotated (USD); n = 82, 86, 103, 88, 90, and 79 turkers for pay per image of 1, 2, 3, 4, 5, and 6¢. Zhai, S., Kong, J. & Ren, X. Speed–accuracy tradeoff in Fitts’ law tasks—on the equivalency of actual and nominal pointing precision. Int. J. Hum. Comput. Stud. 61, 823–856 (2004).
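The Fitts’-law fit in (d) can be approximated with a standard speed–accuracy formulation in which the time per annotation grows with an index of difficulty log2(A/W + 1), where the effective target width W is taken proportional to the spatial error. The sketch below illustrates such a fit with scipy; the amplitude constant A, the example data, and the exact functional form are assumptions for illustration and may differ from the fitting procedure used for the figure.

```python
import numpy as np
from scipy.optimize import curve_fit

def fitts_time(error, a, b, A=50.0):
    """Fitts-style model: time = a + b * log2(A / error + 1).

    A (nominal movement amplitude, in pixels) is a placeholder constant here;
    the published fit may parameterize the law differently.
    """
    return a + b * np.log2(A / error + 1.0)

# Hypothetical per-click data: spatial error (px) and inter-annotation time (s).
error = np.array([2.0, 3.5, 5.0, 8.0, 12.0, 20.0])
time_s = np.array([2.8, 2.4, 2.1, 1.8, 1.5, 1.2])

# Only a and b are fitted (p0 has two entries); A keeps its default value.
(a_fit, b_fit), _ = curve_fit(fitts_time, error, time_s, p0=(1.0, 0.5))
print(f"intercept a = {a_fit:.2f} s, slope b = {b_fit:.2f} s/bit")
```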

Supplementary Figure 3 Dynamic background discrimination in bright-field organoid outlining.

(a) Turkers were tasked with outlining the smooth epithelial layer of a lung explant (images adapted with permission from Varner et al., 2015, PNAS; full citation below). Each of 17 images was annotated by 20 turkers. (b) A consensus outline was determined after spatial clustering of outline centroids (see Online Methods). (c) Tissue area estimates from the consensus outline and from outlines collected by a trained expert, relative to those from the conventional segmentation algorithm of Ambühl et al. (2011). Ambühl, M.E., Brepsant, C., Meister, J.-J., Verkhovsky, A.B. & Sbalzarini, I.F. High-resolution cell outline segmentation and tracking from phase-contrast microscopy images. J. Microsc. 245, 161–170 (2011). Varner, V.D., Gleghorn, J.P., Miller, E., Radisky, D.C. & Nelson, C.M. Mechanically patterning the embryonic airway epithelium. PNAS 112, 9230–9235 (2015).
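The clustering step in (b) can be sketched as follows. This is a minimal illustration that groups redundant turker outlines by the proximity of their centroids (using DBSCAN as a stand-in for the clustering specified in the Online Methods) and returns one representative outline per cluster; the eps radius and the representative-selection rule are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def centroid(outline):
    """Mean x,y of an (N, 2) polygon vertex array (a simple centroid proxy)."""
    return outline.mean(axis=0)

def consensus_outlines(outlines, eps=30.0):
    """Group redundant turker outlines of the same object by centroid proximity.

    outlines : list of (N_i, 2) arrays of x,y vertices, one per annotation.
    eps      : clustering radius in pixels (hypothetical value).
    Returns one representative outline per cluster: the member whose centroid
    lies closest to the cluster's mean centroid. The published consensus
    procedure (Online Methods) may differ.
    """
    centroids = np.array([centroid(o) for o in outlines])
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(centroids)
    consensus = []
    for lab in set(labels) - {-1}:                      # -1 = unclustered noise
        idx = np.where(labels == lab)[0]
        mean_c = centroids[idx].mean(axis=0)
        best = idx[np.argmin(np.linalg.norm(centroids[idx] - mean_c, axis=1))]
        consensus.append(outlines[best])
    return consensus
```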

Supplementary Figure 4 Turker performance in ant tracking approaches that of an expert when visual context is given.

(a) A single frame from a 10-frame stack showing ants moving on a low-contrast background near the mouth of a terrestrial nest. (b) A maximum-projection difference image, temporally color-coded by frame number and overlaid with expert, individual-turker, and turker-collective annotations for a task in which all 10 frames were provided to annotators using a frame “slider” interface. Each image was annotated by 10 turkers. (c) Annotation precision (left) and recall (right) for the expert, the turker collective, and an automated FIJI segmentation routine consisting of maximum projection of image differences, Gaussian blur, thresholding, and particle analysis. Each metric is plotted as a function of the number of frames provided via the slider.
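An equivalent scripted baseline can be assembled outside FIJI. The sketch below mirrors the steps named above (frame differencing, maximum projection, Gaussian blur, thresholding, particle analysis) using scikit-image; the sigma, the choice of Otsu thresholding, and the minimum particle area are placeholder parameters rather than the settings used for the figure.

```python
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops

def detect_moving_ants(stack, sigma=2.0, min_area=20):
    """Approximate the FIJI routine: difference images -> max projection ->
    Gaussian blur -> threshold -> particle analysis.

    stack : (T, H, W) array of registered grayscale frames.
    Returns a list of (row, col) centroids for candidate moving objects.
    sigma and min_area are placeholder parameters.
    """
    diffs = np.abs(np.diff(stack.astype(float), axis=0))  # frame-to-frame change
    projection = diffs.max(axis=0)                        # maximum projection
    blurred = gaussian(projection, sigma=sigma)           # suppress pixel noise
    mask = blurred > threshold_otsu(blurred)              # global threshold
    labeled = label(mask)
    return [r.centroid for r in regionprops(labeled) if r.area >= min_area]
```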

Supplementary Figure 5 Machine learning performance.

(a) Performance of traditional cell detection by Otsu segmentation, and performance of machine learning algorithms trained on annotations from individual turkers (representative group of 5), a collective of 10 turkers, or a trained expert. Note that the “wisdom of the crowd” benefit is mainly accounted for by increases in recall of the algorithm trained on the annotations of the turker collective. (b) Representative performance of traditional Otsu and machine-learning cell detection methods.
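Precision and recall for point-like detections such as these are conventionally computed by matching each detection to at most one ground-truth object within a distance tolerance. The sketch below shows one simple greedy matching scheme; the match radius and the matching rule are illustrative assumptions, not necessarily the criteria used in the paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def precision_recall(detected, truth, max_dist=10.0):
    """Greedy one-to-one matching of detections to ground-truth points.

    detected, truth : (N, 2) and (M, 2) arrays of x,y coordinates.
    max_dist        : hypothetical match radius in pixels.
    """
    if len(detected) == 0 or len(truth) == 0:
        return 0.0, 0.0
    d = cdist(detected, truth)
    matched_truth, tp = set(), 0
    for i in np.argsort(d.min(axis=1)):            # closest detections first
        j = int(np.argmin(d[i]))
        if d[i, j] <= max_dist and j not in matched_truth:
            matched_truth.add(j)
            tp += 1
    return tp / len(detected), tp / len(truth)     # precision, recall
```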

Supplementary Figure 6 Example Quanti.us instruction sets and raw data output.

(a) Quanti.us instructions given to turkers for the cell/pore discrimination job described in Fig. 1b–f. Note the use of lay language, brevity, and inclusion of a clear example image with positive and negative reinforcement. (b) Instructions for the gliding assay job in Fig. 2a. (c) Raw data fields returned to the user for the gliding assay in (b). The researcher is provided a spreadsheet of all raw annotations collected in a Quanti.us job, including fields for the de-anonymized worker ID, the selected annotation type, the time at which each image was completed by a turker, the image filename, Cartesian coordinates organized by object number (for polylines and other outlining tools), and x,y coordinate pairs with the time in ms at which each click was made after the turker started the image task. (d) Instructions for the kidney organoid nucleus outlining job in Fig. 2b. (e) Instructions for the mouse annotation job in Fig. 2c. Note that an example image was not necessary here. (f) Instructions for the mammary epithelial spreading assay in Fig. 2d.
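A minimal example of loading and regrouping such a raw-annotation spreadsheet with pandas is sketched below; the column names and filename are hypothetical stand-ins for the fields described above, not the actual headers of the Quanti.us export.

```python
import pandas as pd

# Hypothetical filename and column names mirroring the fields in the legend.
df = pd.read_csv("quantius_raw_annotations.csv")

# One row per click: worker, annotation tool, image, coordinates, click time (ms).
clicks = df[["worker_id", "annotation_type", "image_filename",
             "object_index", "x", "y", "time_ms"]]

# Group clicks back into per-image, per-worker annotation sets.
for (image, worker), group in clicks.groupby(["image_filename", "worker_id"]):
    coords = group.sort_values("time_ms")[["x", "y"]].to_numpy()
    # ...downstream aggregation (e.g., clustering across workers) goes here.
```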

Supplementary information

Supplementary Text and Figures

Supplementary Figs. 1–6 and Supplementary Notes 1–4

Reporting Summary

Supplementary Table 1

Data post-processing and analysis listed by main figure experiment

About this article

Cite this article

Hughes, A.J., Mornin, J.D., Biswas, S.K. et al. Quanti.us: a tool for rapid, flexible, crowd-based annotation of images. Nat Methods 15, 587–590 (2018). https://doi.org/10.1038/s41592-018-0069-0

