Abstract

We describe Quanti.us, a crowd-based image-annotation platform that provides an accurate alternative to computational algorithms for difficult image-analysis problems. We used Quanti.us for a variety of medium-throughput image-analysis tasks and achieved 10–50× savings in analysis time compared with that required for the same task by a single expert annotator. We show equivalent deep learning performance for Quanti.us-derived and expert-derived annotations, which should allow scalable integration with tailored machine learning algorithms.


Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Kim, J. S. et al. Nature 509, 331–336 (2014).

  2. Chen, F. et al. Nat. Methods 13, 679–684 (2016).

  3. Lou, X., Kang, M., Xenopoulos, P., Muñoz-Descalzo, S. & Hadjantonakis, A.-K. Stem Cell Rep. 2, 382–397 (2014).

  4. Ruhnow, F., Zwicker, D. & Diez, S. Biophys. J. 100, 2820–2828 (2011).

  5. Esteva, A. et al. Nature 542, 115–118 (2017).

  6. Goodfellow, I. J. et al. in Proc. Advances in Neural Information Processing Systems 27 (eds. Ghahramani, Z. et al.) 2672–2680 (NIPS, La Jolla, CA, 2014).

  7. Salimans, T. et al. Preprint at https://arxiv.org/abs/1606.03498 (2016).

  8. Thul, P. J. et al. Science 356, eaal3321 (2017).

  9. Simpson, R., Page, K. R. & De Roure, D. in Proc. 23rd International Conference on World Wide Web 1049–1054 (ACM, New York, 2014).

  10. Sauermann, H. & Franzoni, C. Proc. Natl. Acad. Sci. USA 112, 679–684 (2015).

  11. Hitlin, P. Research in the Crowdsourcing Age, A Case Study (Pew Research Center, Washington, DC, 2016).

  12. Bruggemann, J., Lander, G. C. & Su, A. I. Preprint at https://www.biorxiv.org/content/early/2017/11/15/220145 (2017).

  13. Galton, F. Nature 75, 450–451 (1907).

  14. Ipeirotis, P. G., Provost, F. & Wang, J. in Proc. ACM SIGKDD Workshop on Human Computation 64–67 (ACM, New York, 2010).

  15. Zhai, S., Kong, J. & Ren, X. Int. J. Hum. Comput. Stud. 61, 823–856 (2004).

  16. Ipeirotis, P. G. XRDS 17, 16–21 (2010).

  17. Scharrel, L., Ma, R., Schneider, R., Jülicher, F. & Diez, S. Biophys. J. 107, 365–372 (2014).

  18. Sadanandan, S. K., Ranefall, P., Le Guyader, S. & Wählby, C. Sci. Rep. 7, 7860 (2017).

  19. Xie, W., Noble, J. A. & Zisserman, A. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 6, 283–292 (2016).

  20. Vedaldi, A. & Lenc, K. in Proc. 23rd ACM International Conference on Multimedia 689–692 (ACM, New York, 2015).

  21. Talpalar, A. E. et al. Nature 500, 85–88 (2013).

  22. Marquardt, D. W. J. Soc. Ind. Appl. Math. 11, 431–441 (1963).

  23. Tinevez, J. Y. et al. Methods 115, 80–90 (2017).

  24. Adcock, R. J. Anal. (Lond.) 5, 53 (1878).

  25. Arteta, C., Lempitsky, V., Noble, J. A. & Zisserman, A. Med. Image Anal. 27, 3–16 (2016).

  26. LeCun, Y., Bengio, Y. & Hinton, G. Nature 521, 436–444 (2015).

  27. Vedaldi, A. & Fulkerson, B. in Proc. 18th ACM International Conference on Multimedia 1469–1472 (ACM, New York, 2010).

  28. Glorot, X. & Bengio, Y. in Proc. 13th International Conference on Artificial Intelligence and Statistics (eds. Teh, Y. W. & Titterington, M.) 249–256 (MLR Press/Microtome Publishing, Brookline, MA, 2010).


Acknowledgements

We thank A. Paulson, O. Kiehn, C. Krishnamurthy, C. Nelson, and D. Gordon for contributing microscopy images analyzed in Fig. 1, Fig. 2c, Fig. 2d, Supplementary Fig. 3, and Supplementary Fig. 4, respectively. We thank instructors and students of the 2014 Marine Biology Laboratory Physiology course at Woods Hole for assistance with microtubule gliding assays and image collection (especially R. Fischer). We recognize L. Bugaj, L. Sanders, C. Nilson, and B. Kawas for critical discussion. This work was funded by the Jane Coffin Childs Memorial Fund (postdoctoral fellowship to A.J.H.), the Department of Defense Breast Cancer Research Program (grants W81XWH-10-1-1023 and W81XWH-13-1-0221 to Z.J.G.), the NIH Common Fund (grant DP2 HD080351-01 to Z.J.G.), the NSF (grant MCB-1330864 to Z.J.G.), the UCSF Program in Breakthrough Biomedical Research and the UCSF Center for Cellular Construction (grant DBI-1548297 to S.K.B., S.B., and Z.J.G.), and an NSF Science and Technology Center. Z.J.G. is a Chan Zuckerberg Biohub Investigator.

Author information

Author notes

    • Alex J. Hughes

    Present address: Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA

  1. These authors contributed equally: Alex J. Hughes and Joseph D. Mornin.

Affiliations

  1. Department of Pharmaceutical Chemistry, University of California, San Francisco, San Francisco, CA, USA

    • Alex J. Hughes
    •  & Zev J. Gartner
  2. NSF Center for Cellular Construction, University of California, San Francisco, San Francisco, CA, USA

    • Alex J. Hughes
    • , Sujoy K. Biswas
    • , David P. Bauer
    • , Simone Bianco
    •  & Zev J. Gartner
  3. Independent Researcher, Berkeley, CA, USA

    • Joseph D. Mornin
  4. Department of Industrial and Applied Genomics, IBM Accelerated Discovery Laboratory, IBM Almaden Research Center, San Jose, CA, USA

    • Sujoy K. Biswas
    •  & Simone Bianco
  5. Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, USA

    • Lauren E. Beck
    •  & Arjun Raj
  6. Department of Cellular and Molecular Pharmacology, University of California, San Francisco, San Francisco, CA, USA

    • David P. Bauer
  7. Chan Zuckerberg Biohub, San Francisco, CA, USA

    • Zev J. Gartner

Authors

  1. Alex J. Hughes

  2. Joseph D. Mornin

  3. Sujoy K. Biswas

  4. Lauren E. Beck

  5. David P. Bauer

  6. Arjun Raj

  7. Simone Bianco

  8. Zev J. Gartner

Contributions

A.J.H., J.D.M., L.E.B., A.R., and Z.J.G. designed Quanti.us concepts and features. J.D.M. developed and implemented the website and platform. A.J.H., L.E.B., and D.P.B. developed pre- and post-processing code and analyzed raw Quanti.us data. S.K.B. and S.B. developed and implemented the machine learning analysis. All authors wrote and edited the manuscript.

Competing interests

J.D.M. holds an equity interest in Quanti.us LLC. Quanti.us passes payments from users to Amazon Mechanical Turk, which then distributes these payments to workers.

Corresponding author

Correspondence to Zev J. Gartner.

Integrated supplementary information

  1. Supplementary Figure 1 Unequal Turker contributions to Quanti.us jobs.

    Lorenz curves for jobs described in Fig. 1 (“discrim.”), Supplementary Fig. 2b–d (“vs. # objects”), and Supplementary Fig. 2e–g (“vs. pay (¢)”). The curves plot cumulative annotation numbers contributed by turkers ranked by decreasing productivity along the abscissa.

  2. Supplementary Figure 2 Effects of image complexity and pay on Turker performance.

    (a) Turkers were tasked with annotating calibration images containing 2D Gaussian objects with known “ground truth” locations. (b) Cumulative number of annotations contributed over time by turkers for jobs consisting of 1,000 replicate images of the indicated complexity (number of objects per image). Note the heavily reduced annotation rates as jobs near completion. Pay per image was 2¢ (USD). (c) Recall for individual turkers, n = 65, 63, 63, 118, and 158 turkers for image complexities of 18, 33, 60, 110, and 200 objects per image. Marker size indicates the number of images annotated by the corresponding turker, or the average number in the average recall line. Note that recall and annotation persistence drop significantly above ~60 objects per image. (d) Time between annotations vs. spatial error, fit for each complexity level by Fitts’ law of human perceptual-motor function (Zhai et al.). (e–g) Similar plots for 1,000-image jobs at the 110 object-per-image complexity level in which turkers were paid the indicated number of cents per image annotated (USD). n = 82, 86, 103, 88, 90, and 79 turkers for pay per image of 1, 2, 3, 4, 5, and 6¢. Zhai, S., Kong, J. & Ren, X. Speed–accuracy tradeoff in Fitts’ law tasks—on the equivalency of actual and nominal pointing precision. International Journal of Human-Computer Studies 61, 823–856 (2004).

  3. Supplementary Figure 3 Dynamic background discrimination in bright-field organoid outlining.

    (a) Turkers were tasked with outlining the smooth epithelial layer of a lung explant (images adapted with permission from Varner et al., 2015 (see below), PNAS). Each of 17 images was annotated by 20 turkers. (b) A consensus outline was determined after spatial clustering of outline centroids (see Online Methods). (c) Tissue area estimates from the consensus outline and from outlines collected by a trained expert, relative to those from a conventional segmentation algorithm in Ambühl et al. (2011). Ambühl, M.E., Brepsant, C., Meister, J.-J., Verkhovsky, A.B. & Sbalzarini, I.F. High-resolution cell outline segmentation and tracking from phase-contrast microscopy images. J. Microscopy 245, 161–170 (2011). Varner, V.D., Gleghorn, J.P., Miller, E., Radisky, D.C. & Nelson, C.M. Mechanically patterning the embryonic airway epithelium. PNAS 112, 9230–9235 (2015).

  4. Supplementary Figure 4 Turker performance in ant tracking approaches that of an expert when visual context is given.

    (a) A single frame from a 10-frame stack showing ants moving on a low-contrast background near the mouth of a terrestrial nest. (b) A maximum-projection difference image, temporally color coded by frame number and overlaid with expert, individual turker, and turker collective annotations for a task in which all 10 frames were provided to annotators using a frame “slider” interface. Each image was annotated by 10 turkers. (c) Annotation precision (left) and recall (right) for the expert, the turker collective, and an automated FIJI segmentation routine consisting of maximum projection of image differences, Gaussian blur, thresholding, and particle analysis. Each metric is plotted against the number of frames provided via the slider.

  5. Supplementary Figure 5 Machine learning performance.

    (a) Performance of traditional cell detection by Otsu segmentation, and performance of machine learning algorithms trained on annotations from individual turkers (representative group of 5), a collective of 10 turkers, or a trained expert. Note that the “wisdom of the crowd” benefit is mainly accounted for by increases in recall of the algorithm trained on the annotations of the turker collective. (b) Representative performance of traditional Otsu and machine-learning cell detection methods.

  6. Supplementary Figure 6 Example Quanti.us instruction sets and raw data output.

    (a) Quanti.us instructions given to turkers for the cell/pore discrimination job described in Fig. 1b-f. Note the use of lay language, brevity, and inclusion of a clear example image with positive and negative reinforcement. (b) Instructions for the gliding assay job in Fig. 2a. (c) Raw data fields returned to the user for the gliding assay in (b). The researcher is provided a spreadsheet of all raw annotations collected in a Quanti.us job, including fields for the de-anonymized worker ID, the selected annotation type, the time when an image was completed by a turker, the image filename, Cartesian coordinates organized by object number (for polylines and other outlining tools), and x,y coordinate pairs with the time in ms at which each click was made after the turker started the image task. (d) Instructions for the kidney organoid nucleus outlining job in Fig. 2b. (e) Instructions for the mouse annotation job in Fig. 2c. Note that an example image was not necessary here. (f) Instructions for the mammary epithelial spreading assay in Fig. 2d.
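The Lorenz curves in Supplementary Fig. 1 summarize how unevenly annotations are distributed across turkers. A minimal sketch of the underlying computation, assuming only a list of per-turker annotation counts (the counts below are hypothetical):

```python
import numpy as np

def lorenz_curve(counts):
    """Cumulative share of annotations contributed by turkers ranked
    by decreasing productivity (as in Supplementary Fig. 1)."""
    counts = np.sort(np.asarray(counts, dtype=float))[::-1]  # most productive first
    cum = np.cumsum(counts) / counts.sum()
    return np.concatenate(([0.0], cum))  # prepend 0 so the curve starts at the origin

# Hypothetical per-turker annotation counts for one job
curve = lorenz_curve([500, 120, 40, 40, 10])
```

Plotting `curve` against the fraction of turkers (ranked along the abscissa) reproduces the qualitative shape of the figure; a perfectly equal workload would fall on the diagonal.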
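Supplementary Figs. 4 and 5 report annotation precision and recall for point detections. The legends do not specify the matching procedure, so the sketch below assumes one common convention: greedy one-to-one matching of clicks to ground-truth object centers within a pixel radius.

```python
import numpy as np

def precision_recall(pred, truth, radius):
    """Greedily match each predicted click to the nearest unmatched
    ground-truth center; a match within `radius` pixels counts as a
    true positive. Returns (precision, recall)."""
    pred = [tuple(p) for p in pred]
    unmatched = [tuple(t) for t in truth]
    tp = 0
    for p in pred:
        if not unmatched:
            break
        dists = [np.hypot(p[0] - t[0], p[1] - t[1]) for t in unmatched]
        i = int(np.argmin(dists))
        if dists[i] <= radius:
            tp += 1
            unmatched.pop(i)  # each true object can be matched only once
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall
```

With three clicks and two true objects, one spurious click lowers precision to 2/3 while recall stays at 1; the radius plays the same role as the spatial-error tolerance discussed in the main text.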
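Supplementary Fig. 5 uses Otsu segmentation as the traditional cell-detection baseline. The following is a pure-NumPy sketch of standard Otsu threshold selection, not the authors' exact implementation: the threshold is chosen to maximize the between-class variance of the grayscale histogram.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Return the histogram-bin center that maximizes between-class
    variance (Otsu's method) for the pixel intensities in `img`."""
    hist, edges = np.histogram(np.asarray(img).ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()          # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                            # background class weight
    mu = np.cumsum(p * centers)                  # cumulative class mean
    mu_t = mu[-1]                                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.nanargmax(sigma_b)]        # nan where a class is empty
```

On a clearly bimodal image the returned threshold falls between the two intensity clusters, and `img > threshold` yields the foreground mask that a downstream particle-analysis step would label.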

Supplementary information

  1. Supplementary Text and Figures

    Supplementary Figs. 1–6 and Supplementary Notes 1–4

  2. Reporting Summary

  3. Supplementary Table 1

    Data post-processing and analysis listed by main figure experiment

About this article

DOI

https://doi.org/10.1038/s41592-018-0069-0
