One of the most widely used approaches in natural language processing and information retrieval is the so-called bag-of-words model. A common component of such methods is the removal of uninformative words, commonly referred to as stopwords. Currently, most practitioners use manually curated stopword lists. This approach is problematic because it cannot be readily generalized across knowledge domains or languages. As a result of the difficulty in rigorously defining stopwords, there have been few systematic studies on the effect of stopword removal on algorithm performance, which is reflected in the ongoing debate on whether to keep or remove stopwords. Here we address this challenge by formulating an information theoretic framework that automatically identifies uninformative words in a corpus. We show that our framework not only outperforms other stopword heuristics, but also allows for a substantial reduction of document size in applications of topic modelling. Our findings can be readily generalized to other bag-of-words-type approaches beyond language such as in the statistical analysis of transcriptomics, audio or image corpora.
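The framework ranks each word by an information theoretic measure I computed from the word's distribution across the documents of a corpus, and flags low-I words as uninformative. As an illustration only, the sketch below shows one way such an entropy-based score could be computed; the normalization by the maximum entropy log2(D) is an assumption of this sketch, not the published definition, which additionally involves a comparison with a random null model (see the Code Ocean capsule referenced below for the authors' implementation).

```python
# Minimal sketch (not the authors' published implementation): an entropy-based
# score for how unevenly a word is distributed across documents. Words that are
# spread evenly over the corpus receive low scores and are stopword-like.
import numpy as np

def information_content(counts):
    """counts: occurrences of one word in each of the D documents.

    Returns log2(D) - H, where H is the entropy of the word's distribution
    over documents. Evenly spread words give scores near 0; words concentrated
    in a few documents give high scores.
    """
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    if total == 0:
        return 0.0
    p = counts[counts > 0] / total
    entropy = -(p * np.log2(p)).sum()
    return np.log2(len(counts)) - entropy

# Toy example: rank words by the score; stopword-like terms come first.
corpus = {
    "the":  [10, 12, 9, 11],   # spread evenly  -> low score
    "gene": [0, 0, 25, 1],     # concentrated   -> high score
}
ranking = sorted(corpus, key=lambda w: information_content(corpus[w]))
print(ranking)
```

Sorting a vocabulary by such a score places evenly distributed, stopword-like terms at the bottom, which is the behaviour an automatic stopword filter exploits.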
Data availability
The text data are available in the public repository https://github.com/amarallab/stopwords.
Code availability
The code for this Article, along with an accompanying computational environment, is available in the public repository https://github.com/amarallab/stopwords and is executable online as a Code Ocean capsule. Code for the calculation of the information theoretic measure I and for the experiments with topic models can be found at https://doi.org/10.24433/CO.6204149.v1 (ref. 42).
Manning, C. D. & Schütze, H. Foundations of Statistical Natural Language Processing (MIT Press, 1999).
Evans, J. A. & Aceves, P. Machine translation: mining text for social theory. Ann. Rev. Sociol. 42, 21–50 (2016).
Rebholz-Schuhmann, D., Oellrich, A. & Hoehndorf, R. Text-mining solutions for biomedical research: enabling integrative biology. Nat. Rev. Genet. 13, 829–839 (2012).
García, S., Luengo, J. & Herrera, F. Data Preprocessing in Data Mining (Springer, 2014).
Dasu, T. & Johnson, T. Exploratory Data Mining and Data Cleaning (John Wiley & Sons, 2003).
Schoenfeld, B., Giraud-Carrier, C., Poggemann, M., Christensen, J. & Seppi, K. Preprocessor selection for machine learning pipelines. Preprint at http://arXiv.org/abs/1810.09942 (2018).
Blei, D. M. Probabilistic topic models. Commun. ACM 55, 77–84 (2012).
Boyd-Graber, J., Hu, Y. & Mimno, D. Applications of topic models. Found. Trends Inf. Retr. 11, 143–296 (2017).
Luhn, H. P. The automatic creation of literature abstracts. IBM J. Res. Dev. 2, 159–165 (1958).
Rasmussen, E. in Encyclopedia of Database Systems (eds Liu, L. & Özsu, M. T.) (Springer, 2009).
McCallum, A. K. Mallet: a machine learning for language toolkit. http://mallet.cs.umass.edu (2002).
Nothman, J., Qin, H. & Yurchak, R. Stop word lists in free open-source software packages. In Proc. Workshop for NLP Open Source Software (NLP-OSS) (eds Park, E. L. et al.) 7–12 (Association for Computational Linguistics, 2018).
Lo, R. T.-W., He, B. & Ounis, I. Automatically building a stopword list for an information retrieval system. J. Digit. Inf. Manag. 5, 17–24 (2005).
Zou, F., Wang, F. L., Deng, X., Han, S. & Wang, L. S. Automatic construction of Chinese stop word list. In Proc. 5th WSEAS International Conference on Applied Computer Science (ACOS’06) (eds Huang, W. et al.) 1009–1014 (World Scientific and Engineering Academy and Society, 2006).
Salton, G. & Yang, C. S. On the specification of term values in automatic indexing. J. Doc. 29, 351–372 (1973).
Blei, D. M., Ng, A. Y. & Jordan, M. I. Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003).
Wang, C., Paisley, J. & Blei, D. M. Online variational inference for the hierarchical Dirichlet process. In Proc. 14th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research Vol. 15, 752–760 (AISTATS, 2011).
Hoffman, M. D., Blei, D. M. & Bach, F. Online learning for latent Dirichlet allocation. In Advances in Neural Information Processing Systems 23 (NIPS 2010) (eds Lafferty, J. D. et al.) 1–9 (Neural Information Processing Systems Foundation, 2010).
Blei, D. M., Griffiths, T. L. & Jordan, M. I. The nested Chinese restaurant process and Bayesian nonparametric inference of topic hierarchies. J. ACM 57, 1–30 (2010).
Blei, D. M. & McAuliffe, J. D. Supervised topic models. In Advances in Neural Information Processing Systems 20 (NIPS 2007) (eds Platt, J. C. et al.) 121–128 (Neural Information Processing Systems Foundation, 2007).
Achakulvisut, T., Acuna, D. E., Ruangrong, T. & Kording, K. Science Concierge: a fast content-based recommendation system for scientific publications. PLoS ONE 11, e0158423 (2016).
Schofield, A., Magnusson, M. & Mimno, D. Pulling out the stops: rethinking stopword removal for topic models. In Proc. 15th Conference of the European Chapter of the Association for Computational Linguistics (eds Lapata, M. et al.) Vol. 2, 432–436 (Association for Computational Linguistics, 2017).
Montemurro, M. A. & Zanette, D. H. Towards the quantification of the semantic information encoded in written language. Adv. Complex Syst. 13, 135–153 (2010).
Gries, S. T. Dispersions and adjusted frequencies in corpora. Int. J. Corpus Linguist. 13, 403–437 (2008).
Zipf, G. K. Human Behaviour and the Principle of Least Effort (Addison-Wesley, 1949).
Fan, A., Doshi-Velez, F. & Miratrix, L. Prior matters: simple and general methods for evaluating and improving topic quality in topic modeling. Preprint at http://arXiv.org/abs/1701.03227 (2017).
Schofield, A. & Mimno, D. Comparing apples to apple: the effects of stemmers on topic models. Trans. Assoc. Comput. Linguist. 4, 287–300 (2016).
Shi, H., Gerlach, M., Diersen, I., Downey, D. & Amaral, L. A. N. A new evaluation framework for topic modeling algorithms based on synthetic corpora. In Proc. Machine Learning Research Vol. 89 (eds Chaudhuri, K. & Sugiyama, M.) 816–826 (PMLR, 2019).
Peel, L., Larremore, D. B. & Clauset, A. The ground truth about metadata and community detection in networks. Sci. Adv. 3, e1602548 (2017).
Lancichinetti, A. et al. High-reproducibility and high-accuracy method for automated topic classification. Phys. Rev. X 5, 011007 (2015).
Aggarwal, C. C. & Zhai, C. in Mining Text Data (eds Aggarwal, C. C. & Zhai, C.) 77–128 (Springer, 2012).
Uysal, A. K. & Gunal, S. The impact of preprocessing on text classification. Inf. Process. Manag. 50, 104–112 (2014).
Skinnider, M. A., Squair, J. W. & Foster, L. J. Evaluating measures of association for single-cell transcriptomics. Nat. Methods 16, 381–386 (2019).
Bravo González-Blas, C. et al. cisTopic: cis-regulatory topic modeling on single-cell ATAC-seq data. Nat. Methods 16, 397–400 (2019).
Alberts, B. et al. Molecular Biology of the Cell Sixth International Student Edition (W. W. Norton & Co., 2014).
Zheng, C. et al. Landscape of infiltrating T cells in liver cancer revealed by single-cell sequencing. Cell 169, 1342–1356.e16 (2017).
Solé-Boldo, L. et al. Single-cell transcriptomes of the aging human skin reveal loss of fibroblast priming. Preprint at bioRxiv https://doi.org/10.1101/633131 (2019).
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S. & Dean, J. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26 (eds Burges, C. J. C. et al.) 3111–3119 (Curran Associates, 2013).
Pritchard, J. K., Stephens, M. & Donnelly, P. Inference of population structure using multilocus genotype data. Genetics 155, 945–959 (2000).
Broderick, T., Mackey, L., Paisley, J. & Jordan, M. I. Combinatorial clustering and the beta negative binomial process. IEEE Trans. Pattern Anal. Mach. Intell. 37, 290–306 (2015).
Yan, X., Jeub, L. G. S., Flammini, A., Radicchi, F. & Fortunato, S. Weight thresholding on complex networks. Phys. Rev. E 98, 042304 (2018).
Gerlach, M., Shi, H. & Amaral, L. A. N. Stopwords-filtering. Code Ocean https://doi.org/10.24433/CO.6204149.v1 (2019).
L.A.N.A. acknowledges a John and Leslie McQuown Gift to NICO and support from the Department of Defense Army Research Office (grant number W911NF-14-1-0259). M.G. thanks T. Stoeger and Z. Ren for insightful discussion on scRNA-seq.
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Gerlach, M., Shi, H. & Amaral, L.A.N. A universal information theoretic approach to the identification of stopwords. Nat Mach Intell 1, 606–612 (2019). https://doi.org/10.1038/s42256-019-0112-6