Brief Communication | Open

Detecting neurodegenerative disorders from web search signals

npj Digital Medicine, volume 1, Article number: 8 (2018)

Abstract

Neurodegenerative disorders, such as Parkinson’s disease (PD) and Alzheimer’s disease (AD), are important public health problems warranting early detection. We trained machine-learned classifiers on the longitudinal search logs of 31,321,773 search engine users to automatically detect neurodegenerative disorders. Several digital phenotypes with high discriminatory weights for detecting these disorders are identified. Classifier sensitivities for PD detection are 94.2/83.1/42.0/34.6% at false positive rates (FPRs) of 20/10/1/0.1%, respectively. Preliminary analysis shows similar performance for AD detection. Subject to further refinement of accuracy and reproducibility, these findings show the promise of web search digital phenotypes as adjunctive screening tools for neurodegenerative disorders.

Introduction

Neurodegenerative disorders (NDs) are prevalent and a major source of healthcare expenditure.1 NDs progress slowly,2 and their symptoms may be subtle and mistaken for more common conditions.3, 4 Early detection of NDs enables earlier intervention, which can slow their progression. This study examines the use of digital phenotypes5 for detecting NDs, operationalized as patterns of search activity gathered during engagement with web search engines. Methods based on these observational data show promise in offering new pathways for the early detection of brain disease.

Prior studies with large-scale logs of the search activity of millions of people have highlighted opportunities for detection of cancer6, 7 and for disease surveillance.8, 9 This study investigates how analyses of longitudinal log data from search engines might help detect evidence of Parkinson’s disease (PD), a common progressive ND affecting some 7–10 million people worldwide. Dopaminergic deficiency in PD results in symptoms such as tremors and cognitive decline,10 evidence of which may be apparent in search log signals. PD is challenging to diagnose: the current accuracy of clinical diagnosis of probable PD for patients presenting with motor symptoms in primary care settings is around 80%, with limited improvements in the past 25 years, especially at early disease stages.11 Hence, there is a need for a simple scalable test that can be used for screening in the community or at home. This work also explores whether classifiers using search log signals can help with diagnostic challenges in PD, specifically distinguishing early PD from essential tremor (ET).3, 4

This study uses 18 months of deidentified logs of United States search activity from the Microsoft Bing web search engine, comprising millions of English-speaking searchers from September 2015 to February 2017 inclusive. These data are routinely collected to improve search results, as permitted under Bing’s Terms of Service. A range of observational features was computed per searcher over the duration of the logs: (1) Symptom: presence of PD symptom-related query terms (including synonyms) derived from the published literature; (2) Motor: signals related to motor function derived from cursor movements, including speed, direction changes, tremors (defined as horizontal or vertical oscillations in cursor position of up to 20 pixels in each direction), and vertical scrolling. Cursor position data were sampled while the cursor was in motion; (3) Repetition: presence of repeat queries, repeat result clicks, and repeat query–result click pairs; and (4) Risk Factors: presence of risk factors derived from previous work,12,13,14 including age and gender (inferred using proprietary Bing classifiers), and head trauma, toxin exposure, and familial factors based on terminology appearing in query text. For the Motor class, feature values are first computed per query instance and then averaged across all query instances for the searcher. Some features align with criteria used by physicians (e.g., tremors),10,15 while others are more difficult to measure in clinical practice (e.g., memory loss).16
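To make the cursor-tremor feature concrete, the sketch below counts small back-and-forth reversals in a single cursor coordinate, normalized per movement sample. This is a hypothetical simplification written for illustration, not the implementation used in the study; the function name, the reversal criterion, and the normalization are our assumptions.

```python
def oscillation_rate(positions, max_amp=20):
    """Crude proxy for cursor tremor: rate of small direction reversals.

    positions: successive samples of one cursor coordinate (x or y),
    taken while the cursor is in motion.  A reversal is a sign change
    between consecutive deltas where both movements stay within
    max_amp pixels, mirroring the paper's definition of oscillations
    "up to 20 pixels in each direction".
    """
    deltas = [b - a for a, b in zip(positions, positions[1:])]
    reversals = 0
    for d1, d2 in zip(deltas, deltas[1:]):
        if d1 * d2 < 0 and abs(d1) <= max_amp and abs(d2) <= max_amp:
            reversals += 1
    return reversals / max(len(deltas) - 1, 1)
```

A rapidly alternating trace such as `[0, 5, -3, 4, -2, 3]` yields a rate of 1.0, while a smooth monotone drag such as `[0, 10, 20, 30]` yields 0.0; large sweeps beyond `max_amp` are ignored, so ordinary mouse travel does not register as tremor.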

From the full set of logs, searchers whose queries contained first-person statements about a PD diagnosis (e.g., “just diagnosed with parkinsons”) were identified. These experiential diagnostic queries are used as evidence of receiving a PD diagnosis. Cases exhibiting evidence that the diagnostic queries were about others (e.g., a father or spouse) were excluded. Multiple additive regression trees (MART) classifiers17 were trained to detect evidence of a PD diagnosis among all PD symptom searchers. Advantages of MART include model interpretability, rapid training and testing, and robustness to noisy labels and missing values. There were 703 positive cases (searchers who queried for symptoms and issued at least one experiential diagnostic query; 30.8% of experiential diagnostic searchers) and 31,321,070 negative cases (searchers who only issued queries on PD symptoms). The data were used in classifier training as is; the application of sampling methods to correct for class imbalance is left to future work. Since NDs progress slowly2 and the observation window is limited to 18 months, the classification task likely identifies existing PD rather than forecasting a future diagnosis.
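The labeling step above, identifying first-person diagnostic queries while excluding queries about others, can be sketched with simple pattern matching. The patterns below are illustrative assumptions only; the study's actual query filters are not published.

```python
import re

# Hypothetical first-person diagnostic patterns (illustrative only).
FIRST_PERSON = re.compile(
    r"\b(i was|i've been|i am|just)\s+(been\s+)?diagnosed with\s+parkinson",
    re.IGNORECASE,
)

# Evidence that the query concerns someone other than the searcher.
THIRD_PARTY = re.compile(
    r"\b(my|his|her)\s+(father|mother|husband|wife|dad|mom|spouse)\b",
    re.IGNORECASE,
)

def is_experiential_diagnostic(query):
    """True if the query reads as a first-person PD diagnosis report
    and shows no evidence of referring to someone else."""
    return bool(FIRST_PERSON.search(query)) and not THIRD_PARTY.search(query)
```

Under these patterns, “just diagnosed with parkinsons” would be labeled positive, while “my father was just diagnosed with parkinsons” would be excluded as a third-party report, mirroring the exclusion criterion described above.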

Ten-fold cross-validation was used to train and test the classifier, which predicts whether a searcher will input an experiential diagnostic query for PD with strong performance (area under the receiver operating characteristic curve [AUROC] = 0.9357) using 18 months of search log data. AUROC drops to 0.8626 with 12 months of data and to 0.8151 with 6 months. Since false positives can generate unnecessary alarm and additional healthcare utilization in fielded applications (e.g., at population scale in search engines), low false positive rates (FPRs) are desirable. Classifier sensitivities at FPRs of 20/10/1/0.1% are 94.2/83.1/42.0/34.6%, respectively. These results offer evidence that the existence of NDs in searchers is detectable from streams of data generated through search engine use over time. Table 1 lists the observational features with non-zero discriminatory weights in the learned classifier. Features related to tremors, both from search terms (e.g., “hands shaking”) and from mouse cursor movements (e.g., estimated rate of cursor position oscillation), together with repeat queries, repeat search-result clicks, and the inferred age and gender of searchers, had the highest discriminatory weights.
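Sensitivity at a fixed FPR, the operating points reported above, is read off the ROC curve by choosing a score threshold that admits at most the target fraction of negatives. A minimal sketch (function and variable names are ours, not from the study):

```python
def sensitivity_at_fpr(scores_pos, scores_neg, target_fpr):
    """Sensitivity (true positive rate) at a given false positive rate.

    scores_pos / scores_neg: classifier scores for positive and
    negative cases.  The threshold is set so that at most target_fpr
    of negatives score strictly above it.
    """
    neg_sorted = sorted(scores_neg, reverse=True)
    k = int(target_fpr * len(neg_sorted))  # negatives allowed above threshold
    threshold = neg_sorted[k] if k < len(neg_sorted) else float("-inf")
    return sum(s > threshold for s in scores_pos) / len(scores_pos)
```

For example, with positive scores [0.9, 0.8, 0.7, 0.4] and ten negative scores topping out at 0.6, the threshold at FPR = 10% lets exactly one negative through and yields a sensitivity of 0.75. In practice, sweeping `target_fpr` over a grid traces out the full ROC curve.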

Table 1 Features used in PD classifier, ranked by discriminative weight and scored with respect to the top-ranked feature: TimeBetweenRepeatQueries. Features are computed over all queries for each searcher. Features from the Motor class are first computed for each query instance and then averaged across all query instances for that searcher

Tremors have many explanations, including ET, which shares some symptoms with PD. Distinguishing between ET and early PD is important for tremor sufferers.3 Focusing on searchers who queried about tremors (n = 4,262,953), a MART classifier was trained to distinguish PD (n = 309) from ET (n = 307). Figure 1 shows the ROC curve, illustrating strong classifier performance (AUROC = 0.9205) using all features available to the classifier. Features related to scrolling, cursor direction changes, tremor frequencies, and query repetition were important. This is corroborated by ablation studies, in which the largest drop in AUROC (23%, Z = 7.10, p < 0.001)18 occurs when Motor features are excluded. Motor symptoms, including tremor frequencies, are also important in distinguishing ET from PD during clinical examinations.19
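The significance test for the AUROC drop cites the Hanley–McNeil method.18 The sketch below implements their standard-error formula and a Z statistic for comparing two AUROCs; for simplicity it treats the two curves as independent, whereas the ablation comparison in the study involves correlated curves (same test cases), which would add a correlation term to the denominator.

```python
import math

def hanley_mcneil_se(auc, n_pos, n_neg):
    """Standard error of an AUROC estimate (Hanley & McNeil, 1982)."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc * auc / (1 + auc)
    var = (auc * (1 - auc)
           + (n_pos - 1) * (q1 - auc * auc)
           + (n_neg - 1) * (q2 - auc * auc)) / (n_pos * n_neg)
    return math.sqrt(var)

def z_two_aucs(auc1, auc2, n_pos, n_neg):
    """Z statistic for two AUROCs on samples of the same size,
    ignoring the correlation between the curves (a simplification)."""
    se1 = hanley_mcneil_se(auc1, n_pos, n_neg)
    se2 = hanley_mcneil_se(auc2, n_pos, n_neg)
    return (auc1 - auc2) / math.sqrt(se1 * se1 + se2 * se2)
```

With class sizes in the hundreds, as in the PD-versus-ET task (n = 309 and n = 307), the standard error of an AUROC near 0.92 is on the order of 0.01, so a drop of roughly 0.2 in AUROC produces a Z statistic well above conventional significance thresholds.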

Fig. 1

Receiver operating characteristic curve for the task of discriminating between Parkinson’s disease (PD) and essential tremor (ET), using all features and with feature ablations. Starting from the classifier using all features (All), ablations removed features of the Repetition class (All minus Repetition), the Repetition and Motor classes (All minus Repetition and Motor), and the Repetition, Motor, and Risk Factors classes (All minus Repetition, Motor, and Risk Factors). After each class is removed, the classifier is retrained and the AUROC is recomputed. When all three classes are removed, the classifier uses only features from the Symptom class (purple line)

The classifiers learned from search query and motor interaction data show promise for developing new kinds of diagnostic tools for NDs. The periodic application of these methods may support the study of temporal dynamics in NDs for consenting searchers. They can also help discriminate between illnesses with similar symptoms, as shown with a case study of identifying searchers with experiential diagnostic queries for ET versus PD. The classifier leverages evidence unavailable to physicians (e.g., longitudinal query repetition, mouse cursor activity) that could aid in more traditional clinical diagnoses. Application of these classifiers could help screen for patients with higher ND likelihoods. Surfacing their predictions and confidence scores to physicians could offer additional evidence to help physicians discriminate between conditions. Identifying the specific digital phenotypes (e.g., estimated tremor frequencies) related to NDs that carry most weight for each patient may also have diagnostic utility. It is noted that while experiential diagnostic queries provide evidence of ND, definitive ground truth was unavailable in this study. Future work will expand this analysis to other NDs and perform prospective analyses with clinically diagnosed ND patients at different stages of illness to validate the diagnostic and prognostic utility of digital signals. Preliminary analysis shows that the methods in this study may scale to other NDs, such as Alzheimer’s disease (AUROC = 0.9135, classifier sensitivities at FPR = 20/10/1/0.1% are 91.0/81.5/38.8/26.1%, respectively). A recent study of keystroke typing patterns in verified PD patients20 found similar results to those on PD presented herein. The findings of the two studies taken together support the promise of using digital phenotypes for early detection of PD.

Data availability statement

The data that support the findings of this study are available from Microsoft, but restrictions apply to the availability of these data. Data are however available from the authors upon reasonable request and with permission of Microsoft.

Additional information

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. De Lau, L. M. & Breteler, M. M. Epidemiology of Parkinson’s disease. Lancet Neurol. 5, 525–535 (2006).

2. DeKosky, S. T. & Marek, K. Looking backward to move forward: early detection of neurodegenerative disorders. Science 302, 830–834 (2003).

3. Meara, J. O., Bhowmick, B. K. & Hobson, P. E. Accuracy of diagnosis in patients with presumed Parkinson’s disease. Age Ageing 28, 99–102 (1999).

4. Hughes, A. J., Daniel, S. E., Kilford, L. & Lees, A. J. Accuracy of clinical diagnosis of idiopathic Parkinson’s disease: a clinico-pathological study of 100 cases. J. Neurol. Neurosurg. Psychiatry 55, 181–184 (1992).

5. Jain, S. H., Powers, B. W., Hawkins, J. B. & Brownstein, J. S. The digital phenotype. Nat. Biotechnol. 33, 462–463 (2015).

6. White, R. W. & Horvitz, E. Evaluation of the feasibility of screening patients for early signs of lung carcinoma in web search logs. JAMA Oncol. 3, 398–401 (2016).

7. Paparrizos, J., White, R. W. & Horvitz, E. Screening for pancreatic adenocarcinoma using signals from web search logs: feasibility study and results. J. Oncol. Pract. 12, 737–744 (2016).

8. Ginsberg, J. et al. Detecting influenza epidemics using search engine query data. Nature 457, 1012–1014 (2009).

9. Brownstein, J. S., Freifeld, C. C. & Madoff, L. C. Digital disease detection: harnessing the web for public health surveillance. N. Engl. J. Med. 360, 2153–2157 (2009).

10. Jankovic, J. Parkinson’s disease: clinical features and diagnosis. J. Neurol. Neurosurg. Psychiatry 79, 368–376 (2008).

11. Rizzo, G. et al. Accuracy of clinical diagnosis of Parkinson disease: a systematic review and meta-analysis. Neurology 86, 566–576 (2016).

12. Bishop, N. A., Lu, T. & Yankner, B. A. Neural mechanisms of ageing and cognitive decline. Nature 464, 529 (2010).

13. Brown, R. C., Lockwood, A. H. & Sonawane, B. R. Neurodegenerative diseases: an overview of environmental risk factors. Environ. Health Perspect. 113, 1250 (2005).

14. Bertram, L. & Tanzi, R. E. The genetic epidemiology of neurodegenerative disease. J. Clin. Invest. 115, 1449 (2005).

15. Elble, R. J. Diagnostic criteria for essential tremor and differential diagnosis. Neurology 54, S2–S6 (2000).

16. Chaudhuri, K. R., Healy, D. G. & Schapira, A. H. Non-motor symptoms of Parkinson’s disease: diagnosis and management. Lancet Neurol. 5, 235–245 (2006).

17. Friedman, J. H. Greedy function approximation: a gradient boosting machine. Ann. Stat. 29, 1189–1232 (2001).

18. Hanley, J. A. & McNeil, B. J. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 143, 29–36 (1982).

19. Thenganatt, M. A. & Louis, E. D. Distinguishing essential tremor from Parkinson’s disease: bedside tests and laboratory evaluations. Expert. Rev. Neurother. 12, 687–696 (2012).

20. Adams, W. R. High-accuracy detection of early Parkinson’s disease using multiple characteristics of finger movement while typing. PLoS One 12, e0188226 (2017).

21. Rodden, K., Fu, X., Aula, A. & Spiro, I. Eye-mouse coordination patterns on web search results pages. Proc. SIGCHI Ext. Abs. 2997–3002 (ACM, New York, NY, USA, 2008).


Author information

Affiliations

  1. Microsoft, Bellevue, WA, 98004, USA

    • Ryen W. White
  2. Duke University Health System and Duke Institute for Brain Sciences, Durham, NC, 27710, USA

    • P. Murali Doraiswamy
  3. Microsoft, Redmond, WA, 98052, USA

    • Eric Horvitz


Contributions

All authors designed the study and co-authored the manuscript. In addition, R.W.W. mined the logs, trained and tested the machine-learned models, and performed the statistical analysis.

Competing interests

P.M.D. has received grants and/or advisory fees from health and technology companies for other projects and owns stock in several companies whose products are not discussed here. R.W.W. and E.H. are employees of Microsoft Corporation and own stock in the company.

Corresponding author

Correspondence to Ryen W. White.

About this article

DOI: https://doi.org/10.1038/s41746-018-0016-6