Abstract
While studies show links between smartphone data and affective symptoms, we lack clarity on the temporal scale, specificity (e.g., to depression vs. anxiety), and person-specific (vs. group-level) nature of these associations. We conducted a large-scale (n = 1013) smartphone-based passive sensing study to identify within- and between-person digital markers of depression and anxiety symptoms over time. Participants (74.6% female; M age = 40.9) downloaded the LifeSense app, which facilitated continuous passive data collection (e.g., GPS, app and device use, communication) across 16 weeks. Hierarchical linear regression models tested the within- and between-person associations of 2-week windows of passively sensed data with depression (PHQ-8) or generalized anxiety (GAD-7). We used a shifting window to understand the time scale at which sensed features relate to mental health symptoms, predicting symptoms 2 weeks in the future (distal prediction), 1 week in the future (medial prediction), and 0 weeks in the future (proximal prediction). Spending more time at home relative to one’s average was an early signal of PHQ-8 severity (distal β = 0.219, p = 0.012) and continued to relate to PHQ-8 at medial (β = 0.198, p = 0.022) and proximal (β = 0.183, p = 0.045) windows. In contrast, circadian movement was proximally related to (β = −0.131, p = 0.035) but did not predict (distal β = 0.034, p = 0.577; medial β = −0.089, p = 0.138) PHQ-8. Distinct communication features (i.e., call/text or app-based messaging) related to PHQ-8 and GAD-7. Findings have implications for identifying novel treatment targets, personalizing digital mental health interventions, and enhancing traditional patient-provider interactions. Certain features (e.g., circadian movement) may represent correlates but not true prospective indicators of affective symptoms. 
Conversely, other features, such as home duration, may serve as early signals of intra-individual symptom change, indicating the potential utility of prophylactic intervention (e.g., behavioral activation) in response to person-specific increases in these signals.
Introduction
Technological advances facilitating personal sensing, or passively collected signals from networked smartphone sensors1, stand to address critical gaps in measuring and treating affective symptoms. Features assessed using smartphones could signal novel treatment targets; for example, the daily number of calls and texts made may signal changes in social behavior relevant to depression2. Similarly, personal pronoun use in text messages has been linked with depression and anxiety symptoms3,4,5, and reductions in I-pronoun use track broad improvements in therapy6. Incorporating sensed data into clinical care may also enhance shared decision-making7. For instance, deviations in GPS-location-based features could signal relevant changes to patient depression severity that could trigger a provider notification. Finally, better understanding how personal sensing can be leveraged to reliably signal current or prospective deterioration may address a key question about existing digital mental health interventions8,9, which is how best to optimize the delivery of intervention components so that the right component is received at the right time, while minimizing user burden10,11,12,13.
As a foundational step in realizing this potential, studies have evaluated how sensed features relate to affective symptom severity. Prior work shows that different sensor signals such as the number and type (i.e., incoming or outgoing) of phone calls and text messages relate to affective symptoms14,15. Additional data suggest that the content of text messages predicts mood and anxiety symptoms3,4,5,16. Even mobile phone keystroke patterns have been associated with mood states17. Other smartphone signals such as GPS-location-derived features have demonstrated associations with affective symptoms across many different studies4,14,18,19; however, due to challenges with replication and generalizability, there are calls for these findings to be replicated in larger and more heterogeneous samples19,20.
Additional challenges stem from the dearth of studies on how temporal characteristics impact observed relationships between sensed features and symptoms, including the data window (i.e., the interval over which sensor data are collapsed) and the time lag (i.e., the time between predictor and outcome measurement). Previous studies have used 24-h data windows to predict mental health outcomes lagged by short timeframes such as 1 h or 1 day21. Other studies have used slightly larger data windows to predict mental health outcomes at lags of 1 or 2 weeks3,22,23. The predictive power of different sensor types may be more or less clinically meaningful depending on the data window and time lag used22. For example, a recent study we conducted of text message language features as they related to depression symptom severity demonstrated that a data window of 4 weeks was the optimal aggregation for prediction5. Another study, using social media data, found that a 2-month data window with time lags of between 2 and 4 weeks was optimal for predicting depression severity24. Understanding how the relationships between sensed features and affective symptoms change depending on data windows and time lags is essential to informing the clinical utility of sensed data for mental health.
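The distinction between data window and time lag can be made concrete with a small sketch (shown in Python rather than the R used in this study; the function name and day-indexed input are illustrative, not the study's actual pipeline):

```python
from statistics import mean

def windowed_feature(daily_values, assessment_day, window_days=14, lag_days=7):
    """Mean of a daily sensor feature over a fixed-length window that ends
    lag_days before an assessment. With a 14-day window, lag_days=14 mirrors
    a 2-week (distal) lag, 7 a 1-week (medial) lag, and 0 a 0-week
    (proximal) lag."""
    end = assessment_day - lag_days        # window closes lag_days before the survey
    start = end - window_days              # default: 2-week (14-day) data window
    vals = [v for day, v in daily_values.items() if start <= day < end]
    return mean(vals) if vals else None

# Toy example: 28 study days of an illustrative daily feature (hours at home)
daily = {d: float(d) for d in range(28)}
distal = windowed_feature(daily, assessment_day=28, lag_days=14)   # uses days 0-13
proximal = windowed_feature(daily, assessment_day=28, lag_days=0)  # uses days 14-27
```

Shifting only the lag while holding the window length fixed isolates the effect of prediction distance from the effect of aggregation interval, which is the comparison at issue here.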
Our primary objective for this study was to evaluate smartphone sensor-based markers that prospectively relate to depression and anxiety symptoms. We examined sensed features’ prospective relationships to symptom severity for depression and anxiety, as well as their utility as distal or proximal predictors of affective symptom severity, using a shifting 2-week sensor data window across various time lags to predict future affective symptoms.
Methods
Participants
Participants were recruited in 3 waves, with a total of 1,093 enrolled. Participants in wave 1 (July–September 2019) were recruited from the Center for Behavioral Intervention Technologies (CBITs) Health research registry and ResearchMatch.org, a national health volunteer registry supported by the National Institutes of Health. Participants in wave 2 (February–April 2020) were recruited from the CBITs Health and ResearchMatch.org registries, as well as from Focus Pointe Global, a market research data collection company. Participants in wave 3 (January–April 2021) were recruited from digital advertisements (e.g., posts on Instagram, Facebook, Twitter, craigslist, etc.), the CBITs Health and ResearchMatch.org registries, and Focus Pointe Global.
Inclusion and exclusion criteria for waves 1 and 2 did not differ. We conducted stratified sampling based on baseline PHQ-8 scores such that a minimum of 50% of participants experienced at least moderate depression symptoms (PHQ-8 ≥ 10). In wave 3, all participants were recruited to have at least moderate depression symptoms (PHQ-8 ≥ 10). Across all waves, participants were required to be at least 18 years old, a U.S. resident, able to read English, and own an Android smartphone with an active data and text messaging plan. Participants were excluded if they self-reported a diagnostic history of bipolar disorder, a manic or hypomanic episode, schizophrenia, or another psychotic disorder.
Participants were compensated up to $142 for completion of assessments, as well as bonuses delivered at the end of each assessment week for participants who were running the latest version of the app and had transmitted sensed data within the past 2 days.
Procedure
After providing written informed consent, participants enrolled in the study for 16 weeks. All participants downloaded the LifeSense app25, which automatically collected GPS-based sensor data, app and device use data, and communication data from participants’ smartphones (see Supplementary Table S1 for a list of sensors used and frequency acquired, consistent with Saeb et al., 2015). Participants responded to web-based surveys (e.g., GAD-7)26 through the REDCap platform at baseline and every 3 weeks thereafter (i.e., weeks 1, 4, 7, 10, 13, 16)27,28. Participants also completed PHQ-8 surveys via the LifeSense app at the beginning and end of every third week in the study29. Because of this cadence, the PHQ-8 instructions were modified to ask participants about their symptoms over the past week rather than the past two weeks. All procedures were approved by the Northwestern University Institutional Review Board.
Analytic methods
Multilevel regression models were tested in R using the lmerTest package with maximum likelihood estimation30. Specifically, we evaluated the associations of clustered sensor features aggregated over a 2-week window (see Supplementary Table S2 for details on clustering) with subsequent depression and anxiety symptoms. The 2-week window was selected for three reasons: to permit sufficient density of sensor data, to align with gold-standard assessments of depression and anxiety symptoms that ask about the past 2 weeks26,29, and to be consistent with prior sensing studies4,31,32. The prediction window was shifted such that three different models were tested for each outcome: (1) medial prediction used a 1-week lag (Fig. 1a), (2) distal prediction shifted the window back 1 week for a 2-week lag (Fig. 1b), and (3) proximal prediction shifted the window forward 1 week for a 0-week lag (Fig. 1c). While there was no overlap between the sensor window and symptom reporting for distal or medial prediction, proximal prediction used sensor data from the week immediately before and the week concurrent with symptom reporting (e.g., weeks 3 and 4 of sensor data predicting the week 4 symptom assessment). Sensor predictors were person-mean centered: for each sensor predictor, both a person-mean term and a within-person deviation term were included in the model. Additional model terms included time (week; centered around zero), a random intercept, and the demographic covariates of age (centered), gender, and urbanicity/rurality. See Supplementary Materials for more detail on modeling.
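The person-mean centering step, which separates within- from between-person variation, can be sketched as follows (a simplified illustration in Python rather than the authors' R; the data layout is hypothetical):

```python
from statistics import mean

def within_between(observations):
    """Decompose each person's repeated sensor observations into a
    between-person term (the person's mean) and within-person deviations
    from that mean. In models like those described above, both terms enter
    the regression, so within- and between-person associations are
    estimated separately."""
    decomposed = {}
    for person, values in observations.items():
        m = mean(values)
        decomposed[person] = (m, [v - m for v in values])
    return decomposed

# Toy example: two people, three 2-week windows of a sensed feature each
obs = {"p1": [10.0, 12.0, 14.0], "p2": [2.0, 2.0, 5.0]}
decomp = within_between(obs)   # p1 -> (12.0, [-2.0, 0.0, 2.0])
```

By construction, each person's deviations sum to zero, so the deviation term carries only intra-individual change while the mean term carries stable individual differences.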
Results
Data aggregation and demographics
Data were available from 1013 participants (74.6% female; mean age = 40.9 years [SD = 12.7]), including a total of 4731 PHQ-8 scores (of 5065 possible; 6.59% missing) and 4649 GAD-7 scores (of 5065 possible; 8.21% missing). Table 1 contains complete demographic data.
Primary results
Tables 2 (PHQ-8) and 3 (GAD-7) present results for all within-person and between-person effects of sensor data on symptoms over time; for parsimony, only features with at least one significant relationship to an outcome are described in the text below.
Location features
Spending more time at home relative to one’s own average (i.e., within-person) was associated with increased future PHQ-8 severity across prediction windows (distal β = 0.219, p = 0.012; medial β = 0.198, p = 0.022; proximal β = 0.183, p = 0.045). Within-person time spent at home was not significantly associated with GAD-7 severity across any of the time windows (Table 3). We observed no evidence that between-person effects for time spent at home were related to PHQ-8 or GAD-7 severity. People with greater GPS variability and mobility had less severe next-week PHQ-8 scores (medial β = −0.503, p = 0.046), but this signal was absent for distal (β = −0.464, p = 0.073) and proximal (β = −0.424, p = 0.093) associations (Table 2).
Two other sensed location features were reflective of near- or medial-term PHQ-8 severity but did not predict PHQ-8 severity far in the future. First, people spending time in more frequently visited venues relative to their own average were likely to have lower impending or concurrent PHQ-8 scores (medial β = −0.185, p = 0.003; proximal β = −0.168, p = 0.007); however, going to more frequently visited venues did not prospectively predict PHQ-8 severity in the more distant future (distal β = −0.064, p = 0.308). Second, people who showed more circadian movement (i.e., regularity in 24-h movement patterns) relative to their own average just before and at the time of reporting depression symptoms had less severe PHQ-8 scores than those who showed less circadian movement (proximal β = −0.131, p = 0.035); however, circadian movement did not prospectively predict PHQ-8 severity (distal β = 0.034, p = 0.577; medial β = −0.089, p = 0.138).
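Circadian movement in this literature is derived from spectral analysis of location data (Saeb et al.31 use least-squares spectral analysis of GPS coordinates). A simplified stand-in, shown in Python on a generic hourly series, computes the share of spectral power concentrated at the 24-h frequency; this is an illustrative proxy, not the study's exact feature:

```python
import cmath
import math

def _power_at(series, cycles):
    """Squared DFT magnitude of a mean-centered series at a given number of
    cycles over the whole recording."""
    n = len(series)
    m = sum(series) / n
    coeff = sum((v - m) * cmath.exp(-2j * math.pi * cycles * k / n)
                for k, v in enumerate(series))
    return abs(coeff) ** 2

def circadian_ratio(hourly_series):
    """Fraction of total spectral power at the one-cycle-per-day frequency:
    a rough proxy for the regularity of 24-h movement patterns. Values near
    1 indicate a strongly 24-h-periodic series."""
    n = len(hourly_series)
    day_bin = n // 24                     # bin for one cycle per day
    total = sum(_power_at(hourly_series, c) for c in range(1, n // 2 + 1))
    return _power_at(hourly_series, day_bin) / total

# A week of perfectly 24-h-periodic hourly data concentrates power at the
# daily frequency, so the ratio approaches 1
week = [math.sin(2 * math.pi * t / 24) for t in range(24 * 7)]
```

A series whose rhythm drifts away from a 24-h period spreads its power across other frequencies, driving the ratio toward 0, which is the sense in which lower circadian movement reflects less regular daily routines.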
Communication features
People spending more time on messaging apps relative to their own average reported more severe impending or concurrent PHQ-8 symptoms (proximal β = 0.162, p = 0.015), but this effect was non-significant for distal (β = 0.059, p = 0.385) and medial (β = 0.115, p = 0.083) prediction. While we did not see a significant association between within-person app-based messaging and GAD-7 at any of the time points, people engaging in more app-based messaging at the between-person level were more likely to report higher distal (β = 0.486, p = 0.041) and medial (β = 0.481, p = 0.046) GAD-7 severity; however, the association of between-person app-based messaging and GAD-7 severity was non-significant for proximal prediction (β = 0.466, p = 0.053). Additionally, calling and texting more relative to one’s own average was associated with GAD-7 severity across all prediction windows (distal β = 0.279, p = 0.005; medial β = 0.386, p < 0.001; proximal β = 0.293, p = 0.003). There were no significant associations between PHQ-8 and call/text-based communication at either the within-person or between-person level.
Other phone use features
People who used the launcher more on average had lower PHQ-8 scores across time windows (distal β = −0.596, p = 0.008; medial β = −0.525, p = 0.018; proximal β = −0.653, p = 0.004). When people used the launcher more relative to their own average, they reported lower impending or concurrent PHQ-8 scores (proximal β = −0.161, p = 0.023). Launcher use was not associated with GAD-7 severity at the within- or between-person level. People who on average had more screen-on time tended to have greater distal (β = 0.503, p = 0.016) and proximal (β = 0.541, p = 0.012) PHQ-8 severity; however, this association was non-significant for next-week prediction (medial β = 0.272, p = 0.196).
Demographic effects
Higher PHQ-8 and GAD-7 severity were found for younger people (β: [0.573–1.163], p: [<0.001–0.001]) and women (β: [0.360–0.563], p: [0.001–0.036]). People living in rural areas reported higher GAD-7 severity (β: [0.520–0.532], both p = 0.002), but not PHQ-8 severity (β: [0.307–0.320], p: [0.058–0.068]).
Time effects
There was a significant fixed effect of time, such that people reported decreasing PHQ-8 and GAD-7 severity over the course of the study (β: [−0.107 to −0.183], p: [<0.001 to <0.001]).
Overall variability explained
The models explained a modest amount of overall variability in PHQ-8 (distal R2 = 0.049; medial R2 = 0.048; proximal R2 = 0.053) and GAD-7 (distal R2 = 0.058; medial R2 = 0.056; proximal R2 = 0.057) symptom severity.
Discussion
In the present study, we aimed to identify passively sensed digital markers that relate to future depression and anxiety symptoms at both the within-person and between-person levels, and across multiple time windows. Location features were more strongly linked with depression symptoms, whereas communication features related to both depression and anxiety. Results highlighted the importance of the prediction lag in understanding personally sensed signals of affective symptoms: certain features (e.g., time spent at home) were consistent predictors of symptom severity across more distal and more proximal prediction windows, whereas others (e.g., circadian movement) were only associated with next-week or current symptoms.
Overall, location features—and time spent at home in particular—were more strongly linked with depression symptoms than anxiety symptoms. The most robust predictor of depression symptoms was spending more time at home relative to one’s own average, which signaled that a participant was likely to report increases in depressive symptoms 1–3 weeks later. This aligns with meta-analytic evidence indicating that greater time spent at home is one of the sensed features that most consistently relates to depression14. Broadly, spending more time at home may be reflective of reductions in motivation or hedonic capacity33; if this is the case, the finding that increases in time spent at home relate to future depression symptoms would align with the notion of anhedonia as an endophenotype of depression34.
In contrast to location features, communication features related to both depression and anxiety symptoms, with a dissociation for communication type: messaging apps signaled impending depression, and both messaging apps and calling/texting signaled future anxiety. Social media messaging apps are feature-rich35, such that their usage may reflect a range of different behaviors related to depression (e.g., “doomscrolling”; engaging in social comparison; ruminating; checking to see why others did not respond to a message), and they tend to involve indirect conversations about a shared visual stimulus. Conversely, calling and texting are feature-poor and primarily facilitate direct communication with others35; in the context of anxiety, within-person increases in these forms of communication may signal greater activation or reassurance seeking. In general, there were more consistent associations of communication data with anxiety symptoms than depression symptoms across prediction windows and communication modalities, suggesting that changes in communication—like changes in home duration for depression—may be an especially useful signal for understanding anxiety. While studies have linked changes in calling and texting with depression symptoms in bipolar disorder36,37, the absence of an association with depression in our study aligns with prior research reporting null findings around communication changes in unipolar depression31,38. Continued replication of these null findings may suggest that changes in call- and text-based communication are not a useful proxy for the social withdrawal and decreased motivational processes that characterize depression symptoms39.
By using multilevel models to disaggregate within- and between-person effects over time, we identified differential relationships of sensed features with affective symptoms across time windows that have implications for identifying novel treatment targets, personalizing digital mental health interventions, and enhancing traditional patient-provider interactions12. One of the predominant hypothesized methods for bringing personalized digital mental health interventions to fruition is understanding how personal sensing can be leveraged to reliably signal current or prospective worsening symptoms8,9. Our findings underscore that the sensing context and timing (i.e., prediction lag) are critical factors impacting the utility of sensed features as a marker of affective symptoms. For example, prior studies have shown a broad correlation between circadian movement and depression symptoms31,32. Given that within-person changes in circadian movement occur immediately before and contemporaneously with depression rather than predicting symptoms further in the future, interventions in response to decreased circadian movement may benefit from strategies focused on more immediate or impending depression symptoms. Conversely, in light of the prospective, within-person relationships between time at home and depression severity, developers may consider deploying prophylactic depression-focused content (e.g., behavioral activation) in response to person-specific increases in these signals. Finally, features that are significantly related to symptoms primarily at the between-person level (e.g., launcher use with PHQ-8 or app-based messaging with GAD-7) are unlikely to be helpful signals for individualized intervention or as signals of deterioration.
It is important to consider these implications in the context of the low overall amount of variance explained (approximately 5–6% across the different outcomes and lags), as compared to the larger effect sizes seen in early sensing studies, generally in small samples4,31,32. While we opted to use multilevel models for explainability, future studies may consider machine learning models to optimize variance explained in light of the high dimensionality of sensor data40,41; these models may also provide greater insight into prediction accuracy metrics (e.g., rates of false positives and false negatives) to inform algorithms designed to prospectively predict clinical symptoms. Additionally, although we lagged sensors and symptom assessments, these data are still correlational and should not be interpreted as implying causality. To the best of our knowledge, there has been no research to date that has attempted to change these sensed constructs through targeted interventions, which would provide stronger evidence of potential causality. It will also be important for future studies to vary the sensor data window—which we kept consistent at 2 weeks—along with the lag to determine impacts on predictive power, and to better understand the impact of missing data over time on observed relationships. Further, the declaration of a national emergency due to COVID-19 in March 2020 occurred partway through our second wave of data collection. We did not see differences across waves substantial enough to warrant separate analysis by wave. However, the variability in the environment since the onset of COVID-19 may have tempered some of the associations between certain features (e.g., geographic location) and symptoms due to changing routines. Additional limitations are the differences in delivery mechanism and timeframe of reporting instructions for the GAD-7 (REDCap; past 2 weeks) and PHQ-8 (in-app; past week), which may have influenced responses. 
Finally, given the relative lack of demographic diversity in our sample, it will be important for future studies to test whether these findings generalize across more diverse populations.
Overall, findings from this large-scale mobile sensing study point to location features as important in predicting depression symptoms, and communication features in predicting both depression and anxiety symptoms. The multilevel, longitudinal approach allowed us to identify that features such as home duration were true prospective markers of intraindividual change in depression symptoms, whereas others, such as circadian movement, may be more indicative of impending or concurrent depression symptoms.
Data availability
De-identified self-report data (PHQ-8 and GAD-7) will be made available through the NIMH Data Archive at the conclusion of the study. Passively collected data are not publicly available due to potentially identifying information that could compromise participant privacy.
Code availability
Code for all modeling is available from the authorship team upon request.
References
Mohr, D. C., Shilton, K. & Hotopf, M. Digital phenotyping, behavioral sensing, or personal sensing: names and transparency in the digital age. npj Digit. Med. 3, 1–2 (2020).
Pratap, A. et al. The accuracy of passive phone sensors in predicting daily mood. Depress. Anxiety 36, 72–81 (2019).
Stamatis, C. A. et al. Prospective associations of text‐message‐based sentiment with symptoms of depression, generalized anxiety, and social anxiety. Depress. Anxiety 39, 794–804 (2022).
Meyerhoff, J. et al. Evaluation of changes in depression, anxiety, and social anxiety using smartphone sensor features: Longitudinal cohort study. J. Med. Internet Res. 23, e22844 (2021).
Liu, T. et al. The relationship between text message sentiment and self-reported depression. J. Affect. Disord. https://doi.org/10.1016/j.jad.2021.12.048 (2021).
Nook, E. C., Hull, T. D., Nock, M. K. & Somerville, L. H. Linguistic measures of psychological distance track symptom levels and treatment outcomes in a large set of psychotherapy transcripts. Proc. Natl Acad. Sci. 119, e2114737119 (2022).
Hsin, H., Torous, J. & Roberts, L. An adjuvant role for mobile health in psychiatry. JAMA Psychiatry 73, 103–104 (2016).
Onnela, J. P. & Rauch, S. L. Harnessing smartphone-based digital phenotyping to enhance behavioral and mental health. Neuropsychopharmacology 41, 1691–1696 (2016).
Torous, J. et al. The growing field of digital psychiatry: current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry 20, 318–335 (2021).
Bidargaddi, N., Schrader, G., Klasnja, P., Licinio, J. & Murphy, S. Designing m-Health interventions for precision mental health support. Transl. Psychiatry 10, 222 (2020).
Radhakrishnan, K. et al. The potential of digital phenotyping to advance the contributions of mobile health to self-management science. Nurs. Outlook 68, 548–559 (2020).
Wang, L. & Miller, L. C. Just-in-the-moment adaptive interventions (JITAI): a meta-analytical review. Health Commun. 35, 1531–1544 (2020).
Nahum-Shani, I. et al. Just-in-time adaptive interventions (JITAIs) in mobile health: key components and design principles for ongoing health behavior support. Ann. Behav. Med. 52, 446–462 (2018).
De Angel, V. et al. Digital health tools for the passive monitoring of depression: a systematic review of methods. NPJ Digit. Med. 5, 3 (2022).
Jacobson, N. C., Summers, B. & Wilhelm, S. Digital biomarkers of social anxiety severity: digital phenotyping using passive smartphone sensors. J. Med. Internet Res. 22, e16875 (2020).
Razavi, R., Gharipour, A. & Gharipour, M. Depression screening using mobile phone usage metadata: a machine learning approach. J. Am. Med. Inform. Assoc. 27, 522–530 (2020).
Zulueta, J. et al. Predicting mood disturbance severity with mobile phone keystroke metadata: a biaffect digital phenotyping study. J. Med. Internet Res. 20, e241 (2018).
Mullick, T., Radovic, A., Shaaban, S. & Doryab, A. Predicting depression in adolescents using mobile and wearable sensors: multimodal machine learning–based exploratory study. JMIR Form. Res. 6, e35807 (2022).
Currey, D. & Torous, J. Digital phenotyping correlations in larger mental health samples: analysis and replication. BJPsych Open 8, e106 (2022).
Müller, S. R., Chen, X., Peters, H., Chaintreau, A. & Matz, S. C. Depression predictions from GPS-based mobility do not generalize well to large demographically heterogeneous samples. Sci. Rep. 11, 14007 (2021).
Jacobson, N. C. & Chung, Y. J. Passive sensing of prediction of moment-to-moment depressed mood among undergraduates with clinical levels of depression sample using smartphones. Sensors 20, 3572 (2020).
Sano, A. et al. Identifying objective physiological markers and modifiable behaviors for self-reported stress and mental health status using wearable sensors and mobile phones: observational study. J. Med. Internet Res. 20, e210 (2018).
Opoku Asare, K. et al. Predicting depression from smartphone behavioral markers using machine learning methods, hyperparameter optimization, and feature importance analysis: exploratory study. JMIR Mhealth Uhealth 9, e26540 (2021).
Hu, Q., Li, A., Heng, F., Li, J. & Zhu, T. Predicting depression of social media user on different observation windows. 2015 IEEE/WIC/ACM Int. Conf. Web Intell. Intell. Agent Technol. (WI-IAT) 1, 361–364 (2015).
Audacious Software. Passive Data Kit. Published online. https://passivedatakit.org/ (2018).
Spitzer, R. L., Kroenke, K., Williams, J. B. & Löwe, B. A brief measure for assessing generalized anxiety disorder: the GAD-7. Arch. Internal Med. 166, 1092–1097 (2006).
Harris, P. A. et al. The REDCap consortium: building an international community of software platform partners. J. Biomed. Inform. 95, 103208 (2019).
Harris, P. A. et al. Research electronic data capture (REDCap)–a metadata-driven methodology and workflow process for providing translational research informatics support. J. Biomed. Inform. 42, 377–381 (2009).
Kroenke, K. et al. The PHQ-8 as a measure of current depression in the general population. J. Affect Disord. 114, 163–173 (2009).
Kuznetsova, A., Brockhoff, P. B. & Christensen, R. H. B. lmerTest Package: tests in linear mixed effects models. J. Stat. Soft 82, 1–26 (2017).
Saeb, S. et al. Mobile phone sensor correlates of depressive symptom severity in daily-life behavior: an exploratory study. J. Med. Internet Res. 17, e175 (2015).
Saeb, S., Lattie, E. G., Schueller, S. M., Kording, K. P. & Mohr, D. C. The relationship between mobile phone location sensor data and depressive symptom severity. PeerJ 4, e2537 (2016).
Treadway, M. T. & Zald, D. H. Reconsidering anhedonia in depression: Lessons from translational neuroscience. Neurosci. Biobehav. Rev. 35, 537–555 (2011).
Pizzagalli, D. A. Depression, stress, and anhedonia: toward a synthesis and integrated model. Annu. Rev. Clin. Psychol. 10, 393–423 (2014).
Zhang, R., Bazarova, N. & Reddy, M. Distress disclosure across social media platforms during the COVID-19 pandemic: untangling the effects of platforms, affordances, and audiences. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery. https://doi.org/10.1145/3411764.3445134 (2021).
Beiwinkel, T. et al. Using smartphones to monitor bipolar disorder symptoms: a pilot study. JMIR Mental Health 3, e4560 (2016).
Faurholt‐Jepsen, M. et al. Behavioral activities collected through smartphones and the association with illness activity in bipolar disorder. Int. J. Methods Psychiatr. Res. 25, 309–323 (2016).
Pedrelli, P. et al. Monitoring changes in depression severity using wearable and mobile sensors. Front. Psychiatry 11, 584711 (2020).
Kupferberg, A., Bicks, L. & Hasler, G. Social functioning in major depressive disorder. Neurosci. Biobehav. Rev. 69, 313–332 (2016).
Bishop, C. M. & Nasrabadi, N. M. Pattern Recognition and Machine Learning. Vol. 4 (Springer, 2006). https://link.springer.com/book/9780387310732.
Mohr, D. C., Zhang, M. & Schueller, S. M. Personal sensing: understanding mental health using ubiquitous sensors and machine learning. Annu. Rev. Clin. Psychol. 13, 23–47 (2017).
Acknowledgements
We acknowledge support from the National Institute of Mental Health (NIMH) [Grants: R01MH111610, T32MH115882, R34MH124960, K08MH128640], the National Institute on Alcohol Abuse and Alcoholism (NIAAA) [Grant: 1R01AA028032], and the Intramural Research Program of the National Institutes of Health (NIH), National Institute on Drug Abuse (NIDA).
Author information
Authors and Affiliations
Contributions
C.A.S.: conceptualization, methodology, writing—original draft, writing—review & editing, visualization; J.M.: conceptualization, methodology, writing–original draft, writing—review & editing; Y.M.: formal analysis, writing–original draft, writing—review & editing; Z.C.C.L. and Y.M.C.: formal analysis; Tony Liu: formal analysis, writing—review & editing; C.J.K.: software, data curation; writing—review & editing; Tingting Liu: conceptualization, writing—review & editing; B.L.C.: conceptualization, writing—review & editing; L.H.U.: conceptualization, methodology, writing—review & editing, supervision, funding acquisition; D.C.M.: conceptualization, methodology, writing—review & editing, supervision, funding acquisition.
Corresponding author
Ethics declarations
Competing interests
J.M. has accepted consulting fees from Boehringer Ingelheim. C.A.S. has received salary and equity support from Google and Akili Interactive Labs. D.C.M. has accepted honoraria and consulting fees from Otsuka Pharmaceuticals, Optum Behavioral Health, Centerstone Research Institute, and the One Mind Foundation, royalties from Oxford Press, and has an ownership interest in Adaptive Health, Inc.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Stamatis, C.A., Meyerhoff, J., Meng, Y. et al. Differential temporal utility of passively sensed smartphone features for depression and anxiety symptom prediction: a longitudinal cohort study. npj Mental Health Res 3, 1 (2024). https://doi.org/10.1038/s44184-023-00041-y
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s44184-023-00041-y