There has never been more anxiety about the effects of our love of screens — which now bombard us with social-media updates, news (real and fake), advertising and blue-spectrum light that could disrupt our sleep. Concerns are growing about impacts on mental and physical health, education, relationships, even on politics and democracy. Just last year, the World Health Organization issued new guidelines about limiting children’s screen time; the US Congress investigated the influence of social media on political bias and voting; and California introduced a law (Assembly Bill 272) that allows schools to restrict pupils’ use of smartphones.
All the concerns expressed and actions taken, including by scientists, legislators, medical and public-health professionals and advocacy groups, are based on the assumption that digital media — in particular, social media — have powerful and invariably negative effects on human behaviour. Yet so far, it has been a challenge for researchers to demonstrate empirically what seems obvious experientially. Conversely, it has also been hard for them to demonstrate that such concerns are misplaced.
A major limitation of the thousands of studies of the effects of digital media carried out over the past decade or so is that they do not analyse the types of data that could reveal exactly what people are seeing and doing on their screens — especially in relation to the problems that doctors, legislators and parents worry most about. Most use self-reports of ‘screen time’. These are people’s own estimates of the time they spend engaging with screens or with platforms that are categorized as ‘smartphone’, ‘television’, ‘social media’, ‘political news’ or ‘entertainment media’. Yet today’s media experiences defy such simplistic characterization: the range of content has become too broad, patterns of consumption too fragmented1, information diets too idiosyncratic2, experiences too interactive and devices too mobile.
Policies and advice must be informed by accurate assessments of media use. These should involve moment-by-moment capture of what people are doing and when, and machine analysis of the content on their screens and the order in which it appears.
Technology now allows researchers to record digital life in exquisite detail. And thanks to shifting norms around data sharing, and the accumulation of experience and tools in fields such as genomics, it is becoming easier to collect data while meeting expectations and legal requirements around data security and personal privacy.
We call for a Human Screenome Project — a collective effort to produce and analyse recordings of everything people see and do on their screens.
Screen time
According to a 2019 systematic review and meta-analysis3, over the past 12 years, 226 studies have examined how media use is related to psychological well-being. These studies consider mental-health problems such as anxiety, depression and thoughts of suicide, as well as degrees of loneliness, life satisfaction and social integration.
The meta-analysis found almost no systematic relationship between people’s levels of exposure to digital media and their well-being. But almost all of these 226 studies used responses to interviews or questionnaires about how long people had spent on social media, say, the previous day.
The expectation is that if someone reports being on Facebook a lot, then somewhere among all those hours of screen time are the ingredients that influence well-being, for better or worse. But ‘time spent on Facebook’ could involve finding out what your friends are doing, attending a business meeting, shopping, fundraising, reading a news article, bullying, even stalking someone. These are vastly different activities that are likely to have very different effects on a person’s health and behaviour.
Another problem is that people are unlikely to recollect exactly when they did what4,5. Recent studies that compared survey responses with computer logs of behaviour indicate that people both under- and over-report media exposure — often by as much as several hours per day6–8. In today’s complex media environment, survey questions about the past month or even the past day might be almost useless. How many times did you look at your phone yesterday?
The US National Institutes of Health (NIH) is currently spending US$300 million on a vast neuroimaging and child-development study, eventually involving more than 10,000 children aged 9 and 10. Part of this investigates whether media use influences brain and cognitive development. To indicate screen use, participants simply pick from a list of five standard time ranges, giving separate answers for each media category and for weekdays and weekends. (The first report about media use from this study, published last year, showed little or no relationship between media exposure and brain characteristics or cognitive performance in computer-based tasks9.)
Digital life
Instead, researchers need to observe in exquisite detail all the media that people engage with, the platforms they use and the content they see and create. How do they switch between platforms and between content within those? How do the moments of engagement with various types of media interact and evolve? In other words, academics need a multidimensional map of digital life.
To illustrate, people tend to use their laptops and smartphones in bursts of, on average, 10–20 seconds10. A session begins when the screen lights up and ends when it goes dark, and might last less than a second if it entails checking the time. Or it could start with a person responding to their friend’s post on Facebook, and end an hour later when they click on a link to read an article about politics. Metrics that quantify the transitions people make between media segments within a session, and between media and the rest of life, would provide more temporally refined representations of actual use patterns.
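Session durations and between-session gaps of this kind can be derived from a simple screen on/off event log. The sketch below is purely illustrative and assumes a hypothetical log format of (state, timestamp) pairs; it is not part of any existing tool.

```python
from datetime import datetime

def session_metrics(events):
    """Compute session durations and between-session gaps (in seconds)
    from a hypothetical log of ('on'|'off', datetime) screen events."""
    sessions, gaps = [], []
    last_on = last_off = None
    for state, t in events:
        if state == "on":
            if last_off is not None:
                gaps.append((t - last_off).total_seconds())
            last_on = t
        elif state == "off" and last_on is not None:
            sessions.append((t - last_on).total_seconds())
            last_off = t
    return sessions, gaps

# Example: a 2-second time check, then a 60-second session 5 minutes later.
log = [
    ("on",  datetime(2020, 1, 1, 9, 0, 0)),
    ("off", datetime(2020, 1, 1, 9, 0, 2)),
    ("on",  datetime(2020, 1, 1, 9, 5, 0)),
    ("off", datetime(2020, 1, 1, 9, 6, 0)),
]
durations, gaps = session_metrics(log)
# durations -> [2.0, 60.0]; gaps -> [298.0]
```

Even this minimal representation distinguishes a glance at the clock from an hour-long browsing session, which a single daily ‘screen time’ total cannot.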
Measures of media use must also take account of the scattering of content. Today’s devices allow digital content that used to be experienced as a whole (such as a film, news story or personal conversation) to be atomized, and the pieces viewed across several sessions, hours or days. We need measures that separate media use into content categories (political news, relationships, health information, work productivity and so on) — or, even better, weave dissimilar content into sequences that might not make sense to others but are meaningful for the user.
To try to capture more of the complexity, some researchers have begun to use logging software. This was developed predominantly to provide marketers with information on what websites people are viewing, where people are located, or the time they spend using various applications. Although these data can provide more-detailed and -accurate pictures than self-reports of total screen time, they don’t reveal exactly what people are seeing and doing at any given moment.
A better way
To record the moment-by-moment changes on a person’s screen2,11, we have built a platform called Screenomics. The software records, encrypts and transmits screenshots automatically and unobtrusively every 5 seconds, whenever a device is turned on (see go.nature.com/2fsy2j2). When it is deployed on several devices at once, the screenshots from each one are synced in time.
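The capture cadence described above can be sketched as a loop that grabs, encrypts and stores a timestamped frame every 5 seconds. This is only an illustration of the scheduling and timestamping logic, not the Screenomics implementation: `grab_screen`, `encrypt` and `store` are injected placeholders standing in for platform-specific code.

```python
import time

CAPTURE_INTERVAL = 5.0  # seconds, matching the cadence described above

def record_screenome(grab_screen, encrypt, store, clock=time.time,
                     sleep=time.sleep, n_frames=None):
    """Capture-encrypt-store loop; each frame is timestamped so that
    recordings from several devices can later be synced in time."""
    captured = 0
    next_due = clock()
    while n_frames is None or captured < n_frames:
        frame = grab_screen()
        store(next_due, encrypt(frame))
        captured += 1
        next_due += CAPTURE_INTERVAL
        delay = next_due - clock()
        if delay > 0:
            sleep(delay)
    return captured

# Dry run with a simulated clock (no real screen access, no real delays).
frames = []
t = {"now": 0.0}
record_screenome(
    grab_screen=lambda: b"pixels",
    encrypt=lambda b: b[::-1],  # placeholder, not real encryption
    store=lambda ts, blob: frames.append((ts, blob)),
    clock=lambda: t["now"],
    sleep=lambda d: t.__setitem__("now", t["now"] + d),
    n_frames=3,
)
# frames -> captures at simulated times 0.0, 5.0 and 10.0
```

Injecting the clock and sleep functions keeps the scheduling logic testable without touching a real screen, and real encryption and secure transmission would replace the placeholders.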
This approach differs from other attempts to track human–computer interactions — for instance, through the use of smartwatches and fitness trackers, or diaries. It is more accurate, it follows use across platforms, and it samples more frequently. In fact, we are working on software that makes recordings every second.
We have now collected more than 30 million screenshots from more than 600 people; each person’s time-stamped record is what we call a ‘screenome’. Even just two of these screenomes reveal what can be learnt from a fine-grained look at media use (see ‘Under the microscope’).
This higher-resolution insight into media use could help answer long-held questions and lead to new ones. It might turn out, for instance, that levels of well-being are related to how fragmented people’s use of media is, or the content that they engage with. Differences in brain structure might be related to how quickly people move through cycles of production and consumption of content. Differences in performance in cognitive tasks might be related to how much of a person’s multitasking involves switching between content (say, from politics to health) and applications (social media to games), and how long they spend on each task before switching.
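One such fragmentation measure can be made concrete. Given one content label per screenshot, the output of a hypothetical classifier, the sketch below counts switches between content categories and the average dwell time per run; the function and category names are illustrative assumptions.

```python
from itertools import groupby

def switching_profile(labels, interval=5.0):
    """Summarize content switching in a screenome. 'labels' holds one
    content category per screenshot (hypothetical classifier output);
    'interval' is the seconds between consecutive screenshots."""
    runs = [(category, sum(1 for _ in group))
            for category, group in groupby(labels)]
    switches = len(runs) - 1 if runs else 0
    mean_dwell = interval * sum(n for _, n in runs) / len(runs) if runs else 0.0
    return {"switches": switches, "mean_dwell_seconds": mean_dwell}

seq = ["news", "news", "social", "social", "social", "news", "game"]
profile = switching_profile(seq)
# profile -> {'switches': 3, 'mean_dwell_seconds': 8.75}
```

Comparing such profiles across people, or within one person over time, is one way the relationships suggested above could be tested.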
The Human Screenome Project
So, how can we do better? What’s needed is a collective effort to record and analyse everything people see and do on their screens, the order in which that seeing and doing occurs, and the associated metadata that are available from the software and sensors built into digital devices (for instance, on time of day, location, even keystroke velocity).
In any one screenome, screenshots are the fundamental unit of media use. But the particular pieces or features of the screenome that will be most valuable will depend on the question posed — as is true for other ‘omes’. If the concern is possible addiction to mobile devices, then arousal responses (detected by a change in heart rate, say) associated with the first screen experienced during a session might be important to measure. If the concern is the extent to which social relationships dictate how political news is evaluated, then the screenshots that exist between ‘social’ and ‘political’ fragments in the screenome sequence might be the crucial data to analyse. (News items flagged by a close friend might be perceived as more trustworthy than the same news obtained independently, for example.)
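In this framing, pulling out the screenshots that sit between a ‘social’ fragment and the next ‘political’ fragment is a simple scan over the label sequence. The sketch below is an illustrative assumption about how such fragments might be located, not an established analysis method.

```python
def bridging_segments(labels, src="social", dst="political"):
    """Return half-open index spans [start, end) of screenshots that
    lie between a 'src' fragment and the next 'dst' fragment
    (category names are hypothetical)."""
    spans = []
    i = 0
    while i < len(labels):
        if labels[i] == src:
            while i < len(labels) and labels[i] == src:
                i += 1  # skip to the end of this src run
            start = i
            while i < len(labels) and labels[i] not in (src, dst):
                i += 1
            if i < len(labels) and labels[i] == dst:
                spans.append((start, i))
        else:
            i += 1
    return spans

seq = ["social", "mail", "shopping", "political", "game"]
spans = bridging_segments(seq)
# spans -> [(1, 3)]: 'mail' and 'shopping' bridge social and political use
```

The screenshots inside each span are exactly the data one would inspect to ask whether, say, news reached via a friend is treated differently from news found independently.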
How can researchers get access to such high-resolution data? And how can they extract meaning from data sets comprising millions of screenshots?
One option is for investigators to collaborate with companies such as Google, Facebook, Amazon, Apple and Microsoft, which own the data and have already developed sophisticated ways to monitor people’s digital lives, at least in certain domains. The Social Science One programme, established in 2018 at Harvard University in Cambridge, Massachusetts, involves academics partnering with companies for exactly this purpose12. Researchers can request to use certain anonymized Facebook data to study social media and democracy, for example.
Largely because of fears about data leaks or study findings that might adversely affect business, such collaborations can require compromises in how research questions are defined and which data are made available, and involve lengthy and legally cumbersome administration. And ultimately, there is nothing to compel companies to share data relevant to academic research.
To explore more freely, academics need to collect the data themselves. The same is true if they are to tackle questions that need answers within days — say, to better understand the effects of a terrorist attack, political scandal or financial catastrophe.
Thankfully, Screenomics and similar platforms are making this possible.
In our experience, people are willing to share their data with academics. The harder problem is that collecting screenomics data rightly raises concerns about privacy and surveillance. Through measures such as encryption, secure storage and de-identification, it is possible to collect screenomes with due attention to personal privacy. (All our project proposals are vetted by university institutional review boards, charged with protecting human participants.) Certainly, social scientists can learn a lot from best practice in the protection and sharing of electronic medical records13 and genomic data.
Screenomics data should be sifted using a gamut of approaches — from deep-dive qualitative analyses to algorithms that mine and classify patterns and structures. Given how quickly people’s screens change, studies should focus on the variation in an individual’s use of media over time as much as on differences between individuals and groups. Ultimately, researchers will be able to investigate moment-by-moment influences on physiological and psychological states, the sociological dynamics of interpersonal and group relations over days and weeks, and even cultural and historical changes that accrue over months and years.
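The distinction between within-person and between-person variation can be made concrete with a toy variance decomposition over hypothetical daily screen-use totals; the data and function name below are invented for illustration.

```python
from statistics import mean, pvariance

def variance_decomposition(minutes_by_person):
    """Split variation in daily screen-use minutes into a within-person
    (day-to-day) component and a between-person component."""
    person_means = [mean(days) for days in minutes_by_person.values()]
    within = mean(pvariance(days) for days in minutes_by_person.values())
    between = pvariance(person_means)
    return within, between

# Hypothetical daily totals (minutes) for three people over four days.
data = {
    "p1": [100, 110, 90, 100],
    "p2": [300, 310, 290, 300],
    "p3": [200, 180, 220, 200],
}
within, between = variance_decomposition(data)
# Here people differ from each other far more than any one person
# varies from day to day (between >> within).
```

A study design focused only on between-person differences would miss the day-to-day dynamics that screenomes make visible.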
Some might argue that screenomics data are so fine-grained that they invite researchers to focus on the minutiae rather than the big picture. We would counter that today’s digital technology is all about diffused shards of experience. Moreover, the approach we propose makes it possible to zoom in and out, to investigate how the smallest pieces of the screenome relate to the whole. Others might argue that even with this better microscope, we will not find anything significant. But if relationships between the use of media and people’s thoughts, feelings and behaviours continue to be weak or non-existent, at least we would have greater confidence that current concerns are overblown.
The approach we propose is complex, but no more so than the assessment of genetic predictors of mental and physical states and behaviours. Many years and billions of US dollars have been invested in other ‘omics’ projects. In genomics, as in neuroscience, planetary science and particle physics, governments and private funders have stepped up to help researchers gather the right data, and to ensure that those data are accessible to investigators globally. Now that so much of our lives play out on our screens, that strategy could prove just as valuable for the study of media.