
The London offices of political consultancy Cambridge Analytica. Credit: Chris J. Ratcliffe/Getty

The practices of Cambridge Analytica, a data-analytics firm involved in US President Donald Trump’s 2016 election campaign, have made headlines around the world this month. It’s alleged that the firm received data from millions of Facebook users, gathered without their explicit consent. Media reports suggest that this data hoard was later used to target voters with messages personalized to their personality traits — a strategy known as psychographic marketing — although the firm has denied that it used the Facebook data in its work.

But could these tactics actually have swayed voters? Here, Nature takes a look at the science behind psychographic targeting.

What is psychographic targeting — and how could Facebook be used for it?

Facebook already offers advertisers and campaigners numerous ways to send particular messages to particular audiences. It segments its users by demographic information such as age, gender, education or interest in specific issues. But psychographic marketing — which many firms around the world now claim to do — targets people on the basis of their personality traits.

In 2013, psychologists David Stillwell and Michal Kosinski reported that by examining which posts or pages a user ‘liked’ on Facebook, it was possible to accurately predict sensitive information such as sexual orientation and personality traits1. The academics, then both at the University of Cambridge, UK, gave 58,000 volunteers a test to measure their openness, conscientiousness, extraversion, agreeableness and neuroticism: five aspects of personality that researchers view as consistent and stable across languages and cultures. (This kind of personality test is known as the Big Five scale.) Then they correlated these traits with the volunteers’ Facebook likes. Liking the singer Nicki Minaj, for instance, was strongly correlated with being extroverted. Liking the character Hello Kitty was associated with openness.

The Big Five scale is fairly broad-brush, Kosinski says, but a machine-learning algorithm could pull out correlations between likes, answers to a personality test and other aspects of a person’s digital footprint, to create a fine-grained personality profile.

So any firm that built a model correlating likes with personality could try to send advertisements to Facebook users according to their inferred personalities, and could tailor their adverts to people with particular traits. And once they had built a model, the firm would not need to store any Facebook data.
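The pipeline described above (correlate likes with questionnaire scores, then score anyone from their likes alone) can be sketched with synthetic data. All numbers, page counts and weights below are illustrative assumptions, not figures from the published research:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1,000 volunteers x 50 Facebook pages; entry (i, j)
# is 1 if volunteer i 'liked' page j. Each volunteer also has an
# extraversion score from a Big Five questionnaire.
n_users, n_pages = 1_000, 50
likes = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)

# Simulate a world in which a few pages genuinely correlate with
# extraversion and the rest are noise.
true_w = np.zeros(n_pages)
true_w[:5] = [0.8, 0.6, -0.5, 0.4, -0.3]
extraversion = likes @ true_w + rng.normal(0.0, 0.5, n_users)

# Step 1: fit a linear likes-to-trait model on the volunteers who took
# the questionnaire (here, a simple least-squares fit).
w, *_ = np.linalg.lstsq(likes[:800], extraversion[:800], rcond=None)

# Step 2: score new users from their likes alone. Only the weight
# vector `w` is needed at this point; the raw training data could be
# discarded, which is the point made above.
pred = likes[800:] @ w
r = np.corrcoef(pred, extraversion[800:])[0, 1]
print(f"correlation between predicted and simulated extraversion: {r:.2f}")
```

In this toy world the fitted weights recover the simulated signal well; real predictions from likes are noisier, but the structure of the workflow is the same.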

But does targeting advertisements on the basis of personality even work?

In laboratory settings, there is evidence that consumers respond more favourably to marketing messages that reflect their personality, says Sandra Matz, a computational social scientist at the Columbia Business School in New York City. As for a real-world test, Matz cites a 2017 study2 that she co-authored with Kosinski and Stillwell, in which female Facebook users were shown adverts for a beauty retailer, targeted to the consumer’s presumed extroverted or introverted nature, on the basis of their Facebook likes. People shown ads tailored to match their presumed personality trait bought significantly more goods than those shown mismatched ads.

But these effects were small in absolute terms, points out Brendan Nyhan, a political researcher at Dartmouth College in Hanover, New Hampshire. And what works in consumer purchasing might not apply to voting, he says. “It’s surely possible to leverage personality information for political persuasion in some way, but, as far as I know, such effects are not proven or known to be of a substantively meaningful magnitude,” Nyhan adds. He points to other studies3,4,5 that suggest that political ‘microtargeting’ — sending specific kinds of messages to specific voters — has limited effectiveness.

Dean Eckles, a social scientist at the MIT Sloan School of Management in Cambridge, Massachusetts, who worked at Facebook between 2010 and 2017, says that it is possible to build a picture of someone’s Big Five traits using solely their Facebook data. He thinks that psychographic targeting can increase persuasion compared with no targeting at all. However, differences in Big Five and other commonly studied personality traits do not explain all the differences in how people respond to targeting, he says.

And in the real world, it’s not clear whether personality-based profiling would be better than the myriad other ways to target people that Facebook already provides, says political scientist Timothy Ryan at the University of North Carolina at Chapel Hill. For instance, Eckles says, Facebook offers ad ‘optimization’, using its data to predict when showing a user an ad will result in an action, such as a click. That service already segments users by many more dimensions than do models such as the Big Five.

Did Cambridge Analytica use a personality model based on Facebook likes?

It’s not clear. Its parent company, SCL, did receive a tranche of data on Facebook users from Aleksandr Kogan, another psychologist at the University of Cambridge. In 2013, Kogan created a Facebook app for academic research, which in 2014 became commercial. He formed a company, Global Science Research (GSR), and says that he changed the name of the app and its terms and conditions to make clear it was for commercial use. He recruited around 250,000 new users to answer personality surveys and sign up to his Facebook app.

At the time, Facebook allowed apps to pull in certain data both from its immediate users and from their Facebook friends, including their likes, locations and stated employment history — although in 2014 the firm changed those policies. Kogan received information on some 30 million people and provided their predicted personality scores to SCL, which formed Cambridge Analytica. Facebook says that passing on the data was against its rules.

Cambridge Analytica has said that it uses psychographic marketing (although this month it suspended the man who has been most vocal about these claims — Alexander Nix, its chief executive). But the firm denies that it used Facebook data to help do this. In February, Nix told members of the UK Parliament that Kogan’s research “proved to be fruitless”. And Cambridge Analytica also says that it deleted the Facebook data — although The New York Times says it has seen evidence that the firm still possesses some of the data. (Cambridge Analytica denies that it holds data obtained through Kogan’s company, or any data derived from them).

But the company wouldn’t necessarily need data on Facebook likes to build models of people’s personalities. Advertisers can mine other aspects of people’s digital footprints for correlations with personality, says Kosinski — such as Twitter feeds, browsing histories and phone-call patterns. Research suggests that these data can be predictive of personality, albeit with varying results. US-based company Crystal, for instance, predicts personality profiles by analysing public data, according to its website. VisualDNA, a London-based firm, says on its website it uses psychological profiling, too — built on personality quizzes taken by 40 million people. (Neither company has been implicated in the Cambridge Analytica allegations.) And in 2016, Nix told US broadcaster National Public Radio that Cambridge Analytica’s personality profiling was based on a survey of “hundreds of thousands of Americans”. The firm also claims to use a database that includes demographic information and data on voting histories, TV viewing habits and buying patterns, all of which can be legally purchased in the United States.

Cambridge Analytica could, then, have used Facebook as a delivery tool to show personality-tailored ads to specific subsets of people, selected on the basis of psychographic models built using other data, says Matz. Using Facebook’s Lookalike feature, which finds other users the platform deems similar to a given group, it could even target people it hasn’t profiled. Or it could have targeted ads to subsets of Facebook users without any effective personality profile model, and simply boasted of psychographic targeting. Ryan says he has seen a sample of the advertisements used by Cambridge Analytica. “They did not appear to me to be tailored with the Big Five traits in mind,” he says.

The data set Kogan passed on could be politically useful whether or not it directly informed personality models, says Matthew Hindman, a communications researcher at the George Washington University in Washington DC. Simply knowing so many voters’ Facebook profiles, and being able to make simple guesses about their political partisanship, would be “a big deal — even if [Cambridge Analytica’s] psychometrics were bunk”, he tweeted.

Cambridge Analytica is unlikely ever to reveal its advertising methods. How could we find out whether it was psychographically targeting Facebook users?

Some scientists think it could be possible to reverse-engineer the process by getting the company to reveal what data it has on individuals. Under data-protection laws in Europe, companies must tell individuals who ask what data a firm has on them, where it got those data from and the reasons for the information being processed.

A group of academics and journalists is trying to get its data from Cambridge Analytica, says Paul-Olivier Dehaye, a mathematician previously at the University of Zurich in Switzerland. “The law is built so that this kind of investigation is possible. But the mechanics of it hasn’t been smoothed out,” he says. In his view, giving individuals better control of their data is a long-term solution to the ethical concerns thrown up by psychotargeting methods. Dehaye has founded a company to make requesting data from companies easier.

The UK Information Commissioner’s Office, which is investigating the use of data analytics for political purposes, inspected Cambridge Analytica’s offices in London last week. That investigation might reveal more about the firm’s practices.