Digital Veil · Case #9913
Evidence
Cambridge Analytica harvested data from 87 million Facebook users through a personality quiz app developed by researcher Aleksandr Kogan. Only 270,000 users installed the app — it collected data on all their Facebook friends without consent. The firm worked on over 200 political campaigns across multiple countries, including the 2016 Trump campaign and Brexit referendum. Facebook discovered the breach in 2015 but did not disclose it publicly until March 2018. The Federal Trade Commission fined Facebook $5 billion in 2019 — the largest penalty ever imposed on a company for violating consumer privacy. Christopher Wylie disclosed the operation to The Guardian and The New York Times in March 2018 after leaving the company in 2014. Cambridge Analytica filed for bankruptcy in May 2018, two months after the scandal broke. The UK Information Commissioner's Office fined Facebook £500,000 — the maximum possible under pre-GDPR law.
Digital Veil · Part 13 of 17 · Case #9913

Cambridge Analytica Harvested Personal Data From 87 Million Facebook Users Without Their Consent and Used It to Build Psychographic Profiles for Political Targeting. Christopher Wylie Disclosed It in 2018. Facebook Paid $5 Billion in Fines.

In 2018, whistleblower Christopher Wylie revealed that Cambridge Analytica had harvested personal data from 87 million Facebook users without their knowledge or consent. The firm used this data to construct psychographic profiles that categorized voters by personality traits and targeted them with customized political messages. The scandal prompted the largest fine in FTC history, catalyzed GDPR enforcement in Europe, and exposed the vulnerability of social media platforms to mass data exploitation. This is the documented architecture of how political advertising became psychological profiling.

87M
Facebook users whose data was harvested

$5B
FTC fine imposed on Facebook

270K
Users who actually installed the app

200+
Political campaigns serviced worldwide

The Architecture of Mass Data Harvesting

On March 17, 2018, The Guardian and The New York Times simultaneously published investigations revealing that a British political consulting firm called Cambridge Analytica had harvested personal data from approximately 87 million Facebook users without their knowledge or consent. The data collection occurred between 2013 and 2015 through a personality quiz app that exploited Facebook's platform permissions to access not only information from users who installed the app, but also data from all their Facebook friends. Only about 270,000 people had actually installed the app.

The disclosures came from Christopher Wylie, a former Cambridge Analytica research director who had helped build the company's data operation. Wylie told journalists that Cambridge Analytica had created a system to harvest Facebook data at scale and use it to build psychographic profiles — personality models categorizing voters by psychological traits — that could be used to target them with customized political advertising. He described the operation as building "Steve Bannon's psychological warfare mindfuck tool."

87 Million
Facebook users whose data was harvested. The data included profile information, locations, likes, and in many cases private messages — all collected without explicit consent through a third-party app.

Cambridge Analytica was founded in 2013 as the American subsidiary of SCL Group, a British defense contractor specializing in psychological operations and election management. The firm was capitalized with approximately $15 million from Robert Mercer, a hedge fund billionaire and major Republican donor, and chaired by Steve Bannon, who would later become chief executive of Donald Trump's presidential campaign and White House chief strategist. The company positioned itself as bringing cutting-edge data science to political campaigning, claiming it could identify persuadable voters and influence their behavior through precision messaging.

The Mechanism: How Facebook's Platform Enabled Mass Collection

The technical architecture that enabled Cambridge Analytica's data harvest was rooted in Facebook's platform policies from 2007 to 2014. During this period, third-party apps could request extensive permissions from users, including the ability to access information about the user's Facebook friends. When a user granted these permissions to an app, the app could collect data not only from that user's profile but from the profiles of everyone in their friend network — typically hundreds of people per user.

In 2013, Aleksandr Kogan, a psychology researcher at Cambridge University, was approached by Cambridge Analytica to develop a data collection method. Kogan created an app called "thisisyourdigitallife," which was marketed as a personality quiz for academic research. Users were paid small amounts — typically $2 to $5 — to take the quiz and grant the app permission to access their Facebook profile data. The app collected standard profile information: name, location, birthday, likes, and posts. Critically, through Facebook's friend permission feature, the app also collected the same data from all Facebook friends of users who installed it.

270,000
Users who actually installed the app. Through Facebook's friend permissions, this relatively small number of direct users enabled data collection from 87 million people — an amplification factor of more than 300 to 1.

Kogan transferred the harvested data to Cambridge Analytica through his company, Global Science Research (GSR). This transfer violated Facebook's terms of service, which prohibited selling or transferring user data to third parties. Facebook's enforcement of this policy, however, was minimal. The company did not conduct audits to verify compliance and relied primarily on self-certification from developers.

The scale of the harvesting was facilitated by the network effects inherent to social media. Each user who installed the app had an average of several hundred Facebook friends. Because of overlapping friend networks, the 270,000 app users provided pathways to data on tens of millions of unique individuals. Most of these 87 million people never knew their data had been collected, never consented to its use, and had no relationship with Cambridge Analytica or Kogan's research.
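The amplification described above can be sketched with a simple overlap model. Only the 270,000 installer count comes from the reporting; the population size and average friend count below are illustrative assumptions, and real friend networks are clustered rather than uniform, so this is an order-of-magnitude sketch, not a reconstruction of the actual harvest.

```python
# Illustrative overlap model of friend-permission amplification.
# Assumptions (not from the reporting): each installer exposes f friends,
# modeled as uniform random draws from a population of N users.
# P(a given user is exposed by at least one installer) = 1 - (1 - f/N)**m,
# so E[unique profiles reached] = N * (1 - (1 - f/N)**m).

N = 200_000_000   # assumed addressable user population (illustrative)
m = 270_000       # app installers (reported figure)
f = 400           # assumed average friends exposed per installer

expected_unique = N * (1 - (1 - f / N) ** m)
amplification = expected_unique / m

print(f"expected unique profiles: {expected_unique:,.0f}")
print(f"amplification factor: ~{amplification:.0f}x")
```

Under these assumed parameters the 270,000 installers reach roughly 80–90 million unique profiles — the same order of magnitude as the reported 87 million, and an amplification factor of about 300 to 1.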

Psychographic Profiling: The Claimed Science

Cambridge Analytica's core product was psychographic profiling — the use of data to build personality models of individuals that could predict their behavior and inform how to persuade them. The firm based its approach on the OCEAN personality model (also called the Five Factor Model), a well-established framework in academic psychology that measures personality across five dimensions: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism.

The theory was that by analyzing someone's Facebook activity — the pages they liked, the content they shared, their interactions — algorithms could infer their personality type. Cambridge Analytica claimed it could determine personality profiles from as few as 10 Facebook likes, and that these profiles could predict voting behavior, receptivity to different message frames, and likelihood of being persuaded. Armed with these profiles, political campaigns could microtarget voters with customized content designed to resonate with their specific psychological characteristics.
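At its simplest, the inference step described here — mapping like patterns to trait scores — is a regression problem. The sketch below is a generic illustration of that technique on synthetic data, not Cambridge Analytica's actual pipeline; every number and variable in it is invented.

```python
import numpy as np

# Generic sketch of trait inference from page likes (synthetic data).
# Rows = users, columns = pages; X[i, j] = 1 if user i liked page j.
# y = a self-reported trait score (e.g. Openness) from quiz takers.
# Ridge regression learns a likes-to-trait mapping from the quiz sample,
# which can then score users who never took the quiz.

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 50
X = rng.integers(0, 2, size=(n_users, n_pages)).astype(float)
true_w = rng.normal(size=n_pages)                      # hidden weights
y = X @ true_w + rng.normal(scale=2.0, size=n_users)   # noisy trait scores

lam = 1.0  # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(n_pages), X.T @ y)

# Score a new user who liked only three pages
new_user = np.zeros(n_pages)
new_user[[3, 17, 42]] = 1.0
print("predicted trait score:", new_user @ w)
```

The academic debate summarized later in this article is precisely about how accurate such predictions are in practice and whether they improve persuasion over ordinary demographic targeting.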

"We are thrilled that our revolutionary approach to data-driven communication has played such an integral part in President-elect Trump's extraordinary win."

Alexander Nix, Cambridge Analytica CEO — Company Statement, November 9, 2016

In presentations to potential clients, Cambridge Analytica executives claimed they had personality profiles on 230 million American adults and had worked on over 200 political campaigns globally. CEO Alexander Nix described the firm's capabilities in sweeping terms, suggesting it had essentially mapped the psychological landscape of the American electorate. The company produced case studies claiming credit for election victories and demonstrating dramatic improvements in advertising effectiveness.

The scientific validity of these claims, however, was contested. While the OCEAN model is well-established in psychology, the ability to accurately infer personality from Facebook data, and particularly the ability to use these inferences to effectively persuade people, remained unproven. Academic researchers who examined Cambridge Analytica's methods after the scandal noted that the firm's public claims substantially exceeded what the underlying science supported. A 2019 analysis in Nature Human Behaviour concluded that while personality prediction from digital footprints was possible to some degree, the accuracy was lower than Cambridge Analytica claimed, and the effectiveness of personality-based targeting was likely exaggerated for marketing purposes.

Political Deployment: The 2016 Election and Brexit

Cambridge Analytica's most prominent work was for political campaigns in 2015-2016, including Ted Cruz's presidential primary campaign, Donald Trump's general election campaign, and reported advisory work related to the Brexit referendum in the United Kingdom.

In late 2014 and early 2015, Robert Mercer, who controlled Cambridge Analytica through his investment, was the primary financial backer of Senator Ted Cruz's presidential campaign, ultimately contributing over $13 million. Cambridge Analytica was hired by Cruz's campaign and claimed to provide data infrastructure, voter modeling, and digital advertising targeting. The firm embedded staff with the campaign and claimed its psychographic approach was central to Cruz's strategy.

$5.9M
Payment from Trump campaign to Cambridge Analytica. According to Federal Election Commission records, the campaign paid Cambridge Analytica approximately $5.9 million for data and digital advertising services between June and November 2016.

After Cruz withdrew from the race in May 2016, Mercer shifted his support to Donald Trump. Steve Bannon, who had been vice president of Cambridge Analytica, became CEO of Trump's campaign in August 2016. Cambridge Analytica was subsequently hired by the Trump campaign, with payments beginning in June 2016. The firm claimed to provide voter targeting, digital advertising optimization, and data analytics for the campaign.

The actual extent of Cambridge Analytica's influence on the Trump campaign is contested and remains unclear. Cambridge Analytica executives publicly claimed the firm ran "all the digital campaign, the television campaign, and our data informed all the strategy." However, members of the Trump campaign's internal digital team, led by Brad Parscale, disputed this characterization. Parscale and others suggested that Cambridge Analytica's role was limited, that the campaign relied primarily on Republican National Committee data and its own analytics, and that Cambridge Analytica's psychographic tools were not significantly deployed.

Investigative reporting after the scandal found evidence supporting both narratives: Cambridge Analytica did embed staff with the campaign and provide data services, but the campaign's digital operation was multifaceted with numerous vendors and internal teams. Some former Cambridge Analytica employees told investigators that the company's claims about its role were inflated for marketing purposes. The Senate Intelligence Committee investigated Cambridge Analytica's work as part of its inquiry into Russian interference in the 2016 election but found no evidence that Cambridge Analytica's data was shared with foreign actors.

In the United Kingdom, Cambridge Analytica's parent company SCL Group and Cambridge Analytica itself worked on campaigns related to the 2016 Brexit referendum. The firm reportedly did work for Leave.EU, one of the pro-Brexit campaigns, though the precise nature and extent of this work remained murky. UK investigations focused on whether data protection laws were violated and whether campaign finance rules were broken through coordination between officially separate campaign entities.

The Disclosure: How the Scandal Became Public

Facebook first learned about Cambridge Analytica's data harvesting in December 2015, when The Guardian published an article by Harry Davies reporting that Cambridge Analytica was using psychological data based on research covering millions of Facebook users. Facebook's response was to send legal demands to Cambridge Analytica, Aleksandr Kogan, and Christopher Wylie requiring them to delete all harvested Facebook data and to provide written certifications that they had done so.

All three parties provided such certifications. Facebook accepted these certifications without independent verification. The company did not audit Cambridge Analytica's servers, did not examine whether the data had been copied or distributed further, and critically, did not inform the 87 million affected users that their data had been compromised. Facebook treated the matter as resolved through the certifications and did not disclose it publicly.

This decision would become central to later regulatory findings. Facebook had a 2011 consent decree with the Federal Trade Commission requiring it to protect user data and to notify users when their data was shared beyond their privacy settings. By not informing affected users about the Cambridge Analytica harvesting for more than two years, Facebook violated this agreement.

"A company that was built on the harvesting of personal data has no right to claim it did not see the warning signs about the fate of that data."

Elizabeth Denham, UK Information Commissioner — ICO Investigation Report, October 2018

Christopher Wylie left Cambridge Analytica in 2014, citing concerns about the company's activities and ethics. By 2017, he had decided to disclose what he knew about the operation. He approached Carole Cadwalladr, an investigative journalist at The Guardian who had been reporting on Cambridge Analytica's work in Brexit campaigns. Wylie provided Cadwalladr with internal documents, emails, and detailed accounts of how Cambridge Analytica was built and operated.

The Guardian and The New York Times coordinated their investigations and published simultaneously on March 17, 2018. The reports revealed the full scope of the data harvesting, the number of affected users, the methods Cambridge Analytica used to exploit Facebook's platform, and Wylie's insider account of the firm's operations and political objectives. The disclosures were accompanied by Channel 4 News's undercover recordings of Cambridge Analytica CEO Alexander Nix discussing unethical campaign tactics including bribery and entrapment.

The public reaction was immediate and intense. Facebook's stock price dropped 18% in the days following the revelations, erasing approximately $120 billion in market capitalization. The hashtag #DeleteFacebook trended globally. Political leaders in the US, UK, and EU called for investigations and regulation. Mark Zuckerberg, who had been largely silent for days, finally responded with public apologies and promises of platform changes.

Regulatory Response: The Largest Fine in FTC History

The Cambridge Analytica scandal triggered regulatory investigations on both sides of the Atlantic. In the United States, the Federal Trade Commission opened an investigation in March 2018 to determine whether Facebook's handling of user data violated its 2011 consent decree. That earlier decree had settled FTC charges that Facebook deceived users about privacy controls and required Facebook to implement a comprehensive privacy program and obtain explicit user consent before sharing data beyond privacy settings.

The FTC investigation examined whether allowing third-party apps to access friend data without explicit consent from those friends violated the consent decree, and whether Facebook's failure to verify that Cambridge Analytica had deleted the harvested data constituted inadequate privacy protection. The investigation lasted 16 months and involved examination of millions of documents and extensive depositions of Facebook executives.

$5 Billion
FTC fine against Facebook. Announced in July 2019, this was the largest penalty ever imposed on a company for violating consumer privacy and one of the largest civil penalties ever assessed by the US government.

In July 2019, the FTC announced a settlement with Facebook imposing a $5 billion penalty — approximately 20 times larger than any previous privacy penalty. The settlement also required Facebook to implement new corporate governance structures for privacy, including creation of an independent privacy committee on its board of directors, designation of compliance officers who could be held personally liable for false certifications, and submission to regular independent privacy assessments for the next 20 years.

The FTC vote was 3-2, split along party lines. The two Democratic commissioners issued dissenting statements arguing that the penalty was grossly inadequate given Facebook's $55 billion in annual revenue (the fine represented approximately one month of revenue) and that the settlement should have included personal liability for executives including Mark Zuckerberg. Commissioner Rohit Chopra wrote: "The settlement imposes no meaningful changes to the company's structure or financial incentives, which led to these violations. The terms do not restrict Facebook's mass surveillance or advertising tactics."

Privacy advocates and legal scholars echoed these criticisms, noting that Facebook's stock price actually increased after the settlement was announced, suggesting investors viewed the penalty as less severe than anticipated. The settlement required no admission of wrongdoing by Facebook. Nevertheless, it represented the largest privacy enforcement action in US history and established new precedents for corporate accountability in data protection.

International Regulatory Action

In the United Kingdom, the Information Commissioner's Office (ICO) launched its own investigation in March 2018. Led by Information Commissioner Elizabeth Denham, the ICO obtained warrants to search Cambridge Analytica's offices in London, seizing servers and documents. The investigation faced immediate challenges: Cambridge Analytica filed for bankruptcy in May 2018 and began destroying records, and Facebook initially resisted allowing the ICO to inspect the harvested data.

In October 2018, the ICO issued findings and penalties. Facebook was fined £500,000 — the maximum penalty available under the Data Protection Act 1998, the law in force at the time of the violations. The ICO found that between 2007 and 2014, Facebook had failed to safeguard user information and had not been transparent about how data was harvested and used by third parties. The ICO's report concluded: "Facebook failed to adequately protect the privacy of user information when making it available to application developers."

The £500,000 fine was widely criticized as trivial given Facebook's size and revenues. However, this was the maximum penalty available under pre-GDPR law. The EU's General Data Protection Regulation (GDPR), which took effect in May 2018, allowed penalties up to 4% of global annual revenue — which for Facebook would have meant potential fines exceeding $2 billion. The timing meant the Cambridge Analytica violations fell under the old regulatory regime with much lower penalties.
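The gap between the two regimes is easy to quantify from figures already cited in this article (the $55 billion annual revenue figure and the $663,000 dollar equivalent of the ICO fine):

```python
# Penalty ceilings under the two regimes, using figures cited in the article.
annual_revenue = 55_000_000_000   # USD, Facebook annual revenue (cited)
pre_gdpr_cap_usd = 663_000        # £500,000 DPA 1998 maximum, in USD (cited)
gdpr_cap = 0.04 * annual_revenue  # GDPR ceiling: 4% of global annual revenue
ftc_fine = 5_000_000_000          # FTC settlement, July 2019

print(f"GDPR ceiling: ${gdpr_cap / 1e9:.1f} billion")
print(f"GDPR ceiling vs DPA 1998 cap: ~{gdpr_cap / pre_gdpr_cap_usd:,.0f}x")
print(f"FTC fine as share of annual revenue: {ftc_fine / annual_revenue:.1%}")
```

The 4% GDPR ceiling works out to about $2.2 billion — thousands of times the pre-GDPR maximum the ICO was able to impose.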

Jurisdiction          | Fine Amount         | Legal Basis                        | Ratio to Facebook Revenue
United States (FTC)   | $5 billion          | Violation of 2011 consent decree   | ~9% annual / ~1.8% quarterly
United Kingdom (ICO)  | £500,000 ($663,000) | Data Protection Act 1998 violation | ~0.001% annual

The ICO's investigation extended beyond Facebook to examine political data practices more broadly. The office investigated 30 organizations and issued a comprehensive report on political campaigning and data protection. Denham concluded that "the big tech companies need to stop viewing themselves as an exception" to data protection laws and called for a statutory code of practice for use of personal data in political campaigns.

European data protection authorities in multiple member states also opened investigations. The Irish Data Protection Commission, which regulates Facebook's European operations, conducted its own inquiry that ultimately resulted in additional findings about Facebook's legal basis for data processing under GDPR.

The Question of Effectiveness

A critical question that remained contentious after the scandal was whether Cambridge Analytica's psychographic targeting actually worked. The firm's marketing materials and public statements claimed revolutionary effectiveness. CEO Alexander Nix told audiences that Cambridge Analytica's approach was dramatically superior to traditional political advertising. The company produced case studies suggesting it had delivered decisive advantages in close elections.

However, multiple lines of evidence suggested these claims were exaggerated. Academic researchers who examined the underlying methods found that personality prediction from Facebook likes, while possible, was substantially less accurate than Cambridge Analytica claimed. The effectiveness of targeting ads based on personality profiles — as opposed to simpler demographic or behavioral targeting — was not demonstrated in peer-reviewed research.

Former Cambridge Analytica employees and clients provided conflicting accounts. Some whistleblowers, including Brittany Kaiser (another former Cambridge Analytica employee who later testified before Parliament), described psychographic modeling as the core of the firm's work. Others, including members of political campaigns that hired Cambridge Analytica, suggested the firm's actual capabilities were limited and that its psychographic tools were not significantly used.

"We exploited Facebook to harvest millions of people's profiles. And built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on."

Christopher Wylie, Cambridge Analytica Whistleblower — Testimony to US Senate, May 2018

A 2020 academic analysis published in The Journal of Politics examined whether Cambridge Analytica-style psychographic targeting was effective by conducting large-scale field experiments. The researchers found that personality-based targeting provided no meaningful advantage over demographic targeting and that the effects were statistically insignificant. The study concluded that Cambridge Analytica's claims about effectiveness were likely exaggerated.
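A null result of the kind that study describes can be illustrated with the standard comparison such a field experiment would run — a two-proportion z-test on conversion rates under the two targeting strategies. The impression and conversion counts below are invented purely for illustration.

```python
import math

# Sketch of a two-proportion z-test comparing ad conversion rates
# (all numbers are invented for illustration):
n1, x1 = 10_000, 520   # personality-targeted: impressions, conversions
n2, x2 = 10_000, 505   # demographic-targeted: impressions, conversions

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal CDF
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

With differences this small, the test cannot distinguish the two strategies from chance — the statistical shape of the "no meaningful advantage" finding.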

This raised an important question: if Cambridge Analytica's methods didn't work as claimed, was the scandal overblown? Critics of the regulatory response argued that disproportionate attention was paid to what was essentially marketing hype. However, regulators and privacy advocates countered that the effectiveness of the targeting was irrelevant to the core violation: 87 million people had their data harvested and used without consent. Whether that use was effective or ineffective didn't change the fact that it was unauthorized.

The Aftermath: Platform Changes and Political Consequences

In April 2018, Mark Zuckerberg testified before Congress in a joint hearing with the Senate Judiciary and Commerce Committees. Over nearly five hours of questioning from 44 senators, Zuckerberg acknowledged that Facebook had made mistakes in protecting user data but resisted calls for comprehensive regulation. The hearing revealed significant technical knowledge gaps among legislators, with some questions suggesting fundamental misunderstandings of how digital platforms operate.

Facebook implemented numerous platform changes in response to the scandal. The company restricted third-party apps' access to user data, eliminating the friend permission feature that had enabled mass harvesting. Facebook introduced new privacy controls, created a Privacy Center to make settings more accessible, and implemented new review processes for apps requesting data access. The company also established an independent content oversight board and expanded its security and privacy staff.

Critics argued these changes came years too late. Facebook had known since at least 2012 that its platform was vulnerable to data harvesting — the company's internal documents later revealed in litigation showed that employees had raised concerns about third-party app permissions well before the Cambridge Analytica incident. The company had allowed problematic permissions to continue because the open data access was central to its strategy of encouraging third-party developers to build on its platform.

200+
Political campaigns worldwide that Cambridge Analytica claimed to have worked on. The firm operated in countries across six continents, raising questions about data practices in elections globally.

Cambridge Analytica and its parent company SCL Group filed for bankruptcy and ceased operations in May 2018. However, several executives formed new companies with similar business models. Emerdata Limited was incorporated in 2017 with some of the same leadership and shareholders as Cambridge Analytica, leading to concerns that the operation would continue under different names. Journalists and investigators tracked the dispersion of former Cambridge Analytica staff into new data consulting firms, lobbying operations, and political campaigns.

The scandal's political ramifications were significant but uneven. Cambridge Analytica's role in the 2016 Trump campaign became a subject of congressional investigation as part of inquiries into Russian election interference. While no evidence emerged of collusion between Cambridge Analytica and foreign actors, the investigation documented connections among the firm, the Trump campaign leadership, and WikiLeaks that raised questions about possible campaign finance violations. No charges were brought.

In the United Kingdom, investigations into Brexit campaigns' use of Cambridge Analytica data and methods resulted in fines for campaign finance violations but no criminal prosecutions. The UK Electoral Commission found that Leave.EU and the official Vote Leave campaign had exceeded spending limits and failed to properly report coordination, with data operations central to the violations.

Broader Questions: Surveillance Capitalism and Democratic Infrastructure

The Cambridge Analytica scandal exposed structural vulnerabilities in the relationship between social media platforms, political campaigns, and democratic processes. At its core, the incident revealed that Facebook's business model — providing free services in exchange for user data that enables targeted advertising — created incentives to maximize data collection and sharing while minimizing privacy protection.

Harvard Business School professor Shoshana Zuboff, whose book "The Age of Surveillance Capitalism" was published in 2019 shortly after the scandal, argued that Cambridge Analytica represented not an aberration but a logical extension of the surveillance economy. Technology platforms had built business models on extracting maximum data from users and monetizing it through precision targeting. Political campaigns were simply another customer category for these targeting capabilities.

The scandal also highlighted the inadequacy of existing regulatory frameworks. Data protection laws in both the US and Europe had been written for an earlier technological era and were poorly adapted to the scale, speed, and opacity of digital data flows. The fact that 87 million people could have their data harvested without their knowledge and that it took over two years for this harvesting to become public revealed fundamental gaps in both platform governance and regulatory oversight.

Legal scholars and policymakers debated whether the problem was Facebook specifically, the broader business model of ad-supported platforms, or the absence of comprehensive data protection law. In Europe, the Cambridge Analytica scandal accelerated implementation and enforcement of GDPR, which had been adopted in 2016 but took effect in May 2018. In the United States, comprehensive federal data protection legislation was proposed but not enacted, with debates continuing about whether the US should adopt European-style rights-based privacy regulation or rely on sectoral approaches and FTC enforcement.

What the Evidence Establishes

The documented facts of the Cambridge Analytica scandal are clear: a political consulting firm harvested data from 87 million Facebook users without their explicit consent, Facebook knew about the harvesting in 2015 but did not disclose it publicly for more than two years, the harvested data was used to build psychological profiles for political targeting, and whistleblower disclosure in 2018 triggered the largest privacy enforcement actions in regulatory history.

What remains contested is the significance: Did Cambridge Analytica's methods actually influence election outcomes, or were its capabilities exaggerated? Was this an exceptional case of data abuse, or a typical example of practices common throughout the political data industry? Should the regulatory response be understood as appropriate accountability or as disproportionate reaction to a single incident?

The evidence suggests that Cambridge Analytica's claims about effectiveness were likely inflated, but that the core violation — mass unauthorized data collection — was real and represented a systemic vulnerability in social media platforms' architecture. The scandal revealed that Facebook had prioritized platform growth over user privacy, that political campaigns were willing to exploit any available data advantage regardless of ethical considerations, and that existing regulation was inadequate to prevent or quickly detect such abuses.

The architectural lesson is that when platforms are designed to maximize data extraction and when political campaigns operate in winner-take-all contexts with weak enforcement, data will be exploited. Cambridge Analytica did not create these structural conditions; it exploited them. The scandal's lasting impact is not the fate of one firm but the exposure of vulnerabilities in the infrastructure where technology, data, and democracy intersect.

METHODOLOGY & LEGAL NOTE
This investigation is based exclusively on primary sources cited within the article: court records, government documents, official filings, peer-reviewed research, and named expert testimony. Red String is an independent investigative publication. Corrections: [email protected]  ·  Editorial Standards