Political Manipulation and Internet Advertising Infrastructure

Author(s): Matthew Crain and Anthony Nadler


Source: Journal of Information Policy, Vol. 9 (2019), pp. 370-410
Published by: Penn State University Press
Stable URL: https://www.jstor.org/stable/10.5325/jinfopoli.9.2019.0370
This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0). To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.

Political Manipulation and Internet
Advertising Infrastructure
Matthew Crain and Anthony Nadler

ABSTRACT
Disinformation and other forms of manipulative, antidemocratic communication
have emerged as a problem for Internet policy. While such operations are not
limited to electoral politics, efforts to influence and disrupt elections have cre-
ated significant concerns. Data-driven digital advertising has played a key role in
facilitating political manipulation campaigns. Rather than stand-alone incidents,
manipulation operations reflect systemic issues within digital advertising markets
and infrastructures. Policy responses must include approaches that consider digital
advertising platforms and the strategic communications capacities they enable. At
their root, these systems are designed to facilitate asymmetrical relationships of
influence.
Keywords: Disinformation, political advertising, infrastructure, targeted advertis-
ing, social media

This article examines the intersection of online political manipulation and
digital advertising. While there is an emerging consensus among inter-
national policymakers that manipulative online communication presents
a growing challenge to democratic processes, there have been relatively
few attempts to understand the linkages between manipulation campaigns
and digital advertising systems. Addressing this gap, this study presents
a diagnosis of how digital advertising infrastructure, as it is currently
designed and managed, creates opportunities for political manipulation
and foreign interference. Data-driven ad technologies enhance the influ-
ence that advertisers can have on target audiences by leveraging detailed
information about individuals, often without their knowledge or consent.

Matthew Crain: Media, Journalism & Film, Miami University


Anthony Nadler: Media and Communications Studies, Ursinus College

This research was supported in part by the Government of Canada. The views expressed here are
the authors’ own.

Journal of Information Policy, Volume 9, 2019


This work is licensed under Creative Commons Attribution CC-BY-NC-ND

As Ravel, Woolley, and Sridharan put it, data-driven advertising is designed
“like a one-way mirror” in which campaigns and tech platforms “can see
the public, but the public cannot see them.”1 Both foreign and domestic
operatives can exploit such ad systems to influence political behavior and
discourse through deceptive means that use data to pinpoint cognitive and
psychological vulnerabilities to influence individuals and groups.
We offer an assessment of policies for addressing the use of digital adver-
tising systems by foreign operatives and other manipulative agents trying
to influence elections, shape political discourse, inflame social division, and
undermine democracy. Our policy recommendations synthesize and build
upon ideas from a review of over two dozen reports released between 2017 and
early 2019 by research institutes, civil society groups, and government inquiries
in North America and Europe (see bibliography). While most reports cover
a wide spectrum of problems ranging from privacy to data security, a unique
feature of this research is that we concentrate specifically on policy responses to
manipulation campaigns’ use of digital advertising. While much of our discus-
sion applies to the digital advertising industry generally, we focus particularly
on social media platforms because of their centrality as spaces of online polit-
ical discourse and their dominant position in the global advertising market.
Manipulation campaigns and foreign influence operations rarely rely
exclusively on digital advertising—they also create deceptive front groups,
make use of imposter social media accounts, game search engine algo-
rithms, and deploy bots to distort online conversations, among other
tactics. The many challenges of the global information ecosystem are
deeply interconnected and policymakers must respond to those chal-
lenges through a broad range of initiatives, as we argue in the conclusion.
Nonetheless, we believe that policies directed specifically toward digital ad
systems represent some of the most urgent “low-hanging fruit” for tackling
these problems. We agree with former Facebook chief security officer Alex
Stamos that the advertising components of social media “have the most
capability for abuse generally,” while regulating their capacities poses “the
least free expression concerns.”2 So in this article, we focus on the workings
of digital advertising before expanding to consider how problems related
to ad systems are linked to other public policy challenges posed by social
media and other online environments.

1. Ravel, Woolley, and Sridharan, 6.
2. Alex Stamos, “The Products That Have the Most Capability for Abuse Generally Have the Least Free Expression Concerns, Which Is Convenient. The Top Two, Advertising and Recommendation Engines, Are Especially Concerning Because They Put Content in Front of People *Who Did Not Ask to See It*,” Tweet, @alexstamos, February 2, 2019, https://twitter.com/alexstamos/status/1091711395991670784.

A central finding of this study is that digital ad systems have been built
with capacities that can easily be weaponized. When political operatives
weaponize ad tech, they use it to identify weak points where groups and indi-
viduals are most vulnerable to strategic influence. In such cases, individuals’
data is turned against them and used to help political advertisers more effec-
tively influence their targets. While identifying and removing manipulation
campaigns is an important effort, we argue the most effective responses to
political manipulation must do more than try to stop “bad actors” from
abusing digital ad systems. Rather, the very capacities of digital ad systems that
facilitate such weaponized communication need to be recalibrated to better
serve democratic ideals. We discuss a range of policy proposals to address
these issues in the short and medium term, including increasing advertising
transparency, expanding the data rights of individuals, and attenuating adver-
tisers’ capability to carve audiences into smaller and smaller segments.
Some government entities, understandably, will be concerned first and
foremost with foreign-controlled interference operations. Yet, this article
frames the advertising infrastructure facilitating political manipulation as
itself a liability for democratic societies—whether campaigns are run by
foreign or domestic operatives or some combination. While we recom-
mend greater transparency into the funding of advertising that would help
identify and (potentially) eliminate foreign-funded ads, most of the rec-
ommendations we offer aim to curb manipulative capacities as a whole.
We think there are several reasons that even those most concerned about
foreign interference operations should consider this approach. First, since
the discovery of foreign interference operations in 2016, state-connected
actors have become more sophisticated at covering their fingerprints in
digital space.3 Second, foreign actors may recruit domestic operatives to
run campaigns on their behalf—either through coercive means such as
blackmail and bribery or based on ideological affinity. Third, as we argue in
this article, the current digital advertising infrastructure incentivizes polit-
ical campaigns to target fragmented communities and amplify social divi-
sion. An atmosphere of heightened divisiveness and social fracture creates
conditions favorable to antidemocratic election interference operations,
whether or not they make use of digital advertising.

3. “Removing Bad Actors on Facebook | Facebook Newsroom,” Facebook Newsroom (blog), July 31, 2018, https://newsroom.fb.com/news/2018/07/removing-bad-actors-on-facebook/.

This article proceeds by defining the scope of the problem of political
manipulation campaigns and the role of digital ad systems in such cam-
paigns, summarizing the capacities of contemporary digital ad systems,
outlining how these capacities become weaponized by political operatives,
examining promising policy responses to these problems as well as limita-
tions to and uncertainties about implementing such policies, and finally
placing the problems of manipulation campaigns in a broader context
beyond ad systems and outlining what we see as the most crucial policy
questions for building more democratic digital media environments.

What Role Does Digital Advertising Play in Political Manipulation?

In recent years, governments have begun to recognize and respond to an
emerging set of problems associated with manipulative online political
communications.4 Researchers and policymakers have used a number of
terms to describe these problems. In the wake of the 2016 Brexit vote and
the US Presidential election, the term “fake news” spread quickly among
researchers and journalists to designate egregiously inaccurate news stories
that were being widely shared on social media. These stories were largely
created by either small entrepreneurs angling for profit from clickbait or
partisan operatives using false news as an influence tool. However, the
term “fake news” was confusing because it could be used to refer to quite
different kinds of content, from news satire to good-faith journalistic
mistakes to blatantly false news fabricated for profit.5 Populist politicians
quickly seized the term and started labeling any critical coverage of them
as “fake news.”
More recently the global conversation among researchers and policy-
makers has shifted toward framing communication problems within the
digital media landscape as matters of “misinformation” or “disinforma-
tion.”6 Misinformation generally refers to “information whose inaccuracy
is unintentional,” while disinformation designates “information that is
deliberately false or misleading.”7 Framing the problem exclusively in terms
of inaccurate information, however, can itself lead to inadequate responses
if actions are limited to fact-checking efforts to identify and remove strictly
false information from social media.8

4. Bradshaw, Neudert, and Howard.
5. Wardle.
6. European Commission, “High Representative of the Union”; House of Commons of Canada, “Democracy under Threat”; Koulolias et al. See also the bibliography for this article.
7. Jack, 2–3.

In this article, we use the term “manipulation campaigns” to name a
range of deceptive communication strategies that use data-driven advertis-
ing to target vulnerabilities to influence, in attempts to shape discourse or
behavior to meet strategic objectives. Manipulation campaigns keep some
aspects of their operations hidden from their targets, but they do not nec-
essarily traffic in false information. For instance, a domestic manipulation
campaign sought to influence a US Senate race in 2017 by creating an online
front group called “Dry Alabama” on social media.9 The group promoted a
statewide ban on alcohol and used statistics—not necessarily false ones—
about car crashes and alcohol-related deaths. Yet, it was not connected to
any genuine effort to ban alcohol in Alabama; rather the operatives behind
“Dry Alabama” were trying to bring prohibition to the political foreground
in order to exacerbate divisions over the issue among Republican voters.
While the digital tactics of manipulation campaigns can be used by any-
one, the most sophisticated campaigns will likely be backed by actors able
to devote considerable resources to these efforts. Bradshaw and Howard
identify the most powerful of such groups as deployments of cyber troops:
“government or political party actors tasked with manipulating public
opinion online.”10 Such agents may engage in setting up impostor social
media personas, make use of bots and automated accounts, and exploit rec-
ommendation and search algorithms to disseminate their messages. Digital
ads can play a vital role in many of these operations. In perhaps the most
well-reported example, a Russian organization called the Internet Research
Agency (IRA) spent “thousands of U.S. dollars every month” on social
media ads and promoted posts in efforts to influence US elections in 2016.11
In their review of global cases of computational propaganda, Bradshaw
and Howard report that cyber troops are making “increasing use of paid
advertisements and search engine optimization on a widening array of
Internet platforms.”12 They find that cyber troop campaigns are often
spending large sums of money, and some are drawing on the expertise of
“political communication firms that specialize in data-driven targeting and
online campaigning.”13

8. As Full Fact points out, a “moral panic” around fake news could prompt overreactions that threaten free speech. Full Fact.
9. Shane and Blinder.
10. Bradshaw and Howard.
11. U.S. v. Internet Research Agency.
12. Bradshaw and Howard.

As two analysts put it in their summary of the US
2016 Presidential Election, political manipulation campaigns are “digital
marketing 101.”14 Platforms go to extraordinary lengths to collect data on
users and their behavior to allow advertisers to decide just who they want
to target with what approach. Advertising creates paid priority lanes where
message senders can leverage the knowledge acquired through surveillance
and profiling. This opens the possibility for weaponizing this data-driven
system in the ways we describe in the following.

What Are the Capacities of Data-Driven Advertising That Enable Manipulation Campaigns?

Global digital advertising is estimated to be a US$327 billion industry in
2019.15 Major players include ad platforms such as Google and Facebook,
advertising agency conglomerates such as WPP, and a range of data spe-
cialists and information technology companies such as data brokers and
data management platforms. These companies provide advertising services
that leverage consumer data via a massive surveillance infrastructure. This
infrastructure offers an increasingly sophisticated toolkit for influencing
targeted publics and is readily applied to political objectives. Growing evi-
dence shows that digital advertising has been put to political use not only
by official electoral campaigns, but also by special interest lobbies, foreign
state actors, and domestic dark money groups.16
Fierce competition has propelled digital advertising companies to build
innovative mechanisms for influencing consumers. As Google states in
its marketing materials: “the best advertising captures people’s attention,
changes their perception, or prompts them to take action.”17 Advertising for-
mats are varied and not always easily recognizable as paid communication.
Digital display and video ads run alongside an array of search keywords,
promoted social media posts, sponsored content, and native advertising
formats, all of which can be targeted to highly specific audiences across
social feeds, mobile apps, websites, and other channels. Highly segmented
message targeting, through digital advertising, can help spur “organic”
amplification and generate human assets for information operations.18

13. Ibid.
14. Ghosh and Scott, “Russia’s Election Interference.”
15. McNair.
16. Kaye; Kim et al.; Valentino-DeVries.
17. Google, “Changing Channels,” 12.

Major ad platforms have typically operated as open marketplaces, avail-
able to any advertiser who meets basic quality standards. Responding to
controversies, platforms have tightened restrictions in recent years, imple-
menting various protocols for advertiser authentication and restricting
access to ad services for certain groups. For example, Facebook now requires
advertisers in certain countries to “confirm identity and location before
running political ads and disclose who paid for them.”19 As we discuss in
the following, while such policies seem to take first steps in the fight against
political manipulation, these efforts can be circumvented by influence oper-
ations with relative ease. The scope and implementation of these systems,
which can vary significantly, require careful regulatory scrutiny.
Digital ad infrastructure provides three key interlocking communica-
tion capacities.20 The first is the capacity to use consumer monitoring to
develop detailed consumer profiles. The second is the capacity to target
highly segmented audiences with strategic messaging across devices and
contexts. The third is the capacity to automate and optimize tactical ele-
ments of influence campaigns. There are numerous technical means that
have been developed to enable these capacities.21 Like all advertising for-
mats, digital ad spaces are designed to offer advertisers many choices and
options. However, we see these three capacities as core features that have
been built into the digital infrastructure as essential, top-level features that
enable today’s data-driven advertising practices.

Surveillance and Profiling

Digital advertising depends on the collection and exchange of vast quan-
tities of online and offline data about individuals. Social media platforms,
advertising networks, data brokers, and many other parties record and
synthesize a wide range of consumer data across applications and devices
in order to more effectively target them with ads.22

18. Bey et al.
19. “Hard Questions: What Is Facebook Doing to Address the Challenges It Faces? | Facebook Newsroom,” accessed March 23, 2019, https://newsroom.fb.com/news/2019/02/addressing-challenges/.
20. These “capacities” are an analytical disentanglement of the many overlapping practices and technologies of digital advertising. See also Tufekci.
21. For an exploration of these technical means that goes into more detail than we provide here, see Nadler, Crain, and Donovan.

Social media companies
such as Facebook are especially prodigious in generating data from closely
monitoring their users. Research conducted by ProPublica showed
Facebook was using at least 52,000 attribute categories to classify its two
billion users.23 Among the data Facebook collects from its own services are
user posts, reactions to posts, profile information, social connections, data
extracted from photographs and video (including facial recognition data),
information on user logins, and, at least at one point in time, posts that
users “self-censored” (i.e., composed but did not actually publish).24
Data gathered firsthand is often merged with third-party data to enrich
consumer profiles and enable ad distribution mechanisms such as the
widely used “real-time bidding” (RTB) systems.25 Distinct data points are
attached to people and devices via unique persistent identifiers, which are
then stored in profile databases. One of the longest running persistent ID
technologies is the HTTP cookie, which now operates alongside a host of
other identifying mechanisms.26 Using persistent IDs, ad platforms con-
tinuously update profile records with new information, which over time
provides insights into individual identities, interests, behaviors, and atti-
tudes. Facebook, for example, partners with large numbers of websites and
mobile applications to share data for profiling and targeting.27
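To make the mechanics concrete, the following is a schematic Python sketch of how a persistent identifier allows observations from different sources to accumulate into a single profile record over time. It is purely illustrative: the identifier, sources, and attribute labels are invented and do not correspond to any vendor’s actual schema or code.

# Illustrative sketch only: a persistent identifier (e.g., a cookie or device ID)
# lets observations from different sources accumulate into one profile record.
# All identifiers, sources, and attribute labels here are hypothetical.
from collections import defaultdict

profiles = defaultdict(lambda: {"attributes": set(), "events": []})

def record_event(persistent_id, source, attributes, event):
    """Merge a new observation into the profile keyed by the persistent ID."""
    profile = profiles[persistent_id]
    profile["attributes"].update(attributes)   # e.g., inferred interests or demographics
    profile["events"].append((source, event))  # raw behavioral trail

# The same ID observed on a website, in a mobile app, and via an offline list match
record_event("id-abc-123", "news_site", {"politics", "rural_interest"}, "read_article")
record_event("id-abc-123", "mobile_app", {"age_45_54"}, "app_open")
record_event("id-abc-123", "data_broker", {"gun_owner_inferred"}, "list_match")

print(profiles["id-abc-123"]["attributes"])

Over many such merges, the accumulated record supports the kinds of inferences and behavior predictions discussed next.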
The value of this for advertisers and influence campaigns lies not simply
in individual data points but in the inferences and behavior predictions that
can be drawn from large pools of data. In some of the most controversial
cases, advertisers have tried to develop “underlying psychological profiles”
to create influence campaigns customized to psychological dispositions.28
As early as 2013, researchers developed methods to reliably infer sensitive
personal attributes based solely on Facebook “Likes” data;29 these included
personality traits, political and religious views, intelligence, happiness,
and sexual orientation. As psychological profiling and predictive analytics
have advanced, advertisers have found opportunities to design campaigns
around characteristics and traits that have not been self-disclosed by the
targets.30 Such inferences have been made available to target, or exclude,
politically sensitive groups for social media ad campaigns.31

22. United States Federal Trade Commission.
23. Angwin, Varner, and Tobin; Lumb; Dean.
24. Das and Kramer.
25. Brave RTB complaint; Engelhardt and Narayanan.
26. Davies.
27. Schechner and Secada.
28. Graves and Matz.
29. Digital records of behavior expose personal traits. Kosinski, Stillwell, and Graepel.

Microtargeting

Using observed data, inferred insights, and a range of contextual infor-
mation, advertising is targeted in ways that seek to maximize impact and
efficiently produce desired influence outcomes. Individual profiles are
grouped into addressable publics through a variety of targeting mecha-
nisms that govern both audience composition (selecting who sees a partic-
ular message) and ad placement (determining when and where particular
ads are shown). Facebook’s full-service ad platform illustrates key elements
of this targeting capacity. Advertisers can use the built-in Ad Manager to
manually select targeting criteria from among many thousands of possible
attributes. To enable more precision, Facebook’s Custom Audience tool
allows advertisers to target specific groups by uploading lists of identifying
information such as e-mail addresses or voter registration records. Using
predictive analytics, Facebook’s Lookalike Audience feature “clones” audi-
ences that share certain attributes with targeted publics. While some major
platforms have tried to prevent advertisers from directly targeting based on
sensitive attributes such as ethnicity, researchers have found that malicious
advertisers can still target these groups by using proxy criteria.32
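As a rough illustration of how list-based targeting tools of this kind generally operate, the sketch below prepares a contact list for matching: identifiers are normalized and hashed before upload, and the platform then matches the hashes against its own user records and can expand the matched set with a lookalike model. This is a generic sketch under those assumptions, not any specific platform’s API.

# Generic sketch of preparing a "custom audience"-style list for matching.
# Identifiers are normalized and hashed client-side (SHA-256 is commonly used)
# before being compared against the platform's own user records.
# This is not any specific platform's API.
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim and lowercase the identifier, then hash it."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

voter_file_emails = ["Jane.Doe@example.com", "  sam@example.org "]
hashed_audience = [normalize_and_hash(e) for e in voter_file_emails]

# A platform would match these hashes against registered users, and a
# "lookalike" model could then extend targeting to users whose profiles
# statistically resemble the uploaded list.
print(hashed_audience)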
Microtargeting that is designed to exploit personality traits and psy-
chological profiles has been found to be particularly effective.33 In 2017,
leaked documents revealed that Facebook claimed the ability to predict
its teenage users’ emotional states to give advertisers the means to reach
those who feel “worthless,” “insecure,” and “anxious.”34 That same year,
the British Army ran a recruitment campaign on Facebook that targeted
16-year-olds around the time that standardized test results were released,
typically a moment of heightened uncertainty.35 Some of the ads suggested
that students who were discouraged by their results might register for the
army, rather than, say, pursue further education. In 2015, antiabortion
groups employed a digital ad agency to use mobile geofencing targeting to
send ads to women who visited reproductive health clinics in states across
the United States. The ads, which included messages such as “You have
choices,” were triggered via GPS location data and were served to women
for up to 30 days after leaving the target area.36

30. Bey et al., 82–83.
31. Angwin, Varner, and Tobin; Lumb.
32. Angwin and Parris Jr.; Spiecer et al.
33. Matz and Netzter; Sandra Matz et al.
34. Reilly.
35. Morris.
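The geofencing tactic described above reduces to a simple rule: when a device reports coordinates inside a radius around a target site, it is added to a retargeting pool for a fixed window. The sketch below illustrates that rule; the coordinates, radius, and thirty-day window are placeholders drawn from the example, not taken from any vendor’s documentation.

# Minimal sketch of geofence-and-retarget logic: a device whose reported
# location falls within a radius of a target site is kept eligible for ads
# for a fixed window (30 days in the example above). Values are hypothetical.
import math
from datetime import datetime, timedelta

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

GEOFENCE_CENTER = (42.3601, -71.0589)   # hypothetical clinic location
RADIUS_M = 100
retarget_pool = {}                      # device_id -> targeting expiry time

def on_location_ping(device_id, lat, lon, now=None):
    """Add the device to the retargeting pool if it is inside the geofence."""
    now = now or datetime.utcnow()
    if haversine_m(lat, lon, *GEOFENCE_CENTER) <= RADIUS_M:
        retarget_pool[device_id] = now + timedelta(days=30)

def is_targetable(device_id, now=None):
    """Devices remain targetable until their 30-day window lapses."""
    now = now or datetime.utcnow()
    return device_id in retarget_pool and now <= retarget_pool[device_id]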

Automation and Optimization

While targeting parameters can be manually configured with great preci-
sion, digital advertisers increasingly use automated decision-making sys-
tems to test and optimize the composition of target publics as well as the
timing, placement, and even content of ad messages.37 Ad tech infrastruc-
ture gives advertisers the capacity to offload key tactical decisions to spe-
cialized systems that continuously incorporate the results of multivariate
experimentation to improve performance.
RTB systems, used to dynamically place ads across media channels,
increasingly incorporate machine learning systems to evaluate the results of
large numbers of placements in order to determine which consumer attri-
butes are the most predictive of desired influence outcomes.38 Techniques
for “content optimization” apply a similar logic to ad messaging. Through
methods like split testing (also called A/B testing), advertisers can exper-
iment with large variations of messaging and design to find what works.
For instance, consider a campaign that wants to determine how to inflame
feelings among rural and exurban communities that they are being looked
down upon by urban elites. This campaign may test scores of slogans and
images to see which combinations receive the most shares and engagements
among which specific microtargeting parameters. Potentially, the campaign
may find that taste categories, such as an interest in certain music groups
or genres, can be used to predict which style of slogans and ads work best
for which targets. These and other techniques help advertisers customize
outreach to individuals based on forecasts of their vulnerability to different
influence strategies and, through repeat engagements, attempt to home in
on the most influential persuasion strategy for each user.39 Well-resourced
political campaigns have reportedly experimented with thousands of ad
variations to see which are the most effective. Advertisers can use such tools
to determine what issues resonate with particular targets as well as test for
fears or prejudices that can be invoked to influence political behavior.

36. Enwemeka.
37. Ghosh and Scott, “Digital Deceit I.”
38. HubSpot. What is deep learning? https://blog.hubspot.com/marketing/what-is-deep-learning.
39. Kaptein et al.; Berkovsky, Kaptein, and Zancanaro, 18.
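The split-testing loop described above can be pictured as a simple allocation problem: show many ad variants to many audience segments and shift impressions toward whatever earns the most engagement in each segment. The sketch below uses a basic epsilon-greedy rule to illustrate the logic; real optimization systems are far more elaborate, and the variants, segments, and engagement rates here are invented.

# Schematic sketch of split testing with a simple epsilon-greedy allocation:
# impressions drift toward the ad variant with the best observed engagement
# rate for each segment. All variants, segments, and rates are invented.
import random
from collections import defaultdict

variants = ["slogan_A", "slogan_B", "slogan_C"]
stats = defaultdict(lambda: {"shown": 0, "engaged": 0})  # (segment, variant) -> counts

def choose_variant(segment, epsilon=0.1):
    """Mostly exploit the best-performing variant for this segment; sometimes explore."""
    if random.random() < epsilon:
        return random.choice(variants)
    def rate(v):
        s = stats[(segment, v)]
        return s["engaged"] / s["shown"] if s["shown"] else 0.0
    return max(variants, key=rate)

def record_result(segment, variant, engaged):
    s = stats[(segment, variant)]
    s["shown"] += 1
    s["engaged"] += int(engaged)

# Simulated feedback loop in which engagement probabilities differ by segment.
true_rates = {("rural_45_plus", "slogan_B"): 0.08, ("urban_18_24", "slogan_C"): 0.06}
for _ in range(10_000):
    segment = random.choice(["rural_45_plus", "urban_18_24"])
    variant = choose_variant(segment)
    record_result(segment, variant, random.random() < true_rates.get((segment, variant), 0.02))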
These systems bring significant speed and cost advantages, allowing adver-
tisers to quickly and efficiently tailor their efforts to meet particular strategic
objectives. Campaigns can be optimized for individual behaviors like clicks
and video views, but they can also be tuned to elevate particular conversa-
tions or promote social interaction.40 As standard practice, digital market-
ing campaigns are coordinated across multiple platforms and channels and
paid advertising is often deployed in conjunction with other promotional
techniques. Tools such as social media management services enable advertis-
ers to operate complex multiplatform campaigns and use automated deci-
sion-making systems to “optimize persuasive power for every dollar spent.”41

How Do Manipulation Campaigns Weaponize Digital Advertising?

It is difficult to imagine the vast and sprawling infrastructure of moni-
toring, profiling, and optimizing that fuels data-driven advertising could
have been built without public reassurances of its benign purposes. In
communication with citizens and regulators, representatives of the digital
ad industry have continually promised that the aim of this sophisticated
targeting is simply to make advertising more efficient, which benefits con-
sumers and advertisers alike. For instance, the Digital Advertising Alliance
of Canada proclaims targeting results in “better ads” for users because,
“when advertisers use online interest-based advertising tools, you get ads
that are more interesting, relevant, and useful to you.”42
Data-driven advertising has proven to be a dual-use technology. It can
be used to help match Internet users with ads for products that fit their
predicted interests. Yet, it can also be used against users’ interests. When
advertisers use data-driven advertising to target weak points where groups
or individuals are most vulnerable to strategic influence, they are weapon-
izing digital ad systems. Weaponizing digital advertising turns the data that
the industry champions as a way to identify consumers’ interests into data
that can be used to shape and modify political behaviors and attitudes.

40. AdEspresso. Optimizing your Facebook campaign objective. https://adespresso.com/guides/facebook-ads-optimization/campaign-objective/.
41. Ghosh and Scott, “Digital Deceit I.”
42. “Frequently Asked Questions (FAQ)—AdChoices | Choix de Pub,” Digital Advertising Alliance of Canada (blog), accessed February 25, 2019, https://youradchoices.ca/faq/.

Ever since the ad industry has been making the public case for the mutual
benefits of targeted ads, some advertising firms have been exploring how to
take advantage of digital targeting in ways that are clearly contrary to the
“everyone benefits” spirit. One avenue has been through advertisers’ uptake of
research from behavioral and cognitive science. Scientists have identified what
behavioral economist Dan Ariely refers to as the “predictable irrationality” of
human decision-making.43 Advertising firms have tried to develop techniques
whereby they can intervene upon the heuristics and patterned flaws in human
decision-making to influence decisions through precisely targeted and timed
ads.44 In one illustrative example, a marketing firm advised beauty product
advertisers to figure out when and in what situations women feel most “vulnera-
ble about their beauty” and use those moments for strategic targeting.45
Both commercial and political advertising have long raised concerns
about manipulation. There is a robust critical communication literature
that analyzes tactics advertisers use to influence desires, fantasies, and cul-
tural meaning-making processes.46 In many respects, there are significant
continuities between digital techniques and older advertising practices. It
would be a mistake to only emphasize rupture in critical approaches to
data-driven ads. That said, today’s digital media landscape enables target-
ing at an unprecedented degree of precision and at an unprecedented scale.
New technical capacities in tandem with laissez-faire regulatory regimes
have had enormous consequences for advertising.47 Among other shifts,
data-driven advertising has led to a “behavioral turn” in marketing theory.
Marketers and political operatives are embracing models of human deci-
sion-making informed more by behavioral science than the psychoanalytic
and semiotic models of the human mind that informed so much of twen-
tieth century advertising. This behavioral turn has inspired an approach
to both commercial and political advertising that focuses on strategically
intervening in targets’ decision-making processes.
Political manipulation campaigns also weaponize digital advertising by
using it to identify and target vulnerabilities to influence, though in some
cases political weaponization tactics look different from those of commercial
manipulators.

43. Ariely.
44. Calo; Shaw.
45. PHD Media.
46. Some of the landmark contributions to this line of critique include Williamson; Packard; Ewen; McClintock.
47. Zuboff; Ghosh and Scott, “Digital Deceit I.”

Commercial manipulators tend to rely on behavioral science
to identify individual cognitive vulnerabilities. They are looking for strategic
points of behavioral modification or decision influence, such as identifying
particular types of moods when someone might be influenced to make a pur-
chase they otherwise would not. Political manipulators may use some of the
same behavior modification techniques as well; however, most known politi-
cal manipulation campaigns focus more on amplifying or channeling group-
based identity threats. As the noted political psychologist Leonie Huddy says,
“group identities are central to politics, an inescapable conclusion drawn
from decades of political behavior research.”48 A large body of research sug-
gests that when people perceive threats to a personal identity—whether those
threats be bodily, material, or symbolic—that identity itself tends to take
on more salience.49 Invoking identity threats can mobilize political action
through promoting calls to stand up for one’s threatened in-group. Or it can
justify denigration of or attacks on the out-group perceived as threatening.
Social media companies have still not released all the data necessary to
offer a fully detailed portrait of the manipulation campaigns over their ad
platforms. Yet, there is one specific case that has received the most scrutiny—
the Russian IRA’s attempts to interfere in US politics, especially from 2015 to
2017. Public outcry and political pressure from the US Congress led Facebook,
Google, Twitter, and others to make available much more detail than usual
about this particular manipulation campaign. Two in-depth studies of IRA
operations in the United States—one led by New Knowledge, the other by
Oxford’s Computational Propaganda Project—show how data-driven adver-
tising allowed the IRA to target specific groups with content intended to
inflame identity threats and exacerbate social division.50 The IRA campaigns
involved both advertising and peer-to-peer (organic) content, but targeted
advertising appears to have played an instrumental role in seeding the spread
of the organic content by building followings for inauthentic accounts.51

48. Huddy, 738.
49. Riek, Mania, and Gaertner.
50. Howard, Ganesh, and Liotsiou; DiResta et al.
51. It is difficult to discern exactly how much of the traffic of IRA Facebook and Instagram
accounts can be traced specifically to advertising. As the New Knowledge research points out,
“Approximately two dozen Facebook and Instagram accounts achieved audience sizes over
100,000 followers; however, no data was provided to indicate what percentage of followers came
from ad conversions, engagement with organic content, or suggestions from the recommenda-
tion engine.” DiResta et al., 38.

The IRA campaign leveraged identity threats both to mobilize support
for candidates and issues they sought to aid and to splinter opposition
groups or suppress voting from groups likely to support opposition
­candidates. Researchers found that during the 2016 campaign, the IRA
messages aimed at conservatives evinced “a clear and consistent preference
for then-candidate Donald Trump from July 2015 onward” while the IRA
was also “strong and consistent in their efforts to undermine the candi-
dacy of then-candidate Hillary Clinton throughout all of their pages.”52
The IRA campaign portrayed conservative-leaning groups facing identity
threats—such as a cultural takeover through immigration, job losses to
be precipitated by extreme environmentalists, and liberal accusations of
conservatives as bigots—in efforts to encourage “extreme right-wing vot-
ers to be more confrontational.”53 At the same time, the IRA campaign
sought to leverage identity threats to break apart Democratic coalitions,
with a special focus on targeting those with interests in racial justice
activism and African American heritage. The Computational Propaganda
Project found the IRA consistently sought to encourage “African American
voters to boycott elections or follow the wrong voting procedures in 2016,
and more recently for Mexican American and Hispanic voters to distrust
US institutions.”54
There are a number of factors that make targeted digital advertising a
particularly attractive tool for manipulation campaigns seeking to exploit
social division. Of the IRA operations, New Knowledge researchers
conclude:

They exploited social unrest and human cognitive biases. The divi-
sive propaganda Russia used to influence American thought and
steer conversations for over three years wasn’t always objectively false.
The content designed to reinforce in-group dynamics would likely
have offended outsiders who saw it, but the vast majority wasn’t hate
speech. Much of it wasn’t even particularly objectionable. But it was
absolutely intended to reinforce tribalism, to polarize and divide . . . 55

Digital ad systems offer a great advantage for such efforts over mass audi-
ence print and broadcast media. First, microtargeting allows advertisers
to carefully profile and target those suspected to be most susceptible to a
specific identity threat.

52. Ibid., 80–81.
53. Howard, Ganesh, and Liotsiou, 3.
54. Ibid.
55. DiResta et al., 99.

Second, well-targeted ads can be more inflamma-
tory than mass ads without risking counterproductive effects. With mass
advertising, political operatives know that such strategies can activate back-
lash effects that can outweigh their goals.56 Third, the claims made by pre-
cisely targeted ads are unlikely to be questioned or challenged in their native
media environments. Only audiences deemed likely to respond well to the
ads are likely to see them.57 Fourth, popular social media are designed to
favor the distribution of content that triggers immediate and strongly emo-
tional responses.58 Lastly, digital ad systems allow for manipulative operatives
to continually refine their approaches through testing multiple variants of an
ad (through split testing) with different audience parameters. A well-funded
campaign can test tens of thousands of ad variants daily.59

Policy Approaches to Preventing Political Weaponization of Digital Advertising

As foreign interference and manipulation operations prompted global dis-
cussion of social media and disinformation campaigns, major social media
companies started to announce they were ready to take on new responsi-
bilities. Prior to 2016, major social media companies had generally showed
little concern as to whether their networks helped circulate disinformation
or manipulative content. Public outcry and political pressures forced a
reckoning and company CEOs announced new commitments to fighting
these uses of their platforms. Regulations surrounding digital advertising
have not kept pace with rapid technological developments. Nor have reg-
ulators been able to consider the more gradual paradigm shift represented
by digital advertising as data-driven targeting and testing have become
central features of persuasion.
We concur with investigators who see self-regulatory efforts by social
media and data service industries as severely inadequate responses to the
threats weaponized advertising poses to democratic communication.60 Tech
companies will need to provide technical input and expertise in tackling
problems of political manipulation, but the demonstrated shortcomings of
their self-regulatory actions so far show that state action is required.

56. Roese and Sande; Fridkin and Kenney.
57. For a further exploration, see Jamieson.
58. Jones, Libert, and Tynski; Vaidhyanathan.
59. Beckett.
60. House of Commons of Canada, “Democracy under Threat,” 34.

In addition to demonstrated failings of tech companies to respond by themselves,
self-regulation approaches have a number of inherent weaknesses: there is no
powerful position for public advocates when industry interest and public
interest diverge; self-regulations are difficult to enforce; they may not be
well coordinated across firms and are subject to change without public or
democratic input. Tech companies generally offer members of the public
no reliable form of redress when individual or group harms are incurred.
In the following, we review and assess a wide range of policy options
for regulatory approaches to thwarting foreign and domestic manipula-
tion enabled by digital advertising. We are reviewing approaches suggested
by researchers and government officials discussed in the documents listed
in the bibliography. Our goal here is not to cover all recommendations
from these reports comprehensively; rather, we synthesize what we see as
the most promising ideas that pertain to data-driven political advertising.
While we think some of these recommendations would be relatively sim-
ple to implement and enjoy wide popular support across many regions
and countries, we also discuss their complications and tradeoffs. Different
localities will need to fine-tune and adapt their own policies. In general, we
want to suggest that diverse stakeholders should have input in the adapta-
tion of policies governing digital advertising. Yet, the need for democratic
input should not be used as a justification for delaying the implementation
of new regulations. Policymakers must take actions so that most of the
decisions regards digital ad systems are not left to the unilateral control of
private companies, which understandably put their own pecuniary interest
above other considerations.
Our recommendations are based on the diagnosis above identifying
the basic capacities of digital advertising systems that enable weaponized
political messaging. This “infrastructure approach” tackles the problem
by examining how policy might dampen the communication capacities
of data-driven advertising that allow operatives to target vulnerabilities
to influence, as well as how digital media infrastructure can be designed in
ways that positively promote free and open democratic communication.
The infrastructure approach differs most clearly from two other promi-
nent approaches to the problem: a militarization approach and a bad actors
approach. The militarization framework entails countries investing in greater
surveillance of—and potentially control over—digital media by military
and intelligence agencies. Proponents of this approach also tend to advocate
for deterrence of foreign interference attacks through counterattacks, sanc-
tions, or diplomatic measures. We should note that encouraging greater
surveillance and militarization of digital communication introduces its own
threats to free and open democratic communications. These drawbacks must
be fully explored, and their potential benefits carefully weighed in light of
other options that do not introduce the same threats.
The “bad actors” approach is the most cautious and least disruptive to
advertising business models. In this case, policymakers or tech companies
focus their efforts on trying to identify a select set of “bad actors”—such
as foreign agents—responsible for engaging in political manipulation. This
approach tries to remove these troublemakers or the problematic content
they have produced without reform of the digital advertising architecture of
data collection, message targeting, and testing that provides opportunities
for manipulation. Many of the measures introduced by Facebook, Google,
and other attempts at industry self-regulation over the past two years fall
under this category. As discussed earlier, these measures have demonstrable
weaknesses and the number of actors—both foreign and domestic in many
countries—engaging in manipulative campaigns appears to have increased over the past two years.
One of the overarching challenges for any policy that applies special
scrutiny or regulation to “political” advertising is deciding exactly what
counts as a “political” advertisement. There are three major challenges to
this task. First, the scope of “political” advertising is layered and difficult to
define. One approach to defining what kinds of digital ads count as “polit-
ical” focuses only on ads pertaining directly to elections. Many countries
apply special regulation to advertisements that mention specific candidates
running for office or ads promoted by political parties, candidates, or offi-
cial groups supporting candidates. Such a narrow definition of “political,”
however, creates giant holes in the filter that would not catch many of the
advertising techniques manipulative actors use.
Manipulative campaigns may seek goals beyond election influence,
including influencing public discourse about a specific issue or simply try-
ing to amplify social divisions within a democracy to make it less stable.
There is strong evidence that some of the campaigns connected to the
Russian IRA were oriented toward this latter goal, as IRA accounts have
promoted competing sides of the same agenda. For instance, IRA-linked
social media accounts promoted two competing rallies set in Houston on
the same day in 2016. One IRA page “United Muslims of America” ran
ads promoting a rally to “Save Islamic Knowledge,” while another IRA page
promoted the rally to “Stop the Islamification of Texas.”61 This kind of
activity would fall through the gaping holes of policies that apply only to
ads focused on electoral candidates.
Even campaigns directed specifically to interfere with election outcomes
may use ads that do not mention specific candidates or races. One analy-
sis reviewed all 3,517 ads connected to the Russian IRA that targeted US
Facebook users from June 2015 to August 2017. While much of this activity
occurred prior to November 2016 and likely sought to influence the US elec-
tions, the analysts found that only a very small fraction mentioned any can-
didates by name.62 Many more ads did not mention candidates but sought to
exacerbate tensions around race and other social identities and cleavages that
might have had electoral impact without mentioning candidates or parties.
A broader approach to defining the term “political” should seek to include
discussion of key political issues, in addition to references to specific candi-
dates or parties. In a number of countries, Facebook has been identifying
advertising as “political” if their content is “related to politics or issues of
national importance.”63 This is generally stronger than Google’s current poli-
cies which, in the case of the European Union (EU; though similar for other
countries where Google has rolled out this policy), only includes ads “that fea-
ture a political party, or a current elected officeholder or candidate for the EU
Parliament.”64 Nonetheless, blind spots persist in Facebook’s approach that
indicate the difficulty of coming up with any comprehensive definition of
“political.” Facebook’s definition focuses only on national issues and does not
cover ad campaigns targeting local issues. As resources for local news produc-
tion decline rapidly in market news economies, influence operations aimed
at local levels may actually be the most powerful. Furthermore, exactly what
counts as an issue of “national importance” is a matter of debate. Facebook’s
list of issues of national importance for the United States includes “infrastruc-
ture” and “government reform.” Yet it is unclear whether ads addressing, for
instance, issues relating to technology regulation would qualify.
Second, even with a settled definition of political ads, ad platforms
will find it challenging to identify which ads fall under that definition
at a mass scale. In June 2018, Facebook representatives told UK inves-
tigators, “Our systems do not have a perfect or reliable way to classify
the category that advertisements (which are developed and distributed by
third-parties on our platform) fall in, whether it is political or housing or
educational or otherwise.”65

61. Allbright.
62. Penzenstadler, Heath, and Guynn.
63. Facebook Business.
64. Google. “Political Content.”

This problem is hardly unique to Facebook, as
many online ad platforms sell exponentially more ads and ad variants than
traditional media outlets and do not have humans review their content.
Ravel, Woolley, and Sridharan warn, “algorithms used to police social media
platforms are vulnerable to the biases and fallibility of their producers.”66
In their view, “Technology companies created the problems on their plat-
forms that they now claim necessitate the use of technologies that haven’t
yet been realized—asking for trust that they have not earned.”67 Even as
automated decision-making and machine learning advance, such processes
will need oversight for bias and democratic accountability.
Third, manipulation campaigns can use tactics to circumvent any defi-
nition of “political.” Political manipulation efforts might rely on targeted
ads in some cases in which the ads themselves do not refer to political
content. Instead, these campaigns might try to target a group with ads that
promote an ostensibly nonpolitical social media feed or group. Only after
gaining followers for such a group would the campaign start to introduce
political themes into that feed. For instance, imagine a campaign runs tar-
geted ads for a Facebook group that appears to be a support group for peo-
ple suffering from social anxiety disorder. The campaign may try to gain
the trust of followers in this support group, only to later introduce political
messages to the group once they have gained trust. The Russian IRA tried
tactics along these lines, in one case promoting a hotline for masturbation
addiction targeting followers of a page they had created that appeared to be
a site for deeply religious Christians.
Categorizing political speech or political data essentially draws a box
around something and says: “this is political.” What is in the political box,
then, is recognized as important to democracy and subject to special rules
and standards to reflect that status. Yet, the box does not contain everything,
so what is not in the box is held to different rules and standards. No box will
perfectly encapsulate all political speech. Nonetheless, these earlier challenges
should not justify inaction—they should encourage a capacious approach to
defining political advertising. If implementing the policies discussed in the
following for all advertisements is deemed impractical, regulations based on
a broad understanding of “political” can still significantly restrict the options
available to manipulative political operatives.

65. House of Commons of Canada, “Disinformation and ‘Fake News’: Interim Report,” 37.
66. Ravel, Woolley, and Sridharan.
67. Ibid.

For any policies that rely on ad platforms to identify “political ads,” we
recommend that regulators provide guidelines for defining “political”
and open avenues for courts or regulatory commissions to provide over-
sight of the processes platforms use to identify political advertisements.
Whatever body is tasked with setting these definitions—whether a public
commission or private company—should seek input from civil and human
rights organizations and diverse civil society stakeholders.

Transparency in Encounters with Ads

One common approach policymakers have suggested to prevent the wea-
ponization of ads focuses on transparency surrounding ad practices. At
a minimum, users should know who is targeting them and why they are
being targeted. This same transparency principle also entails that infor-
mation on targeted political ad campaigns is made available to indepen-
dent researchers and journalists who can act as public interest watchdogs.
Journalists, researchers, and independent auditors or regulators can make
sense of larger patterns in targeted advertising from a vantage point dis-
tinct from the individualized experience of users. When these groups con-
vey their insights to members of the public and public interest advocates,
they serve a critical function in making the workings of digital advertising
more transparent to users and democratic publics. Regulators across the
globe have barely started to address the paradigm shift represented by per-
sonalized digital advertising. While not everything is new in digital adver-
tising, both industry accounts and academic research suggest personalized
advertising operates according to a logic quite different from previous iter-
ations of ads.68 Such a shift may call for rethinking very basic issues about
advertising and what kind of public interest ground rules may be needed.
So far, the policies that have most affected digital advertising, such as the
EU’s General Data Protection Regulation (GDPR), have not directly tack-
led new concerns arising from targeted advertising, but rather have pri-
marily addressed concerns regarding privacy and data rights. We suggest
three overarching goals for policies regarding targeted political advertising:
1. Ad platforms should help users to make informed interpretations and
judgments about political advertisements.
2. Ad platforms and political advertisers should provide governments, researchers, and civil society with information for audits and tools for efficiently monitoring ad campaigns as a key component of democratic oversight.
3. Ad platforms must limit fraudulent, illegitimate campaigns (e.g., scammers, foreign operatives).

68. Turow.

To operationalize these goals, we recommend a series of concrete poli-
cies aimed at increasing transparency in users’ encounters with ads, and in
funding and sponsorship.
Transparency in ad design, targeting, and profiling:
A. Require on-ad disclaimers informing users of ad sponsors, the specific
targeting parameters used by the advertiser, and identification of all
the sources of data used in targeting (e.g., platform activity, external
browsing, public records).
B. Require that all political ads’ disclaimers also include a prominent link for more
information about specific ads and their sponsors, including the amount
of money spent on the ad, the time period the ad is set to run, the spon-
sors’ identified donors, and a further link to all variants of the same ad.
C. Require all political ad spaces to be designed in ways that foreground
users’ awareness that they are encountering a paid advertisement.
Ghosh and Scott recommend, “All political ads that appear in social
media streams should be clearly marked with a consistent designation,
such as a bright red box that is labeled ‘Political Ad’ in bold white text,
or bold red text in the subtitle of a video ad.”69
D. When users interact with digital ads—through clicks, likes, shares—a
pop-up should remind them that they are interacting with a paid polit-
ical ad and explain any ways in which such an interaction may leave
data traces that could influence future targeting.
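
As a rough illustration of items A and B, the disclosure data attached to each
political ad could be modeled as a small structured record. The sketch below is
our own minimal example; the field names, the sponsor, and the archive URL are
invented for illustration and do not describe any existing platform's schema.

```python
# A minimal sketch (not any platform's actual schema) of the per-ad disclosure
# record that items A and B above would require.
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class PoliticalAdDisclosure:
    sponsor_name: str                # verified "Paid for by" entity
    major_donors: List[str]          # the sponsor's identified donors
    targeting_parameters: List[str]  # e.g., ["region: Ohio", "interest: energy policy"]
    data_sources: List[str]          # e.g., ["platform activity", "public records"]
    spend_to_date: float             # amount spent on the ad so far
    run_start: date                  # first day the ad is set to run
    run_end: date                    # last day the ad is set to run
    variants_url: str                # link to all variants of the same creative

    def disclaimer_text(self) -> str:
        """Short on-ad disclaimer; the full record would sit behind the linked page."""
        return (f"Political Ad - Paid for by {self.sponsor_name}. "
                f"Targeted using: {', '.join(self.targeting_parameters)}. "
                f"More info: {self.variants_url}")

# Hypothetical example of rendering the disclaimer at ad-serving time.
ad = PoliticalAdDisclosure(
    sponsor_name="Example Issue Group",
    major_donors=["Example Donor LLC"],
    targeting_parameters=["region: Ohio", "interest: energy policy"],
    data_sources=["platform activity", "external browsing"],
    spend_to_date=12500.00,
    run_start=date(2019, 9, 1),
    run_end=date(2019, 9, 30),
    variants_url="https://ads.example.org/archive/12345",
)
print(ad.disclaimer_text())
```
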
Users’ interests are not fully served simply by making information about
political ads available if it is time-consuming to find. We must consider
the incredible volume of ads users are subjected to online. As cognitive
psychological research documents quite convincingly, humans must rely
on cues that can be processed quickly to form mental impressions and make
decisions.70 The design of advertising interfaces determines which features
are salient and which are not. Advertisers would generally prefer ad spaces
designed with minimal contextual cues to prompt users to identify them
as ads. In many cases, digital advertisers also prefer that their messages blend
into streams of so-called “organic” web content without calling attention
to themselves as ads. Yet, we suggest there is a public interest in designing
ads in ways that not only make such information available but bring it to
the foreground of perception and processing.

69. Ghosh and Scott, “Digital Deceit II,” 14.
70. Kahneman.

Transparency and verification in ad sponsorship:
A. The sponsors and funding sources of targeted ads should have their
identities verified. This could happen either through a public agency
or strict requirements that place verification on ad platforms and
exchanges. In the latter case, large ad platforms should be bound by
“know your customer” regulations for political advertisements. They
should take all reasonable steps to accurately verify the full identity of
the sponsoring organization, including the identity of its major donors.
In this respect, ad platforms will be required to take on a responsibility
to prevent foreign electoral interference and manipulation campaigns
similar to the steps banks take to prevent money laundering (a minimal
sketch of such a verification gate follows this list).
B. “Dark money” should be eliminated in targeted advertising by requir-
ing all political advertisements that make use of targeting data to iden-
tify all significant funders and donors.
C. The UK House of Commons reports, “Some organizations such as the
Institute of Practitioners in Advertising (IPA) support creating a cen-
tral public register of online political adverts, rather than leaving it to
the social media companies themselves.” We see this as a promising
recommendation that would benefit the public interest by eliminating
inconsistencies among platform-specific archives.
D. Government or civil society commissions should partner with ad plat-
forms to create criteria and procedures for identifying “inauthentic
activity” from digital advertisers that create false appearances.
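
To make the verification requirement in item A concrete, the sketch below shows
one way an ad platform could gate political ad purchases on verified sponsor
identity and donor disclosure. It is a simplified illustration under our own
assumptions; the specific checks and the jurisdiction rule are placeholders, not
a description of any platform's actual process.

```python
# A simplified, hypothetical gate on political ad purchases; the specific checks
# are assumptions for illustration, analogous to "know your customer" rules.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Sponsor:
    legal_name: str
    jurisdiction: str         # e.g., country where the sponsor is registered
    major_donors: List[str]   # disclosed funders above a reporting threshold
    identity_verified: bool   # set only after identity documents are checked

def political_ad_purchase_check(sponsor: Sponsor,
                                election_jurisdiction: str) -> Optional[str]:
    """Return a rejection reason, or None if the purchase may proceed."""
    if not sponsor.identity_verified:
        return "sponsor identity has not been verified"
    if not sponsor.major_donors:
        return "no major donors disclosed ('dark money' not permitted)"
    if sponsor.jurisdiction != election_jurisdiction:
        # Foreign sponsors are barred from electoral advertising in this sketch.
        return "sponsor is registered outside the election jurisdiction"
    return None

sponsor = Sponsor("Example Advocacy Org", "CA", ["Example Foundation"], True)
reason = political_ad_purchase_check(sponsor, "CA")
print("approved" if reason is None else f"rejected: {reason}")
```
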

Discussion of Transparency

Major tech companies such as Facebook and Google have already started
to implement their own policies requiring certain types of political ads in
some countries to include a disclaimer naming a sponsor and to go through
a verification process. These verification processes, however, have proved
feeble. Just before the 2018 midterm election, a VICE news investigation
team “applied to buy fake ads on behalf of all 100 sitting U.S. senators,
including ads ‘Paid for by’ Mitch McConnell and Chuck Schumer. All 100
sailed through the system, indicating that just about anyone can buy an
ad identified as ‘Paid for by’ a major U.S. politician.”71 Even if measures
are put in place to prevent advertisers from impersonating elected officials,
as long as sponsors are easily able to create front groups that provide lit-
tle information about their donors, such disclaimers do little to provide
meaningful information to citizens, journalists, researchers, or regulators.
In certain circumstances, there are legitimate concerns about whether
requiring identification of donors for political ads would potentially chill
speech. We recommend policymakers consider tradeoffs carefully, but we
think that given that targeted advertising relies on personal users’ data
there is more justification for limiting anonymous speech by large donors
in this area than others. Alternatively, policymakers could require that
platforms specifically ask users if they are willing to allow their data to be
used by political groups that do not disclose all major donors. We suspect
that requiring such explicit permission from users would effectively end
the practice of anonymously funded targeted ads.
Requiring ad platforms to stringently verify the identity of sponsor-
ing organizations and their financing is a separate issue from requiring
sponsors to make their donors public. Making ad sales contingent upon
verification is a crucial step to preventing undisclosed foreign influence
operations from using targeted advertising. If the burdens of a rigorous
verification process significantly disadvantage small advertisers, policy-
makers could consider whether to place spending thresholds below which
advertisers could use a less rigorous process, though ad platforms would
need to take steps to prevent abuse of this leniency.

Data Rights

Ad-supported manipulation strategies depend on the widespread collection
and exchange of consumer data. Social media platforms and
other companies in digital marketing have historically faced few restric-
tions on their data practices. This “wild west” scenario is shifting as
policymakers across the world have begun to implement various data
protection and privacy regulations.72 The most important of these is
the EU’s GDPR, which provides individuals with a range of data rights
and privacy protections and is often cited as an international model for
policymakers.73

71. Turton.
72. Information Commissioner’s Office; Bradshaw, Neudert, and Howard.

In general terms, data rights give individuals control over the ways in
which their personal information is collected and used. This approach pri-
oritizes individual autonomy and is grounded in the principle of informed
consent. Paired with robust enforcement mechanisms, data rights can shift
the asymmetries of information and control that characterize individuals’
engagement with advertising platforms.
Empowering individuals to control how their information is used and
exchanged could significantly “blunt the precision” of the profiling, audi-
ence segmentation, and targeted messaging systems that have proven ripe
for political abuse.74 Strong opt-in regulations in the model of the GDPR
would likely reduce the supply of targeting data and audience attention
available to manipulation campaigns, limiting their effectiveness and tacti-
cal options. Limiting the overall supply of advertising data would also cut
down the potential for security breaches.
Proposals under the data rights framework include
1. Political profiling and ad targeting should be strictly opt-in services that
require individual consent. Following the GDPR model, valid consent
must be obtained in advance and be “freely given, specific, informed
and unambiguous.”75
2. Because ad profiling and targeting systems use a wide array of data,
consent-based data rights measures should apply to a broad scope of
personal information.
3. Consent should require periodic renewal to reflect the fact that data
practices, as well as individuals’ privacy preferences, change over time.
4. Consent should be obtained on as granular a basis as possible. Ideally,
individuals should have control over how their data is used not only by
platforms, but by specific advertisers.
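
A minimal sketch of how proposals 1 and 3 might be operationalized is given
below: consent is recorded affirmatively, can be withdrawn, and lapses unless
renewed. The one-year renewal period and the field names are illustrative
assumptions, not requirements drawn from the GDPR or any other instrument.

```python
# A minimal sketch of consent that must be renewed and can be withdrawn.
# The one-year renewal period is an illustrative assumption, not a legal rule.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

RENEWAL_PERIOD = timedelta(days=365)

@dataclass
class ConsentRecord:
    purpose: str                            # e.g., "political profiling and ad targeting"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_valid(self, now: datetime) -> bool:
        """Valid only if affirmatively granted, not withdrawn, and not yet expired."""
        if self.withdrawn_at is not None and self.withdrawn_at <= now:
            return False
        return now - self.granted_at <= RENEWAL_PERIOD

record = ConsentRecord("political profiling and ad targeting",
                       granted_at=datetime(2019, 1, 15))
print(record.is_valid(datetime(2019, 6, 1)))   # True: within the renewal window
print(record.is_valid(datetime(2020, 6, 1)))   # False: consent has lapsed and needs renewal
```
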

Discussion of Data Rights Approaches

Models of consent such as the “Notice and Choice” opt-out standard in
the United States are largely ineffectual for stemming ad-supported polit-
ical manipulation and are better understood as failed policies that abet a
wide range of privacy harms.76 In contrast, the GDPR provides a strong
model for codifying “opt-in” consent to process personal data, stipulating
that valid consent must be “freely given, specific, informed and unambig-
uous” and must be obtained in advance of data processing.77

73. Ghosh and Scott, “Digital Deceit II”; Greenspon and Owen.
74. Ghosh and Scott, “Digital Deceit II,” 22.
75. General Data Protection Regulation Article 4 line 11.

As the GDPR makes clear, meaningful consent requires absolute trans-
parency and clarity from data processors when making disclosures about
data practices.78 Any application of consent-based regulations to problems
of ad-supported political manipulation must be linked with the kinds of
transparency measures we discuss earlier. Additional GDPR protections
like “purpose specification” and “use limitation” are meant to ensure that
data can be used only for purposes that are specifically agreed to by the
individual. Generally speaking, explicit consent is required when data
collected for one purpose is used for another. Robust opt-in regulations
like these are potentially powerful disruptors of advertising-based politi-
cal manipulation by providing a “shield against microtargeting.”79 Applied
broadly, such measures would significantly impact not only social media
companies and ad platforms but the wider data broker industry that has
moved rapidly into political advertising in recent years.80 However, even
more international cooperation may be required to properly regulate data
brokers and ad exchanges while keeping web content accessible across
international boundaries.
Not all implementations of data rights are created equal. Opt-out policy
regimes, such as the “Notice and Choice” model in the United States, repre-
sent a version of data rights that does very little to prevent ad-supported politi-
cal manipulation.81 Strong opt-in consent regimes like the GDPR, which has
brought major changes to the digital advertising industry, could have signif-
icant impacts on ad-supported political manipulation. Impact depends on
the degree to which people will choose to opt out of ad targeting. This is an
empirical question and should be studied by policymakers through surveys
and by obtaining opt-out data from companies’ GDPR compliance activities.
Evidence suggests high levels of dissatisfaction with online privacy at large.82

76. Woodrow Hartzog, “Policy Principles for a Federal Data Privacy Framework in the
United States,” U.S. Senate Committee on Commerce, Science and Transportation (2019).
77. General Data Protection Regulation Article 4 line 11.
78. Dillet.
79. Ravel, Woolley, and Sridharan, 14.
80. Chester and Montgomery.
81. Rothchild.
82. Centre for International Governance Innovation.

According to one recent survey, 68 percent of respondents found “tracking
online activity to tailor advertisements” to be unethical.83
Consent-based data rights measures must apply to a broad scope of
personal information if they are to be effective mitigators of advertising-
supported political manipulation. Applying protections and rights to
sensitive personal information (such as data revealing racial/ethnic ori-
gin, political opinions, or religious beliefs) is only a baseline. Ad profiling
and targeting systems use many kinds of metadata and computationally
derived data to infer and predict consumer information. Even the GDPR,
which designates a category of sensitive personal information with special
protections, leaves gaps for creative applications of political targeting by
proxy and, according to the UK House of Commons, does not protect
“inferred data.”84
Consent should never be granted in perpetuity, but should instead
require periodic renewal to reflect the fact that data practices, as well as
individuals’ privacy preferences, change over time. Consent should also be
able to be withdrawn at any time. Implementing expiration dates offsets
some of the burden that consent-based approaches place on individuals
to routinely and proactively maintain their privacy choices. Instead, the
burden should be on advertisers and platforms to periodically reach out to
individuals to obtain renewed consent.
The GDPR’s consent requirements are triggered differentially across a
range of variables, prompting some concerns about regulatory “gaps” and
“loopholes.”85 For example, a number of conditions exist whereby consent is
not required to process personal information. While a full discussion of these
issues is beyond the scope of this article, the question of when consent should
be required to use data for targeted advertising is of critical importance.
Leading social media platforms and ad networks operate at a mas-
sive scale, providing sophisticated communication tools to millions of
advertisers with diverse motives and objectives. In such an environment,
consent at the platform level should not imply consent across the board
for all advertisers. We propose that consent requirements be applied to
the core capacities of digital advertising—behavioral profiling and tar-
geted messaging—and that consent be obtained on as granular a basis as
possible to give individuals control over how their data is used by different
advertisers.

83. Sample was 6,387 adults in France, Germany, the United Kingdom, and the United
States. RSA Security.
84. UK House of Commons, “Disinformation and ‘Fake News’: Final Report.”
85. Bradshaw, Neudert, and Howard; McCann and Hall.

One implementation of granularity would apply consent not only
to the platforms that provide advertising infrastructure services, but to
every advertiser that uses those platforms to profile and target individuals.
Rather than simply asking individuals for blanket consent to all manner of
targeted advertising (as Facebook has attempted to do under the GDPR86),
permission could be obtained by individual advertisers on a platform-by-
platform basis. For example, XYZ Political Action Committee (PAC) could
be required to obtain consent from individuals before targeting them on
Facebook, regardless of whether the PAC imports their own database of
supporters or simply uses Facebook’s baked-in ad targeting systems.87 If
the PAC then wanted to reach those same individuals on another platform,
further consent could be required. If split testing were used in any of these
instances, separate and distinct consent could be mandated as well.
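
The following sketch illustrates what such a granular check could look like in
practice, using the hypothetical XYZ PAC example above. The (advertiser,
platform, purpose) key is our own assumption about one possible level of
granularity; no current platform exposes consent in this form.

```python
# A sketch of granular consent keyed by (advertiser, platform, purpose).
# The key structure and the XYZ PAC entries are hypothetical.
from typing import Dict, Set, Tuple

ConsentKey = Tuple[str, str, str]   # (advertiser, platform, purpose)

# Each user's set of affirmative, purpose-specific grants.
user_consents: Dict[str, Set[ConsentKey]] = {
    "user-123": {
        ("XYZ PAC", "Facebook", "custom-audience targeting"),
        # No grant for other platforms, and none for split testing.
    }
}

def may_target(user_id: str, advertiser: str, platform: str, purpose: str) -> bool:
    """Allow targeting only on an exact, previously granted key."""
    return (advertiser, platform, purpose) in user_consents.get(user_id, set())

print(may_target("user-123", "XYZ PAC", "Facebook", "custom-audience targeting"))  # True
print(may_target("user-123", "XYZ PAC", "OtherPlatform", "custom-audience targeting"))  # False
print(may_target("user-123", "XYZ PAC", "Facebook", "split testing"))  # False
```
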
We contend that granular consent aligns with the spirit of GDPR’s
purpose specification requirement. Guidelines from the Article 29 Data
Protection Working Party (the precursor to the European Data Protection Board)
suggest that data processors should “consider introducing a process of
granular consent where they provide a clear and simple way for data
subjects to agree to different purposes for processing.”88 Current inter-
pretations seem to understand targeted advertising as a single category
of purpose. We argue that advertising contains a spectrum of purposes
dependent upon advertiser identities, objectives, and targeting mecha-
nisms. Policy should recognize that important distinctions exist between
an ad campaign that uses profile data to target individuals about a divisive
social issue and a consumer product campaign that uses demographics to
reach a broad audience.
Granular consent extends the basic principle that people must be
informed in order to make choices about how their data is used. This
approach makes advertisers accountable to the targets of their influence
campaigns, makes microtargeting more visible, and decreases the likeli-
hood that people will be targeted by entities they do not trust.

86. In addition to seeking blanket consent from its users, Facebook has also “bundled” con-
sent to advertising within its more general terms of service provision. At the time of this writing,
privacy regulators in several EU countries are investigating this issue as it pertains to Facebook
and other major ad platforms.
87. To the best of our knowledge, the GDPR is not clear on whether consent is required to be
obtained by advertisers that use the built-in targeting capacities of an ad platform like Facebook.
88. “Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of
Regulation 2016/679” (Article 29 Data Protection Working Party, October 3, 2017), https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053.

We acknowledge that “granular consent” as outlined here would face
significant challenges. Industry critics would likely argue that such a plan
would be too onerous, would produce bad user experiences, and would
“kill innovation.” These may be valid concerns; however, it is worth
pointing out that as a matter of course, the digital advertising industry
has long levied such complaints against virtually all regulatory measures
aimed in its direction. If companies truly believe that consumers want
targeted advertising, the principle of granular consent should not present
a major threat to their business model, design questions notwithstand-
ing. The real pushback would stem from the fact that the likely result of
granular consent is that most people would opt out from many advertis-
ers. In that case, the rollout of the GDPR is instructive, wherein major
platforms have attempted to thwart meaningful consent via bundling
and other means.
A growing body of research finds significant flaws in the notion that
individual consent should be the core mechanism of data policy.89 Though
sustained engagement with this literature is beyond the scope of this
article, it is useful to summarize some key findings to clarify the point that
data rights must be considered in conjunction with other policy frame-
works to combat political manipulation. Market imperatives incentivize
companies to game and undermine consent mechanisms and other forms
of privacy controls by making them hard to find, difficult to use, or flawed
by design.90 As tech journalist Will Oremus writes, these tactics allow
companies to “mollify privacy critics while maintaining the status quo”
of unchecked data collection and processing.91 Strong regulations like the
GDPR attempt to account for these issues by requiring that consent be
“freely given, specific, informed and unambiguous.”
Even when implemented in good faith, consent mechanisms burden
individuals to make judgments about how their data will be used in an
increasingly complex, and increasingly unknowable, information ecosys-
tem. Daniel Solove, a leading legal scholar of privacy, frames consent within
a “privacy self-management” paradigm, which fails to address a range of
“cognitive problems” and structural constraints that “impair individuals’
ability to make informed, rational choices about the costs and benefits of
consenting to the collection, use, and disclosure of their personal data.”92

89. Hartzog; Solove; Rothchild.
90. Dillet; Hill; Tiku.
91. Oremus.

While consent laudably seeks to prioritize individual autonomy, privacy
self-management is undermined by the structure of data collection mar-
kets. In a system where data is collected over time, exchanged, combined,
and used in unpredictable ways by various entities, it has become impos-
sible “for people to weigh the costs and benefits of revealing information
or permitting its use or transfer without an understanding of the potential
downstream uses.”93
Data rights regulations like GDPR are designed to address a wide set
of concerns relating to digital information. While political manipulation
is increasingly understood to be linked to data rights and privacy issues, it
has not been a driving force of policy design in this area. While data rights
may help blunt the precision of weaponized ad targeting, they should not
be the only policy tool in the kit.

Regulating Data-Driven Advertising Capacities in the Public Interest

When applied to political manipulation, data rights approaches are valu-
able to the extent that they limit the pools of data and human attention
that are available to political influence operatives. Like transparency mea-
sures, data rights are meant to empower individuals to navigate digital
platforms with greater awareness, purpose, and autonomy. In effect, these
initiatives place the burden of upholding democratic communications
norms at the nexus of the consumer/ad platform transaction. Once mar-
ket conditions are properly calibrated, individuals are largely left to fend
for themselves. Along similar lines, competition policy has also been sug-
gested as a tool to remedy political manipulation and disinformation.94
The general notion is that large platforms concentrate risk for political
interference and that uncompetitive markets insulate platforms from the
consequences of poor data practices. Proposals for reconfigured antitrust
review, data portability, and interoperability standards are recommended
under the rationale that market forces will diversify the tech services land-
scape and give consumers more choices that enhance privacy and reduce
manipulative targeting.

92. Solove, 1880–81.
93. Ibid., 1881.
94. Ghosh and Scott, “Digital Deceit II.”

In addition to market-based approaches, policymakers should con-
sider more direct forms of intervention into the data-driven advertising
capacities that are most susceptible to abuse. As Hartzog notes, rather than
offloading risk to consumers though transparency guidelines and consent
mechanisms, “strong rules limiting collection and storage on the front end
can mitigate concern about the privacy problems raised through data ana-
lytics, sharing, and exploitation.”95
The UK House of Commons final report on Disinformation and Fake
News proposes “re-introducing friction into the online experience.”96
While that report focuses on slowing down user interactivity “to give peo-
ple time to consider what they are writing and sharing,” we propose that
incorporating friction into ad targeting systems could be an effective means
to tamp down advertising-supported political manipulation. Proposals in
this area generally seek to
1. Limit advertisers’ capacities to find and target vulnerabilities
2. Mitigate the tendency of online political advertising toward niche tar-
geting, which can amplify social segmentation, by creating incentives that
encourage campaigns to address a broad and heterogeneous public sphere
If political advertising requires elevated codes of transparency and data
rights in order to meet public interest goals, then policymakers should also
consider higher public interest standards for the tools and techniques of
political influence operations. Such an approach draws from and extends
GDPR-style privacy regulation, which as Bradshaw et al. note, “has gaps in
coverage and enforcement that limit its effectiveness to address all problems
associated with social media manipulation and data-driven targeting.”97
Proposals under the category of public interest ad regulation include
1. Political profiling and targeting could be inhibited by strong data min-
imization standards such as those mandated by the GDPR. Key com-
ponents of data minimization are “collecting personal data only when
it is absolutely needed; deciding if some types of data should never be
collected; keeping data only for as long as necessary; and limiting access
to only those who truly need it.”98
2. Advertising profile information or certain categories therein could be
subject to firm expiration dates. In such cases, “old data” would rou-
tinely be expunged from storage systems.99 This would limit advertisers’
ability to develop profiles over long periods of time and could shift
advertising away from intermittent communication toward more peri-
odic contact with trusted entities. The GDPR includes rules intended
to limit data storage, though they appear to give wide discretion to data
processors/controllers.
3. Policymakers should closely scrutinize specific advertising techniques
that present clear opportunities for abuse and convene multistakeholder
discussions about their social benefits and costs. Policymakers should
move to constrain profiling and targeting practices that are found to
present unacceptable levels of political risk. Lookalike targeting,100 geo-
targeting,101 cross-device tracking, third-party data brokering,102 split
testing, and microtargeting are among the techniques that deserve
heightened regulatory review.
4. Policymakers should commission or undertake research to consider pol-
icies that could encourage political advertisers to forego microtargeting
and address broad and heterogeneous constituencies. Potentially, such
a result could come from policies that greatly restrict data collection or
targeting capacities. Yet, policymakers might consider more direct routes
to counteracting the economic incentives that push political advertis-
ing toward digital microtargeting. Policies could include direct require-
ments on ad platforms that no more than a certain percentage of their
political advertising meets well-defined criteria for microtargeting.
Other policies could include additional burdens on funding used for
microtargeted political advertising, such as not allowing tax-deductible
nonprofit funds to be used for microtargeted ads.
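
As a simple illustration of proposal 2, the sketch below purges profile
attributes once they exceed a retention window, so that “old data” cannot
accumulate into long-term profiles. The ninety-day window and the in-memory
profile are placeholders for whatever retention rules and storage a regulator or
platform would actually specify.

```python
# A sketch of routine expiry of profile attributes; the 90-day retention window
# and the in-memory "profile" are placeholders for illustration only.
from datetime import datetime, timedelta
from typing import Dict, Tuple

RETENTION = timedelta(days=90)

# attribute name -> (value, time the data point was collected)
profile: Dict[str, Tuple[str, datetime]] = {
    "inferred_interest": ("local politics", datetime(2019, 1, 10)),
    "recent_location":   ("Columbus, OH",   datetime(2019, 5, 2)),
}

def purge_expired(data: Dict[str, Tuple[str, datetime]],
                  now: datetime) -> Dict[str, Tuple[str, datetime]]:
    """Keep only attributes collected within the retention window."""
    return {k: (v, t) for k, (v, t) in data.items() if now - t <= RETENTION}

print(list(purge_expired(profile, datetime(2019, 6, 1))))
# ['recent_location'] -- the January inference has aged out and is expunged.
```
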

95. Hartzog, “Policy Principles for a Federal Data Privacy Framework in the United States.”
96. UK House of Commons, “Disinformation and ‘Fake News’: Final Report.”
97. Bradshaw, Neudert, and Howard.
98. “Why DRN?—Digital Rights Now,” accessed March 5, 2019, https://digitalrightsnow.ca/why-drn/.
99. HTTP cookie protocols already include expiration functionality.
100. UK House of Commons, “Disinformation and ‘Fake News’: Interim Report.”
101. Ghosh and Scott, “Digital Deceit I.”
102. McCann and Hall.

Discussion of Regulating Ad Capacities in the Public Interest

Policymakers must be direct in weighing the costs and benefits of adver-
tising profiling and targeting techniques that present clear opportunities
for abuse. This is a challenging task for several reasons. Policymakers have
thus far had limited access to operational details of known political manip-
ulation campaigns. Ad profiling is highly segmented and includes target-
ing criteria inferred from predictive analytics. Ostensibly “nonpolitical”
data can be used as proxies for political ad targeting. Finally, data-driven
advertising is highly profitable, especially for the largest platforms such as
Facebook and Google, which wield significant political economic power.
Even as questions swirl about the democratic implications of such sys-
tems, ad platforms continue to invest, expand, and fortify political buffers
through lobbying and electoral campaign donations.103
Such challenges notwithstanding, if democratic political communica-
tion depends on an open public sphere, then policymakers should delib-
erate whether data-driven advertising designed to facilitate individualized
communication is antithetical to democratic ideals. As we have argued in
this article, ad-supported political manipulation often hinges upon the
capacity to carve audiences into precise segments that can be targeted in
exploitive ways. We therefore suggest that policymakers take particular
note of possibilities to add friction into microtargeting processes. One
proposal is to enact minimum thresholds for the size of targeted audi-
ences.104 Such thresholds could operate on sliding scales to account for
particular contexts (e.g., national vs. regional election), but the basic goal
is to limit the capacity for advertisers to send individualized political mes-
sages. This idea has a degree of precedent in that platforms like Facebook
already voluntarily reduce the distribution of posts deemed problematic or
contravening community standards.105 A policy of minimum audience
thresholds is similar in that it recognizes microtargeted political messaging
itself to be problematic for democratic norms.
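
A minimum-audience-size rule of this kind is straightforward to express; the
sketch below shows the basic check, with placeholder thresholds standing in for
whatever sliding scale policymakers might actually set.

```python
# A sketch of a minimum audience-size rule for targeted political ads.
# The floors below are placeholders, not recommended values.
MIN_AUDIENCE = {"national": 50_000, "regional": 5_000}

def audience_large_enough(estimated_audience: int, contest_scope: str) -> bool:
    """Reject targeting criteria that would reach fewer people than the floor."""
    return estimated_audience >= MIN_AUDIENCE[contest_scope]

print(audience_large_enough(120_000, "national"))  # True: broad enough to run
print(audience_large_enough(800, "regional"))      # False: too narrowly targeted
```
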
Perhaps surprisingly, one of the boldest proposals in this area comes
from the digital advertising industry itself. The IPA, a major UK advertising
trade association, has officially called for “a moratorium on micro-targeted
political advertising online.”106 In the words of IPA President Sarah
Golding: “Politics relies on the public square—on open, collective debate.
We, however, believe micro-targeted political ads circumvent this. Very
small numbers of voters can be targeted with specific messages that exist
online only briefly.”107 To be clear, the IPA advocates for a temporary stop-
page to minimize harm until a regulatory framework can be established.
Nevertheless, the thrust of their position is that microtargeted political
advertising as a general category of practice carries social costs that out-
weigh its benefits. The IPA’s assessment is that “ad technology designed for
consumer products and services” cannot be permitted to be “weaponized”
for political ends.108

Designing Democratic Communication Infrastructures

In the long term, a robust approach to addressing digital foreign inter-
ference and political manipulation will address these problems as matters
of communication infrastructure. A primary question citizens and poli-
cymakers must tackle is how digital infrastructure—including digital ad
systems—can be built to encourage more open and meaningful demo-
cratic communication and limit the potential for manipulative tactics to
flourish. This challenge is partly analogous to designing a game. The best
games incentivize fair competition and have built-in structures (or give rise
to norms) that penalize cheating and other forms of malicious play. Like
games, our digital communication systems are also designed in ways that
encourage certain types of activity and not others. Social media systems
have been built to maximize revenues by keeping users engaged as
long as possible so they can be exposed to targeted ads. The architecture
and operations of social media platforms are not designed simply by pro-
grammers’ hunches about what will interest their users; rather, the major
platforms have engaged in meticulous observation and testing of users to
figure out just what to do to keep users engaged for as long as possible.
Unfortunately, as recent digital propaganda and disinformation campaigns
demonstrate, when social media are calibrated to optimize this goal, they
can undermine the kinds of communication that make democracies thrive.
Left unchecked, they create what Judy Estrin and Sam Gill refer to as
“digital pollution” or negative externalities, including manipulation and
disinformation, trolling, digital addiction, and the upending of revenue
models that have traditionally supported commercial journalism.109

103. Solon and Siddiqui.
104. UK House of Commons, “Disinformation and ‘Fake News’: Interim Report.”
105. “The Three-Part Recipe for Cleaning up Your News Feed | Facebook Newsroom,” accessed
March 5, 2019, https://newsroom.fb.com/news/2018/05/inside-feed-reduce-remove-inform/.
106. “IPA to Call for Moratorium on Micro-Targeted Political Ads Online,” accessed
March 5, 2019, https://ipa.co.uk/news/ipa-to-call-for-moratorium-on-micro-targeted-political-ads-online#.
107. Ibid.
108. Singer.

Digital advertising infrastructure is far from the only online avenue for
foreign interference and political manipulation campaigns to operate. The
policy responses discussed in this article focus broadly on reining in the capac-
ities of digital ad systems that create accessible opportunities for political wea-
ponization. Yet, the scope of such recommendations is too narrow to address
the full range of factors that interact with and compound the threats resulting
from manipulation campaigns’ use of digital ad systems. For a more holis-
tic assessment, citizens, policymakers, and civil society actors need to take a
wide-angle view of policymaking in contemporary media environments. This
includes developing effective policies and enforcement mechanisms that help
prevent manipulation campaigns from taking advantage of the peer-to-peer/
publisher side of social media networks. One specific recommendation along
these lines found across a number of inquiries and reports is that social media
platforms should identify automated accounts and bots as such and not allow
such accounts to affect popularity rankings and curation algorithms.110
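
The sketch below illustrates the underlying idea: engagement from accounts
identified as automated is simply excluded from the counts that feed popularity
rankings. The labels and weights are invented for the example and do not
reflect any platform's actual ranking inputs.

```python
# A sketch of excluding identified automated accounts from the engagement
# counts that feed ranking; the accounts and weights here are invented.
from typing import List, NamedTuple, Set

class Engagement(NamedTuple):
    account_id: str
    weight: float   # e.g., 1.0 for a share, 0.5 for a like

known_automated: Set[str] = {"bot-001", "bot-002"}

def ranking_score(engagements: List[Engagement]) -> float:
    """Sum engagement weights, ignoring accounts identified as automated."""
    return sum(e.weight for e in engagements if e.account_id not in known_automated)

sample = [Engagement("user-1", 1.0), Engagement("bot-001", 1.0), Engagement("user-2", 0.5)]
print(ranking_score(sample))   # 1.5: the bot's share does not count toward ranking
```
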
Policymakers may also play productive roles in the stewardship of media
environments beyond regulations that affect the specific design of social
media platforms and ad systems. First, several reports and government
inquiries have called for states to allocate additional resources to study
political disinformation problems, noting the necessity for states to forge
durable collaborative relationships with the private sector and civil soci-
ety.111 Along these lines, some called for increased oversight and regulatory
powers for dedicated privacy officials.112 Second, policymakers must take
steps to secure resources for independent journalism organizations that
seek to build trust across diverse groups of citizens. Commercial news reve-
nues have been undermined as digital markets have shifted ad revenue away
from content producers and toward ad platforms and intermediaries.113

109. Estrin and Gill.


110. Ghosh and Scott, “Digital Deceit II.”
111. House of Commons of Canada, “Democracy under Threat”; UK House of Commons,
“Disinformation and ‘Fake News’: Interim Report,” 68; Koulolias et al., 6.
112. Greenspon and Owen, 27; House of Commons of Canada, “Democracy under Threat,”
26; UK House of Commons, “Disinformation and ‘Fake News’: Interim Report.”
113. Nielsen and Ganter.

This decline in resources devoted to journalism has helped to create a
vacuum of reliable information and trust that manipulation campaigns
attempt to exploit. To address journalism’s revenue crisis, Pickard, among
others, has proposed that states tax digital ad revenues to fund indepen-
dent, nonprofit news production.114
More broadly, concerned tech workers, public interest advocates, and
researchers are searching for ways to incorporate democratic accountability
and input from diverse groups into the design decisions that determine
social media architecture. Along these lines, Ravel, Woolley, and Sridharan
recommend that digital platforms “consult with civil rights groups on
an ongoing basis and incorporate findings into product development.”115
Policymakers may find ways to incentivize such collaborations or require
public and civil society input, which could lead toward building more inclu-
sive digital environments that are less prone to political manipulation. The
success of efforts to introduce democratic accountability will be strongly
impacted by the extent to which such measures can override the prevailing
design imperative of communications systems to maximize private profits.

114. Pickard, “Break Facebook’s Power”; “The Violence of the Market.”
115. Ravel, Woolley, and Sridharan, 22.

bibliography

Angwin, Julia, and Terry Parris Jr. “Facebook Lets Advertisers Exclude Users by Race.”
ProPublica, October 28, 2016. Accessed March 15, 2019. https://www.propublica.org/
article/facebook-lets-advertisers-exclude-users-by-race.
Angwin, Julia, Madeleine Varner, and Ariana Tobin. “Facebook Enabled Advertisers to Reach
‘Jew Haters.’” ProPublica, September 14, 2017. Accessed March 15, 2019. https://www.
propublica.org/article/facebook-enabled-advertisers-to-reach-jew-haters.
Ariely, Dan. Predictably Irrational: The Hidden Forces That Shape Our Decisions. New York:
Harper Collins, 2008.
Beckett, Lois. “Trump Digital Director Says Facebook Helped Win the White House.”
The Guardian, October 9, 2017, sec. Technology. https://www.theguardian.com/
technology/2017/oct/08/trump-digital-director-brad-parscale-facebook-advertising.
Berkovsky, Shlomo, Maurits Kaptein, and Massimo Zancanaro. “Adaptivity and Personalization
in Persuasive Technologies.” In Proceedings of the Personalization in Persuasive
Technology Workshop, Persuasive Technology 2016, edited by R. Orji, M. Reisinger,
M. Busch, A. Dijkstra, A. Stibe, and M. Tscheligi, Salzburg, Austria, April 5, 2016.
Bey, Sebastian, Giorgio Bertolin, Nora Biteniece, Edward Christie, and Anton Dek.
“Responding to Cognitive Security Challenges.” NATO STRATCOM Centre
of Excellence, January 2019. Accessed March 15, 2019. https://stratcomcoe.org/
responding-cognitive-security-challenges.


Bodine-Baron, E., T. Helmus, A. Radin, and E. Treyger. Countering Russian Social Media
Influence. Santa Monica, CA: Rand Corporation, 2018. Accessed March 15, 2019. https://
www.rand.org/content/dam/rand/pubs/research_reports/RR2700/RR2740/RAND_
RR2740.pdf.
Bradshaw, S., and P. Howard. Challenging Truth and Trust: A Global Inventory of Organized
Social Media Manipulation. Computational Propaganda Research Project, Oxford
Internet Institute, 2018. Accessed March 15, 2019. http://comprop.oii.ox.ac.uk/research/
cybertroops2018/.
Bradshaw, S., L.-M. Neudert, and P. Howard. Government Responses to Malicious Use of Social
Media. NATO STRATCOM Centre of Excellence, 2018. Accessed March 15, 2019.
https://comprop.oii.ox.ac.uk/research/government-responses/.
Calo, Ryan. “Digital Market Manipulation.” George Washington Law Review 82, no. 4 (August
2014): 995–1051.
Centre for International Governance Innovation (CIGI). “2018 CIGI-Ipsos Global Survey on
Internet Security and Trust,” 2018. Accessed March 5, 2019. https://www.cigionline.org/
internet-survey-2018.
Chester, Jeff, and Kathryn C. Montgomery. “The Role of Digital Marketing in Political
Campaigns.” Internet Policy Review 6, no. 4 (December 31, 2017). Accessed March 15,
2019. https://policyreview.info/articles/analysis/role-digital-marketing-political-
campaigns.
Das, Sauvik, and Adam D. I. Kramer. “Self-Censorship on Facebook.” Facebook Research,
July 2, 2013. Accessed March 15, 2019. https://research.fb.com/publications/
self-censorship-on-facebook/.
Davies, Jessica. “WTF Is a Persistent ID.” Digiday, March 8, 2017. Accessed March 15, 2019.
https://digiday.com/marketing/wtf-persistent-id/
Dean, Sam. “Facebook Decided Which Users Are Interested in Nazis—and Let Advertisers Target
Them Directly.” Los Angeles Times, February 21, 2019. Accessed March 15, 2019. https://
www.latimes.com/business/technology/la-fi-tn-facebook-nazi-metal-ads-20190221-
story.html.
Dillet, Romain. “French Data Protection Watchdog Fines Google $57 Million under the
GDPR.” TechCrunch, January 21, 2019. Accessed March 15, 2019. https://techcrunch.
com/2019/01/21/french-data-protection-watchdog-fines-google-57-million-under-
the-gdpr/.
DiResta, Renee, Kris Shaffer, Becky Ruppel, David Sullivan, Robert Matney, Ryan Fox,
Jonathan Albright, and Ben Johnson. “The Tactics & Tropes of the Internet Research
Agency.” New Knowledge, 2018. Accessed March 15, 2019. https://cdn2.hubspot.net/
hubfs/4326998/ira-report-rebrand_FinalJ14.pdf.
The Electoral Commission. Digital Campaigning: Increasing Transparency for Voters. United
Kingdom, 2018. Accessed March 15, 2019. https://www.electoralcommission.org.uk/__
data/assets/pdf_file/0010/244594/Digital-campaigning-improving-transparency-for-
voters.pdf
Englehardt, Steven, and Arvind Narayanan. “Online Tracking: A 1-million-site Measurement
and Analysis,” October 27, 2016. Accessed March 15, 2019. http://randomwalker.info/
publications/OpenWPM_1_million_site_tracking_measurement.pdf.
Enwemeka, Zeninjor. “Under Agreement, Firm Won’t Target Digital Ads around Mass.
Health Clinics.” WBUR, April 4, 2017. Accessed March 15, 2019. http://www.wbur.org/
bostonomix/2017/04/04/massachusetts-geofencing-ads-settlement.
Estrin, J., and S. Gill. “The World Is Choking on Digital Pollution.” Washington Monthly,
January 13, 2019. Accessed March 15, 2019. https://washingtonmonthly.com/magazine/
january-february-march-2019/the-world-is-choking-on-digital-pollution/.
European Commission. “High Representative of the Union for Foreign Affairs and Security
Policy.” Action Plan against Disinformation (No. JOIN(2018) 36 final), 2018a.
Accessed March 15, 2019. https://ec.europa.eu/commission/sites/beta-political/files/
eu-communication-disinformation-euco-05122018_en.pdf
———. Report on the Implementation of the Communication “Tackling  Online  Disin‑
formation: A European Approach” (No. COM(2018) 794/3), 2018b. Accessed March  15,
2019. https://ec.europa.eu/commission/sites/beta-political/files/eu-communication-
disinformation-euco-05122018_en.pdf
Ewen, Stuart. Captains of Consciousness: Advertising and the Social Roots of the Consumer Culture.
New York: Basic Books, 2008.
Facebook Business. “Getting Authorized to Run Ads Related to Politics or Issues of National
Importance.” Advertiser Help Center. Accessed September 9, 2018. Accessed March 15,
2019. https://www.facebook.com/business/help/208949576550051.
Fridkin, Kim L., and Patrick J. Kenney. “Variability in Citizens’ Reactions to Different Types
of Negative Campaigns.” American Journal of Political Science 55, no. 2 (2011): 307–25.
Full Fact. Tackling Misinformation in an Open Society. 2018. Accessed March 15, 2019.
https://fullfact.org/media/uploads/full_fact_tackling_misinformation_in_an_open_
society.pdf
Ghosh, D., and B. Scott. Digital Deceit I: The Technologies behind Precision Propaganda on the
Internet. New America Foundation, January 2018a. Accessed March 15, 2019. https://
www.newamerica.org/public-interest-technology/policy-papers/digitaldeceit/.
———. Digital Deceit II: A Policy Agenda to Fight Disinformation on the Internet. New
America Foundation, 2018b. Accessed March 15, 2019. https://shorensteincenter.org/
digital-deceit-ii-policy-agenda-fight-disinformation-internet/
———. “Russia’s Election Interference Is Digital Marketing 101.” The Atlantic, February 19, 2018c.
Accessed March 15, 2019. https://www.theatlantic.com/international/archive/2018/02/
russia-trump-election-facebook-twitter-advertising/553676/.
Google. “Changing Channels: Building a Better Marketing Strategy to Reach Today’s Viewers,”
February 2018a. Accessed March 15, 2019. https://services.google.com/fh/files/misc/
changing_channels_a_marketers_guide_to_tv_and_video_advertising.pdf
———. “Political Content—Advertising Policies Help.” Accessed September 16, 2018b. Accessed
March 15, 2019. https://support.google.com/adspolicy/answer/6014595?hl=en
Graves, Christopher, and Sandra Matz. “What Marketers Should Know About Personality-
Based Marketing.” Harvard Business Review, May 2, 2018. Accessed March 15, 2019.
https://hbr.org/2018/05/what-marketers-should-know-about-personality-based-marketing.
Greenspon, E., and T. Owen. Democracy Divided: Countering Disinformation  and   Hate
in the Digital Public Sphere. University of British Columbia: Public  Policy   Forum,
2018. Accessed March 15, 2019. https://ppforum.ca/publications/social-marketing-
hate-speech-disinformation-democracy/
Hartzog, Woodrow. “Opinions—The Case Against Idealising Control.” European Data Protection
Law Review 4, no. 4 (2018): 423–32. doi:10.21552/edpl/2018/4/5.
Hill, Kashmir. “‘Do Not Track’ Privacy Tool Doesn’t Do Anything.” Gizmodo, October 15,
2018. Accessed March 15, 2019. https://gizmodo.com/do-not-track-the-privacy-tool-
used-by-millions-of-peop-1828868324.
House of Commons of Canada. Standing Committee on Access to Information, Privacy and
Ethics. Democracy under Threat: Risks and Solutions in the Era of Disinformation and Data
Monopoly, 2018. Accessed March 15, 2019. http://www.ourcommons.ca/DocumentViewer/en/42-1/ETHI/report-17/
House of Commons (UK). Digital, Culture, Media and Sport Committee. Disinformation and
“Fake News”: Interim Report (No. HC 363), 2018. Accessed March 15, 2019. https://
publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/363/363.pdf
———. Digital, Culture, Media and Sport Committee. Disinformation and “Fake News”: Final
Report, February 14, 2019. Accessed March 15, 2019. https://publications.parliament.uk/
pa/cm201719/cmselect/cmcumeds/363/363.pdf.
Howard, P., B. Ganesh, and D. Liotsiou. The IRA and Political Polarization in the United States.
Computational Propaganda Research Project, Oxford Internet Institute, 2018. Accessed
March 15, 2019. https://comprop.oii.ox.ac.uk/research/ira-political-polarization/
Huddy, Leonie. “From Group Identity to Political Cohesion and Commitment.” In The
Oxford Handbook of Political Psychology. New York: Oxford, 2013. Accessed March 15,
2019. http://www.oxfordhandbooks.com.ezp1.lib.umn.edu/view/10.1093/oxfordhb/9780
199760107.001.0001/oxfordhb-9780199760107-e-023.
Information Commissioner’s Office. Investigation into the Use of Data Analytics in Political
Campaigns. United Kingdom, 2018. Accessed March 15, 2019. https://ico.org.uk/media/
action-weve-taken/2260271/investigation-into-the-use-of-data-analytics-in-political-campaigns-final-20181105.pdf
Jack, C. Lexicon of Lies. Data & Society Research Institute, 2017. Accessed March 15, 2019.
https://datasociety.net/output/lexicon-of-lies/
Jamieson, Kathleen Hall. “Messages, Micro-Targeting, and New Media Technologies.” The
Forum 11 (October 1, 2013), 429–43. doi:10.1515/for-2013-0052.
Jones, Kerry, Kelsey Libert, and Kristin Tynski. “The Emotional Combinations That Make
Stories Go Viral.” Harvard Business Review, May 23, 2016. Accessed March 15, 2019.
https://hbr.org/2016/05/research-the-link-between-feeling-in-control-and-viral-content.
Kahneman, Daniel. Thinking, Fast and Slow. New York: Farrar, Strauss and Giroux, 2011.
Kaptein, M., P. Markopoulos, B. de Ruyter, and E. Aarts. “Personalizing Persuasive Technologies:
Explicit and Implicit Personalization Using Persuasion Profiles.” International Journal of
Human-Computer Studies 77 (2015): 38–51. doi:10.1016/j.ijhcs.2015.01.004.
Kaye, Kate. “Data-Driven Targeting Creates Huge 2016 Political Ad Shift: Broadcast TV Down
20%, Cable and Digital Way Up.” Ad Age, January 3, 2017. Accessed March 15, 2019.
http://adage.com/article/media/2016-political-broadcast-tv-spend-20-cable-52/307346/.
Kim, Young Mie, Jordan Hsu, David Neiman, Colin Kou, Levi Bankston, Soo Yun Kim, Richard
Heinrich, Robyn Baragwanath, and Garvesh Raskutti. “The Stealth Media? Groups and
Targets behind Divisive Issue Campaigns on Facebook.” Political Communication 35, no.
4 (October 2, 2018): 515–41. doi:10.1080/10584609.2018.1476425.
Kosinski, Michal, David Stillwell, and Thore Graepel. “Private Traits and Attributes Are
Predictable from Digital Records of Human Behavior.” Proceedings of the National
Academy of Sciences 110, no. 15 (April 2013): 5802–05. doi:10.1073/pnas.1218772110.
Koulolias, V., G. Jonathan, M. Fernandez, and D. Sotirchos. Combating Misinformation: An
Ecosystem in Co-Creation. Organization for Economic Co-operation and Development,
2018. Accessed March 15, 2019. http://www.diva-portal.org/smash/get/diva2:1208770/
FULLTEXT01.pdf
Lumb, David. “Facebook Removes 5,000 Ad Targeting Options to Prevent Discrimination.”
Engadget, August 21, 2018. Accessed March 15, 2019. https://www.engadget.com/2018/08/
21/facebook-removes-5-000-ad-targeting-options-to-prevent-discrimin/.
Matz, Sandra, M. Kosinski, G. Nave, and D. J. Stillwell. “Psychological Targeting as an Effective
Approach to Digital Mass Persuasion.” Proceedings of the National Academy of Sciences 114,
no. 48 (November 2017): 12714–19.
Matz, Sandra, and Oded Netzer. “Using Big Data as a Window into Consumers’ Psychology.”
Current Opinion in Behavioral Sciences 18 (2017): 7–12. https://www.sciencedirect.com/
science/article/pii/S2352154617300566.
McCann, D., and M. Hall. Blocking the Data Stalkers. New Economics Foundation, 2018.
Accessed March 15, 2019. https://neweconomics.org/uploads/files/NEF_Blocking_Data_
Stalkers.pdf
McClintock, Anne. “Soft-Soaping Empire: Commodity Racism and Imperial Advertising.” In
Travellers’ Tales: Narratives of Home and Displacement, edited by Jon Bird, Barry Curtis,
Melinda Mash, Tim Putnam, George Robertson, and Lisa Tickner, 129–52. London:
Routledge, 2005.
McNair, Corey. “Global Ad Spending Update.” eMarketer, November 20, 2018. Accessed March
15, 2019. https://www.emarketer.com/content/global-ad-spending-update.
Morris, Steven. “British Army Ads Targeting ‘Stressed and Vulnerable Teenagers.’” The Guardian,
June 8, 2018. Accessed March 15, 2019. https://www.theguardian.com/uk-news/2018/
jun/08/british-army-criticised-for-exam-results-day-recruitment-ads.
Nadler, Anthony, Matthew Crain, and Joan Donovan. “Weaponizing the Digital Influence
Machine: The Political Perils of Online Ad Tech.” Data & Society Research
Institute, October 17, 2018. Accessed March 15, 2019. https://datasociety.net/output/
weaponizing-the-digital-influence-machine/.
Nielsen, Rasmus Kleis, and Sarah Anne Ganter. “Dealing with Digital Intermediaries: A Case
Study of the Relations between Publishers and Platforms.” New Media & Society 20, no.
4 (April 1, 2018): 1600–17. doi:10.1177/1461444817701318.
Oremus, Will. “Facebook Says a ‘Clear History’ Tool Will Hurt Its Advertising Business. Good.”
Slate, February 27, 2019. Accessed March 15, 2019. https://slate.com/technology/2019/02/
facebook-clear-history-button-real-wow.html.
Packard, Vance. The Hidden Persuaders. New York: David McKay Company, 1957.
Penzenstadler, Nick, Brad Heath, and Jessica Guynn. “We Read Every One of the 3,517 Facebook
Ads Bought by Russians. Here’s What We Found.” USA Today, May 13, 2018.
PHD Media, “New Beauty Study Reveals Days, Times and Occasions When U.S. Women Feel
Least Attractive.” Cision PR Newswire, October 2, 2013. https://www.prnewswire.com/
news-releases/new-beauty-study-reveals-days-times-and-occasions-when-us-women-feel-
least-attractive-226131921.html.
Pickard, Victor. “Break Facebook’s Power and Renew Journalism.” The Nation, April 18,
2018. Accessed March 15, 2019. https://www.thenation.com/article/break-facebooks-
power-and-renew-journalism/.
———. “The Violence of the Market.” Journalism 20, no. 1 (January 1, 2019): 154–58.
doi:10.1177/1464884918808955.
Ravel, A. M., S. C. Woolley, and H. Sridharan. Principles and Policies to Counter Deceptive Digital
Politics. Maplight; Institute for the Future, 2019. Accessed March 15, 2019. https://s3-us-
west-2.amazonaws.com/maplight.org/wp-content/uploads/20190211224524/Principles-
and-Policies-to-Counter-Deceptive-Digital-Politics-1-1-2.pdf
Reilly, Michael. “Is Facebook Targeting Ads at Sad Teens?” MIT Technology Review, May  1,  2017.
Accessed March 15, 2019. https://www.technologyreview.com/s/604307/is-facebook-
targeting-ads-at-sad-teens/
Riek, Blake M., Eric W. Mania, and Samuel L. Gaertner. “Intergroup Threat and Outgroup
Attitudes: A Meta-Analytic Review.” Personality and Social Psychology Review 10, no. 4
(November 1, 2006): 336–53. doi:10.1207/s15327957pspr1004_4.
Roese, Neal J., and Gerald N. Sande. “Backlash Effects in Attack Politics.” Journal of Applied
Social Psychology 23, no. 8 (1993): 632–53. doi:10.1111/j.1559-1816.1993.tb01106.x.
Rothchild, John. “Against Notice and Choice: The Manifest Failure of the Proceduralist
Paradigm to Protect Privacy Online (or Anywhere Else).” Cleveland State Law Review 66,
no. 3 (May 15, 2018): 559.
“RSA Data Privacy & Security Survey 2019: The Growing Data Disconnect between Consumers
and Businesses.” RSA Security, February 6, 2019. https://www.rsa.com/content/dam/en/
misc/rsa-data-privacy-and-security-survey-2019.pdf.
Schechner, Sam, and Mark Secada. “You Give Apps Sensitive Personal Information. Then They
Tell Facebook.” Wall Street Journal, February 22, 2019, sec. Tech, accessed March 15, 2019.
https://www.wsj.com/articles/you-give-apps-sensitive-personal-information-then-they-tell-
facebook-11550851636.
Shane, Scott, and Alan Blinder. “Democrats Faked Online Push to Outlaw Alcohol in Alabama
Race.” The New York Times, January 7, 2019, sec. U.S., accessed March 15, 2019. https://
www.nytimes.com/2019/01/07/us/politics/alabama-senate-facebook-roy-moore.html.
Shaw, Tamsin. “Invisible Manipulators of Your Mind.” The New York Review of Books, April
20, 2017. Accessed March 15, 2019. http://www.nybooks.com/articles/2017/04/20/
kahneman-tversky-invisible-mind-manipulators/.
Singer, Natasha. “‘Weaponized Ad Technology’: Facebook’s Moneymaker Gets a Critical Eye.”
New York Times, August 16, 2018. Accessed March 15, 2019. https://www.nytimes.
com/2018/08/16/technology/facebook-microtargeting-advertising.html.
Solon, Olivia, and Sabrina Siddiqui. “Forget Wall Street—Silicon Valley Is the New
Political Power in Washington.” The Guardian, September 3, 2017, sec. Technology,
accessed March 15, 2019. https://www.theguardian.com/technology/2017/sep/03/
silicon-valley-politics-lobbying-washington.
Solove, Daniel J. “Introduction: Privacy Self-Management and the Consent Dilemma
Symposium: Privacy and Technology.” Harvard Law Review 126 (2012): 1880–1903.
Speicher, Till, Muhammad Ali, Giridhari Venkatadri, Filipe Nunes Ribeiro, George Arvanitakis,
Fabrício Benevenuto, Krishna P. Gummadi, Patrick Loiseau, and Alan Mislove. “Potential
for Discrimination in Online Targeted Advertising.” Proceedings of Machine Learning
Research 81 (2018): 1–15. http://proceedings.mlr.press/v81/speicher18a/speicher18a.pdf.
Stamos, A. “How the U.S. Has Failed to Protect the 2018 Election—and Four Ways to Protect
2020.” Lawfare, August 22, 2018. Accessed February 12, 2019. https://www.lawfareblog.
com/how-us-has-failed-protect-2018-election-and-four-ways-protect-2020.
Tiku, Nitasha. “Facebook Is Steering Users Away from Privacy Protections.” Wired,
April 18, 2018. Accessed March 15, 2019. https://www.wired.com/story/
facebook-is-steering-users-away-from-privacy-protections/
Tufekci, Zeynep. “Engineering the Public: Big Data, Surveillance and Computational Politics.”
First Monday 19, no. 7 (2014). Accessed March 15, 2019. http://firstmonday.org/article/
view/4901/4097
Turow, Joseph. The Daily You: How the New Advertising Industry Is Defining Your Identity and
Your Worth. New Haven, CT: Yale University Press, 2012.
Turton, William. “We Posed as 100 Senators to Run Ads  on Facebook. Facebook  Approved
All of Them.” Vice News, October 30, 2018. Accessed March 15,  2019.  https://
news.vice.com/en_ca/article/xw9n3q/we-posed-as-100-senators-to-run-ads-on-
facebook-facebook-approved-all-of-them.
United States Federal Trade Commission. “Cross Device Tracking: An FTC Staff Report,”
January 2017. Accessed March 15, 2019. https://www.ftc.gov/system/files/documents/
reports/cross-device-tracking-federal-trade-commission-staff-report-january-2017/ftc_
cross-device_tracking_report_1-23-17.pdf.
Vaidhyanathan, Siva. Antisocial Media: How Facebook Disconnects Us and Undermines Democracy.
New York: Oxford University Press, 2018.
Valentino-DeVries, Jennifer. “Facebook’s Experiment in Ad Transparency Is Like Playing Hide
and Seek.” ProPublica, January 31, 2018. Accessed March 15, 2019. https://www.pro-
publica.org/article/facebook-experiment-ad-transparency-toronto-canada.
Wardle, Claire. “Fake News. It’s Complicated,” February 16, 2017. Accessed March 15, 2019.
https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79.
Warner, M. Potential Policy Proposals for Regulation of Social Media and Technology Firms (White
Paper). U.S. Senate, 2018. Accessed March 15, 2019. https://www.warner.senate.gov/public/_
cache/files/d/3/d32c2f17-cc76-4e11-8aa9-897eb3c90d16/65A7C5D983F899DAAE5AA21F-
57BAD944.social-media-regulation-proposals.pdf
Williamson, Judith. Decoding Advertisements: Ideology and Meaning in Advertising. London:
Calder and Boyars, 1978.
Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New
Frontier of Power. New York: PublicAffairs, 2019.

court case

U.S. v. Internet Research Agency, 18 U.S.C. §§ 2, 371, 1349, 1028A (U.S. Dist., D.C., 2018).
