
Did Media Literacy Backfire?

Anxious about the widespread consumption and spread of propaganda and fake news during this year’s election cycle, many progressives are calling for an increased commitment to media literacy programs. Others are clamoring for solutions that focus on expert fact-checking and labeling. Both of these approaches are likely to fail — not because they are bad ideas, but because they fail to take into consideration the cultural context of information consumption that we’ve created over the last thirty years. The problem on our hands is a lot bigger than most folks appreciate.

CC BY 2.0-licensed photo by CEA+ | Artist: Nam June Paik, “Electronic Superhighway: Continental U.S., Alaska & Hawaii” (1995).

What Are Your Sources?

I remember a casual conversation that I had with a teen girl in the Midwest while I was doing research. I knew her school took an abstinence-only approach to sex ed, but I don’t remember how the topic of pregnancy came up. What I do remember is her telling me that she and her friends talked a lot about pregnancy and “diseases” she could get through sex. As I probed further, she matter-of-factly explained a variety of “facts” she had heard that were completely inaccurate. You couldn’t get pregnant until you were 16. AIDS spreads through kissing. Etc. I asked her if she’d talked to her doctor about any of this, and she looked at me as though I had horns. She explained that she and her friends had done the research themselves, by which she meant that they’d identified websites online that “proved” their beliefs.

For years, that casual conversation has stuck with me as one of the reasons that we needed better Internet-based media literacy. As I detailed in my book It’s Complicated: The Social Lives of Networked Teens, too many students I met were being told that Wikipedia was untrustworthy and were, instead, being encouraged to do research. As a result, the message that many had taken home was to turn to Google and use whatever came up first. They heard that Google was trustworthy and Wikipedia was not.

Understanding what sources to trust is a basic tenet of media literacy education. When educators encourage students to focus on sourcing quality information, they encourage them to critically ask who is publishing the content. Is the venue a respected outlet? What biases might the author have? The underlying assumption in all of this is that there’s universal agreement that major news outlets like the New York Times, scientific journal publications, and experts with advanced degrees are all highly trustworthy.

Think about how this might play out in communities where the “liberal media” is viewed with disdain as an untrustworthy source of information…or in those where science is seen as contradicting the knowledge of religious people…or where degrees are viewed as a weapon of the elite to justify oppression of working people. Needless to say, not everyone agrees on what makes a trusted source.

Students are also encouraged to reflect on economic and political incentives that might bias reporting. Follow the money, they are told. Now watch what happens when they are given a list of names of major power players in the East Coast news media whose names are all clearly Jewish. Welcome to an opening for anti-Semitic ideology.

Empowered Individuals…with Guns

We’ve been telling young people that they are the smartest snowflakes in the world. From the self-esteem movement in the 1980s to the normative logic of contemporary parenting, young people are told that they are lovable and capable and that they should trust their gut to make wise decisions. This sets them up for another great American ideal: personal responsibility.

In the United States, we believe that worthy people lift themselves up by their bootstraps. This is our idea of freedom. What it means in practice is that every individual is supposed to understand finance so well that they can effectively manage their own retirement funds. And every individual is expected to understand their health risks well enough to make their own decisions about insurance. To take away the power of individuals to control their own destiny is viewed as anti-American by so much of this country. You are your own master.

Children are indoctrinated into this cultural logic early, even as their parents restrict their mobility and limit their access to social situations. But when it comes to information, they are taught that they are the sole proprietors of knowledge. All they have to do is “do the research” for themselves and they will know better than anyone what is real.

Combine this with a deep distrust of media sources. If the media is reporting on something, and you don’t trust the media, then it is your responsibility to question their authority, to doubt the information you are being given. If they expend tremendous effort bringing on “experts” to argue that something is false, there must be something there to investigate.

Now think about what this means for #Pizzagate. Across this country, major news outlets went to great effort to challenge conspiracy reports that linked John Podesta and Hillary Clinton to a child trafficking ring supposedly run out of a pizza shop in Washington, DC. Most people never heard the conspiracy stories, but their ears perked up when the mainstream press went nuts trying to debunk these stories. For many people who distrust “liberal” media and were already primed not to trust Clinton, the abundant reporting suggested that there was something to investigate.

Most people who showed up at the Comet Ping Pong pizzeria to see with their own eyes went undetected. But then a guy with a gun decided he “wanted to do some good” and “rescue the children.” He was the first to admit that “the intel wasn’t 100%,” but what he was doing was something that we’ve taught people to do: question the information they’re receiving and find out the truth for themselves.

Experience Over Expertise

Many marginalized groups are justifiably angry about the ways in which their stories have been dismissed by mainstream media for decades. This is most acutely felt in communities of color. And this isn’t just about the past. It took five days for major news outlets to cover Ferguson. It took months and a lot of celebrities for journalists to start discussing the Dakota Access Pipeline. But feeling marginalized by news media isn’t just about people of color. For many Americans who have watched their local newspaper disappear, major urban news reporting appears disconnected from reality. The issues and topics that they feel affect their lives are often ignored.

For decades, civil rights leaders have been arguing for the importance of respecting experience over expertise, highlighting the need to hear the voices of people of color who are so often ignored by experts. This message has taken hold more broadly, particularly among lower and middle class whites who feel as though they are ignored by the establishment. Whites also want their experiences to be recognized, and they too have been pushing for the need to understand and respect the experiences of “the common man.” They see “liberal” “urban” “coastal” news outlets as antithetical to their interests because they quote from experts, use cleaned-up pundits to debate issues, and turn everyday people (e.g., “red sweater guy”) into spectacles for mass enjoyment.

Consider what’s happening in medicine. Many people used to have a family doctor whom they knew for decades and trusted as an individual even more than as an expert. Today, many people see doctors as arrogant and condescending, overly expensive and inattentive to their needs. Doctors lack the time to spend more than a few minutes with patients, and many people doubt that the treatment they’re getting is in their best interest. People feel duped into paying obscene costs for procedures that they don’t understand. Many economists can’t understand why so many people oppose the Affordable Care Act; they don’t recognize that this “socialized” medicine reads as experts over experience to people who trust politicians telling them what’s in their best interest no more than they trust doctors. And public trust in doctors is declining sharply.

Why should we be surprised that most people are getting medical information from their personal social network and the Internet? It’s a lot cheaper than seeing a doctor, and both friends and strangers on the Internet are willing to listen, empathize, and compare notes. Why trust experts when you have at your fingertips a crowd of knowledgeable people who may have had the same experience as you and can help you out?

Consider this dynamic in light of discussions around autism and vaccinations. First, an expert-produced journal article was published linking autism to vaccinations. This resonated with many parents’ experience. Then, other experts debunked the first report, challenged the motivations of the researcher, and engaged in a mainstream media campaign to “prove” that there was no link. What unfolded felt like a war on experience, and a network of parents coordinated to counter this new batch of experts who were widely seen as ignorant, moneyed, and condescending. The more that the media focused on waving away these networks of parents through scientific language, the more the public felt sympathetic to the arguments being made by anti-vaxxers.

Keep in mind that anti-vaxxers aren’t arguing that vaccinations definitively cause autism. They are arguing that we don’t know. They are arguing that experts are forcing children to be vaccinated against their will, which sounds like oppression. What they want is choice — the choice to not vaccinate. And they want information about the risks of vaccination, which they feel are not being given to them. In essence, they are doing what we taught them to do: questioning information sources and raising doubts about the incentives of those who are pushing a single message. Doubt has become a tool.

Grappling with “Fake News”

Since the election, everyone has been obsessed with fake news, as experts blame “stupid” people for not understanding what is “real.” The solutionism around this has been condescending at best. More experts are needed to label fake content. More media literacy is needed to teach people how not to be duped. And if we just push Facebook to curb the spread of fake news, all will be solved.

I can’t help but laugh at the irony of folks screaming up and down about fake news and pointing to the story about how the Pope backs Trump. The reason so many progressives know this story is that it was spread wildly among liberal circles who were citing it as appalling and fake. From what I can gather, it seems as though liberals were far more likely to spread this story than conservatives. What more could you want if you ran a fake news site whose goal was to make money by getting people to spread misinformation? Getting doubters to click on clickbait is far more profitable than getting believers, because doubters are far more likely to spread the content in an effort to dispel it. Win!

CC BY 2.0-licensed photo by Denis Dervisevic.

People believe information that confirms their priors. In fact, if you present them with data that contradicts their beliefs, they will double down on those beliefs rather than integrate the new knowledge into their understanding. This is why first impressions matter. It’s also why asking Facebook to show content that contradicts people’s views will not only increase their hatred of Facebook but also increase polarization across the network. And it’s precisely why so many liberals spread “fake news” stories in ways that reinforce their belief that Trump supporters are stupid and backwards.

Labeling the Pope story as fake wouldn’t have stopped people from believing that story if they were conditioned to believe it. Let’s not forget that the public may find Facebook valuable, but it doesn’t necessarily trust the company. So their “expertise” doesn’t mean squat to most people. Of course, it would be an interesting experiment to run; I do wonder how many liberals wouldn’t have forwarded it along if it had been clearly identified as fake. Would they have not felt the need to warn everyone in their network that conservatives were insane? Would they have not helped fuel a money-making fake news machine? Maybe.

But I think labeling would reinforce polarization, even though it would feel like something was done. Nonbelievers would use the label to reinforce their view that the information is fake (and minimize the spread, which is probably a good thing), while believers would simply ignore the label. But does that really get us to where we want to go?

Addressing so-called fake news is going to require a lot more than labeling. It’s going to require a cultural change about how we make sense of information, whom we trust, and how we understand our own role in grappling with information. Quick and easy solutions may make the controversy go away, but they won’t address the underlying problems.

What Is Truth?

As a huge proponent of media literacy for over a decade, I’m struggling with the ways in which I missed the mark. The reality is that my assumptions and beliefs do not align with those of most Americans. Because of my privilege as a scholar, I get to see how expert knowledge and information is produced, and I have a deep respect for the strengths and limitations of scientific inquiry. Surrounded by journalists and people working to distribute information, I get to see how incentives shape information production and dissemination, and the fault lines of that process. I believe that information intermediaries are important, that honed expertise matters, and that no one can ever be fully informed. As a result, I have long believed that we have to outsource certain matters and trust others to do right by us as individuals and as a society. This is what it means to live in a democracy, but, more importantly, it’s what it means to live in a society.

In the United States, we’re moving towards tribalism, and we’re undoing the social fabric of our country through polarization, distrust, and self-segregation. And whether we like it or not, our culture of doubt and critique, experience over expertise, and personal responsibility is pushing us further down this path.

Media literacy asks people to raise questions and be wary of information that they’re receiving. People are. Unfortunately, that’s exactly why we’re talking past one another.

The path forward is hazy. We need to enable people to hear different perspectives and make sense of a very complicated — and in many ways, overwhelming — information landscape. We cannot fall back on standard educational approaches because the societal context has shifted. We also cannot simply assume that information intermediaries can fix the problem for us, whether they be traditional news media or social media. We need to get creative and build the social infrastructure necessary for people to meaningfully and substantively engage across existing structural lines. This won’t be easy or quick, but if we want to address issues like propaganda, hate speech, fake news, and biased content, we need to focus on the underlying issues at play. No simple band-aid will work.


Special thanks to Amanda Lenhart, Claire Fontaine, Mary Madden, and Monica Bulger for their feedback!

This post was first published as part of a series on media, accountability, and the public sphere.

Hacking the Attention Economy

For most non-technical folks, “hacking” evokes the notion of using sophisticated technical skills to break through the security of a corporate or government system for illicit purposes. Of course, most folks who were engaged in cracking security systems weren’t necessarily in it for espionage and cruelty. In the 1990s, I grew up among teenage hackers who wanted to break into the computer systems of major institutions that were part of the security establishment, just to show that they could. The goal here was to feel a sense of power in a world where they felt pretty powerless. The rush was in being able to do something and feel smarter than the so-called powerful. It was fun and games. At least until they started getting arrested.

Hacking has always been about leveraging skills to push the boundaries of systems. Keep in mind that one early definition of a hacker (from the Jargon File) was “A person who enjoys learning the details of programming systems and how to stretch their capabilities, as opposed to most users who prefer to learn only the minimum necessary.” In another early definition (RFC 1392), a hacker is defined as “A person who delights in having an intimate understanding of the internal workings of a system, computers and computer networks in particular.” Both of these definitions highlight something important: violating the security of a technical system isn’t necessarily the primary objective.

Indeed, over the last 15 years, I’ve watched as countless hacker-minded folks have started leveraging a mix of technical and social engineering skills to reconfigure networks of power. Some are in it for the fun. Some see dollar signs. Some have a much more ideological agenda. But above all, what’s fascinating is how many people have learned to play the game. And in some worlds, those skills are coming home to roost in unexpected ways, especially as groups are seeking to mess with information intermediaries in an effort to hack the attention economy.

CC BY-NC 2.0-licensed photo by artgraff.

It all began with memes… (and porn…)

In 2003, a 15-year-old named Chris Poole started 4chan, an image board site modeled on a popular Japanese one. His goal was not political. Rather, like many of his male teenage peers, he simply wanted a place to share pornography and anime. But as his site’s popularity grew, he ran into a different problem: he couldn’t manage the traffic while storing all of the content. So he decided to delete older content as newer content came in. Users were frustrated that their favorite images disappeared, so they reposted them, often with slight modifications. This gave birth to a phenomenon now understood as “meme culture.” Lolcats are an example: images of cats captioned with a specific font and a consistent grammar, made for entertainment.

Those who produced meme-like images quickly realized that they could spread like wildfire thanks to new types of social media (as well as older tools like blogging). People began producing memes just for fun. But for a group of hacker-minded teenagers who were born a decade after I was, a new practice emerged. Rather than trying to hack the security infrastructure, they wanted to attack the emergent attention economy. They wanted to show that they could manipulate the media narrative, just to show that they could. This was happening at a moment when social media sites were skyrocketing, YouTube and blogs were challenging mainstream media, and pundits were pushing the idea that anyone could control the narrative by being their own media channel. Hell, “You” was TIME Magazine’s Person of the Year in 2006.

Taking a humorous approach, campaigns emerged within 4chan to “hack” mainstream media. For example, many inside 4chan felt that widespread anxieties about pedophilia were exaggerated and sensationalized. They decided to target Oprah Winfrey, who, they felt, was amplifying this fear-mongering. Trolling her online message board, they got her to talk on live TV about how “over 9,000 penises” were raping children. Humored by this success, they then created a broader campaign around a fake character known as Pedobear. In a different campaign, 4chan “/b/tards” focused on gaming the TIME 100 list of “the world’s most influential people” by arranging it such that the first letter of each name on the list spelled out “Marblecake also the game,” a well-known in-joke in this community. Many other campaigns emerged to troll major media and other cultural leaders. And frankly, it was hard not to laugh when everyone started scratching their heads about why Rick Astley’s 1987 song “Never Gonna Give You Up” suddenly became a phenomenon again.

By engaging in these campaigns, participants learned how to shape information within a networked ecosystem. They learned how to design information so that it would spread across social media.

They also learned how to game social media, manipulate its algorithms, and mess with the incentive structure of both old and new media enterprises. They weren’t alone. I watched teenagers throw brand names and Buzzfeed links into their Facebook posts to increase the likelihood that their friends would see their posts in their News Feed. Consultants started working for companies to produce catchy content that would get traction and clicks. Justin Bieber fans ran campaign after campaign to keep Bieber-related topics in Twitter Trending Topics. And the activist group Invisible Children leveraged knowledge of how social media worked to architect the #Kony2012 campaign. All of this was seen as legitimate “social media marketing,” making it hard to detect where the boundaries were between those who were hacking for fun and those who were hacking for profit or other “serious” ends.
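
To make the incentive concrete, here is a deliberately toy sketch in Python. The scoring rule and the “trending terms” are invented for illustration; no real platform’s ranking algorithm is being described. The point is simply that any ranker which boosts posts mentioning hot terms rewards exactly the keyword stuffing described above.

```python
# Hypothetical toy feed ranker (illustrative only; not any real
# platform's algorithm). Posts earn a multiplier for each "hot"
# term they mention, so stuffing hot terms inflates reach.

TRENDING_TERMS = {"buzzfeed", "nike", "bieber"}  # assumed hot terms

def score(post_text: str, base_engagement: float) -> float:
    """Baseline engagement, multiplied up by 50% per trending term used."""
    words = set(post_text.lower().split())
    return base_engagement * (1.0 + 0.5 * len(words & TRENDING_TERMS))

print(score("lunch with friends", 10.0))                # 10.0
print(score("lunch with friends buzzfeed nike", 10.0))  # 20.0, twice the reach
```

Once posters can observe what a ranker rewards, the “organic” signal stops being organic; that feedback loop, not any particular formula, is what made these games work.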

Running campaigns to shape what the public could see was nothing new, but social media created new pathways for people and organizations to get information out to wide audiences. Marketers discussed it as the future of marketing. Activists talked about it as the next frontier for activism. Political consultants talked about it as the future of political campaigns. And a new form of propaganda emerged.

The political side to the lulz

In her phenomenal account of Anonymous — “Hacker, Hoaxer, Whistleblower, Spy” — Gabriella Coleman describes the interplay between different networks of people playing similar hacker-esque games for different motivations. She describes the goofy nature of those “Anons” who created a campaign to expose Scientology, which many believed to be a farcical religion with too much power and political sway. But she also highlights how the issues became more political and serious as WikiLeaks emerged, law enforcement started going after hackers, and the Arab Spring began.

CC BY-SA 3.0-licensed photo by Essam Sharaf via Wikimedia Commons.

Anonymous was birthed out of 4chan, but because of the emergent ideological agendas of many Anons, the norms and tactics started shifting. Some folks were in it for fun and games, but the “lulz” started getting darker and those seeking vigilante justice started using techniques like “doxing” to expose people who were seen as deserving of punishment. Targets changed over time, showcasing the divergent political agendas in play.

Perhaps the most notable turn involved “#GamerGate,” when issues of sexism in the gaming industry erupted into a campaign of harassment targeted at a group of women. Doxing began being used to enable “swatting,” in which false emergency reports called in by perpetrators would result in SWAT teams being sent to targets’ homes. The strategies and tactics that had been used to enable decentralized but coordinated campaigns were now being used by those seeking to use the tools of media and attention to do serious reputational, psychological, economic, and social harm to targets. Although 4chan had long been an “anything goes” environment (with notable exceptions), #GamerGate became taboo there for stepping over the lines.

As #GamerGate unfolded, men’s rights activists began using the situation to push forward a long-standing political agenda to counter feminist ideology, pushing for #GamerGate to be framed as a serious debate as opposed to being seen as a campaign of hate and harassment. In some ways, the resultant media campaign was quite successful: major conferences and journalistic enterprises felt the need to “hear both sides” as though there was a debate unfolding. Watching this, I couldn’t help but think of the work of Frank Luntz, a remarkably effective conservative political consultant known for reframing issues using politicized language.

As doxing and swatting have become more commonplace, another type of harassment has also started to emerge en masse: gaslighting. The term comes from “Gaslight,” a 1944 film starring Ingrid Bergman (based on a 1938 play). The film depicts psychological abuse in a domestic violence context, where the victim starts to doubt reality because of the various actions of the abuser. It is a form of psychological warfare that can work tremendously well in an information ecosystem, especially one where it’s possible to put up information in a distributed way to make it very unclear what is legitimate, what is fake, and what is propaganda. More importantly, as many autocratic regimes have learned, this tactic is fantastic for seeding the public’s doubt in institutions and information intermediaries.

The democratization of manipulation

In the early days of blogging, many of my fellow bloggers imagined that our practice could disrupt mainstream media. For many progressive activists, social media was a tool that could circumvent institutionalized censorship and enable a plethora of diverse voices to speak out and have their say. Civic-minded scholars were excited by “smart mobs” who leveraged new communications platforms to coordinate in a decentralized way and speak truth to power. Arab Spring. Occupy Wall Street. Black Lives Matter. These energized progressives as “proof” that social technologies could make a new form of civic life possible.

I spent 15 years watching teenagers play games with powerful media outlets and attempt to achieve control over their own ecosystem. They messed with algorithms, coordinated information campaigns, and resisted attempts to curtail their speech. Like Chinese activists, they learned to hide their traces when it was to their advantage to do so. They encoded their ideas such that access to content didn’t mean access to meaning.

Of course, it wasn’t just progressive activists and teenagers who were learning how to mess with the media ecosystem that has emerged around social media. We’ve also seen the political establishment, law enforcement, marketers, and hate groups build capacity at manipulating the media landscape. Very little of what’s happening is truly illegal, but there’s no widespread agreement about which of these practices are socially and morally acceptable.

The techniques that are unfolding are hard to manage and combat. Some of them look like harassment, prompting people to self-censor out of fear. Others look like “fake news,” highlighting the messiness surrounding bias, misinformation, disinformation, and propaganda. There is hate speech that is explicit, but there’s also suggestive content that prompts people to frame the world in particular ways. Dog-whistle politics have emerged in a new form of encoded content, where you have to be in the know to understand what’s happening. Companies that built tools to help people communicate are finding it hard to combat the ways their tools are being used by networks looking to skirt the edges of the law and content policies. Institutions and legal instruments designed to stop abuse are finding themselves ill-equipped to function in light of networked dynamics.

The Internet has long been used for gaslighting, and trolls have long targeted adversaries. What has shifted recently is the scale of the operation, the coordination of the attacks, and the strategic agenda of some of the players.

For many who are learning these techniques, it’s no longer simply about fun, nor is it even about the lulz. It has now become about acquiring power.

A new form of information manipulation is unfolding in front of our eyes. It is political. It is global. And it is populist in nature. The news media is being played like a fiddle, while decentralized networks of people are leveraging the ever-evolving networked tools around them to hack the attention economy.

I only wish I knew what happens next.

This post was first published as part of a series on media, accountability, and the public sphere.

This post was also translated to Portuguese.

Facebook Must Be Accountable to the Public

A pair of Gizmodo stories has prompted journalists to ask questions about Facebook’s power to manipulate political opinion in an already heated election year. If the claims are accurate, Facebook contractors have depressed some conservative news, and their curatorial hand affects the Facebook Trending list more than the public realizes. Mark Zuckerberg took to his Facebook page yesterday to argue that Facebook does everything possible to be neutral and that there are significant procedures in place to minimize biased coverage. He also promised to look into the accusations.

Watercolor by John Orlando Parry, “A London Street Scene,” 1835, in the Alfred Dunhill Collection.

As this conversation swirls around intentions and explicit manipulation, some significant issues are missing. First, all systems are biased. There is no such thing as neutrality when it comes to media. That has long been a fiction, one that traditional news media needs and insists on, even as scholars highlight that journalists reveal their biases through everything from small facial twitches to choice of frames and topics of interest. It’s also dangerous to assume that the “solution” is to make sure that “both” sides of an argument are heard equally. This is the source of tremendous conflict around how heated topics like climate change and evolution are covered. It is even more dangerous, however, to think that removing humans and relying more on algorithms and automation will remove this bias.

Recognizing bias, and enabling mechanisms to grapple with it, must be part of any curatorial process, algorithmic or otherwise. As we move into the development of algorithmic models to shape editorial decisions and curation, we need to find a sophisticated way of grappling with the biases that shape development, training sets, quality assurance, and error correction, not to mention explicit acts of “human” judgment.
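
As a deliberately simplified illustration, consider the sketch below (invented data and scoring; it describes no real curation system). A “trending” score learned purely from historical clicks simply replays whatever skew the log already contains; removing the human from the loop removes nothing.

```python
# Hypothetical sketch: training-set bias flowing straight through an
# "automated" curation model. All data here is invented.

from collections import Counter

# Assumed historical engagement log of (topic, clicks). If past
# audiences clicked one kind of story more, that skew becomes the
# model's baseline notion of "relevant".
history = [("celebrity", 900), ("local_politics", 100),
           ("celebrity", 800), ("science", 50)]

clicks = Counter()
for topic, n in history:
    clicks[topic] += n

def trending_score(topic: str) -> float:
    """Score a candidate story by its topic's historical click share."""
    return clicks[topic] / sum(clicks.values())

for t in ("celebrity", "local_politics", "science"):
    print(t, round(trending_score(t), 2))
# celebrity 0.92, local_politics 0.05, science 0.03: the "algorithm"
# replays yesterday's attention distribution as today's neutrality.
```

Every choice upstream of a snippet like this, such as what got logged, how topics were labeled, and which errors were corrected, is a human judgment baked into the supposedly neutral output.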

There never was neutrality, and there never will be.

This issue goes far beyond the Trending box in the corner of your Facebook profile, and this latest wave of concerns is only the tip of the iceberg around how powerful actors can affect or shape political discourse. What is of concern right now is not that human beings are playing a role in shaping the news — they always have — it is the veneer of objectivity provided by Facebook’s interface, the claims of neutrality enabled by the integration of algorithmic processes, and the assumption that what is prioritized reflects only the interests and actions of the users (the “public sphere”) and not those of Facebook, advertisers, or other powerful entities.

The key challenge that emerges out of this debate concerns accountability. In theory, news media is accountable to the public. Like neutrality, this is more a desired goal than something that’s consistently realized, but traditional news media at least has a host of processes in place to address the possibility of manipulation: ombudspeople, whistleblowers, public editors, and myriad alternate media organizations. Facebook and other technology companies have not, historically, been included in that conversation.

I have tremendous respect for Mark Zuckerberg, but I think his stance that Facebook will be neutral as long as he’s in charge is a dangerous statement. This is what it means to be a benevolent dictator, and there are plenty of people around the world who disagree with his values, commitments, and logics. As a progressive American, I have a lot more in common with Mark than not, but I am painfully aware of the neoliberal American value systems that are baked into the very architecture of Facebook and our society as a whole.

Who Controls the Public Sphere in an Era of Algorithms?

In light of this public conversation, I’m delighted to announce that Data & Society has been developing a project that asks who controls the public sphere in an era of algorithms. As part of this process, we convened a workshop and produced a series of documents that we think are valuable to the conversation.

These documents provide historical context, highlight how media has always been engaged in power struggles, showcase the challenges that new media face, and offer case studies that reveal the complexities going forward.

This conversation is by no means over. It is only just beginning. My hope is that we quickly leave the state of fear and start imagining mechanisms of accountability that we, as a society, can live with. Institutions like Facebook have tremendous power and they can wield that power for good or evil. But for society to function responsibly, there must be checks and balances regardless of the intentions of any one institution or its leader.

This work is a part of Data & Society’s developing Algorithms and Publics project, including a set of documents occasioned by the Who Controls the Public Sphere in an Era of Algorithms? workshop.