
The Messy Fourth Estate

(This post was originally posted on Medium.)

For the second time in a week, my phone buzzed with a New York Times alert, notifying me that another celebrity had died by suicide. My heart sank. I tuned into the Crisis Text Line Slack channel to see how many people were waiting for a counselor’s help. Volunteer crisis counselors were pouring in, but the queue kept growing.

Celebrity suicides trigger people who are already on edge to wonder whether or not they too should seek death. Since the Werther effect study in 1974, countless studies have conclusively and repeatedly shown that how the news media reports on suicide matters. The World Health Organization has a detailed set of recommendations for journalists and news media organizations on how to responsibly report on suicide so as not to trigger copycats. Yet in the past few years, few news organizations have bothered to abide by them, even as recent data shows that the reporting on Robin Williams' death triggered a 10 percent increase in suicides and a 32 percent increase in people copying his method of death. The recommendations aren't hard to follow — they focus on how to convey important information without adding to the problem.

Crisis counselors at the Crisis Text Line are on the front lines. As a board member, I’m in awe of their commitment and their willingness to help those who desperately need support and can’t find it anywhere else. But it pains me to watch as elite media amplifiers make counselors’ lives more difficult under the guise of reporting the news or entertaining the public.

Through data, we can see the pain triggered by 13 Reasons Why and the New York Times. We see how salacious reporting on method prompts people to consider that pathway of self-injury. Our volunteer counselors are desperately trying to keep people alive and get them help, while for-profit companies rake in dollars and clicks. If we're lucky, the outlets triggering unstable people write off their guilt by providing a link to our services, with no consideration of how much pain they've caused or the costs we must endure.

I want to believe in journalism. But my faith is waning.

I want to believe in journalism. I want to believe in the idealized mandate of the fourth estate. I want to trust that editors and journalists are doing their best to responsibly inform the public and help create a more perfect union. But my faith is waning.

Many Americans — especially conservative Americans — do not trust contemporary news organizations. This “crisis” is well-trod territory, but the focus on fact-checking, media literacy, and business models tends to obscure three features of the contemporary information landscape that I think are poorly understood:

  1. Differences in worldview are being weaponized to polarize society.
  2. We cannot trust organizations, institutions, or professions when they’re abstracted away from us.
  3. Economic structures built on value extraction cannot enable healthy information ecosystems.

Let me begin by apologizing for the heady article, but the issues that we’re grappling with are too heady for a hot take. Please read this to challenge me, debate me, offer data to show that I’m wrong. I think we’ve got an ugly fight in front of us, and I think we need to get more sophisticated about our thinking, especially in a world where foreign policy is being boiled down to 140 characters.

1. Your Worldview Is Being Weaponized

I was a teenager when I showed up at a church wearing jeans and a T-shirt to see my friend perform in her choir. The pastor told me that I was not welcome because this was a house of God, and we must dress in a manner that honors Him. Not good at following rules, I responded flatly, "God made me naked. Should I strip now?" Needless to say, I did not get to see my friend sing.

Faith is an anchor for many people in the United States, but the norms that surround religious institutions are man-made, designed to help people make sense of the world in which we operate. Many religions encourage interrogation and questioning, but only within a well-established framework. Children learn those boundaries, just as they learn what is acceptable in secular society. They learn that talking about race is taboo and that questioning the existence of God may leave them ostracized.

Like many teenagers before and after me, I was obsessed with taboos and forbidden knowledge. I sought out the music Tipper Gore hated, read the books my school banned, and tried to get answers to any question that made adults gasp. Anonymously, I spent late nights engaged in conversations on Usenet, determined to push boundaries and make sense of adult hypocrisy.

Following a template learned in Model UN, I took on strong positions in order to debate and learn. Having already lost faith in the religious leaders in my community, I saw no reason to respect the dogma of any institution. And because I made a hobby out of proving teachers wrong, I had little patience for the so-called experts in my hometown. I was intellectually ravenous, but utterly impatient with, if not outright cruel to, the adults around me. I rebelled against hierarchy and was determined to carve my own path at any cost.

I have an amazing amount of empathy for those who do not trust the institutions that elders have told them they must respect. Rage against the machine. We don't need no education, no thought control. I'm also fully aware that you don't garner trust in institutions through coercion or rational discussion. Instead, trust often emerges from extreme situations.

Many people have a moment where they wake up and feel like the world doesn't really work like they once thought or like they were once told. That moment of cognitive reckoning is overwhelming. It can be triggered by any number of things — a breakup, a death, depression, a humiliating experience. Everything comes undone, and you feel like you're in the middle of a tornado, unable to find the ground. This is the basis of countless literary classics, the crux of humanity. But it's also a pivotal feature in how a society comes together to function.

Everyone needs solid ground, so when your world has just been destabilized, what comes next matters. Who is the friend that picks you up and helps you put together the pieces? What institution — or its representatives — steps in to help you organize your thinking? What information do you grab onto in order to make sense of your experiences?

Contemporary propaganda isn’t about convincing someone to believe something, but convincing them to doubt what they think they know.

Countless organizations and movements exist to pick you up during your personal tornado and provide structure and a framework. Take a look at how Alcoholics Anonymous works. Other institutions and social bodies know how to trigger that instability and then help you find ground. Check out the dynamics underpinning military basic training. Organizations, movements, and institutions that can manipulate psychological tendencies toward a sociological end have significant power. Religious organizations, social movements, and educational institutions all play this role, whether or not they want to understand themselves as doing so.

Because there is power in defining a framework for people, there is good reason to be wary of any body that pulls people in when they are most vulnerable. Of course, that power is not inherently malevolent. There is fundamental goodness in providing structures to help those who are hurting make sense of the world around them. Where there be dragons is when these processes are weaponized, when these processes are designed to produce societal hatred alongside personal stability. After all, one of the fastest ways to bond people and help them find purpose is to offer up an enemy.

And here's where we're in a sticky spot right now. Many large institutions — government, the church, educational institutions, news organizations — are brazenly asserting their moral authority without grappling with their own shit. They're ignoring those among them who are using hate as a tool, and they're ignoring their own best practices and ethics, all to help feed a bottom line. Each of these institutions justifies itself by blaming someone or something to explain why they're not actually that powerful, why they're actually the victim. And so they're all poised to be weaponized in a cultural war rooted in how we stabilize American insecurity. And if we're completely honest with ourselves, what we're really up against is how we collectively come to terms with a dying empire. But that's a longer tangent.

Any teacher knows that it only takes a few students to completely disrupt a classroom. Forest fires spark easily under certain conditions, and the ripple effects are huge. As a child, when I raged against everyone and everything, it was my mother who held me into the night. When I was a teenager chatting my nights away on Usenet, the two people who most memorably picked me up and helped me find stable ground were a deployed soldier and a transgender woman, both of whom held me as I asked insane questions. They absorbed the impact and showed me a different way of thinking. They taught me the power of strangers counseling someone in crisis. As a college freshman, when I was spinning out of control, a computer science professor kept me solid and taught me how profoundly important a true mentor could be. Everyone needs someone to hold them when their world spins, whether that person be a friend, family, mentor, or stranger.

Fifteen years ago, when parents and the news media were panicking about online bullying, I saw a different risk. I saw countless kids crying out online in pain only to be ignored by those who preferred to prevent teachers from engaging with students online or to create laws punishing online bullies. We saw the suicides triggered as youth tried to make “It Gets Better” videos to find community, only to be further harassed at school. We saw teens studying the acts of Columbine shooters, seeking out community among those with hateful agendas and relishing the power of lashing out at those they perceived to be benefiting at their expense. But it all just seemed like a peculiar online phenomenon, proof that the internet was cruel. Too few of us tried to hold those youth who were unquestionably in pain.

Teens who are coming of age today are already ripe for instability. Their parents are stressed; even if they have jobs, nothing feels certain or stable. There doesn't seem to be a path toward economic stability that doesn't involve college, but there doesn't seem to be a path toward college that doesn't involve mind-bending debt. Opioids seem like a reasonable way to numb the pain in far too many communities. School doesn't seem like a safe place, so teenagers look around and whisper among friends about who they believe to be the most likely shooter in their community. As Stephanie Georgopulos notes, the idea that any institution can offer security seems like a farce.

When I look around at who’s “holding” these youth, I can’t help but notice the presence of people with a hateful agenda. And they terrify me, in no small part because I remember an earlier incarnation.

In 1995, when I was trying to make sense of my sexuality, I turned to various online forums and asked a lot of idiotic questions. I was adopted by the aforementioned transgender woman and numerous other folks who heard me out, gave me pointers, and helped me think through what I felt. In 2001, when I tried to figure out what the next generation did, I realized that struggling youth were more likely to encounter a Christian gay "conversion therapy" group than a supportive queer peer. Queer folks were sick of being attacked by anti-LGBT groups, and so they had created safe spaces on private mailing lists that were hard for lost queer youth to find. And so it was that in their darkest hours, these youth were getting picked up by those with a hurtful agenda.

Teens who are trying to make sense of social issues aren’t finding progressive activists. They’re finding the so-called alt-right.

Fast-forward 15 years, and teens who are trying to make sense of social issues aren't finding progressive activists willing to pick them up. They're finding the so-called alt-right. I can't tell you how many youth we've seen asking questions like I asked being rejected by people identifying with progressive social movements, only to find camaraderie among hate groups. What's most striking is how many people with extreme ideas are willing to spend time engaging with folks who are in the tornado.

Spend time reading the comments below the YouTube videos of youth struggling to make sense of the world around them. You'll quickly find comments by people who spend time in the manosphere or subscribe to white supremacist thinking. They are diving in and talking to these youth, offering a framework to make sense of the world, one rooted in deeply hateful ideas. These self-fashioned self-help actors are grooming people to see that their pain and confusion aren't their fault, but the fault of feminists, immigrants, people of color. They're helping them believe that the institutions they already distrust — the news media, Hollywood, government, school, even the church — are actually working to oppress them.

Most people who encounter these ideas won't embrace them, but some will. Still, even those who don't will never fully let go of the doubt that has been instilled about the institutions around them. It just takes a spark.

So how do we collectively make sense of the world around us? There isn’t one universal way of thinking, but even the act of constructing knowledge is becoming polarized. Responding to the uproar in the news media over “alternative facts,” Cory Doctorow noted:

We're not living through a crisis about what is true, we're living through a crisis about how we know whether something is true. We're not disagreeing about facts, we're disagreeing about epistemology. The "establishment" version of epistemology is, "We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it's all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking)."

The “alternative facts” epistemological method goes like this: “The ‘independent’ experts who were supposed to be verifying the ‘evidence-based’ truth were actually in bed with the people they were supposed to be fact-checking. In the end, it’s all a matter of faith, then: you either have faith that ‘their’ experts are being truthful, or you have faith that we are. Ask your gut, what version feels more truthful?”

Doctorow creates these oppositional positions to make a point and to highlight that there is a war over epistemology, or the way in which we produce knowledge.

The reality is much messier, because what's at stake isn't simply resolving two competing worldviews. Rather, what's at stake is that there is no universal way of knowing, and we have reached a stage in our political climate where there is more power in seeding doubt, destabilizing knowledge, and encouraging others to distrust other systems of knowledge production.

Contemporary propaganda isn't about convincing someone to believe something, but convincing them to doubt what they think they know. And once people's assumptions have come undone, who is going to pick them up and help them create a coherent worldview?

2. You Can’t Trust Abstractions

Deeply committed to democratic governance, George Washington believed that a representative government could only work if the public knew their representatives. As a result, our Constitution set the ratio of representation at no more than one House member for every 30,000 constituents. When we stopped adding representatives to the House in 1913 (frozen at 435), each member represented roughly 225,000 constituents. Today, each member represents more than 700,000 constituents. Most people will never meet their representative, and few feel as though Washington truly represents their interests. The democracy that we have is representational only in ideal, not in practice.
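The arithmetic behind those ratios is simple division. Here is a rough back-of-envelope sketch, assuming rounded census figures of roughly 97 million people in 1913 and 327 million in 2018 (approximations for illustration, not figures from the essay):

    # Rough constituents-per-representative ratios (illustrative only;
    # population figures are approximate census estimates).
    house_seats = 435                  # House size frozen after the 1911 Apportionment Act
    population_1913 = 97_000_000       # approximate U.S. population circa 1913
    population_2018 = 327_000_000      # approximate U.S. population in 2018

    print(population_1913 // house_seats)   # -> 222988, roughly 225,000 per member
    print(population_2018 // house_seats)   # -> 751724, more than 700,000 per member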

As our Founding Fathers knew, it's hard to trust an institution when it feels inaccessible and abstract. All around us, institutions are increasingly divorced from the communities in which they operate, often with devastating costs. Thanks to new models of law enforcement, police officers don't typically come from the community they serve. In many poor communities, teachers also don't come from the community in which they teach. The volunteer U.S. military hardly draws from all communities, and those who don't know a soldier are less likely to trust or respect the military.

Journalism can only function as the fourth estate when it serves as a tool to voice the concerns of the people and to inform those people of the issues that matter. Throughout the 20th century, communities of color challenged mainstream media's limitations and highlighted that few newsrooms represented the diverse backgrounds of their audiences. As such, we saw the rise of ethnic media and a challenge to newsrooms to be smarter about their coverage. But let's be real — even as news organizations articulate a commitment to the concerns of everyone, newsrooms have done a dreadful job of becoming more representative. Over the past decade, we've seen racial justice activists challenge newsrooms for their failure to cover Ferguson, Standing Rock, and other stories that affect communities of color.

Meanwhile, local journalism has nearly died. The success of local journalism didn't just matter because those media outlets reported the news, but because it meant that many more people were likely to know journalists. It's easier to trust an institution when it has a human face that you know and respect. And as fewer and fewer people know journalists, they trust the institution less and less. Meanwhile, the rise of social media, blogging, and new forms of talk radio has meant that countless individuals have stepped in to cover issues not being covered by mainstream news, often using a style and voice that is quite unlike that deployed by mainstream news media.

We’ve also seen the rise of celebrity news hosts. These hosts help push the boundaries of parasocial interactions, allowing the audience to feel deep affinity toward these individuals, as though they are true friends. Tabloid papers have long capitalized on people’s desire to feel close to celebrities by helping people feel like they know the royal family or the Kardashians. Talking heads capitalize on this, in no small part by how they communicate with their audiences. So, when people watch Rachel Maddow or listen to Alex Jones, they feel more connected to the message than they would when reading a news article. They begin to trust these people as though they are neighbors. They feel real.

No amount of drop-in journalism will make up for the loss of journalists within the fabric of local communities.

People want to be informed, but who they trust to inform them is rooted in social networks, not institutions. Trust in institutions stems from trust in people. The loss of the local paper means a loss of trusted journalists and a connection to the practices of the newsroom. As always, people turn to their social networks to get information, but what flows through those social networks is less and less likely to be mainstream news. But here's where you also get an epistemological divide.

As Francesca Tripodi points out, many conservative Christians have developed a media literacy practice that emphasizes the “original” text rather than an intermediary. Tripodi points out that the same type of scriptural inference that Christians apply in Bible study is often also applied to reading the Constitution, tax reform bills, and Google results. This approach is radically different than the approach others take when they rely on intermediaries to interpret news for them.

As the institutional construction of news media becomes more and more proximately divorced from the vast majority of people in the United States, we can and should expect trust in news to decline. No amount of fact-checking will make up for a widespread feeling that coverage is biased. No amount of articulated ethical commitments will make up for the feeling that you are being fed clickbait headlines.

No amount of drop-in journalism will make up for the loss of journalists within the fabric of local communities. And while the population that believes CNN and the New York Times are "fake news" is not demographically representative, the questionable tactics that news organizations use are bound to increase distrust among those who still have faith in them.

3. The Fourth Estate and Financialization Are Incompatible

If you’re still with me at this point, you’re probably deeply invested in scholarship or journalism. And, unless you’re one of my friends, you’re probably bursting at the seams to tell me that the reason journalism is all screwed up is because the internet screwed news media’s business model. So I want to ask a favor: Quiet that voice in your head, take a deep breath, and let me offer an alternative perspective.

There are many types of capitalism. After all, the only thing that defines capitalism is the private control of industry (as opposed to government control). Most Americans have been socialized into believing that all forms of capitalism are inherently good (which, by the way, was a propaganda project). But few are encouraged to untangle the different types of capitalism and different dynamics that unfold depending on which structure is operating.

I grew up in mom-and-pop America, where many people dreamed of becoming small business owners. The model was simple: Go to the bank and get a loan to open a store or a company. Pay back that loan at a reasonable interest rate — knowing that the bank was making money — until eventually you owned the company outright. Build up assets, grow your company, and create something of value that you could pass on to your children.

In the 1980s, franchises became all the rage. Wannabe entrepreneurs saw a less risky path to owning their own business. Rather than having to figure it out alone, you could open a franchise with a known brand and a clear process for running the business. In return, you had to pay some overhead to the parent company. Sure, there were rules to follow and you could only buy supplies from known suppliers and you didn’t actually have full control, but it kinda felt like you did. Like being an Uber driver, it was the illusion of entrepreneurship that was so appealing. And most new franchise owners didn’t know any better, nor were they able to read the writing on the wall when the water all around them started boiling their froggy self. I watched my mother nearly drown, and the scars are still visible all over her body.

I will never forget the U.S. Savings & Loan crisis, not because I understood it, but because it was when I first realized that my Richard Scarry impression of how banks worked was way wrong. Only two decades later did I learn to see the FIRE industries (Finance, Insurance, and Real Estate) as extractive ones. They aren't there to help mom-and-pop companies build responsible businesses, but to extract value from their naiveté. Like today's post-college youth are learning, loans aren't there to help you be smart, but to bend your will.

It doesn't take a quasi-documentary to realize that McDonald's is not a fast-food franchise; it's a real estate business that uses a franchise structure to extract capital from naive entrepreneurs. Go talk to a wannabe restaurant owner in New York City and ask them what it takes to start a business these days. You can't even get a bank loan or lease in 2018 without significant investor backing, which means that the system isn't set up for you to build a business and pay back the bank, pay a reasonable rent, and develop a valuable asset. You are simply a pawn in a financialized game between your investors, the real estate companies, the insurance companies, and the bank, all of which want to extract as much value from your effort as possible. You're just another brick in the wall.

Now let's look at the local news ecosystem. Starting in the 1980s, savvy investors realized that many local newspapers owned prime real estate in the center of key towns. These prized assets would make for great condos and office rentals. Throughout the country, local news shops started getting eaten up by private equity and hedge funds — or consolidated by organizations controlled by the same forces. Media conglomerates sold off their newsrooms as they faced mounting pressure to increase profits quarter over quarter.

Building a sustainable news business was hard enough when the news had a wealthy patron who valued the goals of the enterprise. But the finance industry doesn't care about sustaining the news business; it wants a return on investment. And the extractive financiers who targeted the news business weren't looking to keep the news alive. They wanted to extract as much value from those businesses as possible. Taking a page from McDonald's, they forced the newsrooms to sell their real estate. News organizations then had to rent from new landlords who demanded obscene sums, often forcing them to move out of their buildings. News outlets were forced to reduce staff, produce more junk content, sell more ads, and find countless ways to cut costs. Of course the news suffered — the goal was to push news outlets into bankruptcy or a sale, especially if the companies had pensions or other costs that couldn't be excised.

Yes, the fragmentation of the advertising industry due to the internet hastened this process. And let's also be clear that business models in the news business have never been clean. But no amount of innovative new business models will make up for the fact that you can't sustain responsible journalism within a business structure that requires newsrooms to make more money quarter over quarter to appease investors. This does not mean that you can't build a sustainable news business, but if the news is beholden to investors trying to extract value, it's going to be impossible. And if news companies have no assets to rely on (such as their now-sold real estate), they are fundamentally unstable and likely to engage in unhealthy business practices out of economic desperation.

Untangling our country from this current version of capitalism is going to be as difficult as curbing our addiction to fossil fuels. I’m not sure it can be done, but as long as we look at companies and blame their business models without looking at the infrastructure in which they are embedded, we won’t even begin taking the first steps. Fundamentally, both the New York Times and Facebook are public companies, beholden to investors and desperate to increase their market cap. Employees in both organizations believe themselves to be doing something important for society.

Of course, journalists don’t get paid well, while Facebook’s employees can easily threaten to walk out if the stock doesn’t keep rising, since they’re also investors. But we also need to recognize that the vast majority of Americans have a stake in the stock market. Pension plans, endowments, and retirement plans all depend on stocks going up — and those public companies depend on big investors investing in them. Financial managers don’t invest in news organizations that are happy to be stable break-even businesses. Heck, even Facebook is in deep trouble if it can’t continue to increase ROI, whether through attracting new customers (advertisers and users), increasing revenue per user, or diversifying its businesses. At some point, it too will get desperate, because no business can increase ROI forever.

ROI capitalism isn’t the only version of capitalism out there. We take it for granted and tacitly accept its weaknesses by creating binaries, as though the only alternative is Cold War Soviet Union–styled communism. We’re all frogs in an ocean that’s quickly getting warmer. Two degrees will affect a lot more than oceanfront properties.

Reclaiming Trust

In my mind, we have a hard road ahead of us if we actually want to rebuild trust in American society and its key institutions (which, TBH, I’m not sure is everyone’s goal). There are three key higher-order next steps, all of which are at the scale of the New Deal.

  1. Create a sustainable business structure for information intermediaries (like news organizations) that allows them to be profitable without the pressure of ROI. In the case of local journalism, this could involve subsidized rent, restrictions on types of investors or takeovers, or a smartly structured double bottom-line model. But the focus should be on strategically building news organizations as a national project to meet the needs of the fourth estate. It means moving away from a journalism model that is built on competition for scarce resources (ads, attention) to one that's incentivized by societal benefits.
  2. Actively and strategically rebuild the social networks of America. Create programs beyond the military that incentivize people from different walks of life to come together and achieve something great for this country. This could be connected to job training programs or rooted in community service, but it cannot be done through the government alone or, perhaps, at all. We need the private sector, religious organizations, and educational institutions to come together and commit to designing programs that knit together America while also providing the tools of opportunity.
  3. Find new ways of holding those who are struggling. We don't have a social safety net in America. For many, the church provides the only accessible net when folks are lost and struggling, but we need a lot more. We need to work together to build networks that can catch people when they're falling. We've relied on volunteer labor for a long time in this domain — women, churches, volunteer civic organizations — but our current social configuration makes this extraordinarily difficult. We're in the middle of an opiate crisis for a reason. We need to think smartly about how these structures or networks can be built and sustained so that we can collectively reach out to those who are falling through the cracks.

Fundamentally, we need to stop triggering one another because we're facing our own perceived pain. This means we need to build large-scale cultural resilience. While we may be teaching our children "social-emotional learning" in the classroom, we also need to start taking responsibility at scale. Individually, we need to step back and empathize with others' worldviews and reach out to support those who are struggling. But our institutions also have important work to do.

At the end of the day, if journalistic ethics means anything, newsrooms cannot justify creating spectacle out of their reporting on suicide or other topics just because they feel pressure to create clicks. They have the privilege of choosing what to amplify, and they should focus on what is beneficial. If they can't operate by those values, they don't deserve our trust. While I strongly believe that technology companies have a lot of important work to do to be socially beneficial, I hold news organizations to a higher standard because of their own articulated commitments and expectations that they serve as the fourth estate. And if they can't operationalize ethical practices, I fear the society that must be knitted together to self-govern is bound to fragment even further.

Trust cannot be demanded. It’s only earned by being there at critical junctures when people are in crisis and need help. You don’t earn trust when things are going well; you earn trust by being a rock during a tornado. The winds are blowing really hard right now. Look around. Who is helping us find solid ground?

The case for quarantining extremist ideas

(Joan Donovan and I wrote the following op-ed for The Guardian.) 

When confronted with white supremacists, newspaper editors should consider ‘strategic silence’

 ‘The KKK of the 1920s considered media coverage their most effective recruitment tactic.’ Photograph: Library of Congress

George Lincoln Rockwell, the head of the American Nazi party, had a simple media strategy in the 1960s. He wrote in his autobiography: “Only by forcing the Jews to spread our message with their facilities could we have any hope of success in counteracting their left-wing, racemixing propaganda!”

Campus by campus, from Harvard to Brown to Columbia, he would use the violence of his ideas and brawn of his followers to become headline news. To compel media coverage, Rockwell needed: “(1) A smashing, dramatic approach which could not be ignored, without exposing the most blatant press censorship, and (2) a super-tough, hard-core of young fighting men to enable such a dramatic presentation to the public.” He understood what other groups competing for media attention knew too well: a movement could only be successful if the media amplified their message.

Contemporary Jewish community groups challenged journalists to consider not covering white supremacists’ ideas. They called this strategy “quarantine”, and it involved working with community organizations to minimize public confrontations and provide local journalists with enough context to understand why the American Nazi party was not newsworthy.

In regions where quarantine was deployed successfully, violence remained minimal and Rockwell was unable to recruit new party members. The press in those areas was aware that amplification served the agenda of the American Nazi party, so informed journalists employed strategic silence to reduce public harm.

The Media Manipulation research initiative at the Data & Society Research Institute is concerned precisely with the legacy of this battle in discourse and the way that modern extremists undermine journalists and set media agendas. Media has always had the ability to publish or amplify particular voices, perspectives and incidents. In choosing stories and voices they will or will not prioritize, editors weigh the benefits and costs of coverage against potential social consequences. In doing so, they help create broader societal values. We call this willingness to avoid amplifying extremist messages "strategic silence".

Editors used to engage in strategic silence – set agendas, omit extremist ideas and manage voices – without knowing they were doing so. Yet the online context has enhanced extremists’ abilities to create controversies, prompting newsrooms to justify covering their spectacles. Because competition for audience is increasingly fierce and financially consequential, longstanding newsroom norms have come undone. We believe that journalists do not rebuild reputation through a race to the bottom. Rather, we think that it’s imperative that newsrooms actively take the high ground and re-embrace strategic silence in order to defy extremists’ platforms for spreading hate.

Strategic silence is not a new idea. The Ku Klux Klan of the 1920s considered media coverage their most effective recruitment tactic and accordingly cultivated friendly journalists. According to Felix Harcourt, thousands of readers joined the KKK after the New York World ran a three-week chronicle of the group in 1921. Catholic, Jewish and black presses of the 1920s consciously differed from Protestant-owned mainstream papers in their coverage of the Klan, conspicuously avoiding giving the group unnecessary attention. The black press called this use of editorial discretion in the public interest “dignified silence”, and limited their reporting to KKK follies, such as canceled parades, rejected donations and resignations. Some mainstream journalists also grew suspicious of the KKK’s attempts to bait them with camera-ready spectacles. Eventually coverage declined.

The KKK was so intent on getting the coverage they sought that they threatened violence and white boycotts of advertisers. Knowing they could bait coverage with violence, white vigilante groups of the 1960s staged cross burnings and engaged in high-profile murders and church bombings. Civil rights protesters countered white violence with black stillness, especially during lunch counter sit-ins. Journalists and editors had to make moral choices of which voices to privilege, and they chose those of peace and justice, championing stories of black resilience and shutting out white extremism. This was strategic silence in action, and it saved lives.

The emphasis of strategic silence must be placed on the strategic over the silencing. Every story requires a choice, and the recent turn toward providing equal coverage to dangerous, antisocial opinions requires acknowledging the suffering that such reporting causes. Even attempts to cover extremism critically can result in the media disseminating the methods that hate groups aim to spread, such as when Virginia's Westmoreland News reproduced in full a local KKK recruitment flier on its front page. Media outlets that cannot argue that their reporting benefits the goal of a just and ethical society must opt for silence.

Newsrooms must understand that even with the best of intentions, they can find themselves being used by extremists. By contrast, they must also understand they have the power to defy the goals of hate groups by optimizing for core American values of equality, respect and civil discourse. All Americans have the right to speak their minds, but not every person deserves to have their opinions amplified, particularly when their goals are to sow violence, hatred and chaos.

If telling stories didn't change lives, journalists would never have started their careers. We know that words matter and that coverage makes a difference. In this era of increasing violence and extremism, we appeal to editors to choose strategic silence over publishing stories that fuel the radicalization of their readers.

(Visit the original version at The Guardian to read the comments and help support their organization, as a sign of appreciation for their willingness to publish our work.)

You Think You Want Media Literacy… Do You?

The below original text was the basis for Data & Society Founder and President danah boyd's March 2018 SXSW Edu keynote, "What Hath We Wrought?" — Ed.

Growing up, I took certain truths to be self-evident. Democracy is good. War is bad. And of course, all men are created equal.

My mother was a teacher who encouraged me to question everything. But I quickly learned that some questions were taboo. Is democracy inherently good? Is the military ethical? Does God exist?

I loved pushing people’s buttons with these philosophical questions, but they weren’t nearly as existentially destabilizing as the moments in my life in which my experiences didn’t line up with frames that were sacred cows in my community. Police were revered, so my boss didn’t believe me when I told him that cops were forcing me to give them free food, which is why there was food missing. Pastors were moral authorities and so our pastor’s infidelities were not to be discussed, at least not among us youth. Forgiveness is a beautiful thing, but hypocrisy is destabilizing. Nothing can radicalize someone more than feeling like you’re being lied to. Or when the world order you’ve adopted comes crumbling down.

The funny thing about education is that we ask our students to challenge their assumptions. And that process can be enlightening.

The funny thing about education is that we ask our students to challenge their assumptions. And that process can be enlightening. I will never forget being a teenager and reading "A People's History of the United States." The idea that there could be multiple histories, multiple truths blew my mind. Realizing that history is written by the winners shook me to my core. This is the power of education. But the hole that opens up, the one that invites people to look for new explanations, can be filled in deeply problematic ways. When we ask students to challenge their sacred cows but don't give them a new framework through which to make sense of the world, others are often there to do it for us.

For the last year, I've been struggling with media literacy. I have a deep level of respect for the primary goal. As Renee Hobbs has written, media literacy is the "active inquiry and critical thinking about the messages we receive and create." The field talks about the development of competencies or skills to help people analyze, evaluate, and even create media. Media literacy is imagined to be empowering, enabling individuals to have agency and giving them the tools to help create a democratic society. But fundamentally, it is a form of critical thinking that asks people to doubt what they see. And that makes me nervous.

Most media literacy proponents tell me that media literacy doesn't exist in schools. And it's true that the ideal version that they're aiming for definitely doesn't. But I spent a decade in and out of all sorts of schools in the US, where I quickly learned that a perverted version of media literacy does already exist. Students are asked to distinguish between CNN and Fox. Or to identify bias in a news story. When tech is involved, it often comes in the form of "don't trust Wikipedia; use Google." We might collectively dismiss these practices as not-media-literacy, but these activities are often couched in those terms.

I’m painfully aware of this, in part because media literacy is regularly proposed as the “solution” to the so-called “fake news” problem. I hear this from funders and journalists, social media companies and elected officials. My colleagues Monica Bulger and Patrick Davison just released a report on media literacy in light of “fake news” given the gaps in current conversations. I don’t know what version of media literacy they’re imagining but I’m pretty certain it’s not the CNN vs Fox News version. Yet, when I drill in, they often argue for the need to combat propaganda, to get students to ask where the money is coming from, to ask who is writing the stories for what purposes, to know how to fact-check, etcetera. And when I push them further, I often hear decidedly liberal narratives. They talk about the Mercers or about InfoWars or about the Russians. They mock “alternative facts.” While I identify as a progressive, I am deeply concerned by how people understand these different conservative phenomena and what they see media literacy as solving.

I get that many progressive communities are panicked about conservative media, but we live in a polarized society and I worry about how people judge those they don't understand or respect. It also seems to me that the narrow version of media literacy that I hear proposed as the "solution" is supposed to magically solve our political divide. It won't. More importantly, as I'm watching social media and news media get weaponized, I'm deeply concerned that the well-intended interventions I hear people propose will backfire, because I'm fairly certain that the crass versions of critical thinking already have.

New Data & Society report on media literacy by Monica Bulger and Patrick Davison

My talk today is intended to interrogate some of the foundations upon which educating people about the media landscape depends. Rather than coming at this from the idealized perspective, I am trying to come at this from the perspective of where good intentions might go awry, especially in a moment in which narrow versions of media literacy and critical thinking are being proposed as the solution to major socio-cultural issues. I want to examine the instability of our current media ecosystem and then return to the question: what kind of media literacy should we be working towards? So let's dig in.

Epistemological Warfare

In 2017, sociologist Francesca Tripodi was trying to understand how conservative communities made sense of the seemingly contradictory words coming out of the mouth of the US President. Along her path, she encountered people talking about making sense of The Word when referencing his speeches. She began accompanying people in her study to their bible study groups. Then it clicked. Trained on critically interrogating biblical texts, evangelical conservative communities were not taking Trump's messages as literal text. They were interpreting their meanings using the same epistemological framework with which they approached the bible. Metaphors and constructs matter more than the precision of words.

Why do we value precision in language? I sat down for breakfast with Gillian Tett, a Financial Times journalist and anthropologist. She told me that when she first moved to the States from the UK, she was confounded by our inability to talk about class. She was trying to make sense of what distinguished class in America. In her mind, it wasn't race. Or education. It came down to what construction of language was respected and valued by whom. People became elite by mastering the language marked as elite. Academics, journalists, corporate executives, traditional politicians: they all master the art of communication. I did too. I will never forget being accused of speaking like an elite by my high school classmates when I returned home after a semester of college. More importantly, although it's taboo in America to be explicitly condescending towards people on the basis of race or education, there's no social cost among elites to mock someone for an inability to master language. For using terms like "shithole."

Linguistic and communications skills are not universally valued. Those who do not define themselves through this skill loathe hearing the never-ending parade of rich and powerful people suggesting that they’re stupid, backwards, and otherwise lesser. Embracing being anti-PC has become a source of pride, a tactic of resistance. Anger boils over as people who reject “the establishment” are happy to watch the elites quiver over their institutions being dismantled. This is why this is a culture war. Everyone believes they are part of the resistance.

But what's at the root of this cultural war? Cory Doctorow got me thinking when he wrote the following:

We're not living through a crisis about what is true, we're living through a crisis about how we know whether something is true. We're not disagreeing about facts, we're disagreeing about epistemology. The "establishment" version of epistemology is, "We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it's all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking)."

The "alternative facts" epistemological method goes like this: "The 'independent' experts who were supposed to be verifying the 'evidence-based' truth were actually in bed with the people they were supposed to be fact-checking. In the end, it's all a matter of faith, then: you either have faith that 'their' experts are being truthful, or you have faith that we are. Ask your gut, what version feels more truthful?"

Let’s be honest — most of us educators are deeply committed to a way of knowing that is rooted in evidence, reason, and fact. But who gets to decide what constitutes a fact? In philosophy circles, social constructivists challenge basic tenets like fact, truth, reason, and evidence. Yet, it doesn’t take a doctorate of philosophy to challenge the dominant way of constructing knowledge. Heck, 75 years ago, evidence suggesting black people were biologically inferior was regularly used to justify discrimination. And this was called science!

In many Native communities, experience trumps Western science as the key to knowledge. These communities have a different way of understanding topics like weather or climate or medicine. Experience is also used in activist circles as a way of seeking truth and challenging the status quo. Experience-based epistemologies also rely on evidence, but not the kind of evidence that would be recognized or accepted by those in Western scientific communities.

Those whose worldview is rooted in religious faith, particularly Abrahamic religions, draw on different types of information to construct knowledge. Resolving scientific knowledge and faith-based knowledge has never been easy; this tension has countless political and social ramifications. As a result, American society has long danced around this yawning gulf and tried to find solutions that can appease everyone. But you can’t resolve fundamental epistemological differences through compromise.

No matter what worldview or way of knowing someone holds dear, they always believe that they are engaging in critical thinking when developing a sense of what is right and wrong, true and false, honest and deceptive. But much of what they conclude may be more rooted in their way of knowing than any specific source of information.

If we're not careful, "media literacy" and "critical thinking" will simply be deployed as an assertion of authority over epistemology.

Right now, the conversation around fact-checking has already devolved to suggest that there's only one truth. And we have to recognize that there are plenty of students who are taught that there's only one legitimate way of knowing, one accepted worldview. This is particularly dicey at the collegiate level, where we professors have been taught nothing about how to teach across epistemologies.

Personally, it took me a long time to recognize the limits of my teachers. Like many Americans in less-than-ideal classrooms, I was taught that history was a set of facts to be memorized. When I questioned those facts, I was sent to the principal's office for disruption. Frustrated and confused, I thought that I was being force-fed information for someone else's agenda. Now I can recognize that that teacher was simply exhausted, underpaid, and waiting for retirement. But it took me a long time to realize that there was value in history and that history is a powerful tool.

Weaponizing Critical Thinking

The communication scholar Deen Freelon was trying to make sense of the role of critical thinking in addressing "fake news." He ended up looking back at a fascinating campaign by Russia Today (known as RT). Their motto for a while was "question more." They produced a series of advertisements as teasers for their channel. These advertisements were promptly banned in the US and UK, prompting RT to put up additional ads about the ban and earning the channel tremendous mainstream media coverage. What was so controversial? Here's an example:

“Just how reliable is the evidence that suggests human activity impacts on climate change? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.”

If you don’t start from a place where you’re confident that climate change is real, this sounds quite reasonable. Why wouldn’t you want more information? Why shouldn’t you be engaged in critical thinking? Isn’t this what you’re encouraged to do at school? So why is asking this so taboo? And lest you think that this is a moment to be condescending towards climate deniers, let me offer another one of their ads.

“Is terror only committed by terrorists? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.”

Many progressive activists ask whether or not the US government commits terrorism in other countries. The ads all came down because they were too political, but RT got what they wanted: an effective ad campaign. They didn't come across as conservative or liberal, but rather as a media entity that was "censored" for asking questions. Furthermore, by covering the fact that they were banned, major news media legitimized their frame under the rubric of "free speech," on the assumption that everyone should have the right to know and to decide for themselves.

We live in a world now where we equate free speech with the right to be amplified. Does everyone have the right to be amplified? Social media gave us that infrastructure under the false imagination that if we were all gathered in one place, we’d find common ground and eliminate conflict. We’ve seen this logic before. After World War II, the world thought that connecting the globe through financial interdependence would prevent World War III. It’s not clear that this logic will hold.

For better and worse, by connecting the world through social media and allowing anyone to be amplified, information can spread at record speed. There is no true curation or editorial control. The onus is on the public to interpret what they see. To self-investigate. Since we live in a neoliberal society that prioritizes individual agency, we double down on media literacy as the "solution" to misinformation. It's up to each of us as individuals to decide for ourselves whether or not what we're getting is true.

Figure 1

Yet, if you talk with someone who has posted clear, unquestionable misinformation, more often than not, they know it’s bullshit. Or they don’t care whether or not it’s true. Why do they post it then? Because they’re making a statement. The people who posted this meme (figure 1) didn’t bother to fact check this claim. They didn’t care. What they wanted to signal loud and clear is that they hated Hillary Clinton. And that message was indeed heard loud and clear. As a result, they are very offended if you tell them that they’ve been duped by Russians into spreading propaganda. They don’t believe you for one second.

Misinformation is contextual. Most people believe that people they know are gullible to false information, but that they themselves are equipped to separate the wheat from the chaff. There's widespread sentiment that we can fact check and moderate our way out of this conundrum. This will fail. Don't forget that for many people in this country, both education and the media are seen as the enemy — two institutions that are trying to have power over how people think. Two institutions that are trying to assert authority over epistemology.

Finding the Red Pill

Growing up on Usenet, I came to know Godwin's Law as more than an adage. I spent countless nights lured into conversation by the idea that someone was wrong on the internet. And I long ago lost count of how many of them ended up with someone invoking Hitler or the Holocaust. I might have even been to blame in some of these conversations.

Fast forward 15 years to the point when Nathan Poe wrote a poignant comment on an online forum dedicated to Christianity: "Without a winking smiley or other blatant display of humor, it is utterly impossible to parody a Creationist in such a way that someone won't mistake for the genuine article." Poe's Law, as it became known, signals that it's hard to tell the difference between an extreme view and a parody of an extreme view on the internet.

In their book, "The Ambivalent Internet," media studies scholars Whitney Phillips and Ryan Milner highlight how a segment of society has become so well-versed at digital communications — memes, GIFs, videos, etc. — that they can use these tools of expression to fundamentally destabilize others' communication structures and worldviews. It's hard to tell what's real and what's fiction, what's cruel and what's a joke. But that's the point. That is how irony and ambiguity can be weaponized. And for some, the goal is simple: dismantle the very foundations of elite epistemological structures that are so deeply rooted in fact and evidence.

Many people, especially young people, turn to online communities to make sense of the world around them. They want to ask uncomfortable questions, interrogate assumptions, and poke holes at things they’ve heard. Welcome to youth. There are some questions that are unacceptable to ask in public and they’ve learned that. But in many online fora, no question or intellectual exploration is seen as unacceptable. To restrict the freedom of thought is to censor. And so all sorts of communities have popped up for people to explore questions of race and gender and other topics in the most extreme ways possible. And these communities have become slippery. Are those taking on such hateful views real? Or are they being ironic?

In the 1999 film The Matrix, Morpheus says to Neo: “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.” Most youth aren’t interested in having the wool pulled over their heads, even if blind faith might be a very calming way of living. Restricted in mobility and stressed to holy hell, they want to have access to what’s inaccessible, know what’s taboo, and say what’s politically incorrect. So who wouldn’t want to take the red pill?

Image via Warner Bros.

In some online communities, taking the red pill refers to the idea of waking up to how education and media are designed to deceive you into progressive propaganda. In these environments, visitors are asked to question more. They’re invited to rid themselves of their politically correct shackles. There’s an entire online university designed to undo accepted ideas about diversity, climate, and history. Some communities are even more extreme in their agenda. These are all meant to fill in the gaps for those who are open to questioning what they’ve been taught.

In 2012, it was hard to avoid the names Trayvon Martin and George Zimmerman, but that didn’t mean that most people understood the storyline. In South Carolina, a white teenager who wasn’t interested in the news felt like he needed to know what the fuss was all about. He decided to go to Wikipedia to understand more. He was left with the impression that Zimmerman was clearly in the right and disgusted that everyone was defending Martin. While reading up on this case, he ran across the term “black on white crime” on Wikipedia and decided to throw that term into Google, where he encountered a deeply racist website inviting him to wake up to a reality that he had never considered. He took that red pill and dove deep into a worldview whose theory of power positioned white people as victims. Over a matter of years, he began to embrace those views, to be radicalized toward extreme thinking. On June 17, 2015, he sat down for an hour with a group of African-American churchgoers in Charleston, South Carolina, before opening fire on them, killing nine and injuring one. His goal was simple: he wanted to start a race war.

It’s easy to say that this domestic terrorist was insane or irrational, but he began his exploration trying to critically interrogate the media coverage of a story he didn’t understand. That led him to online fora filled with people who have spent decades working to indoctrinate people into a deeply troubling, racist worldview. They draw on endless amounts of “evidence,” engage in deeply persuasive discursive practices, and have the mechanisms to challenge countless assumptions. The difference between what is deemed missionary work, education, and radicalization depends a lot on your worldview. And your understanding of power.

Who Do You Trust?

The majority of Americans do not trust the news media. There are many explanations for this — loss of local news, financial incentives, the difficulty of distinguishing opinion from reporting, etc. But what does it mean to encourage people to be critical of the media’s narratives when they are already predisposed against the news media?

Perhaps you want to encourage people to think critically about how information is constructed, who is paying for it, and what is being left out. Yet, among those whose prior is to not trust a news media institution, among those who see CNN and The New York Times as “fake news,” they’re already there. They’re looking for flaws. It’s not hard to find them. After all, the news industry is made up of people, in institutions, in a society. So when youth are encouraged to be critical of the news media, they come away thinking that the media is lying. Depending on someone’s prior, they may even take what they learn to be proof that the media is in on the conspiracy. That’s where things get very dicey.

Many of my digital media and learning colleagues encourage people to make media to help them understand how information is produced. Realistically, many young people have learned these skills outside the classroom as they seek to represent themselves on Instagram, get their friends excited about a meme, or gain followers on YouTube. Many are quite skilled at using media, but to what end? Every day, I watch teenagers produce anti-Semitic and misogynistic content using the same tools that activists use to combat prejudice. It’s notable that many of those who are espousing extreme viewpoints are extraordinarily skilled at using media. Today’s neo-Nazis are a digital propaganda machine. Developing media-making skills doesn’t guarantee that someone will use them for good. This is the hard part.

Most of my peers think that if more people are skilled and more people are asking hard questions, goodness will see the light. In talking about misunderstandings of the First Amendment, Nabiha Syed of Buzzfeed highlights that the frame of the “marketplace of ideas” sounds great, but is extremely naive. Doubling down on investing in individuals as a solution to a systemic abuse of power is very American. But the best ideas don’t always surface to the top. Nervously, many of us tracking manipulation of media are starting to think that adversarial messages are far more likely to surface than well-intended ones.

This is not to say that we shouldn’t try to educate people. Or that producing critical thinkers is inherently a bad thing. I don’t want a world full of sheeple. But I also don’t want to make naive assumptions about what media literacy can do in responding to a culture war that is already underway. I want us to grapple with reality, not just the ideals that we imagine we could maybe one day build.

It’s one thing to talk about interrogating assumptions when a person can keep emotional distance from the object of study. It’s an entirely different thing to talk about these issues when the very act of asking questions is what’s being weaponized. This isn’t historical propaganda distributed through mass media. Or an exercise in understanding state power. This is about making sense of an information landscape where the very tools that people use to make sense of the world around them have been strategically perverted by other people who believe themselves to be resisting the same powerful actors that we normally seek to critique.

Take a look at the graph above. Can you guess what search term this is? This is the search query for “crisis actors.” This concept emerged as a conspiracy theory after Sandy Hook. Online communities worked hard to get this to land with the major news media after each shooting. With Parkland, they finally succeeded. Every major news outlet is now talking about crisis actors, as though it’s a real thing, or something to be debunked. When teenage witnesses of the mass shooting in Parkland speak to journalists these days, they have to now say that they are not crisis actors. They must negate a conspiracy theory that was created to dismiss them. A conspiracy theory that undermines their message from the get-go. And because of this, many people have turned to Google and Bing to ask what a crisis actor is. They quickly get to the Snopes page. Snopes provides a clear explanation of why this is a conspiracy. But you are now asked to not think of an elephant.

You may just dismiss this as craziness, but getting this narrative into the media was designed to help radicalize more people. Some number of people will keep researching, trying to understand what the fuss is all about. They’ll find online fora discussing the images of a brunette woman and ask themselves if it might be the same person. They will try to understand the fight between David Hogg and Infowars or question why Infowars is being restricted by YouTube. They may think this is censorship. Seeds of doubt will start to form. And they’ll ask whether or not any of the articulate people they see on TV might actually be crisis actors. That’s the power of weaponized narratives.

One of the main goals for those who are trying to manipulate media is to pervert the public’s thinking. It’s called gaslighting. Do you trust what is real? One of the best ways to gaslight the public is to troll the media. By forcing the news media into negating frames, manipulators can rely on the fact that people who distrust the media often respond by self-investigating. This is the power of the boomerang effect. And it has a history. After all, the CDC realized that the more the news media negated the connection between autism and vaccination, the more the public believed there was something real there.

In 2016, I watched networks of online participants test this theory through an incident now known as Pizzagate. They worked hard to get the news media to negate the conspiracy theory, believing that this would prompt more people to try to research if there was something real there. They were effective. The news media covered the story to negate it. Lots of people decided to self-investigate. One guy even showed up with a gun.

Still from the trailer for “Gaslight”

The term “gaslighting” originates in the context of domestic violence. It refers back to a 1944 movie called Gaslight, in which a woman is manipulated by her husband in a way that leaves her thinking she’s crazy. It’s a very effective technique of control. It makes someone submissive and disoriented, unable to respond to the relationship productively. While many anti-domestic violence activists argue that the first step is to understand that gaslighting exists, the “solution” is not to fight back against the person doing the gaslighting. Instead, it’s to get out. Furthermore, anti-domestic violence experts argue that recovery from gaslighting is a long and arduous process, requiring therapy. They recognize that once instilled, self-doubt is hard to overcome.

While we have many problems in our media landscape, the most dangerous is how it is being weaponized to gaslight people.

And unlike the domestic violence context, there is no “getting out” that is really possible in a media ecosystem. Sure, we can talk about going off the grid and opting out of social media and news media, but c’mon now.

The Cost of Triggering

In 2017, Netflix released a show called 13 Reasons Why. Before parents and educators had even heard of the darn show, millions of teenagers had watched it. For most viewers, it was a fascinating show. The storyline was enticing, the acting was phenomenal. But I’m on the board of Crisis Text Line, an amazing service where people around this country talk with trained counselors via text message when they’re in a crisis. Before the news media even began talking about the show, we started to see the impact. After all, the premise of the show is that a teen girl died by suicide and left behind 13 tapes explaining how people had bullied her to justify her decision.

At Crisis Text Line, we do active rescues every night. This means that we send emergency personnel to the home of someone who is in the middle of a suicide attempt in an effort to save their life. Sometimes, we succeed. Sometimes, we don’t. It’s heartbreaking work. As word of 13 Reasons Why got out and people started watching the show, our numbers went through the roof. We were drowning in young people referencing the show, signaling how it had given them a framework for ending their lives. We panicked. All hands on deck. As we got things under control, I got angry. What the hell was Netflix thinking?

Researchers know the data on suicide and media. The more the media normalizes suicide, the more suicide is put into people’s heads as a possibility, the more people who are on the edge start to take it seriously and consider it for themselves. After early media effects research was published, journalists developed best practices to minimize their coverage of suicide. As Joan Donovan often discusses, this form of “strategic silence” was viable in earlier media landscapes; it’s a lot harder now. Today, journalists and media makers feel as though the fact that anyone can talk about suicide on the internet means that they should have the right to do so too.

We know that you can’t combat depression through rational discourse. Addressing depression is hard work. And I’m deeply concerned that we don’t have the foggiest clue how to approach the media landscape today. I’m confident that giving grounded people tools to think smarter can be effective. But I’m not convinced that we know how to educate people who do not share our epistemological frame. I’m not convinced that we know how to undo gaslighting. I’m not convinced that we understand how engaging people about the media intersects with those struggling with mental health issues. And I’m not convinced that we’ve even begun to think about the unintended consequences of our good — let alone naive — intentions.

In other words, I think that there are a lot of assumptions baked into how we approach educating people about sensitive issues and our current media crisis has made those painfully visible.

Oh, and by the way, the Netflix TV show ends by setting up Season 2 to start with a school shooting. WTF, Netflix?

Pulling Back Out

So what role do educators play in grappling with the contemporary media landscape? What kind of media literacy makes sense? To be honest, I don’t know. But it’s unfair to end a talk like this without offering some path forward so I’m going to make an educated guess.

I believe that we need to develop antibodies to help people not be deceived.

That’s really tricky because most people like to follow their gut more than their mind. No one wants to hear that they’re being tricked. Still, I think there might be some value in helping people understand their own psychology.

Consider the power of nightly news and talk radio personalities. If you bring Sean Hannity, Rachel Maddow, or any other host into your home every night, you start to appreciate how they think. You may not agree with them, but you build a cognitive model of their words such that they have a coherent logic to them. They become real to you, even if they don’t know who you are. This is what scholars call parasocial interaction. And the funny thing about human psychology is that we trust people we invest our energies in understanding. That’s why bridging difference requires humanizing people across viewpoints.

Empathy is a powerful emotion, one that most educators want to encourage. But when you start to empathize with worldviews that are toxic, it’s very hard to stay grounded. It requires deep cognitive strength. Scholars who spend a lot of time trying to understand dangerous worldviews work hard to keep their emotional distance. One very basic tactic is to separate the different signals. Just read the text rather than consuming the full multimedia presentation of it. Narrow the scope. Actively taking things out of context can be helpful for analysis precisely because it creates a cognitive disconnect. This is the opposite of how most people encourage everyday analysis of media, where the goal is to appreciate the context first. Of course, the trick here is wanting to keep that emotional distance. Most people aren’t looking for that.

I also believe that it’s important to help students truly appreciate epistemological differences. In other words, why do people from different worldviews interpret the same piece of content differently? Rather than thinking about the intention behind the production, let’s analyze the contradictions in the interpretation. This requires developing a strong sense of how others think and where the differences in perspective lie. From an educational point of view, this means building the capacity to truly hear and embrace someone else’s perspective and teaching people to understand another’s view while also holding their own view firm. It’s hard work, an extension of empathy into a practice that is common among ethnographers. It’s also a skill that is honed in many debate clubs. The goal is to understand the multiple ways of making sense of the world and use that to interpret media. Of course, appreciating the view of someone who is deeply toxic isn’t always psychologically stabilizing.

Still from “Selective Attention Test”

Another thing I recommend is to help students see how they fill in gaps when the information presented to them is sparse, and how hard it is to overcome priors. Conversations about confirmation bias matter here because we need to understand what information we accept and what information we reject. Selective attention is another tool, most famously shown to students through the “gorilla experiment.” If you aren’t familiar with this experiment, it involves showing a basketball video, asking viewers to count the passes made by players wearing one color of shirt, and then asking whether they saw the gorilla. Many people do not. Inverting these cognitive science exercises, asking students to consider different fan fiction that fills in the gaps of a story with divergent explanations is another way to train someone to recognize how their brain fills in gaps.

What’s common about the different approaches I’m suggesting is that they are designed to be cognitive strengthening exercises, to help students recognize their own fault lines, not the fault lines of the media landscape around them. I can imagine that this too could be called media literacy, and if you want to bend your definition that way, I’ll accept it. But the key is to realize the humanity in ourselves and in others. We cannot and should not assert authority over epistemology, but we can encourage our students to be more aware of how interpretation is socially constructed. And to understand how that can be manipulated. Of course, just because you know you’re being manipulated doesn’t mean that you can resist it. And that’s where my proposal starts to get shaky.

Let’s be honest — our information landscape is going to get more and more complex. Educators have a critical role to play in helping individuals and societies navigate what we encounter. But the path forward isn’t about doubling down on what constitutes a fact or teaching people to assess sources. Rebuilding trust in institutions and information intermediaries is important, but we can’t assume the answer is teaching students to rely on those signals. The first wave of media literacy was responding to propaganda in a mass media context. We live in a world of networks now. We need to understand how those networks are intertwined and how information that spreads through dyadic — even if asymmetric — encounters is understood and experienced differently than that which is produced and disseminated through mass media.

Above all, we need to recognize that information can be, is being, and will be weaponized in new ways. Today’s propagandist messages are no longer simply created by Madison Avenue or Edward Bernays-style state campaigns. For the last 15 years, a cohort of young people has learned how to hack the attention economy in an effort to have power and status in this new information ecosystem. These aren’t just any youth. They are young people who are disenfranchised, who feel as though the information they’re getting isn’t fulfilling, who struggle to feel powerful. They are trying to make sense of an unstable world and trying to respond to it in a way that is personally fulfilling. Most youth are engaged in invigorating activities. Others are doing the same things youth have always done. But there are youth out there who feel alienated and disenfranchised, who distrust the system and want to see it all come down. Sometimes, this frustration leads to productive ends. Often it does not. But until we start understanding their response to our media society, we will not be able to produce responsible interventions. So I would argue that we need to start developing a networked response to this networked landscape. And it starts by understanding different ways of constructing knowledge.


Special thanks to Monica Bulger, Mimi Ito, Whitney Phillips, Cathy Davidson, Sam Hinds Garcia, Frank Shaw, and Alondra Nelson for feedback.


Update (March 16, 2018): I crafted some responses to the most common criticisms I’ve received to date about this work here. (Also, the original version of this blog post was published on Medium.)

The Reality of Twitter Puffery. Or Why Does Everyone Now Hate Bots?

(This was originally posted on NewCo Shift.)

A friend of mine worked for an online dating company whose audience was predominantly hetero 30-somethings. At some point, they realized that a large number of the “female” accounts were actually bait for porn sites and 1–900 numbers. I don’t remember if users complained or if they found it themselves, but they concluded that they needed to get rid of these fake profiles. So they did.

And then their numbers started dropping. And dropping. And dropping.

To understand why, the company sent in researchers. What they learned was that hot men were attracted to the site because there were women there they felt were out of their league. Most of these hot men didn’t really aim for these ultra-hot women, because they felt like they would be inaccessible, but they were happy to talk with women who they saw as being one rung down (as in actual hot women). These hot women, meanwhile, were excited to have these hot men (who they saw as equals) on the site. They also felt that, since there were women hotter than them, this was a site for them. When the company removed the fakes, the hot men felt the site was no longer for them. They disappeared. And then so did the hot women. Etc. The weirdest part? They reintroduced decoy profiles (not as redirects to porn but as fake women who just didn’t respond) and slowly folks came back.

Why am I telling you this story? Fake accounts and bots on social media are not new. Yet, in the last couple of weeks, there’s been newfound hysteria around Twitter bots and fake accounts. I find it deeply problematic that folks are saying that having fake followers is inauthentic. This is like saying that makeup is inauthentic. What is really going on here?

From Fakesters to Influencers

From the earliest days of Friendster and MySpace, people liked to show how cool they were by how many friends they had. As Alice Marwick eloquently documented, self-branding and performing status were the name of the game for many in the early days of social media. This hasn’t changed. People made entire careers out of appearing to be influential, not just actually being influential. Of course, a market emerged around this so that people could buy and sell followers, friends, likes, comments, etc. Indeed, this is standard practice, especially in the wink-nudge world of Instagram, where monetized content is the game and so-called organic “macroinfluencers,” who can easily double their follower size through bots, are more than happy to be followed by bots, paid or not.

Some sites have tried to get rid of fake accounts. Indeed, Friendster played whack-a-mole with them, killing off “Fakesters” and any account that didn’t follow their strict requirements; this prompted a mass exodus. Facebook’s real-name policy also signaled that such shenanigans would not be allowed on their site, although shhh…. lots of folks figured out how to have multiple accounts and otherwise circumvent the policy.

And let’s be honest — fake accounts are all over most online dating sites. Ashley Madison, anyone?

Bots, Bots, Bots

Bots have been an intrinsic part of Twitter since the early days. Following the Pope’s daily text messaging services, the Vatican set up numerous bots offering Catholics regular reflections. Most major news organizations have bots so that you can keep up with the headlines of their publications. Twitter’s almost-anything-goes policy meant that people have built bots for all sorts of purposes. There are bots that do poetry, ones that argue with anti-vaxxers about their beliefs, and ones that call out sexist comments people post. I’m a big fan of the @censusAmericans bot created by FiveThirtyEight to regularly send out data from the Census about Americans.

Over the last year, sentiment towards Twitter’s bots has become decidedly negative. Perhaps most people didn’t even realize that there were bots on the site. They probably don’t think of @NYTimes as a bot. When news coverage obsesses over bots, it primarily associates the phenomenon with nefarious activities meant to seed discord, create chaos, and do harm. It can all be boiled down to: Russian bots. As a result, Congress sees bots as inherently bad, and journalists keep accusing Twitter of having a “bot problem” without accounting for how their own stories appear on Twitter through bots.

Although we often hear about the millions and millions of bots on Twitter as though they’re all manipulative, the stark reality is that bots can be quite fun. I had my students build Twitter bots to teach them how these things worked — they had a field day, even if they didn’t get many followers.
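To make concrete how low the bar is, here is a minimal sketch of the kind of benign, clearly labeled bot a student might build. It assumes the third-party tweepy library and Twitter API credentials; the keys and the little list of lines it posts are illustrative placeholders, not a description of any real account.

```python
# A hypothetical classroom bot: posts one line from a fixed list each time
# the script runs. Keys and text are placeholders; scheduling it with cron
# is what turns a script into a "bot" in the everyday sense.
import random
import tweepy  # assumes tweepy v4+ is installed

REFLECTIONS = [
    "Reminder: most bots on this platform are mundane, not malicious.",
    "Automation is cheap; attention is the scarce resource.",
]

client = tweepy.Client(
    consumer_key="YOUR_KEY",
    consumer_secret="YOUR_SECRET",
    access_token="YOUR_TOKEN",
    access_token_secret="YOUR_TOKEN_SECRET",
)

# Post one randomly chosen line.
client.create_tweet(text=random.choice(REFLECTIONS))
```

A few dozen lines and an API key is all it takes, which is precisely why the category spans everything from census data feeds to spam farms.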

Of course, there are definitely bots that you can buy to puff up your status. Some of them might even be Russian built. And here’s where we get to the crux of the current conversation.

Buying Status

Typical before/after image on Instagram.

People buy bots to increase their number of followers, retweets, and likes in order to appear cooler than they are. Think of this as mascara for your digital presence. While plenty of users are happy chatting away with their friends without their makeup on, there’s an entire class of professionals who feel the need to be dolled up and give the best impression possible. It’s a competition for popularity and status, marked by numbers.

Number games are not new, especially not in the world of media. Take a well-established firm like Nielsen. Although journalists often uncritically quote Nielsen numbers as though they are “fact,” most people in the ad and media business know that they’re crap. But they’ve long been the best crap out there. And, more importantly, they’re uniform crap, so businesses can make predictable decisions off of these numbers, fully aware that they might not be that accurate. The same has long been true of page views and clicks. No major news organization should take its page views literally. And yet, lots of news agencies rank their reporters based on this data.

What makes the purchasing of Twitter bots and status so nefarious? The New York Times story suggests that doing so is especially deceptive. Its coverage shamed Twitter into deleting a bunch of accounts, outing all of the public figures who had bought bots. It almost felt like a discussion of who had gotten Botox.

Much of this recent flurry of coverage suggests that the so-called bot problem is a new thing that is “finally” known. It boggles my mind to think that any regular Twitter user hadn’t seen automated accounts in the past. And heck, there have been services like Twitter Audit to see how many fake followers you have since at least 2012. Gilad Lotan even detailed the ecosystem of buying fake followers in 2014. I think that what’s new is that the term “bot” is suddenly toxic. And it gives us an opportunity to engage in another round of social shaming targeted at insecure people’s vanity, all under the false pretense of being about bad foreign actors.

I’ve never been one to feel the need to put on a lot of makeup in order to leave the house and I haven’t been someone who felt the need to buy bots to appear cool online. But I find it deeply hypocritical to listen to journalists and politicians wring their hands about fake followers and bots given that they’ve been playing at that game for a long time. Who among them is really innocent of trying to garner attention through any means possible?

At the end of the day, I don’t really blame Twitter for giving these deeply engaged users what they want and turning a blind eye towards their efforts to puff up their status online. After all, the cosmetics industry is a $55 billion business. Then again, even cosmetic companies sometimes change their formulas when their products receive bad press.

Note: I’m fully aware of hypotheses that bots have destroyed American democracy. That’s a different essay. But I think that the main impact that they have had, like spam, is to destabilize people’s trust in the media ecosystem. Still, we need to contend with the stark reality that they do serve a purpose and some people do want them.

Panicked about Kids’ Addiction to Tech? Here are two things you could do

Flickr: Jan Hoffman

(This was originally posted on NewCo Shift)

Ever since key Apple investors challenged the company to address kids’ phone addiction, I’ve gotten a stream of calls asking me to comment on the topic. Mostly, I want to scream. I wrote extensively about the unhelpful narrative of “addiction” in my book It’s Complicated: The Social Lives of Networked Teens. At the time, the primary concern was social media. Today, it’s the phone, but the same story still stands: young people are using technology to communicate with their friends non-stop at a point in their life when everything is about sociality and understanding your place in the social world.

As much as I want to yell at all of the parents around me to chill out, I’m painfully and acutely aware of how ineffective this is. Parents don’t like to see that they’re part of the problem or that their efforts to protect and help their children might backfire. (If you want to experience my frustration in full color, watch the Black Mirror episode called “Arkangel” (trailer here).)

Lately, I’ve been trying to find smaller interventions that can make a huge difference, tools that parents can use to address the problems they panic about. So let me offer two approaches to “addiction” that work at different ages.

Parenting the Small People: Verbalizing Tech Use

In the early years, children learn values and norms by watching their parents and other caregivers. They emulate our language and our facial expressions, our quirky habits and our tastes. There’s nothing more satisfying and horrifying than listening to your child repeat something you say all too often. Guess what? They also get their cues about technology from people around them. A child would need to be alone in the woods to miss that people love their phones. From the time that they’re born, people are shoving phones in their faces to take pictures, turning to their phones to escape, and obsessively talking on their phones while ignoring them. Of course they want the attention that they see the phone as taking away. And of course they want the device to be special to them.

So, here’s what I recommend to parents of small people: Verbalize what you’re doing with your phone. Whenever you pick up your phone (or other technologies) in front of your kids, say what you’re doing. And involve them in the process if they’d like.

  • “Mama’s trying to figure out how long it will take to get to Bobby’s house. Want to look at the map with me?”
  • “Daddy’s checking out the weather. Do you want to see what it says?”
  • “Mom wants to take a picture of you. Is that OK?”
  • “Papa needs a break and wants to read the headlines of the New York Times. Do you want me to read them to you?”
  • “Mommy got a text message from Mama and needs to respond. Should I tell her something from you too?”

The funny thing about verbalizing what you’re doing is that you’ll check yourself about your decisions to grab that phone. Somehow, it’s a lot less comfy saying: “Mom’s going to check work email because she can’t stop looking in case something important happens.” Once you begin saying out loud every time you look at technology, you also realize how much you’re looking at technology. And what you’re normalizing for your kids. It’s like looking at a mirror and realizing what they’re learning. So check yourself and check what you have standardized. Are you cool with the values and norms you’ve set?

Parenting the Mid-Size People: Household Contracts

I can’t tell you how many parents have told me that they have a rule in their house that their kids can’t use technology until X, where X could be “after dinner” or “after homework is done” or any other marker. And yet, consistently, when I ask them if they put away their own phones during dinner or until after they’ve bathed, they look at me like I’m an alien. Teenagers loathe hypocrisy. It’s the biggest thing that I’ve seen undermine trust between a parent and a child. And boy do they have a lot to say about their parents’ addiction to their phones. Oy vay.

So if you want to curb your child’s technology use, here’s what I propose: Create a household contract. This is a contract that sets the boundaries for everyone in the house — parents and kids.

Ask your teenage or tween child to write the first draft of the contract, stipulating what they think the rules for everyone in the house should be, what they’re willing to trade off to get technology privileges, and what they think parents should trade off. Ask them to list the consequences of not abiding by the household rules for everyone in the house. (As a parent, you can think through or sketch the terms you think are fair, but you should not present them first.) Ask your child to pitch to you what the household rules should be. You will most likely be shocked that they’re stricter and more structured than you expected. And then start the negotiation process. You may want to argue that you should have the right to look at your phone when it’s ringing in case it’s grandma calling, but then your daughter should have the right to look at her phone to see if her best friend is trying to reach her. That kind of thing. Work through the process, but have your child lead it rather than dictating it yourself. And then write up those rules and hang them up in the house as a contract that can be renegotiated at different times.

Parenting Past Addiction

Many people have unhealthy habits and dynamics in their life. Some are rooted in physical addiction. Others are habitual or psychological crutches. But across that spectrum, most people are aware of when something that they’re doing isn’t healthy. They may not be able to stop. Or they may not want to stop. Untangling that is part of the challenge. When you feel as though your child has an unhealthy relationship with technology (or anything else in their life), you need to start by asking if they see this the same way you do. When parents feel as though what their child is doing is unhealthy for them, but the child does not, the intervention has to be quite different than when the child is also concerned about the issue. There are plenty of teens out there who know that their psychological desire to talk non-stop with their friends for fear of missing out is putting them in a bad place. Help them through that process and work through what coping strategies they can develop. Helping them build those coping skills long term will help them a lot more than just putting rules into place.

When there is a disconnect between a parent’s and a child’s views on a situation, the best thing a parent can do is try to understand why the disconnect exists. Is it about pleasure seeking? Is it about fear of missing out? Is it about the emotional bond of friendship? Is it about a parent’s priorities being at odds with a child’s priorities? What comes next is fundamentally about values in parenting. Some parents believe that they are the masters of the house and their demands rule the day. Others acquiesce to their children’s desires with no pushback. The majority of parents are in between. But at the end of the day, parenting is about helping children navigate the world and supporting them in developing agency in a healthy manner. So I would strongly recommend that parents focus their energies on negotiating a path that allows children to be bought in and aware of why boundaries are being set. That requires communication and energy, not a new technology to police boundaries for you. More often than not, the latter sends the wrong message and backfires, not unlike the Black Mirror episode I mentioned earlier.

Good luck parents — parenting is a non-stop adventure filled with both joy and anxiety.

Beyond the Rhetoric of Algorithmic Solutionism

(This was originally posted on Medium)

If you ever hear that implementing algorithmic decision-making tools to enable social services or other high stakes government decision-making will increase efficiency or reduce the cost to taxpayers, know that you’re being lied to. When implemented ethically, these systems cost more. And they should.

Whether we’re talking about judicial decision making (e.g., “risk assessment scoring”) or modeling who is at risk for homelessness, algorithmic systems don’t simply cost money to implement. They cost money to maintain. They cost money to audit. They cost money to evolve with the domain that they’re designed to serve. They cost money to train their users to use the data responsibly. Above all, they make visible the brutal pain points and root causes in existing systems that require an increase in services.

Otherwise, all that these systems are doing is helping divert taxpayer money from direct services to lining the pockets of for-profit entities under the illusion of helping people. Worse, they’re helping usher in a diversion of liability because, time and time again, those in powerful positions blame the algorithms.

This doesn’t mean that these tools can’t be used responsibly. They can. And they should. The insights that large-scale data analysis can offer are inspiring. The opportunity to help people by understanding the complex interplay of contextual information is invigorating. Any social scientist with a heart desperately wants to understand how to relieve inequality and create a more fair and equitable system. So of course there’s a desire to jump in and try to make sense of the data out there to make a difference in people’s lives. But to treat data analysis as a savior to a broken system is woefully naive.

Doing so obfuscates the financial incentives of those who are building these services, the deterministic rhetoric that they use to justify their implementation, the opacity that results from having non-technical actors try to understand technical jiu-jitsu, and the stark reality of how technology is used as a political bludgeoning tool. Even more frustratingly, what data analysis does well is open up opportunities for experimentation and deeper exploration. But in a zero-sum context, that means that the resources to do something about the information that is learned are siphoned off to the technology. And, worse, because the technology is supposed to save money, there is no budget for using that data to actually help people. Instead, technology becomes a mirage. Not because the technology is inherently bad, but because of how it is deployed and used.

READ THIS BOOK!

Next week, a new book that shows the true cost of these systems is being published. Virginia Eubanks’ book “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” is a deeply researched accounting of how algorithmic tools are integrated into services for welfare, homelessness, and child protection. Eubanks goes deep with the people and families who are targets of these systems, telling their stories and experiences in rich detail. Further, drawing on interviews with social services clients and service providers alongside the information provided by technology vendors and government officials, Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation.

Eubanks eschews the term “ethnography” because she argues that this book is immersive journalism, not ethnography. Yet, from my perspective as a scholar and a reader, this is the best ethnography I’ve read in years. “Automating Inequality” does exactly what a good ethnography should do — it offers a compelling account of the cultural logics surrounding a particular dynamic, and invites the reader to truly grok what’s at stake through the eyes of a diverse array of relevant people. Eubanks brings you into the world of technologically mediated social services and helps you see what this really looks like on the ground. She showcases the frustration and anxiety that these implementations produce; the ways in which both social services recipients and taxpayers are screwed by the false promises of these technologies. She makes visible the politics and the stakes, the costs and the hope. Above all, she brings the reader into the stark and troubling reality of what it really means to be poor in America today.

“Automating Inequality” is on par with Barbara Ehrenreich’s “Nickel and Dimed” or Matthew Desmond’s “Evicted.” It’s rigorously researched, phenomenally accessible, and utterly humbling. While there are a lot of important books that touch on the costs and consequences of technology through case studies and well-reasoned logic, this book is the first one that I’ve read that really pulls you into the world of algorithmic decision-making and inequality, like a good ethnography should.

I don’t know how Eubanks chose her title, but one of the subtle things about her choice is that she’s (unintentionally?) offering a fantastic backronym for AI. Rather than thinking of AI as “artificial intelligence,” Eubanks effectively builds the case for how we should think that AI often means “automating inequality” in practice.

This book should be mandatory for anyone who works in social services, government, or the technology sector because it forces you to really think about what algorithmic decision-making tools are doing to our public sector, and the costs that this has for the people who are supposedly being served. It’s also essential reading for taxpayers and voters who need to understand why technology is not the panacea that it’s often purported to be. Or rather, how capitalizing on the benefits of technology will require serious investment and a deep commitment to improving the quality of social services, rather than a tax cut.

Please please please read this book. It’s too important not to.

Data & Society will also be hosting Virginia Eubanks to talk about her book on January 17th at 4PM ET. She will be in conversation with Julia Angwin and Alondra Nelson. The event is sold out, but it will be livestreamed online. Please feel free to join us there!

The Radicalization of Utopian Dreams

Amazon Fulfillment Center, CC Scottish Government

The following is a transcript of my lightning talk at The People’s Disruption: Platform Co-Ops for Global Challenges — held at The New School.


When you listen to people in tech talk about the future of labor, they will tell you that AI is taking over all of the jobs. What they gloss over is the gendered dynamics of the labor force. Many of the shortages in the workforce stem from labor that is culturally gendered “feminine” and seen as low-status. There’s no conception of how workforce dynamics in tech are also gendered.

Furthermore, anxieties about automation don’t tend to focus on work that is seen as the work of immigrants, even at a time when immigration is a hotly contested conversation. As a result, when we talk about automation as the major issue in the future of work, we lose track of the broader anxiety about identities that’s shaping both technology and work.

Identities matter because they shape how people respond to the society around them. How do people whose identities have been destabilized respond to a culture where institutions and information intermediaries no longer have their back? When they can’t find their identity through their working environment?

Our current crisis around opioids offers one harrowing answer. Religious extremism offers another. Yet, we also need to consider how many people turn to activism, both healthy and destructive, as a way of finding meaning.

People often find themselves by engaging with others through collective action, but collective action isn’t always productive. Consider this in light of the broader conversation about media manipulation: for those who have grown up gaming, running a raid on America’s political establishment is thrilling. It’s exhilarating to game the media to say ridiculous things. Hacking the attention economy produces a rush. It doesn’t matter whether or not you memed the president into being if you believe you did. It doesn’t even matter if your comrades were foreign agents with a much darker agenda.

For a lot of folks in tech, being a part of tech has been a way of grounding themselves. Many who built the social media infrastructure that we know today grew up with the utopian idealism of people like John Perry Barlow. His Declaration of the Independence of Cyberspace is now of drinking age, but today’s reality is a lot more sober. Cybernaut geeks imagined building a new world rooted in a different value structure. They wanted to resist the financialized logic of Wall Street, but ended up contributing to the latest evolution of financialized capitalism. They wanted to create a public that was more broadly accessible, but ended up enabling a new wave of corrosive populism to take hold.

They wanted to disrupt the status quo, but weren’t at all prepared for what it would mean when they controlled the infrastructure underlying democracy, the economy, the media, and communication.

Google Plex CC Sebastian Gamboa

You’re at this event today because you also want a new world, a sociotechnical reality that is more cooperative and equitable in nature. You see Silicon Valley as emblematic of corrosive neoliberalism and libertarianism run amok. I get it. But I can’t help but think of how social media was birthed out of idealism that got reworked by economic and political interests, by the stark realities of what people did with technology vs. what its designers hoped they would do. So many of the people that I knew in the early days of tech wanted what you want.

The early adopters of social technologies — and many of those sites’ creators — were self-identified and marginalized geeks, freaks, and queers. Early social tech was built by those who felt like outsiders in a society that valued suave masculinities. Geeks like me who flocked to the Bay felt disenfranchised and vulnerable and turned to technology to build solidarity and feel less alone. In doing so, we helped construct a form of geek masculinity that gave many geeky men in particular a sense of pride that made them feel empowered through their work and play.

But as many of you know, power corrupts. And the same geek masculinities that were once rejuvenating have spiraled out of control. Today, we’re watching as diversity becomes a wedge issue that can be used to radicalize disaffected young men in tech. The gendered nature of tech is getting ugly.

A decade ago, academics that I adore were celebrating participatory culture as emancipatory, noting that technology allowed people to engage with culture in unprecedented ways. Radical leftists were celebrating the possibilities of decentralized technologies as a form of resisting corporate power. Smart mobs were being touted as the mechanism by which authoritarian regimes could come crashing down.

Now, even the most hardened tech geek is quietly asking:

What hath we wrought?

Screen capture courtesy of Ethan Zuckerman

We’ve seen massively decentralized networks coordinating and mobilizing on both for-profit and not-for-profit platforms, challenging the status quo. But the movements that they’re so strategically building are shaped by tribalistic and hate-oriented values. There are many people coordinating online who are willing to share tactics without sharing end goals, yet their tactical moves collectively achieve a form of societal gaslighting that causes unbearable pain. Tech wasn’t designed to enable this, but it did so nonetheless.

Geophysics Hackathon, CC Matt

This room is filled with people who hold dear many progressive values, who see the tech sector as the new establishment, and who are pushing for a more equitable future. I share your values and desires. You rightfully want a more fair and just society. And you rage against the machine. But I also want you to know that I saw similar desires among the early developers of social media as they worked to eject the dot-com MBA culture from Silicon Valley, as they worked to resist the 1980s Wall Street culture, as they tried to operate differently than their parents. I saw idealism corrupted, good intentions go awry, and malignant forces capitalize on weaknesses within the system.

So as you relish each other’s presence today and tomorrow, I have a favor to ask. Don’t simply focus on what would be ideal or critique the status quo. Genuinely examine how what you’re seeking could also be corrupted and abused. I believe, more than anything, that deep empathy and self-reflection are critical for us to build a healthier future.

Too often, it’s easier to rally people to tear down what we hate than it is to build a sustainable future. And yet, at this moment in time in particular, we desperately need builders. We need you.

Your Data is Being Manipulated

Excerpt from “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Sergey Brin and Larry Page (April 1998)

What follows is the crib from my keynote at the 2017 Strata Data Conference in New York City. Full video can be found here. 


In 1998, two graduate students at Stanford decided to try to “fix” the problems with major search engines. Sergey Brin and Larry Page wrote a paper describing how their PageRank algorithm could eliminate the plethora of “junk results.” Their idea, which we all now know as the foundation of Google, was critical. But it didn’t stop people from trying to mess with their system. In fact, the rise of Google only increased the sophistication of those invested in search engine optimization.
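For readers who have never looked under the hood, here is a minimal, illustrative sketch of the core PageRank idea, not Google’s production system, and the toy link graph is my own assumption: a page’s score is roughly the chance that a “random surfer” who keeps following links ends up there, so links from well-linked pages count for more than links from obscure ones.

```python
# Toy PageRank by power iteration. `links` maps each page to the pages it
# links to; the damping factor models a surfer who occasionally jumps to a
# random page instead of following a link.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                for target in outlinks:
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

toy_web = {"home": ["about", "blog"], "about": ["home"], "blog": ["home", "about"]}
print(pagerank(toy_web))
```

Crowdsourced link-building of the kind described below exploits that same basic mechanic: pile enough inbound links onto a page, especially with consistent anchor text, and its ranking rises.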


“google bombing” — diverting search engine rankings to subversive commentary about a public figure

Fast forward to 2003, when the sitting Pennsylvania senator Rick Santorum publicly compared homosexuality to bestiality and pedophilia. Needless to say, the LGBT community was outraged. Journalist Dan Savage called on his readers to find a way to “memorialize the scandal.” One of his fans created a website to associate Santorum’s name with anal sex. To the senator’s horror, countless members of the public jumped in to link to that website in an effort to influence search engines. This form of crowdsourced SEO is commonly referred to as “Google bombing,” and it’s a form of media manipulation intended to mess with data and the information landscape.


Media Manipulation and Disinformation Online (cover), March 2017. Illustration by Jim Cooke

Media manipulation is not new. As many adversarial actors know, the boundaries between propaganda and social media marketing are often fuzzy. Furthermore, any company that uses public signals to inform aspects of its product — from Likes to Comments to Reviews — knows full well that any system you create will be gamed for fun, profit, politics, ideology, and power. Even Congress is now grappling with that reality. But I’m not here to tell you what has always been happening or even what is currently happening — I’m here to help you understand what’s about to happen.


At this moment, AI is at the center of every business conversation. Companies, governments, and researchers are obsessed with data. Not surprisingly, so are adversarial actors. We are currently seeing an evolution in how data is being manipulated. If we believe that data can and should be used to inform people and fuel technology, we need to start building the infrastructure necessary to limit the corruption and abuse of that data — and grapple with how biased and problematic data might work its way into technology and, through that, into the foundations of our society.

In short, I think we need to reconsider what security looks like in a data-driven world.

Shutterstock by goir

Part 1: Gaming the System

Like search engines, social media introduced a whole new target for manipulation. This attracted all sorts of people, from social media marketers to state actors. Messing with Twitter’s trending topics or Facebook’s news feed became a hobby for many. For $5, anyone could easily buy followers, likes, and comments on almost every major site. The economic and political incentives are obvious, but alongside these powerful actors, there are also a whole host of people with less-than-obvious intentions coordinating attacks on these systems.


Piechart example of Rick-Rolling

For example, when a distributed network of people decided to help propel Rick Astley to the top of the charts 20 years after his song “Never Gonna Give You Up” first came out, they weren’t trying to help him make a profit (although they did). Like other memes created through networks on sites like 4chan, rickrolling was for kicks. But through this practice, lots of people learned how to make content “go viral” or otherwise mess with systems. In other words, they learned to hack the attention economy. And, in doing so, they’ve developed strategic practices of manipulation that can and do have serious consequences.


A story like “#Pizzagate” doesn’t happen accidentally — it was produced by a wide network of folks looking to toy with the information ecosystem. They created a cross-platform network of fake accounts known as “sock puppets,” which they used to subtly influence journalists and other powerful actors to pay attention to strategically produced questions, blog posts, and YouTube videos. The goal with a story like that isn’t to convince journalists that it’s true, but to get them to foolishly use their amplification channels to negate it. This produces a “boomerang effect,” whereby those who don’t trust the media believe that there must be merit to the conspiracy, prompting some to “self-investigate.”


Hydrargyrum CC BY-SA 2.0

Then there’s the universe of content designed to “open the Overton window” — or increase the range of topics that are acceptable to discuss in public. Journalists are tricked into spreading problematic frames. Moreover, recommendation engines can be used to encourage those who are open to problematic frames to go deeper. Researcher Joan Donovan studies white supremacy; after work, she can’t open Amazon, Netflix, or YouTube without being recommended neo-Nazi music, videos, and branded objects. Radical trolls also know how to leverage this infrastructure to cause trouble. Without tripping any of Twitter’s protective mechanisms, the well-known troll weev managed to use the company’s ad infrastructure to amplify white supremacist ideas to those focused on social justice, causing outrage and anger.

By and large, these games have been fairly manual attacks of algorithmic systems, but as we all know, that’s been changing. And it’s about to change again.


Part 2: Vulnerable Training Sets

Training a machine learning system requires data. Lots of it. While there are some standard corpuses, computer science researchers, startups, and big companies are increasingly hungry for new — and different — data.

Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study, June 29, 2017

The first problem is that all data is biased, most notably and recognizably by reflecting the biases of humans and of society in general. Take, for example, the popular ImageNet dataset. Because humans categorize by shape faster than they categorize by color, you end up with some weird artifacts in that data.


(a) and (c) demonstrate ads served for two individuals’ names; (b) and (d) demonstrate that the advertising suggested criminal histories based on the type of name, not actual records

Things get even messier when you’re dealing with social prejudices. When Latanya Sweeney searched for her name on Google, she was surprised to be given ads inviting her to find out if she had a criminal record. As a curious computer scientist, she decided to run a range of common black and white names through the system to see which ads popped up. Unsurprisingly, only black names produced ads for criminal justice products. This isn’t because Google knowingly treated the names differently, but because searchers were more likely to click on criminal justice ads when searching for black names. Google learned American racism and amplified it back at all of its users.

Addressing implicit and explicit cultural biases in data is going to be a huge challenge for everyone who is trying to build a system dependent on data classified by or about humans.
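None of this is exotic to check for. As a purely illustrative sketch (the rows and field names below are assumptions, not a real corpus), even a crude tally of how labels co-occur with a sensitive attribute will surface the kind of skew that a model will happily learn and amplify.

```python
# Count how often each label appears for each group in a labeled dataset.
# Lopsided counts are a warning sign that a model trained on this data
# will inherit the skew.
from collections import Counter

records = [  # placeholder rows standing in for a real training set
    {"group": "A", "label": "positive"},
    {"group": "A", "label": "positive"},
    {"group": "B", "label": "negative"},
    {"group": "B", "label": "negative"},
    {"group": "B", "label": "positive"},
]

counts = Counter((r["group"], r["label"]) for r in records)
for (group, label), n in sorted(counts.items()):
    print(f"group={group} label={label}: {n}")
```

An audit like this only reveals the bias; deciding what to do about it is the hard, human part.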


But there’s also a new challenge emerging. The same decentralized networks of people — and state actors — who have been messing with social media and search engines are increasingly eyeing the data that various companies use to train and improve their systems.

Consider, for example, the role of reddit and Twitter data as training data. Computer scientists have long pulled from the very generous APIs of these companies to train all sorts of models, trying to understand natural language, develop metadata around links, and track social patterns. They’ve trained models to detect depression, rank news, and engage in conversation. Ignoring the fact that this data is not representative in the first place, most engineers who use these APIs believe that it’s possible to clean the data and remove all problematic content. I can promise you it’s not.

No amount of excluding certain subreddits, removing categories of tweets, or ignoring content with problematic words will prepare you for those who are hellbent on messing with you.
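To see why, here is a sketch of the kind of blocklist “cleaning” that engineers often reach for; the subreddit names, keywords, and posts are made up. Trivial obfuscation and coded language sail right past it.

```python
# Minimal sketch of blocklist "cleaning" and how easily it is bypassed.
# Subreddit names, keywords, and posts below are hypothetical examples.
import re

BLOCKED_SUBREDDITS = {"some_banned_subreddit"}
BLOCKED_WORDS = {"slur1", "slur2", "scamword"}

def naive_clean(posts):
    """Drop posts from blocked subreddits or containing blocked words."""
    kept = []
    for post in posts:
        if post["subreddit"] in BLOCKED_SUBREDDITS:
            continue
        tokens = set(re.findall(r"\w+", post["text"].lower()))
        if tokens & BLOCKED_WORDS:
            continue
        kept.append(post)
    return kept

posts = [
    {"subreddit": "some_banned_subreddit", "text": "obvious junk"},      # caught
    {"subreddit": "ordinary_subreddit", "text": "contains slur1 here"},  # caught
    {"subreddit": "ordinary_subreddit", "text": "contains s1ur1 here"},  # misspelled: slips through
    {"subreddit": "ordinary_subreddit", "text": "coded phrasing with no listed words"},  # slips through
]
print(len(naive_clean(posts)))  # 2 of the 4 problematic posts survive "cleaning"
```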

I’m watching countless actors experimenting with ways to mess with public data with an eye on major companies’ systems. They are trying to fly below the radar. If you don’t have a structure in place for strategically grappling with how those with an agenda might try to route around your best laid plans, you’re vulnerable. This isn’t about accidental or natural content. It’s not even about culturally biased data. This is about strategically gamified content injected into systems by people who are trying to guess what you’ll do.
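As a toy illustration of what that kind of strategic injection can look like, consider a hypothetical sentiment classifier trained on scraped posts. The brand name, texts, and labels are invented; the point is only that a modest amount of coordinated content can teach a model whatever association the attacker wants.

```python
# Toy sketch of strategic data poisoning: coordinated posts pair a target
# phrase with misleading labels so a model trained on scraped public data
# learns the attacker's association. All data here is hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

clean_texts = ["great product", "love this", "terrible service", "awful experience",
               "really enjoyed it", "worst purchase ever"]
clean_labels = ["pos", "pos", "neg", "neg", "pos", "neg"]

# Attacker floods the public source with posts tying a brand name to negativity.
poison_texts = ["acme gadget awful", "acme gadget terrible", "acme gadget worst"] * 5
poison_labels = ["neg"] * len(poison_texts)

def train(texts, labels):
    vec = CountVectorizer()
    model = MultinomialNB().fit(vec.fit_transform(texts), labels)
    return vec, model

for name, (texts, labels) in {
    "clean": (clean_texts, clean_labels),
    "poisoned": (clean_texts + poison_texts, clean_labels + poison_labels),
}.items():
    vec, model = train(texts, labels)
    print(name, model.predict(vec.transform(["acme gadget is great"]))[0])
# The clean model calls the test sentence positive; the poisoned one calls it negative.
```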


If you want to grasp what that means, consider the experiment Nicolas Papernot and his colleagues published last year. In order to understand the vulnerabilities of computer vision algorithms, they decided to alter images of stop signs so that they still resembled a stop sign to a human viewer even as the underlying neural network interpreted them as a yield sign. Think about what this means for autonomous vehicles. Will this technology be widely adopted if the classifier can be manipulated so easily?

Practical Black-Box Attacks against Machine Learning, March 19, 2017. The images in the top row are altered to disrupt the neural network, leading to the misinterpretations in the bottom row. The alterations are not visible to the human eye.
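To make the mechanics concrete, here is a minimal sketch of a related but simpler technique, the fast gradient sign method, rather than the black-box approach from the paper. On a toy two-dimensional problem the nudge has to be fairly large to be visible; in high-dimensional image space, the same trick works with perturbations far too small for a human to notice, which is what makes the stop-sign result so unsettling.

```python
# A minimal FGSM-style sketch: nudge an input in the direction that most
# increases the model's loss so its prediction changes. The tiny classifier
# and the two-blob dataset are synthetic stand-ins, not the paper's setup.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-class data: one Gaussian blob per class.
x = torch.cat([torch.randn(200, 2) + 2.0, torch.randn(200, 2) - 2.0])
y = torch.cat([torch.zeros(200, dtype=torch.long), torch.ones(200, dtype=torch.long)])

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):                        # quick training loop
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Craft the adversarial input by following the sign of the input gradient.
sample = torch.tensor([[1.0, 1.0]])         # the model should call this class 0
label = torch.tensor([0])
sample.requires_grad_(True)
loss_fn(model(sample), label).backward()
epsilon = 1.5                               # large here so the flip is visible in 2-D
adversarial = sample + epsilon * sample.grad.sign()

print("clean prediction:      ", model(sample).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())  # should flip to 1
```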

Right now, most successful data-injection attacks on machine learning models are happening in the world of research, but more and more, we are seeing people try to mess with mainstream systems. Just because they haven’t been particularly successful yet doesn’t mean that they aren’t learning and evolving their attempts.


Part 3: Building Technical Antibodies

Many companies spent decades not taking security vulnerabilities seriously, until breach after breach hit the news. Do we need to go through the same pain before we start building the tools to address this new vulnerability?

If you are building data-driven systems, you need to start thinking about how that data can be corrupted, by whom, and for what purpose.


In the tech industry, we have lost the culture of Test. Part of the blame rests on the shoulders of social media. Fifteen years ago, we got the bright idea to shift to a culture of the “perpetual beta.” We invited the public to be our quality assurance engineers. But internal QA wasn’t simply about finding bugs. It was about integrating adversarial thinking into the design and development process. And asking the public to find bugs in our systems doesn’t work well when some of those same people are trying to mess with our systems. Furthermore, there is currently no incentive — or path — for anyone to privately tell us where things go wrong. Only when journalists shame us by finding ways to trick our systems into advertising to neo-Nazis do we pay attention. Yet far more maliciously intended actors are starting to play the long game in messing with our data. Why aren’t we trying to get ahead of this?


On the bright side, there’s an emergent world of researchers building adversarial thinking into the advanced development of machine learning systems.

Consider, for example, the research into generative adversarial networks (or GANs). For those unfamiliar with this line of work, the idea is that you have two unsupervised ML algorithms — one is trying to generate content for the other to evaluate. The first is trying to trick the second into accepting “wrong” information. This work is all about trying to find the boundaries of your model and the latent space of your data. We need to see a lot more R&D work like this — this is the research end of a culture of Test, with true adversarial thinking baked directly into the process of building models.
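For the curious, here is a stripped-down sketch of that adversarial setup: a tiny generator tries to convince a tiny discriminator that its samples came from the real data, which in this toy case is just a one-dimensional Gaussian. Everything here is chosen for brevity, not as a production recipe.

```python
# A stripped-down GAN sketch: the generator tries to fool the discriminator
# into accepting its samples as real data. The "real" data here is just a
# one-dimensional Gaussian; all sizes and rates are toy choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 3.0   # "real" distribution: N(3, 0.5)

for step in range(3000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call generated samples real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = generator(torch.randn(1000, 8))
print("generated mean:", samples.mean().item())  # should drift toward the real mean of ~3.0
```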


White Hat Hackers — those who hack for “the right reasons.” For instance, testing the security or vulnerabilities of a system (Image: CC Magicon, HU)

But these research efforts are not enough. We need to actively and intentionally build a culture of adversarial testing, auditing, and learning into our development practice. We need to build analytic approaches to assess the biases of any dataset we use. And we need to put as much effort into building tools to monitor how our systems evolve as we put into building the models in the first place. My colleague Matt Goerzen argues that we also need to strategically invite white hat trolls to mess with our systems and help us understand our vulnerabilities.
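As one small example of what such an audit might look like, here is a sketch that checks how labels are distributed across groups before a model is ever trained and flags large gaps for human review. The column names, data, and threshold are hypothetical illustrations, not a standard.

```python
# A minimal sketch of a pre-training audit: check how labels are distributed
# across groups in a dataset before a model ever sees it.
# Column names, data, and the threshold below are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "a", "b", "b", "b", "b", "a", "b", "a"],
    "label": [1, 0, 1, 0, 0, 0, 1, 1, 0, 1],
})

# Positive-label rate per group, and how far each group sits from the overall rate.
overall = df["label"].mean()
by_group = df.groupby("group")["label"].agg(["mean", "count"])
by_group["gap_from_overall"] = by_group["mean"] - overall
print(by_group)

# A crude tripwire one might wire into CI: flag large gaps for human review.
MAX_GAP = 0.2   # arbitrary illustrative threshold
flagged = by_group[by_group["gap_from_overall"].abs() > MAX_GAP]
if not flagged.empty:
    print("Review these groups before training:", list(flagged.index))
```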


The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape.

We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

Data & Society’s Next Stage

In March 2013, in a flurry of days, I decided to start a research institute. I’d always dreamed of doing so, but it was really my amazing mentor and boss – Jennifer Chayes – who put the fire under my toosh. I’d been driving her crazy about the need to have more people deeply interrogating how data-driven technologies were intersecting with society. Microsoft Research didn’t have the structure to allow me to move fast (and break things). University infrastructure was even slower. There were a few amazing research centers and think tanks, but I wanted to see the efforts scale faster. And I wanted to build the structures to connect research and practices, convene conversations across sectors, and bring together a band of what I loved to call “misfit toys.”  So, with the support of Jennifer and Microsoft, I put pen to paper. And to my surprise, I got the green light to help start a wholly independent research institute.

I knew nothing about building an organization. I had never managed anyone, didn’t know squat about how to put together a budget, and couldn’t even create a checklist of to-dos. So I called up people smarter than me to learn how other organizations worked and to figure out what I needed to know to turn a crazy idea into reality. At first, I thought that I should just go and find someone to run the organization, but I was consistently told that I needed to do it myself, to prove that it could work. So I did. It was a crazy adventure. Not only did I learn a lot about fundraising, management, and budgeting, but I also learned all sorts of things about topics I never expected to understand – architecture, human resources, audits, non-profit law. I screwed up plenty of things along the way, but most people were patient with me and helped me learn from my mistakes. I am forever grateful to all of the funders, organizations, practitioners, and researchers who took a chance on me.

Still, over the next four years, I never lost that nagging feeling that someone smarter and more capable than me should be running Data & Society. I felt like I was doing the organization a disservice by not focusing on research strategy and public engagement. So when I turned to the board and said it was time for an executive director to take over, everyone agreed. We sat down and mapped out what we needed – a strategic and capable leader who is passionate about building a healthy, sustainable research organization that can be impactful in the world. Luckily, we had hired exactly that person a year before to drive program and strategy, back when I was worried that I was flailing at managing the field-building and outreach side of the organization.

I am overwhelmingly OMG ecstatically bouncing for joy to announce that Janet Haven has agreed to become Data & Society’s first executive director. You can read more about Janet through the formal organizational announcement here. But since this is my blog and I’m telling my story, what I want to say is more personal. I was truly breaking when we hired Janet. I had bitten off more than I could chew. I was hitting rock bottom and trying desperately to put on a strong face to support everyone else. As I see it, Janet came in, took one look at the duct tape upon which I’d built the organization, and got to work with steel, concrete, and wood in her hands. She helped me see what could happen if we fixed this and that. And then she started helping me see new pathways for moving forward. Over the last 18 months, I’ve grown increasingly confident that what we’re doing makes sense and that we can build an organization that can last. I’ve also been in awe watching her enable others to shine.

I’m not leaving Data & Society. To the contrary, I’m actually taking on the role that my title – founder and president – signals. And I’m ecstatic. Over the last 4.5 years, I’ve learned what I’m good at and what I’m not, what excites me and what makes me want to stay in bed. I built Data & Society because I believe that it needs to exist in this world. But I also realize that I’m the classic founder – the crazy visionary who can kickstart insanity but who isn’t necessarily the right person to take an organization to the next stage. Lucky for me, Janet is. And together with her, I can’t wait to take Data & Society to the next level!

How “Demo-or-Die” Helped My Career

I left the Media Lab 15 years ago this week. At the time, I never would’ve predicted that one of the most useful skills of my career would be something I learned there: demo-or-die.

(Me debugging an exhibit in 2002)

The culture of “demo-or-die” has been heavily critiqued over the years. In those critiques, most folks focus on the words themselves. Sure, the “or-die” piece is definitely an exaggeration, but the important message there is the notion of pressure. Yet that’s not what most people focus on. They focus on the notion of a “demo.”

To the best that anyone can recall, the term stems from the early days of the Media Lab, most likely because of Nicholas Negroponte’s dismissal of “publish-or-perish” in academia. The idea was to focus not on writing words but on producing artifacts. In mocking what the Media Lab produced, many critics focused on the Lab’s tendency to create vaporware, performed for visitors through the demo. In 1987, Stewart Brand called this “handwaving.” The historian Molly Steenson has a more nuanced view, so I can’t wait to read her upcoming book. But the mockery of the notion of a demo hasn’t died. Given this, it’s not surprising that the current Director (Joi Ito) has pushed people to stop talking about demoing and start thinking about deploying. Hence, “deploy-or-die.”

I would argue that what makes “demo-or-die” so powerful has absolutely nothing to do with the production of a demo. It has to do with the act of doing a demo. And that distinction is important because that’s where the skill development that I relish lies.

When I was at the Lab, we regularly received an onslaught of visitors. I was a part of the “Sociable Media Group,” run by Judith Donath. From our first day in the group, we were trained to be able to tell the story of the Media Lab, the mission of our group, and the goal of everyone’s research projects. Furthermore, we had to actually demo that quasi-functioning code and pray that it wouldn’t fall apart in front of an important visitor. We were each assigned a day when we were “on call” to do demos for any surprise visitor. You could expect at least one visitor every day, not to mention hundreds of visitors on days that were officially sanctioned as “Sponsor Days.”

The motivations and interests of visitors ranged wildly. You’d have tour groups of VIP prospective students, dignitaries from foreign governments, Hollywood types, school teachers, engineers, and a whole host of different corporate actors. If you were lucky, you knew who was visiting ahead of time. But that was rare. Often, someone would walk in the door with someone else from the Lab and introduce you to someone for whom you’d have to drum up a demo in very short order with limited information. You’d have to quickly discern what this visitor was interested in, figure out which of the team’s research projects would be most likely to appeal, determine how to tell the story of that research in a way that connected to the visitor, and be prepared to field any questions that might emerge. And oy vay could the questions run the gamut.

I *hated* the culture of demo-or-die. I felt like a zoo animal on display for others’ benefit. I hated the emotional work that was needed to manage stupid questions, not to mention the requirement to smile and play nice even when being treated like shit by a visitor. I hated the disruptions and the stressful feeling when a demo collapsed. Drawing on my experience working in fast food, I developed a set of tricks for staying calm. Count how many times a visitor said a certain word. Nod politely while thinking about unicorns. Experiment with the wording of a particular demo to see if I could provoke a reaction. Etc.

When I left the Media Lab, I was ecstatic to never have to do another demo in my life. Except, that’s the funny thing about learning something important… you realize that you are forever changed by the experience.

I no longer produce demos, but as I developed in my career, I realized that “demo-or-die” wasn’t really about the demo itself. At the end of the day, the goal wasn’t to pitch the demo — it was to help the visitor change their perspective of the world through the lens of the demo. In trying to shift their thinking, we had to invite them to see the world differently. The demo was a prop. Everything about what I do as a researcher is rooted in the goal of using empirical work to help challenge people’s assumptions and generate new frames that people can work with. I have to understand where they’re coming from, appreciate their perspective, and then strategically engage them to shift their point of view. As in my days at the Media Lab, I don’t always succeed, and it is indeed frustrating, especially because I don’t have a prop that I can rely on when everything goes wrong. But spending two years developing that muscle has been so essential for my work as an ethnographer, researcher, and public speaker.

I get why Joi reframed it as “deploy-or-die.” When it comes to actually building systems, impact is everything. But I really hope that the fundamental practice of “demo-or-die” isn’t gone. Those of us who build systems or generate knowledge day in and day out often have too little experience explaining ourselves to the wide array of folks who showed up to visit the Media Lab. It’s easy to explain what you do to people who share your ideas, values, and goals. It’s a lot harder to explain your contributions to those who live in other worlds. Impact isn’t just about deploying a system; it’s about understanding how that system or idea will be used. And that requires being able to explain your thinking to anyone at any moment. And that’s the skill that I learned from the “demo-or-die” culture.