Author Archives: zephoria

The case for quarantining extremist ideas

(Joan Donovan and I wrote the following op-ed for The Guardian.) 

When confronted with white supremacists, newspaper editors should consider ‘strategic silence’

 ‘The KKK of the 1920s considered media coverage their most effective recruitment tactic.’ Photograph: Library of Congress

George Lincoln Rockwell, the head of the American Nazi party, had a simple media strategy in the 1960s. He wrote in his autobiography: “Only by forcing the Jews to spread our message with their facilities could we have any hope of success in counteracting their left-wing, racemixing propaganda!”

Campus by campus, from Harvard to Brown to Columbia, he would use the violence of his ideas and brawn of his followers to become headline news. To compel media coverage, Rockwell needed: “(1) A smashing, dramatic approach which could not be ignored, without exposing the most blatant press censorship, and (2) a super-tough, hard-core of young fighting men to enable such a dramatic presentation to the public.” He understood what other groups competing for media attention knew too well: a movement could only be successful if the media amplified their message.

Contemporary Jewish community groups challenged journalists to consider not covering white supremacists’ ideas. They called this strategy “quarantine”, and it involved working with community organizations to minimize public confrontations and provide local journalists with enough context to understand why the American Nazi party was not newsworthy.

In regions where quarantine was deployed successfully, violence remained minimal and Rockwell was unable to recruit new party members. The press in those areas was aware that amplification served the agenda of the American Nazi party, so informed journalists employed strategic silence to reduce public harm.

The Media Manipulation research initiative at the Data & Society Research Institute is concerned precisely with the legacy of this battle in discourse and the way that modern extremists undermine journalists and set media agendas. Media has always had the ability to publish or amplify particular voices, perspectives and incidents. In choosing stories and voices they will or will not prioritize, editors weigh the benefits and costs of coverage against potential social consequences. In doing so, they help create broader societal values. We call this willingness to avoid amplifying extremist messages “strategic silence”.

Editors used to engage in strategic silence – set agendas, omit extremist ideas and manage voices – without knowing they were doing so. Yet the online context has enhanced extremists’ abilities to create controversies, prompting newsrooms to justify covering their spectacles. Because competition for audience is increasingly fierce and financially consequential, longstanding newsroom norms have come undone. We believe that journalists do not rebuild reputation through a race to the bottom. Rather, we think that it’s imperative that newsrooms actively take the high ground and re-embrace strategic silence in order to defy extremists’ platforms for spreading hate.

Strategic silence is not a new idea. The Ku Klux Klan of the 1920s considered media coverage their most effective recruitment tactic and accordingly cultivated friendly journalists. According to Felix Harcourt, thousands of readers joined the KKK after the New York World ran a three-week chronicle of the group in 1921. Catholic, Jewish and black presses of the 1920s consciously differed from Protestant-owned mainstream papers in their coverage of the Klan, conspicuously avoiding giving the group unnecessary attention. The black press called this use of editorial discretion in the public interest “dignified silence”, and limited their reporting to KKK follies, such as canceled parades, rejected donations and resignations. Some mainstream journalists also grew suspicious of the KKK’s attempts to bait them with camera-ready spectacles. Eventually coverage declined.

The KKK was so intent on getting the coverage they sought that they threatened violence and white boycotts of advertisers. Knowing they could bait coverage with violence, white vigilante groups of the 1960s staged cross burnings and engaged in high-profile murders and church bombings. Civil rights protesters countered white violence with black stillness, especially during lunch counter sit-ins. Journalists and editors had to make moral choices of which voices to privilege, and they chose those of peace and justice, championing stories of black resilience and shutting out white extremism. This was strategic silence in action, and it saved lives.

The emphasis of strategic silence must be placed on the strategic over the silencing. Every story requires a choice, and the recent turn toward providing equal coverage to dangerous, antisocial opinions requires acknowledging the suffering that such reporting causes. Even attempts to cover extremism critically can result in the media disseminating the methods that hate groups aim to spread, such as when Virginia’s Westmoreland News reproduced in full a local KKK recruitment flier on its front page. Media outlets that cannot argue that their reporting benefits the goal of a just and ethical society must opt for silence.

Newsrooms must understand that even with the best of intentions, they can find themselves being used by extremists. By contrast, they must also understand they have the power to defy the goals of hate groups by optimizing for core American values of equality, respect and civil discourse. All Americans have the right to speak their minds, but not every person deserves to have their opinions amplified, particularly when their goals are to sow violence, hatred and chaos.

If telling stories didn’t change lives, journalists would never have started in their careers. We know that words matter and that coverage makes a difference. In this era of increasing violence and extremism, we appeal to editors to choose strategic silence over publishing stories that fuel the radicalization of their readers.

(Visit the original version at The Guardian to read the comments and help support their organization, as a sign of appreciation for their willingness to publish our work.)

You Think You Want Media Literacy… Do You?

The below original text was the basis for Data & Society Founder and President danah boyd’s March 2018 SXSW Edu keynote, “What Hath We Wrought?” — Ed.

Growing up, I took certain truths to be self-evident. Democracy is good. War is bad. And of course, all men are created equal.

My mother was a teacher who encouraged me to question everything. But I quickly learned that some questions were taboo. Is democracy inherently good? Is the military ethical? Does God exist?

I loved pushing people’s buttons with these philosophical questions, but they weren’t nearly as existentially destabilizing as the moments in my life in which my experiences didn’t line up with frames that were sacred cows in my community. Police were revered, so my boss didn’t believe me when I told him that cops were forcing me to give them free food, which is why there was food missing. Pastors were moral authorities and so our pastor’s infidelities were not to be discussed, at least not among us youth. Forgiveness is a beautiful thing, but hypocrisy is destabilizing. Nothing can radicalize someone more than feeling like you’re being lied to. Or when the world order you’ve adopted comes crumbling down.

The funny thing about education is that we ask our students to challenge their assumptions. And that process can be enlightening. I will never forget being a teenager and reading “A People’s History of the United States.” The idea that there could be multiple histories, multiple truths blew my mind. Realizing that history is written by the winners shook me to my core. This is the power of education. But the hole that opens up, the one that invites people to look for new explanations, can be filled in deeply problematic ways. When we ask students to challenge their sacred cows but don’t give them a new framework through which to make sense of the world, others are often there to do it for us.

For the last year, I’ve been struggling with media literacy. I have a deep level of respect for the primary goal. As Renee Hobbs has written, media literacy is the “active inquiry and critical thinking about the messages we receive and create.” The field talks about the development of competencies or skills to help people analyze, evaluate, and even create media. Media literacy is imagined to be empowering, enabling individuals to have agency and giving them the tools to help create a democratic society. But fundamentally, it is a form of critical thinking that asks people to doubt what they see. And that makes me nervous.

Most media literacy proponents tell me that media literacy doesn’t exist in schools. And it’s true that the ideal version that they’re aiming for definitely doesn’t. But I spent a decade in and out of all sorts of schools in the US, where I quickly learned that a perverted version of media literacy does already exist. Students are asked to distinguish between CNN and Fox. Or to identify bias in a news story. When tech is involved, it often comes in the form of “don’t trust Wikipedia; use Google.” We might collectively dismiss these practices as not-media-literacy, but these activities are often couched in those terms.

I’m painfully aware of this, in part because media literacy is regularly proposed as the “solution” to the so-called “fake news” problem. I hear this from funders and journalists, social media companies and elected officials. My colleagues Monica Bulger and Patrick Davison just released a report on media literacy in light of “fake news” given the gaps in current conversations. I don’t know what version of media literacy they’re imagining but I’m pretty certain it’s not the CNN vs Fox News version. Yet, when I drill in, they often argue for the need to combat propaganda, to get students to ask where the money is coming from, to ask who is writing the stories for what purposes, to know how to fact-check, etcetera. And when I push them further, I often hear decidedly liberal narratives. They talk about the Mercers or about InfoWars or about the Russians. They mock “alternative facts.” While I identify as a progressive, I am deeply concerned by how people understand these different conservative phenomena and what they see media literacy as solving.

I get that many progressive communities are panicked about conservative media, but we live in a polarized society and I worry about how people judge those they don’t understand or respect. It also seems to me that the narrow version of media literacy that I hear as the “solution” is supposed to magically solve our political divide. It won’t. More importantly, as I’m watching social media and news media get weaponized, I’m deeply concerned that the well-intended interventions I hear people propose will backfire, because I’m fairly certain that the crass versions of critical thinking already have.


My talk today is intended to interrogate some of the foundations upon which educating people about the media landscape depends. Rather than coming at this from the idealized perspective, I am trying to come at this from the perspective of where good intentions might go awry, especially in a moment in which narrow versions of media literacy and critical thinking are being proposed as the solution to major socio-cultural issues. I want to examine the instability of our current media ecosystem to then return to the question of: what kind of media literacy should we be working towards? So let’s dig in.

Epistemological Warfare

In 2017, sociologist Francesca Tripodi was trying to understand how conservative communities made sense of the seemingly contradictory words coming out of the mouth of the US President. Along her path, she encountered people talking about making sense of The Word when referencing his speeches. She began accompanying people in her study to their bible study groups. Then it clicked. Trained on critically interrogating biblical texts, evangelical conservative communities were not taking Trump’s messages as literal text. They were interpreting their meanings using the same epistemological framework as they approached the bible. Metaphors and constructs matter more than the precision of words.

Why do we value precision in language? I sat down for breakfast with Gillian Tett, a Financial Times journalist and anthropologist. She told me that when she first moved to the States from the UK, she was confounded by our inability to talk about class. She was trying to make sense of what distinguished class in America. In her mind, it wasn’t race. Or education. It came down to what construction of language was respected and valued by whom. People became elite by mastering the language marked as elite. Academics, journalists, corporate executives, traditional politicians: they all master the art of communication. I did too. I will never forget being accused of speaking like an elite by my high school classmates when I returned home after a semester of college. More importantly, although it’s taboo in America to be explicitly condescending towards people on the basis of race or education, there’s no social cost among elites to mock someone for an inability to master language. For using terms like “shithole.”

Linguistic and communications skills are not universally valued. Those who do not define themselves through this skill loathe hearing the never-ending parade of rich and powerful people suggesting that they’re stupid, backwards, and otherwise lesser. Embracing being anti-PC has become a source of pride, a tactic of resistance. Anger boils over as people who reject “the establishment” are happy to watch the elites quiver over their institutions being dismantled. This is why this is a culture war. Everyone believes they are part of the resistance.

But what’s at the root of this cultural war? Cory Doctorow got me thinking when he wrote the following:

We’re not living through a crisis about what is true, we’re living through a crisis about how we know whether something is true. We’re not disagreeing about facts, we’re disagreeing about epistemology. The “establishment” version of epistemology is, “We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it’s all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking).”

The “alternative facts” epistemological method goes like this: The ‘independent’ experts who were supposed to be verifying the ‘evidence-based’ truth were actually in bed with the people they were supposed to be fact-checking. In the end, it’s all a matter of faith, then: you either have faith that ‘their’ experts are being truthful, or you have faith that we are. Ask your gut, what version feels more truthful?

Let’s be honest — most of us educators are deeply committed to a way of knowing that is rooted in evidence, reason, and fact. But who gets to decide what constitutes a fact? In philosophy circles, social constructivists challenge basic tenets like fact, truth, reason, and evidence. Yet, it doesn’t take a doctorate of philosophy to challenge the dominant way of constructing knowledge. Heck, 75 years ago, evidence suggesting black people were biologically inferior was regularly used to justify discrimination. And this was called science!

In many Native communities, experience trumps Western science as the key to knowledge. These communities have a different way of understanding topics like weather or climate or medicine. Experience is also used in activist circles as a way of seeking truth and challenging the status quo. Experience-based epistemologies also rely on evidence, but not the kind of evidence that would be recognized or accepted by those in Western scientific communities.

Those whose worldview is rooted in religious faith, particularly Abrahamic religions, draw on different types of information to construct knowledge. Resolving scientific knowledge and faith-based knowledge has never been easy; this tension has countless political and social ramifications. As a result, American society has long danced around this yawning gulf and tried to find solutions that can appease everyone. But you can’t resolve fundamental epistemological differences through compromise.

No matter what worldview or way of knowing someone holds dear, they always believe that they are engaging in critical thinking when developing a sense of what is right and wrong, true and false, honest and deceptive. But much of what they conclude may be more rooted in their way of knowing than any specific source of information.

If we’re not careful, “media literacy” and “critical thinking” will simply be deployed as an assertion of authority over epistemology.

Right now, the conversation around fact-checking has already devolved to suggest that there’s only one truth. And we have to recognize that there are plenty of students who are taught that there’s only one legitimate way of knowing, one accepted worldview. This is particularly dicey at the collegiate level, where we professors have been taught nothing about how to teach across epistemologies.

Personally, it took me a long time to recognize the limits of my teachers. Like many Americans in less-than-ideal classrooms, I was taught that history was a set of facts to be memorized. When I questioned those facts, I was sent to the principal’s office for disruption. Frustrated and confused, I thought that I was being force-fed information for someone else’s agenda. Now I can recognize that that teacher was simply exhausted, underpaid, and waiting for retirement. But it took me a long time to realize that there was value in history and that history is a powerful tool.

Weaponizing Critical Thinking

The communication scholar Deen Freelon was trying to make sense of the role of critical thinking to address “fake news.” He ended up looking back at a fascinating campaign by Russia Today (known as RT). Their motto for a while was “question more.” They produced a series of advertisements as teasers for their channel. These advertisements were promptly banned in the US and UK, resulting in RT putting up additional ads about how they were banned and getting tremendous mainstream media coverage about being banned. What was so controversial? Here’s an example:

“Just how reliable is the evidence that suggests human activity impacts on climate change? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.”

If you don’t start from a place where you’re confident that climate change is real, this sounds quite reasonable. Why wouldn’t you want more information? Why shouldn’t you be engaged in critical thinking? Isn’t this what you’re encouraged to do at school? So why is asking this so taboo? And lest you think that this is a moment to be condescending towards climate deniers, let me offer another one of their ads.

“Is terror only committed by terrorists? The answer isn’t always clear-cut. And it’s only possible to make a balanced judgement if you are better informed. By challenging the accepted view, we reveal a side of the news that you wouldn’t normally see. Because we believe that the more you question, the more you know.”

Many progressive activists ask whether or not the US government commits terrorism in other countries. The ads all came down because they were too political, but RT got what they wanted: an effective ad campaign. They didn’t come across as conservative or liberal, but rather as a media entity that was “censored” for asking questions. Furthermore, by covering the fact that they were banned, major news media legitimized their frame under the rubric of “free speech,” under the assumption that everyone should have the right to know and to decide for themselves.

We live in a world now where we equate free speech with the right to be amplified. Does everyone have the right to be amplified? Social media gave us that infrastructure under the false imagination that if we were all gathered in one place, we’d find common ground and eliminate conflict. We’ve seen this logic before. After World War II, the world thought that connecting the globe through financial interdependence would prevent World War III. It’s not clear that this logic will hold.

For better and worse, by connecting the world through social media and allowing anyone to be amplified, information can spread at record speed. There is no true curation or editorial control. The onus is on the public to interpret what they see. To self-investigate. Since we live in a neoliberal society that prioritizes individual agency, we double down on media literacy as the “solution” to misinformation. It’s up to each of us as individuals to decide for ourselves whether or not what we’re getting is true.

Figure 1

Yet, if you talk with someone who has posted clear, unquestionable misinformation, more often than not, they know it’s bullshit. Or they don’t care whether or not it’s true. Why do they post it then? Because they’re making a statement. The people who posted this meme (figure 1) didn’t bother to fact check this claim. They didn’t care. What they wanted to signal loud and clear is that they hated Hillary Clinton. And that message was indeed heard loud and clear. As a result, they are very offended if you tell them that they’ve been duped by Russians into spreading propaganda. They don’t believe you for one second.

Misinformation is contextual. Most people believe that people they know are gullible to false information, but that they themselves are equipped to separate the wheat from the chaff. There’s widespread sentiment that we can fact check and moderate our way out of this conundrum. This will fail. Don’t forget that for many people in this country, both education and the media are seen as the enemy — two institutions that are trying to have power over how people think. Two institutions that are trying to assert authority over epistemology.

Finding the Red Pill

Growing up on Usenet, Godwin’s Law was more than an adage to me. I spent countless nights lured into conversation by the idea that someone was wrong on the internet. And I long ago lost count of how many of those conversations ended up with someone invoking Hitler or the Holocaust. I might have even been to blame in some of them.

Fast forward 15 years to the point when Nathan Poe wrote a poignant comment on an online forum dedicated to Christianity: “Without a winking smiley or other blatant display of humor, it is utterly impossible to parody a Creationist in such a way that someone won’t mistake for the genuine article.” Poe’s Law, as it became known, signals that it’s hard to tell the difference between an extreme view and a parody of an extreme view on the internet.

In their book, “The Ambivalent Internet,” media studies scholars Whitney Phillips and Ryan Milner highlight how a segment of society has become so well-versed at digital communications — memes, GIFs, videos, etc. — that they can use these tools of expression to fundamentally destabilize others’ communication structures and worldviews. It’s hard to tell what’s real and what’s fiction, what’s cruel and what’s a joke. But that’s the point. That is how irony and ambiguity can be weaponized. And for some, the goal is simple: dismantle the very foundations of elite epistemological structures that are so deeply rooted in fact and evidence.

Many people, especially young people, turn to online communities to make sense of the world around them. They want to ask uncomfortable questions, interrogate assumptions, and poke holes at things they’ve heard. Welcome to youth. There are some questions that are unacceptable to ask in public and they’ve learned that. But in many online fora, no question or intellectual exploration is seen as unacceptable. To restrict the freedom of thought is to censor. And so all sorts of communities have popped up for people to explore questions of race and gender and other topics in the most extreme ways possible. And these communities have become slippery. Are those taking on such hateful views real? Or are they being ironic?

In the 1999 film The Matrix, Morpheus says to Neo: “You take the blue pill, the story ends. You wake up in your bed and believe whatever you want. You take the red pill, you stay in Wonderland, and I show you how deep the rabbit hole goes.” Most youth aren’t interested in having the wool pulled over their eyes, even if blind faith might be a very calming way of living. Restricted in mobility and stressed to holy hell, they want to have access to what’s inaccessible, know what’s taboo, and say what’s politically incorrect. So who wouldn’t want to take the red pill?

Image via Warner Bros.

In some online communities, taking the red pill refers to the idea of waking up to how education and media are designed to deceive you into progressive propaganda. In these environments, visitors are asked to question more. They’re invited to rid themselves of their politically correct shackles. There’s an entire online university designed to undo accepted ideas about diversity, climate, and history. Some communities are even more extreme in their agenda. These are all meant to fill in the gaps for those who are open to questioning what they’ve been taught.

In 2012, it was hard to avoid the names Trayvon Martin and George Zimmerman, but that didn’t mean that most people understood the storyline. In South Carolina, a white teenager who wasn’t interested in the news felt like he needed to know what the fuss was all about. He decided to go to Wikipedia to understand more. He was left with the impression that Zimmerman was clearly in the right and disgusted that everyone was defending Martin. While reading up on this case, he ran across the term “black on white crime” on Wikipedia and decided to throw that term into Google, where he encountered a deeply racist website inviting him to wake up to a reality that he had never considered. He took that red pill and dove deep into a worldview whose theory of power positioned white people as victims. Over a matter of years, he began to embrace those views, to be radicalized towards extreme thinking. On June 17, 2015, he sat down for an hour with a group of African-American church-goers in Charleston, South Carolina, before opening fire on them, killing nine and injuring one. His goal was simple: he wanted to start a race war.

It’s easy to say that this domestic terrorist was insane or irrational, but he began his exploration trying to critically interrogate the media coverage of a story he didn’t understand. That led him to online fora filled with people who have spent decades working to indoctrinate people into a deeply troubling, racist worldview. They draw on vast amounts of “evidence,” engage in deeply persuasive discursive practices, and have the mechanisms to challenge countless assumptions. The difference between what is deemed missionary work, education, and radicalization depends a lot on your worldview. And your understanding of power.

Who Do You Trust?

The majority of Americans do not trust the news media. There are many explanations for this — the loss of local news, financial incentives, the difficulty of distinguishing opinion from reporting, and so on. But what does it mean to encourage people to be critical of the media’s narratives when they are already predisposed against the news media?

Perhaps you want to encourage people to think critically about how information is constructed, who is paying for it, and what is being left out. Yet, among those whose prior is to not trust a news media institution, among those who see CNN and The New York Times as “fake news,” they’re already there. They’re looking for flaws. It’s not hard to find them. After all, the news industry is made of people in institutions in a society. So when youth are encouraged to be critical of the news media, they come away thinking that the media is lying. Depending on someone’s prior, they may even take what they learn to be proof that the media is in on the conspiracy. That’s where things get very dicey.

Many of my digital media and learning colleagues encourage people to make media to help understand how information is produced. Realistically, many young people have learned these skills outside the classroom as they seek to represent themselves on Instagram, get their friends excited about a meme, or gain followers on YouTube. Many are quite skilled at using media, but to what end? Every day, I watch teenagers produce anti-Semitic and misogynistic content using the same tools that activists use to combat prejudice. It’s notable that many of those who are espousing extreme viewpoints are extraordinarily skilled at using media. Today’s neo-Nazis are a digital propaganda machine. Developing media making skills doesn’t guarantee that someone will use them for good. This is the hard part.

Most of my peers think that if more people are skilled and more people are asking hard questions, goodness will see the light. In talking about misunderstandings of the First Amendment, Nabiha Syed of BuzzFeed highlights that the frame of the “marketplace of ideas” sounds great, but is extremely naive. Doubling down on investing in individuals as a solution to a systemic abuse of power is very American. But the best ideas don’t always surface to the top. Nervously, many of us tracking manipulation of media are starting to think that adversarial messages are far more likely to surface than well-intended ones.

This is not to say that we shouldn’t try to educate people. Or that producing critical thinkers is inherently a bad thing. I don’t want a world full of sheeple. But I also don’t want to naively assume what media literacy could do in responding to a culture war that is already underway. I want us to grapple with reality, not just the ideals that we imagine we could maybe one day build.

It’s one thing to talk about interrogating assumptions when a person can keep emotional distance from the object of study. It’s an entirely different thing to talk about these issues when the very act of asking questions is what’s being weaponized. This isn’t historical propaganda distributed through mass media. Or an exercise in understanding state power. This is about making sense of an information landscape where the very tools that people use to make sense of the world around them have been strategically perverted by other people who believe themselves to be resisting the same powerful actors that we normally seek to critique.

Take a look at the graph above. Can you guess what search term this is? This is the search query for “crisis actors.” This concept emerged as a conspiracy theory after Sandy Hook. Online communities worked hard to get this to land with the major news media after each shooting. With Parkland, they finally succeeded. Every major news outlet is now talking about crisis actors, as though it’s a real thing, or something to be debunked. When teenage witnesses of the mass shooting in Parkland speak to journalists these days, they have to now say that they are not crisis actors. They must negate a conspiracy theory that was created to dismiss them. A conspiracy theory that undermines their message from the get-go. And because of this, many people have turned to Google and Bing to ask what a crisis actor is. They quickly get to the Snopes page. Snopes provides a clear explanation of why this is a conspiracy. But you are now asked to not think of an elephant.

You may just dismiss this as craziness, but getting this narrative into the media was designed to help radicalize more people. Some number of people will keep researching, trying to understand what the fuss is all about. They’ll find online fora discussing the images of a brunette woman and ask themselves if it might be the same person. They will try to understand the fight between David Hogg and Infowars or question why Infowars is being restricted by YouTube. They may think this is censorship. Seeds of doubt will start to form. And they’ll ask whether or not any of the articulate people they see on TV might actually be crisis actors. That’s the power of weaponized narratives.

One of the main goals for those who are trying to manipulate media is to pervert the public’s thinking. It’s called gaslighting. Do you trust what is real? One of the best ways to gaslight the public is to troll the media. By forcing the news media into negating frames, manipulators can rely on the fact that people who distrust the media often respond by self-investigating. This is the power of the boomerang effect. And it has a history. After all, the CDC realized that the more news media negated the connection between autism and vaccination, the more the public believed there was something real there.

In 2016, I watched networks of online participants test this theory through an incident now known as Pizzagate. They worked hard to get the news media to negate the conspiracy theory, believing that this would prompt more people to try to research if there was something real there. They were effective. The news media covered the story to negate it. Lots of people decided to self-investigate. One guy even showed up with a gun.

Still from the trailer for “Gaslight”

The term “gaslighting” originates in the context of domestic violence. It refers back to a 1944 movie called Gas Light, in which a woman is manipulated by her husband in a way that leaves her thinking she’s crazy. It’s a very effective technique of control. It makes someone submissive and disoriented, unable to respond to a relationship productively. While many anti-domestic violence activists argue that the first step is to understand that gaslighting exists, the “solution” is not to fight back against the person doing the gaslighting. Instead, it’s to get out. Furthermore, anti-domestic violence experts argue that recovery from gaslighting is a long and arduous process, requiring therapy. They recognize that once instilled, self-doubt is hard to overcome.

While we have many problems in our media landscape, the most dangerous is how it is being weaponized to gaslight people.

And unlike the domestic violence context, there is no “getting out” that is really possible in a media ecosystem. Sure, we can talk about going off the grid and opting out of social media and news media, but c’mon now.

The Cost of Triggering

In 2017, Netflix released a show called 13 Reasons Why. Before parents and educators had even heard of the darn show, millions of teenagers had watched it. For most viewers, it was a fascinating show. The storyline was enticing and the acting was phenomenal. But I’m on the board of Crisis Text Line, an amazing service where people around this country talk with trained counselors via text message when they’re in a crisis. Before the news media even began talking about the show, we started to see the impact. After all, the premise of the show is that a teen girl died by suicide and left behind 13 tapes explaining how people had bullied her, as justification for her decision.

At Crisis Text Line, we do active rescues every night. This means that we send emergency personnel to the homes of someone who is in the middle of a suicide attempt in an effort to save their lives. Sometimes, we succeed. Sometimes, we don’t. It’s heartbreaking work. As word of 13 Reasons Why got out and people started watching the show, our numbers went through the roof. We were drowning in young people referencing the show, signaling how it had given them a framework for ending their lives. We panicked. All hands on deck. As we got things under control, I got angry. What the hell was Netflix thinking?

Researchers know the data on suicide and media. The more the media normalizes suicide, the more suicide is put into people’s heads as a possibility, and the more people who are on the edge start to take it seriously and consider it for themselves. After early media effects research was published, journalists developed best practices to minimize their coverage of suicide. As Joan Donovan often discusses, this form of “strategic silence” was viable in earlier media landscapes; it’s a lot harder now. Today, journalists and media makers feel as though the fact that anyone could talk about suicide on the internet means that they should have a right to do so too.

We know that you can’t combat depression through rational discourse. Addressing depression is hard work. And I’m deeply concerned that we don’t have the foggiest clue how to approach the media landscape today. I’m confident that giving grounded people tools to think smarter can be effective. But I’m not convinced that we know how to educate people who do not share our epistemological frame. I’m not convinced that we know how to undo gaslighting. I’m not convinced that we understand how engaging people about the media intersects with those struggling with mental health issues. And I’m not convinced that we’ve even begun to think about the unintended consequences of our good — let alone naive — intentions.

In other words, I think that there are a lot of assumptions baked into how we approach educating people about sensitive issues and our current media crisis has made those painfully visible.

Oh, and by the way, the Netflix TV show ends by setting up Season 2 to start with a school shooting. WTF, Netflix?

Pulling Back Out

So what role do educators play in grappling with the contemporary media landscape? What kind of media literacy makes sense? To be honest, I don’t know. But it’s unfair to end a talk like this without offering some path forward so I’m going to make an educated guess.

I believe that we need to develop antibodies to help people not be deceived.

That’s really tricky because most people like to follow their gut more than their mind. No one wants to hear that they’re being tricked. Still, I think there might be some value in helping people understand their own psychology.

Consider the power of nightly news and talk radio personalities. If you bring Sean Hannity, Rachel Maddow, or any other host into your home every night, you start to appreciate how they think. You may not agree with them, but you build a cognitive model of their words such that they have a coherent logic to them. They become real to you, even if they don’t know who you are. This is what scholars call parasocial interaction. And the funny thing about human psychology is that we trust people who we invest our energies into understanding. That’s why bridging difference requires humanizing people across viewpoints.

Empathy is a powerful emotion, one that most educators want to encourage. But when you start to empathize with worldviews that are toxic, it’s very hard to stay grounded. It requires deep cognitive strength. Scholars who spend a lot of time trying to understand dangerous worldviews work hard to keep their emotional distance. One very basic tactic is to separate the different signals. Just read the text rather than consume the full multimedia presentation of it. Narrow the scope. Actively taking things out of context can be helpful for analysis precisely because it creates a cognitive disconnect. This is the opposite of how most people encourage everyday analysis of media, where the goal is to appreciate the context first. Of course, the trick here is wanting to keep that emotional distance. Most people aren’t looking for that.

I also believe that it’s important to help students truly appreciate epistemological differences. In other words, why do people from different worldviews interpret the same piece of content differently? Rather than thinking about the intention behind the production, let’s analyze the contradictions in the interpretation. This requires developing a strong sense of how others think and where the differences in perspective lie. From an educational point of view, this means building the capacity to truly hear and embrace someone else’s perspective and teaching people to understand another’s view while also holding their own view firm. It’s hard work, an extension of empathy into a practice that is common among ethnographers. It’s also a skill that is honed in many debate clubs. The goal is to understand the multiple ways of making sense of the world and use that to interpret media. Of course, appreciating the view of someone who is deeply toxic isn’t always psychologically stabilizing.

Still from “Selective Attention Test”

Another thing I recommend is to help students see how they fill in gaps when the information presented to them is sparse and how hard it is to overcome priors. Conversations about confirmation bias are important here because it’s important to understand what information we accept and what information we reject. Selective attention is another tool, most famously shown to students through the “gorilla experiment.” If you aren’t familiar with this experiment, it involves showing a basketball video and focusing on counting passes made by people in one color shirt and then asking if they saw the gorilla. Many people do not. Inverting these cognitive science exercises, asking students to consider different fan fiction that fills in the gaps of a story with divergent explanations is another way to train someone to recognize how their brain fills in gaps.

What’s common about the different approaches I’m suggesting is that they are designed to be cognitive strengthening exercises, to help students recognize their own fault lines, not the fault lines of the media landscape around them. I can imagine that this too could be called media literacy and if you want to bend your definition that way, I’ll accept it. But the key is to realize the humanity in ourselves and in others. We cannot and should not assert authority over epistemology, but we can encourage our students to be more aware of how interpretation is socially constructed. And to understand how that can be manipulated. Of course, just because you know you’re being manipulated doesn’t mean that you can resist it. And that’s where my proposal starts to get shaky.

Let’s be honest — our information landscape is going to get more and more complex. Educators have a critical role to play in helping individuals and societies navigate what we encounter. But the path forward isn’t about doubling down on what constitutes a fact or teaching people to assess sources. Rebuilding trust in institutions and information intermediaries is important, but we can’t assume the answer is teaching students to rely on those signals. The first wave of media literacy was responding to propaganda in a mass media context. We live in a world of networks now. We need to understand how those networks are intertwined and how information that spreads through dyadic — even if asymmetric — encounters is understood and experienced differently than that which is produced and disseminated through mass media.

Above all, we need to recognize that information can be, is, and will be weaponized in new ways. Today’s propagandist messages are no longer simply created by Madison Avenue or Edward Bernays-style State campaigns. For the last 15 years, a cohort of young people has learned how to hack the attention economy in an effort to have power and status in this new information ecosystem. These aren’t just any youth. They are young people who are disenfranchised, who feel as though the information they’re getting isn’t fulfilling, who struggle to feel powerful. They are trying to make sense of an unstable world and trying to respond to it in a way that is personally fulfilling. Most youth are engaged in invigorating activities. Others are doing the same things youth have always done. But there are youth out there who feel alienated and disenfranchised, who distrust the system and want to see it all come down. Sometimes, this frustration leads to productive ends. Often it does not. But until we start understanding their response to our media society, we will not be able to produce responsible interventions. So I would argue that we need to start developing a networked response to this networked landscape. And it starts by understanding different ways of constructing knowledge.


Special thanks to Monica Bulger, Mimi Ito, Whitney Phillips, Cathy Davidson, Sam Hinds Garcia, Frank Shaw, and Alondra Nelson for feedback.


Update (March 16, 2018): I crafted some responses to the most common criticisms I’ve received to date about this work here. (Also, the original version of this blog post was published on Medium.)

The Reality of Twitter Puffery. Or Why Does Everyone Now Hate Bots?

(This was originally posted on NewCo Shift.)

A friend of mine worked for an online dating company whose audience was predominantly hetero 30-somethings. At some point, they realized that a large number of the “female” accounts were actually bait for porn sites and 1–900 numbers. I don’t remember if users complained or if they found it themselves, but they concluded that they needed to get rid of these fake profiles. So they did.

And then their numbers started dropping. And dropping. And dropping.

Trying to understand why, researchers were sent in. What they learned was that hot men were attracted to the site because there were women that they felt were out of their league. Most of these hot men didn’t really aim for these ultra-hot women, because they felt like they would be inaccessible, but they were happy to talk with women who they saw as being one rung down (as in actual hot women). These hot women, meanwhile, were excited to have these hot men (who they saw as equals) on the site. They also felt that, since there were women hotter than them, this was a site for them. When they removed the fakes, the hot men felt the site was no longer for them. They disappeared. And then so did the hot women. Etc. The weirdest part? They reintroduced decoy profiles (not as redirects to porn but as fake women who just didn’t respond) and slowly folks came back.

Why am I telling you this story? Fake accounts and bots on social media are not new. Yet, in the last couple of weeks, there’s been newfound hysteria around Twitter bots and fake accounts. I find it deeply problematic that folks are saying that having fake followers is inauthentic. This is like saying that makeup is inauthentic. What is really going on here?

From Fakesters to Influencers

From the earliest days of Friendster and MySpace, people liked to show how cool they were by how many friends they had. As Alice Marwick eloquently documented, self-branding and performing status were the name of the game for many in the early days of social media. This hasn’t changed. People have made entire careers out of appearing to be influential, not just actually being influential. Of course, a market emerged around this so that people could buy and sell followers, friends, likes, comments, etc. Indeed, this is standard practice, especially in the wink-nudge world of Instagram, where monetized content is the game and so-called organic “macroinfluencers,” who can easily double their follower size through bots, are more than happily followed by bots, paid or not.

Some sites have tried to get rid of fake accounts. Indeed, Friendster played whack-a-mole with them, killing off “Fakesters” and any account that didn’t follow their strict requirements; this prompted a mass exodus. Facebook’s real-name policy also signaled that such shenanigans would not be allowed on their site, although shhh…. lots of folks figured out how to have multiple accounts and otherwise circumvent the policy.

And let’s be honest — fake accounts are all over most online dating profiles. Ashley Madison, anyone?

Bots, Bots, Bots

Bots have been an intrinsic part of Twitter since the early days. Following the Pope’s daily text messaging services, the Vatican set up numerous bots offering Catholics regular reflections. Most major news organizations have bots so that you can keep up with the headlines of their publications. Twitter’s almost-anything-goes policy meant that people have built bots for all sorts of purposes. There are bots that do poetry, ones that argue with anti-vaxxers about their beliefs, and ones that call out sexist comments people post. I’m a big fan of the @censusAmericans bot created by FiveThirtyEight to regularly send out data from the Census about Americans.

Over the last year, sentiment towards Twitter’s bots has become decidedly negative. Perhaps most people didn’t even realize that there were bots on the site. They probably don’t think of @NYTimes as a bot. When news coverage obsesses over bots, it primarily associates the phenomenon with nefarious activities meant to seed discord, create chaos, and do harm. It can all be boiled down to: Russian bots. As a result, Congress treats bots as inherently bad, and journalists keep accusing Twitter of having a “bot problem” without accounting for how their own stories appear on Twitter through bots.

Although we often hear about the millions and millions of bots on Twitter as though they’re all manipulative, the stark reality is that bots can be quite fun. I had my students build Twitter bots to teach them how these things worked — they had a field day, even if they didn’t get many followers.
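To give a sense of how mundane most bots are, here is a minimal sketch of the kind of playful bot a student might build, in the spirit of the poetry bots mentioned above. Everything here is illustrative: the word lists and function names are made up, and the commented-out posting step merely gestures at a Twitter API client rather than showing any particular bot’s actual code.

```python
import random

# Word banks for a toy poetry bot. These lists are invented for illustration.
ADJECTIVES = ["quiet", "restless", "silver", "borrowed"]
NOUNS = ["network", "timeline", "rumor", "archive"]
VERBS = ["hums", "scrolls", "unravels", "waits"]


def compose_line(rng=random):
    """Assemble one short poem line from the word banks above."""
    return f"the {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)} {rng.choice(VERBS)}"


def compose_poem(lines=3, seed=None):
    """Generate a tiny poem; a seed makes the output reproducible."""
    rng = random.Random(seed)
    return "\n".join(compose_line(rng) for _ in range(lines))


if __name__ == "__main__":
    poem = compose_poem(seed=42)
    print(poem)
    # Actually tweeting the poem is left as a stub. With a library like
    # tweepy and credentials you supplied, it would look roughly like:
    #   client = tweepy.Client(consumer_key=..., consumer_secret=...,
    #                          access_token=..., access_token_secret=...)
    #   client.create_tweet(text=poem)
```

The whole exercise fits in a few dozen lines, which is part of the point: the same plumbing powers a poetry bot, a headline bot, or a paid follower.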

Of course, there are definitely bots that you can buy to puff up your status. Some of them might even be Russian built. And here’s where we get to the crux of the current conversation.

Buying Status

Typical before/after image on Instagram.

People buy bots to increase their number of followers, retweets, and likes in order to appear cooler than they are. Think of this as mascara for your digital presence. While plenty of users are happy chatting away with their friends without their makeup on, there’s an entire class of professionals who feel the need to be dolled up and giving the best impression possible. It’s a competition for popularity and status, marked by numbers.

Number games are not new, especially not in the world of media. Take a well-established firm like Nielsen. Although journalists often uncritically quote Nielsen numbers as though they are “fact,” most people in the ad and media business know that they’re crap. But they’ve long been the best crap out there. And, more importantly, they’re uniform crap so businesses can make predictable decisions off of these numbers, fully aware that they might not be that accurate. The same has long been true of page views and clicks. No major news organization should take their page views literally. And yet, lots of news agencies rank their reporters based on this data.

What makes the purchasing of Twitter bots and status so nefarious? The NYTimes story suggests that doing so is especially deceptive. Their coverage shamed Twitter into deleting a bunch of Twitter accounts, outing all of the public figures who had bought bots. It almost felt like a discussion of who had gotten Botox.

Much of this recent flurry of coverage suggests that the so-called bot problem is a new thing that is “finally” known. It boggles my mind to think that any regular Twitter user hadn’t seen automated accounts in the past. And heck, there have been services like Twitter Audit to see how many fake followers you have since at least 2012. Gilad Lotan even detailed the ecosystem of buying fake followers in 2014. I think that what’s new is that the term “bot” is suddenly toxic. And it gives us an opportunity to engage in another round of social shaming targeted at insecure people’s vanity, all under the false pretense of being about bad foreign actors.

I’ve never been one to feel the need to put on a lot of makeup in order to leave the house and I haven’t been someone who felt the need to buy bots to appear cool online. But I find it deeply hypocritical to listen to journalists and politicians wring their hands about fake followers and bots given that they’ve been playing at that game for a long time. Who among them is really innocent of trying to garner attention through any means possible?

At the end of the day, I don’t really blame Twitter for giving these deeply engaged users what they want and turning a blind eye towards their efforts to puff up their status online. After all, the cosmetics industry is a $55 billion business. Then again, even cosmetic companies sometimes change their formulas when their products receive bad press.

Note: I’m fully aware of hypotheses that bots have destroyed American democracy. That’s a different essay. But I think that the main impact that they have had, like spam, is to destabilize people’s trust in the media ecosystem. Still, we need to contend with the stark reality that they do serve a purpose and some people do want them.

Panicked about Kids’ Addiction to Tech? Here are two things you could do

Flickr: Jan Hoffman

(This was originally posted on NewCo Shift)

Ever since key Apple investors challenged the company to address kids’ phone addiction, I’ve gotten a stream of calls asking me to comment on the topic. Mostly, I want to scream. I wrote extensively about the unhelpful narrative of “addiction” in my book It’s Complicated: The Social Lives of Networked Teens. At the time, the primary concern was social media. Today, it’s the phone, but the same story still stands: young people are using technology to communicate with their friends non-stop at a point in their life when everything is about sociality and understanding your place in the social world.

As much as I want to yell at all of the parents around me to chill out, I’m painfully and acutely aware of how ineffective this is. Parents don’t like to see that they’re part of the problem or that their efforts to protect and help their children might backfire. (If you want to experience my frustration in full color, watch the Black Mirror episode called “Arkangel” (trailer here).)

Lately, I’ve been trying to find smaller interventions that can make a huge difference, tools that parents can use to address the problems they panic about. So let me offer two approaches for “addiction” that work at different ages.

Parenting the Small People: Verbalizing Tech Use

In the early years, children learn values and norms by watching their parents and other caregivers. They emulate our language and our facial expressions, our quirky habits and our tastes. There’s nothing more satisfying and horrifying than listening to your child repeat something you say all too often. Guess what? They also get their cues about technology from people around them. A child would need to be alone in the woods to miss that people love their phones. From the time that they’re born, people are shoving phones in their faces to take pictures, turning to their phones to escape, and obsessively talking on their phones while ignoring them. Of course they want the attention that they see the phone as taking away. And of course they want the device to be special to them.

So, here’s what I recommend to parents of small people: Verbalize what you’re doing with your phone. Whenever you pick up your phone (or other technologies) in front of your kids, say what you’re doing. And involve them in the process if they’d like.

  • “Mama’s trying to figure out how long it will take to get to Bobby’s house. Want to look at the map with me?”
  • “Daddy’s checking out the weather. Do you want to see what it says?”
  • “Mom wants to take a picture of you. Is that OK?”
  • “Papa needs a break and wants to read the headlines of the New York Times. Do you want me to read them to you?”
  • “Mommy got a text message from Mama and needs to respond. Should I tell her something from you too?”

The funny thing about verbalizing what you’re doing is that you’ll check yourself about your decisions to grab that phone. Somehow, it’s a lot less comfy saying: “Mom’s going to check work email because she can’t stop looking in case something important happens.” Once you begin saying out loud every time you look at technology, you also realize how much you’re looking at technology. And what you’re normalizing for your kids. It’s like looking in a mirror and realizing what they’re learning. So check yourself and check what you have standardized. Are you cool with the values and norms you’ve set?

Parenting the Mid-Size People: Household Contracts

I can’t tell you how many parents have told me that they have a rule in their house that their kids can’t use technology until X, where X could be “after dinner” or “after homework is done” or any other markers. And yet, consistently, I ask them if they put away their phones during dinner or until after they’ve bathed and they look at me like I’m an alien. Teenagers loathe hypocrisy. It’s the biggest thing that I’ve seen to undermine trust between a parent and a child. And boy do they have a lot to say about their parents’ addiction to their phones. Oy vay.

So if you want to curb the usage of your child’s technology use, here’s what I propose: Create a household contract. This is a contract that sets the boundaries for everyone in the house — parents and kids.

Ask your teenage or tween child to write the first draft of the contract, stipulating what they think is appropriate as the rules for everyone in the house, what they’re willing to trade-off to get technology privileges and what they think that parents should trade-off. Ask them to list the consequences of not abiding by the household rules for everyone in the house. (As a parent, you can think through or sketch the terms you think are fair, but you should not present them first.) Ask your child to pitch to you what the household rules should be. You will most likely be shocked that they’re stricter and more structured than you expected. And then start the negotiation process. You may want to argue that you should have the right to look at the phone when it’s ringing in case it’s grandma calling, but then your daughter should have the right to look at her phone to see if her best friend is looking for her. That kind of thing. Work through the process, but have your child lead it rather than you dictate it. And then write up those rules and hang them up in the house as a contract that can be renegotiated at different times.

Parenting Past Addiction

Many people have unhealthy habits and dynamics in their life. Some are rooted in physical addiction. Others are habitual or psychological crutches. But across that spectrum, most people are aware of when something that they’re doing isn’t healthy. They may not be able to stop. Or they may not want to stop. Untangling that is part of the challenge. When you feel as though your child has an unhealthy relationship with technology (or anything else in their life), you need to start by asking if they see this the same way you do. When parents feel as though what their child is doing is unhealthy for them, but the child does not, the intervention has to be quite different than when the child is also concerned about the issue. There are plenty of teens out there who know their psychological desire to talk non-stop with their friends, for fear of missing out, is putting them in a bad place. Help them through that process and work with them on what coping strategies they can develop. Helping them build those coping skills long term will help them a lot more than just putting rules into place.

When there is a disconnect between a parent’s and a child’s views on a situation, the best thing a parent can do is try to understand why the disconnect exists. Is it about pleasure seeking? Is it about fear of missing out? Is it about the emotional bond of friendship? Is it about a parent’s priorities being at odds with a child’s priorities? What comes next is fundamentally about values in parenting. Some parents believe that they are the masters of the house and their demands rule the day. Others acquiesce to their children’s desires with no pushback. The majority of parents are in-between. But at the end of the day, parenting is about helping children navigate the world and supporting them as they develop agency in a healthy manner. So I would strongly recommend that parents focus their energies on negotiating a path that allows children to be bought in and aware of why boundaries are being set. That requires communication and energy, not a new technology to police boundaries for you. More often than not, the latter sends the wrong message and backfires, not unlike the Black Mirror episode I mentioned earlier.

Good luck parents — parenting is a non-stop adventure filled with both joy and anxiety.

Beyond the Rhetoric of Algorithmic Solutionism

(This was originally posted on Medium)

If you ever hear that implementing algorithmic decision-making tools to enable social services or other high stakes government decision-making will increase efficiency or reduce the cost to taxpayers, know that you’re being lied to. When implemented ethically, these systems cost more. And they should.

Whether we’re talking about judicial decision making (e.g., “risk assessment scoring”) or modeling who is at risk for homelessness, algorithmic systems don’t simply cost money to implement. They cost money to maintain. They cost money to audit. They cost money to evolve with the domain that they’re designed to serve. They cost money to train their users to use the data responsibly. Above all, they make visible the brutal pain points and root causes in existing systems that require an increase of services.

Otherwise, all that these systems are doing is helping divert taxpayer money from direct services to lining the pockets of for-profit entities under the illusion of helping people. Worse, they’re helping usher in a diversion of liability because time and time again, those in powerful positions blame the algorithms.

This doesn’t mean that these tools can’t be used responsibly. They can. And they should. The insights that large-scale data analysis can offer is inspiring. The opportunity to help people by understanding the complex interplay of contextual information is invigorating. Any social scientist with a heart desperately wants to understand how to relieve inequality and create a more fair and equitable system. So of course there’s a desire to jump in and try to make sense of the data out there to make a difference in people’s lives. But to treat data analysis as a savior to a broken system is woefully naive.

Doing so obfuscates the financial incentives of those who are building these services, the deterministic rhetoric that they use to justify their implementation, the opacity that results from having non-technical actors try to understand technical jiu-jitsu, and the stark reality of how technology is used as a political bludgeoning tool. Even more frustratingly, what data analysis does well is open up opportunities for experimentation and deeper exploration. But in a zero-sum context, that means that the resources to do something about the information that is learned are siphoned off to the technology. And, worse, because the technology is supposed to save money, there is no budget for using that data to actually help people. Instead, technology becomes a mirage. Not because the technology is inherently bad, but because of how it is deployed and used.

READ THIS BOOK!

Next week, a new book that shows the true cost of these systems is being published. Virginia Eubanks’ book, “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor,” is a deeply researched accounting of how algorithmic tools are integrated into services for welfare, homelessness, and child protection. Eubanks goes deep with the people and families who are targets of these systems, telling their stories and experiences in rich detail. Further, drawing on interviews with social services clients and service providers alongside the information provided by technology vendors and government officials, Eubanks offers a clear portrait of just how algorithmic systems actually play out on the ground, despite all of the hope that goes into their implementation.

Eubanks eschews the term “ethnography” because she argues that this book is immersive journalism, not ethnography. Yet, from my perspective as a scholar and a reader, this is the best ethnography I’ve read in years. “Automating Inequality” does exactly what a good ethnography should do — it offers a compelling account of the cultural logics surrounding a particular dynamic, and invites the reader to truly grok what’s at stake through the eyes of a diverse array of relevant people. Eubanks brings you into the world of technologically mediated social services and helps you see what this really looks like on the ground. She showcases the frustration and anxiety that these implementations produce; the ways in which both social services recipients and taxpayers are screwed by the false promises of these technologies. She makes visible the politics and the stakes, the costs and the hope. Above all, she brings the reader into the stark and troubling reality of what it really means to be poor in America today.

“Automating Inequality” is on par with Barbara Ehrenreich’s “Nickel and Dimed” or Matthew Desmond’s “Evicted.” It’s rigorously researched, phenomenally accessible, and utterly humbling. While there are a lot of important books that touch on the costs and consequences of technology through case studies and well-reasoned logic, this book is the first one that I’ve read that really pulls you into the world of algorithmic decision-making and inequality, like a good ethnography should.

I don’t know how Eubanks chose her title, but one of the subtle things about her choice is that she’s (unintentionally?) offering a fantastic backronym for AI. Rather than thinking of AI as “artificial intelligence,” Eubanks effectively builds the case for how we should think that AI often means “automating inequality” in practice.

This book should be mandatory for anyone who works in social services, government, or the technology sector because it forces you to really think about what algorithmic decision-making tools are doing to our public sector, and the costs that this has on the people that are supposedly being served. It’s also essential reading for taxpayers and voters who need to understand why technology is not the panacea that it’s often purported to be. Or rather, how capitalizing on the benefits of technology will require serious investment and a deep commitment to improving the quality of social services, rather than a tax cut.

Please please please read this book. It’s too important not to.

Data & Society will also be hosting Virginia Eubanks to talk about her book on January 17th at 4PM ET. She will be in conversation with Julia Angwin and Alondra Nelson. The event is sold out, but it will be livestreamed online. Please feel free to join us there!

The Radicalization of Utopian Dreams

Amazon Fulfillment Center, CC Scottish Government

The following is a transcript of my lightning talk at The People’s Disruption: Platform Co-Ops for Global Challenges, held at The New School.


When you listen to people in tech talk about the future of labor, they will tell you that AI is taking over all of the jobs. What they gloss over is the gendered dynamics of the labor force. Many of the shortages in the workforce stem from labor that is culturally gendered “feminine” and seen as low-status. There’s no conception of how workforce dynamics in tech are also gendered.

Furthermore, anxieties about automation don’t tend to focus on work that is seen as the work of immigrants, even at a time when immigration is a hotly contested conversation. As a result, when we talk about automation as the major issue in the future of work, we lose track of the broader anxiety about identities that’s shaping both technology and work.

Identities matter because they shape how people respond to the society around them. How do people whose identities have been destabilized respond to a culture where institutions and information intermediaries no longer have their back? When they can’t find their identity through their working environment?

Our current crisis around opioids offers one harrowing answer. Religious extremism offers another. Yet, we also need to consider how many people turn to activism, both healthy and destructive, as a way of finding meaning.

People often find themselves by engaging with others through collective action, but collective action isn’t always productive. Consider this in light of the broader conversation about media manipulation: for those who have grown up gaming, running a raid on America’s political establishment is thrilling. It’s exhilarating to game the media to say ridiculous things. Hacking the attention economy produces a rush. It doesn’t matter whether or not you memed the president into being if you believe you did. It doesn’t even matter if your comrades were foreign agents with a much darker agenda.

For a lot of folks in tech, being a part of tech has been a way of grounding themselves. Many who built the social media infrastructure that we know today grew up with the utopian idealism of people like John Perry Barlow. His Declaration of Independence of Cyberspace is now of drinking age, but today’s reality is a lot more sober. Cybernaut geeks imagined building a new world rooted in a different value structure. They wanted to resist the financialized logic of Wall Street, but ended up contributing to the latest evolution of financialized capitalism. They wanted to create a public that was more broadly accessible, but ended up enabling a new wave of corrosive populism to take hold.

They wanted to disrupt the status quo, but weren’t at all prepared for what it would mean when they controlled the infrastructure underlying democracy, the economy, the media, and communication.

Google Plex CC Sebastian Gamboa

You’re at this event today because you also want a new world, a sociotechnical reality that is more cooperative and equitable in nature. You see Silicon Valley as emblematic of corrosive neoliberalism and libertarianism run amok. I get it. But I can’t help but think of how social media was birthed out of idealism that got reworked by economic and political interests, by the stark realities of what people did with technology vs. what its designers hoped they would do. So many of the people that I knew in the early days of tech wanted what you want.

The early adopters of social technologies — and many of those sites’ creators — were self-identified and marginalized geeks, freaks, and queers. Early social tech was built by those who felt like outsiders in a society that valued suave masculinities. Geeks like me who flocked to the Bay felt disenfranchised and vulnerable and turned to technology to build solidarity and feel less alone. In doing so, we helped construct a form of geek masculinity that gave many geeky men in particular a sense of pride that made them feel empowered through their work and play.

But as many of you know, power corrupts. And the same geek masculinities that were once rejuvenating have spiraled out of control. Today, we’re watching as diversity becomes a wedge issue that can be used to radicalize disaffected young men in tech. The gendered nature of tech is getting ugly.

A decade ago, academics that I adore were celebrating participatory culture as emancipatory, noting that technology allowed people to engage with culture in unprecedented ways. Radical leftists were celebrating the possibilities of decentralized technologies as a form of resisting corporate power. Smart mobs were being touted as the mechanism by which authoritarian regimes could come crashing down.

Now, even the most hardened tech geek is quietly asking:

What hath we wrought?

Screen capture courtesy of Ethan Zuckerman

We’ve seen massively decentralized networks coordinating and mobilizing on both for-profit and not-for-profit platforms, challenging the status quo. But the movements that they’re so strategically building are shaped by tribalistic and hate-oriented values. There are many people coordinating online who are willing to share tactics without sharing end goals, yet their tactical moves collectively achieve a form of societal gaslighting that causes unbearable pain. Tech wasn’t designed to enable this, but it did so nonetheless.

Geophysics Hackathon, CC Matt

This room is filled with people who hold dear many progressive values, who see the tech sector as the new establishment, and who are pushing for a more equitable future. I share your values and desires. You rightfully want a more fair and just society. And you rage against the machine. But I also want you to know that I saw similar desires among the early developers of social media as they worked to eject the dot-com MBA culture from Silicon Valley, as they worked to resist the 1980s Wall Street culture, as they tried to operate differently than their parents. I saw idealism corrupted, good intentions go awry, and malignant forces capitalize on weaknesses within the system.

So as you relish each other’s presence today and tomorrow, I have a favor to ask. Don’t simply focus on what would be ideal or critique the status quo. Genuinely examine how what you’re seeking could also be corrupted and abused. I believe, more than anything, that deep empathy and self-reflection is critical for us to build a healthier future.

Too often, it’s easier to rally people to tear down what we hate than it is to build a sustainable future. And yet, at this moment in time in particular, we desperately need builders. We need you.

Your Data is Being Manipulated

Excerpt from “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” Sergey Brin and Larry Page (April 1998)

What follows is the crib from my keynote at the 2017 Strata Data Conference in New York City. Full video can be found here. 


In 1998, two graduate students at Stanford decided to try to “fix” the problems with major search engines. Sergey Brin and Larry Page wrote a paper describing how their PageRank algorithm could eliminate the plethora of “junk results.” Their idea, which we all now know as the foundation of Google, was critical. But it didn’t stop people from trying to mess with their system. In fact, the rise of Google only increased the sophistication of those invested in search engine optimization.
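The mechanics behind their fix are compact enough to sketch. The following toy power-iteration example is my own illustration (the four-page link graph and the 0.85 damping factor are invented or conventional, not taken from the paper), but it shows the core idea: a page’s rank is fed by the ranks of the pages that link to it.

```python
import numpy as np

# Toy PageRank via power iteration over a made-up four-page link graph.
# links[i] lists the pages that page i links out to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, damping = 4, 0.85

# Column-stochastic transition matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

# Start uniform, then repeatedly redistribute rank along the links.
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = (1 - damping) / n + damping * (M @ rank)
```

Page 2, which every other page links to, ends up ranked highest, while page 3, which nobody links to, keeps only the baseline share. Link farms and Google bombs attack exactly this redistribution step.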


“google bombing” — diverting search engine rankings to subversive commentary about a public figure

Fast forward to 2003, when the sitting Pennsylvania senator Rick Santorum publicly compared homosexuality to bestiality and pedophilia. Needless to say, the LGBT community was outraged. Journalist Dan Savage called on his readers to find a way to “memorialize the scandal.” One of his fans created a website to associate Santorum’s name with anal sex. To the senator’s horror, countless members of the public jumped in to link to that website in an effort to influence search engines. This form of crowdsourced SEO is commonly referred to as “Google bombing,” and it’s a form of media manipulation intended to mess with data and the information landscape.


Media Manipulation and Disinformation Online (cover), March 2017. Illustration by Jim Cooke

Media manipulation is not new. As many adversarial actors know, the boundaries between propaganda and social media marketing are often fuzzy. Furthermore, any company that uses public signals to inform aspects of its product — from Likes to Comments to Reviews — knows full well that any system you create will be gamed for fun, profit, politics, ideology, and power. Even Congress is now grappling with that reality. But I’m not here to tell you what has always been happening or even what is currently happening — I’m here to help you understand what’s about to happen.


At this moment, AI is at the center of every business conversation. Companies, governments, and researchers are obsessed with data. Not surprisingly, so are adversarial actors. We are currently seeing an evolution in how data is being manipulated. If we believe that data can and should be used to inform people and fuel technology, we need to start building the infrastructure necessary to limit the corruption and abuse of that data — and grapple with how biased and problematic data might work its way into technology and, through that, into the foundations of our society.

In short, I think we need to reconsider what security looks like in a data-driven world.

Shutterstock by goir

Part 1: Gaming the System

Like search engines, social media introduced a whole new target for manipulation. This attracted all sorts of people, from social media marketers to state actors. Messing with Twitter’s trending topics or Facebook’s news feed became a hobby for many. For $5, anyone could easily buy followers, likes, and comments on almost every major site. The economic and political incentives are obvious, but alongside these powerful actors, there are also a whole host of people with less-than-obvious intentions coordinating attacks on these systems.


Piechart example of Rick-Rolling

For example, when a distributed network of people decided to help propel Rick Astley to the top of the charts 20 years after his song “Never Gonna Give You Up” first came out, they weren’t trying to help him make a profit (although they did). Like other memes created through networks on sites like 4chan, rickrolling was for kicks. But through this practice, lots of people learned how to make content “go viral” or otherwise mess with systems. In other words, they learned to hack the attention economy. And, in doing so, they’ve developed strategic practices of manipulation that can and do have serious consequences.


A story like “#Pizzagate” doesn’t happen accidentally — it was produced by a wide network of folks looking to toy with the information ecosystem. They created a cross-platform network of fake accounts known as “sock puppets,” which they used to subtly influence journalists and other powerful actors to pay attention to strategically produced questions, blog posts, and YouTube videos. The goal with a story like that isn’t to convince journalists that it’s true, but to get them to foolishly use their amplification channels to negate it. This produces a “boomerang effect,” whereby those who don’t trust the media believe that there must be merit to the conspiracy, prompting some to “self-investigate.”


Hydrargyrum CC BY-SA 2.0

Then there’s the universe of content designed to “open the Overton window” — or increase the range of topics that are acceptable to discuss in public. Journalists are tricked into spreading problematic frames. Moreover, recommendation engines can be used to encourage those who are open to problematic frames to go deeper. Researcher Joan Donovan studies white supremacy; after work, she can’t open Amazon, Netflix, or YouTube without being recommended to consume neo-Nazi music, videos, and branded objects. Radical trolls also know how to leverage this infrastructure to cause trouble. Without tripping any of Twitter’s protective mechanisms, the well-known troll weev managed to use the company’s ad infrastructure to amplify white supremacist ideas to those focused on social justice, causing outrage and anger.

By and large, these games have been fairly manual attacks of algorithmic systems, but as we all know, that’s been changing. And it’s about to change again.


Part 2: Vulnerable Training Sets

Training a machine learning system requires data. Lots of it. While there are some standard corpuses, computer science researchers, startups, and big companies are increasingly hungry for new — and different — data.

Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study, June 29, 2017

The first problem is that all data is biased, most notably and recognizably by reflecting the biases of humans and of society in general. Take, for example, the popular ImageNet dataset. Because humans categorize by shape faster than they categorize by color, you end up with some weird artifacts in that data.


(a) and (c) demonstrate ads for two individuals’ names; (b) and (d) demonstrate that the advertising was suggesting criminal histories based on name type, not actual records

Things get even messier when you’re dealing with social prejudices. When Latanya Sweeney searched for her name on Google, she was surprised to be given ads inviting her to find out if she had a criminal record. As a curious computer scientist, she decided to run a range of common black and white names through the system to see which ads popped up. Unsurprisingly, only black names produced ads for criminal justice products. This isn’t because Google knowingly treated the names differently, but because searchers were more likely to click on criminal justice ads when searching for black names. Google learned American racism and amplified it back at all of its users.

Addressing implicit and explicit cultural biases in data is going to be a huge challenge for everyone who is trying to build a system dependent on data classified by or about humans.


But there’s also a new challenge emerging. The same decentralized networks of people — and state actors — who have been messing with social media and search engines are increasingly eyeing the data that various companies use to train and improve their systems.

Consider, for example, the role of reddit and Twitter data as training data. Computer scientists have long pulled from the very generous APIs of these companies to train all sorts of models, trying to understand natural language, develop metadata around links, and track social patterns. They’ve trained models to detect depression, rank news, and engage in conversation. Ignoring the fact that this data is not representative in the first place, most engineers who use these APIs believe that it’s possible to clean the data and remove all problematic content. I can promise you it’s not.

No amount of excluding certain subreddits, removing of categories of tweets, or ignoring content with problematic words will prepare you for those who are hellbent on messing with you.

I’m watching countless actors experimenting with ways to mess with public data with an eye on major companies’ systems. They are trying to fly below the radar. If you don’t have a structure in place for strategically grappling with how those with an agenda might try to route around your best laid plans, you’re vulnerable. This isn’t about accidental or natural content. It’s not even about culturally biased data. This is about strategically gamified content injected into systems by people who are trying to guess what you’ll do.


If you want to grasp what that means, consider the experiment Nicolas Papernot and his colleagues published last year. In order to understand the vulnerabilities of computer vision algorithms, they decided to alter images of stop signs so that they still resembled a stop sign to a human viewer even as the underlying neural network interpreted them as a yield sign. Think about what this means for autonomous vehicles. Will this technology be widely adopted if the classifier can be manipulated so easily?
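Papernot’s actual black-box attack is more involved (it trains a substitute model first), but the underlying move, nudging every input feature a small bounded step in whichever direction flips the classifier’s decision, can be sketched on a toy linear classifier. The model, feature values, and class labels below are invented for illustration:

```python
import numpy as np

# Toy linear classifier standing in for a vision model:
# predicts class 1 ("stop sign") when w.x + b > 0, else class 0 ("yield").
w = np.array([1.0, -1.5, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, eps):
    # For a linear model, the gradient of the score w.r.t. the input is w.
    # Stepping each feature by eps against the sign of that gradient
    # lowers the score as fast as possible per unit of perturbation.
    return x - eps * np.sign(w)

x = np.array([1.0, 0.2, 0.3])      # confidently classified as "stop"
x_adv = fgsm_perturb(x, eps=0.4)   # each feature moves by at most 0.4
```

Here predict(x) is 1 but predict(x_adv) is 0, even though no single feature changed by more than 0.4. In image space, the analogous bounded per-pixel change can be invisible to a human while flipping the network’s label.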

Practical Black-Box Attacks against Machine Learning, March 19, 2017. The images in the top row are altered to disrupt the neural network leading to the misinterpretation on the bottom row. The alterations are not visible to the human eye.

Right now, most successful data-injection attacks on machine learning models are happening in the world of research, but more and more, we are seeing people try to mess with mainstream systems. Just because they haven’t been particularly successful yet doesn’t mean that they aren’t learning and evolving their attempts.


Part 3: Building Technical Antibodies

Many companies spent decades not taking security vulnerabilities seriously, until breach after breach hit the news. Do we need to go through the same pain before we start building the tools to address this new vulnerability?

If you are building data-driven systems, you need to start thinking about how that data can be corrupted, by whom, and for what purpose.


In the tech industry, we have lost the culture of Test. Part of the blame rests on the shoulders of social media. Fifteen years ago, we got the bright idea to shift to a culture of the “perpetual beta.” We invited the public to be our quality assurance engineers. But internal QA wasn’t simply about finding bugs. It was about integrating adversarial thinking into the design and development process. And asking the public to find bugs in our systems doesn’t work well when some of those same people are trying to mess with our systems. Furthermore, there is currently no incentive — or path — for anyone to privately tell us where things go wrong. Only when journalists shame us by finding ways to trick our systems into advertising to neo-Nazis do we pay attention. Yet, far more maliciously intended actors are starting to play the long game in messing with our data. Why aren’t we trying to get ahead of this?


On the bright side, there’s an emergent world of researchers building adversarial thinking into the advanced development of machine learning systems.

Consider, for example, the research into generative adversarial networks (or GANs). For those unfamiliar with this line of work, the idea is that you have two unsupervised ML algorithms — one is trying to generate content for the other to evaluate. The first is trying to trick the second into accepting “wrong” information. This work is all about trying to find the boundaries of your model and the latent space of your data. We need to see a lot more R&D work like this — this is the research end of a culture of Test, with true adversarial thinking baked directly into the process of building models.
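As a concrete illustration (my own minimal example, not something from the talk), here is that adversarial loop in one dimension: the generator learns to turn noise into samples resembling draws from a normal distribution centered at 4.0, while the discriminator simultaneously learns to tell real samples from generated ones, each trained by alternating gradient steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def real_batch(n):
    # The "real" data distribution the generator must learn to imitate.
    return rng.normal(4.0, 0.5, n)

a, b_g = 1.0, 0.0   # generator: x = a*z + b_g maps noise z to a sample
w, c = 0.0, 0.0     # discriminator: D(x) = sigmoid(w*x + c) = P(x is real)
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b_g
    real = real_batch(64)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake), i.e. try to fool the discriminator.
    d_fake = sigmoid(w * fake + c)
    dx = (1 - d_fake) * w           # gradient of log D(fake) w.r.t. each sample
    a += lr * np.mean(dx * z)
    b_g += lr * np.mean(dx)

samples = a * rng.normal(0.0, 1.0, 1000) + b_g
```

After training, the generator’s samples should have drifted from a mean of 0 toward the real mean of 4.0. Scaled up to images and deep networks, this same tug-of-war is what probes the boundaries of a model and the latent space of its data.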


White Hat Hackers — those who hack for “the right reasons.” For instance, testing the security or vulnerabilities of a system (Image: CC Magicon, HU)

But these research efforts are not enough. We need to actively and intentionally build a culture of adversarial testing, auditing, and learning into our development practice. We need to build analytic approaches to assess the biases of any dataset we use. And we need to build tools to monitor how our systems evolve with as much effort as we build our models in the first place. My colleague Matt Goerzen argues that we also need to strategically invite white hat trolls to mess with our systems and help us understand our vulnerabilities.


The tech industry is no longer the passion play of a bunch of geeks trying to do cool shit in the world. It’s now the foundation of our democracy, economy, and information landscape.

We no longer have the luxury of only thinking about the world we want to build. We must also strategically think about how others want to manipulate our systems to do harm and cause chaos.

Data & Society’s Next Stage

In March 2013, in a flurry of days, I decided to start a research institute. I’d always dreamed of doing so, but it was really my amazing mentor and boss – Jennifer Chayes – who put the fire under my toosh. I’d been driving her crazy about the need to have more people deeply interrogating how data-driven technologies were intersecting with society. Microsoft Research didn’t have the structure to allow me to move fast (and break things). University infrastructure was even slower. There were a few amazing research centers and think tanks, but I wanted to see the efforts scale faster. And I wanted to build the structures to connect research and practices, convene conversations across sectors, and bring together a band of what I loved to call “misfit toys.”  So, with the support of Jennifer and Microsoft, I put pen to paper. And to my surprise, I got the green light to help start a wholly independent research institute.

I knew nothing about building an organization. I had never managed anyone, didn’t know squat about how to put together a budget, and couldn’t even create a check list of to-dos. So I called up people smarter than I to help learn how other organizations worked and figure out what I should learn to turn a crazy idea into reality. At first, I thought that I should just go and find someone to run the organization, but I was consistently told that I needed to do it myself, to prove that it could work. So I did. It was a crazy adventure. Not only did I learn a lot about fundraising, management, and budgeting, but I also learned all sorts of things about topics I didn’t even know I would learn to understand – architecture, human resources, audits, non-profit law. I screwed up plenty of things along the way, but most people were patient with me and helped me learn from my mistakes. I am forever grateful to all of the funders, organizations, practitioners, and researchers who took a chance on me.

Still, over the next four years, I never lost that nagging feeling that someone smarter and more capable than me should be running Data & Society. I felt like I was doing the organization a disservice by not focusing on research strategy and public engagement. So when I turned to the board and said, it’s time for an executive director to take over, everyone agreed. We sat down and mapped out what we needed – a strategic and capable leader who’s passionate about building a healthy and sustainable research organization to be impactful in the world. Luckily, we had hired exactly that person to drive program and strategy a year before when I was concerned that I was flailing at managing the fieldbuilding and outreach part of the organization.

I am overwhelmingly OMG ecstatically bouncing for joy to announce that Janet Haven has agreed to become Data & Society’s first executive director. You can read more about Janet through the formal organizational announcement here. But since this is my blog and I’m telling my story, what I want to say is more personal. I was truly breaking when we hired Janet. I had bitten off more than I could chew. I was hitting rock bottom and trying desperately to put on a strong face to support everyone else. As I see it, Janet came in, took one look at the duct tape upon which I’d built the organization and got to work with steel, concrete, and wood in her hands. She helped me see what could happen if we fixed this and that. And then she started helping me see new pathways for moving forward. Over the last 18 months, I’ve grown increasingly confident that what we’re doing makes sense and that we can build an organization that can last. I’ve also been in awe watching her enable others to shine.

I’m not leaving Data & Society. To the contrary, I’m actually taking on the role that my title – founder and president – signals. And I’m ecstatic. Over the last 4.5 years, I’ve learned what I’m good at and what I’m not, what excites me and what makes me want to stay in bed. I built Data & Society because I believe that it needs to exist in this world. But I also realize that I’m the classic founder – the crazy visionary that can kickstart insanity but who isn’t necessarily the right person to take an organization to the next stage. Lucky for me, Janet is. And together, I can’t wait to take Data & Society to the next level!

How “Demo-or-Die” Helped My Career

I left the Media Lab 15 years ago this week. At the time, I never would’ve predicted that I’d learn one of the most useful skills of my career there: demo-or-die.

(Me debugging an exhibit in 2002)

The culture of “demo-or-die” has been heavily critiqued over the years. In critiquing it, most folks focus on the words themselves. Sure, the “or-die” piece is definitely an exaggeration, but the important message there is the notion of pressure. But that’s not what most people focus on. They focus on the notion of a “demo.”

To the best that anyone can recall, the root of the term stems from the early days at the Media Lab, most likely because of Nicholas Negroponte’s dismissal of “publish-or-perish” in academia. So the idea was to focus not on writing words but producing artifacts. In mocking what it was that the Media Lab produced, many critics focused on the way in which the Lab had a tendency to create vaporware, performed to visitors through the demo. In 1987, Stewart Brand called this “handwaving.” The historian Molly Steenson has a more nuanced view so I can’t wait to read her upcoming book. But the mockery of the notion of a demo hasn’t died. Given this, it’s not surprising that the current Director (Joi Ito) has pushed people to stop talking about demoing and start thinking about deploying. Hence, “deploy-or-die.”

I would argue that what makes “demo-or-die” so powerful has absolutely nothing to do with the production of a demo. It has to do with the act of doing a demo. And that distinction is important because that’s where the skill development that I relish lies.

When I was at the Lab, we regularly received an onslaught of visitors. I was a part of the “Sociable Media Group,” run by Judith Donath. From our first day in the group, we were trained to be able to tell the story of the Media Lab, the mission of our group, and the goal of everyone’s research projects. Furthermore, we had to actually demo everyone’s quasi-functioning code and pray that it wouldn’t fall apart in front of an important visitor. We were each assigned a day where we were “on call” to do demos for any surprise visitor. You could expect to have at least one visitor every day, not to mention hundreds of visitors on days that were officially sanctioned as “Sponsor Days.”

The motivations and interests of visitors ranged wildly. You’d have tour groups of VIP prospective students, dignitaries from foreign governments, Hollywood types, school teachers, engineers, and a whole host of different corporate actors. If you were lucky, you knew who was visiting ahead of time. But that was rare. Often, someone would walk in the door with someone else from the Lab and introduce you to someone for whom you’d have to drum up a demo in very short order with limited information. You’d have to quickly discern what this visitor was interested in, figure out which of the team’s research projects would be most likely to appeal, determine how to tell the story of that research in a way that connected to the visitor, and be prepared to field any questions that might emerge. And oy vay could the questions run the gamut.

I *hated* the culture of demo-or-die. I felt like a zoo animal on display for others’ benefit. I hated the emotional work that was needed to manage stupid questions, not to mention the requirement to smile and play nice even when being treated like shit by a visitor. I hated the disruptions and the stressful feeling when a demo collapsed. Drawing on my experience working in fast food, I developed a set of tricks for staying calm. Count how many times a visitor said a certain word. Nod politely while thinking about unicorns. Experiment with the wording of a particular demo to see if I could provoke a reaction. Etc.

When I left the Media Lab, I was ecstatic to never have to do another demo in my life. Except, that’s the funny thing about learning something important… you realize that you are forever changed by the experience.

I no longer produce demos, but as I developed in my career, I realized that “demo-or-die” wasn’t really about the demo itself. At the end of the day, the goal wasn’t to pitch the demo — it was to help the visitor change their perspective of the world through the lens of the demo. In trying to shift their thinking, we had to invite them to see the world differently. The demo was a prop. Everything about what I do as a researcher is rooted in the goal of using empirical work to help challenge people’s assumptions and generate new frames that people can work with. I have to understand where they’re coming from, appreciate their perspective, and then strategically engage them to shift their point of view. Like my days at the Media Lab, I don’t always succeed and it is indeed frustrating, especially because I don’t have a prop that I can rely on when everything goes wrong. But spending two years developing that muscle has been so essential for my work as an ethnographer, researcher, and public speaker.

I get why Joi reframed it as “deploy-or-die.” When it comes to actually building systems, impact is everything. But I really hope that the fundamental practice of “demo-or-die” isn’t gone. Those of us who build systems or generate knowledge day in and day out often have too little experience explaining ourselves to the wide array of folks who showed up to visit the Media Lab. It’s easy to explain what you do to people who share your ideas, values, and goals. It’s a lot harder to explain your contributions to those who live in other worlds. Impact isn’t just about deploying a system; it’s about understanding how that system or idea will be used. And that requires being able to explain your thinking to anyone at any moment. And that’s the skill that I learned from the “demo-or-die” culture.

Tech Culture Can Change

We need: Recognition, Repentance, Respect, and Reparation.

To be honest, what surprises me most about the current conversation about the inhospitable nature of tech for women is that people are surprised. To say that discrimination, harassment, and sexual innuendos are an open secret is an understatement. I don’t know a woman in tech who doesn’t have war stories. Yet, for whatever reason, we are now in a moment where people are paying attention. And for that, I am grateful.

Like many women in tech, I’ve developed strategies for coping. I’ve had to in order to stay in the field. I’ve tried to be “one of the guys,” pretending to blend into the background as sexist speech was jockeyed about in the hopes that I could just fit in. I’ve tried to be the kid sister, the freaky weirdo, the asexual geek, etc. I’ve even tried to use my sexuality to my advantage in the hopes that maybe I could recover some of the lost opportunity that I faced by being a woman. It took me years to realize that none of these strategies would make me feel like I belonged. Many even made me feel worse.

For years, I included Ani DiFranco lyrics in every snippet of code I wrote, as well as in my signature. I’ve maintained a lyrics site since I was 18 because her words give me strength for coping with the onslaught of commentary and gross behavior. “Self-preservation is a full-time occupation.” I can’t tell you how often I’ve sat in a car during a conference or after a meeting singing along off-key at full volume with tears streaming down my face, just trying to keep my head together.

What’s at stake is not a few bad actors. There’s also a range of behaviors getting lumped together, resulting in folks asking if inescapable sexual overtures are really that bad compared to assault. That’s an unproductive conversation because the fundamental problem is the normalization of atrocious behavior, which makes room for a wide range of inappropriate actions. That’s what makes sexism systemic: it’s not the individual people who are the problem. It’s the culture. And navigating the culture is exhausting and disheartening. It’s the collection of particles of sand that quickly becomes a mountain that threatens to bury you.

It’s having to constantly stomach sexist comments with a smile, having to work twice as hard to be heard in a meeting, having to respond to people who ask if you’re on the panel because they needed a woman. It’s about going to conferences where deals are made in the sauna but being told that you have to go to the sauna with “the wives” (a pejoratively constructed use of the word). It’s about people assuming you’re sleeping with whoever said something nice about you. It’s being told “you’re kinda smart for a chick” when you volunteer to help a founder. It’s knowing that you’ll receive sexualized threats for commenting on certain topics as a blogger. It’s giving a talk at a conference and being objectified by the audience. It’s building whisper campaigns among women to indicate which guys to avoid. It’s using Dodgeball/Foursquare to know which parties not to attend based on who has checked in. It’s losing friends because you won’t work with a founder who you watched molest a woman at a party (and then watching Justin Timberlake portray that founder’s behavior as entertainment).

Lots of people in tech have said completely inappropriate things to women. I also recognize that many of those guys are trying to fit into the sexist norms of tech too, trying to replicate the culture that they see around them because they too are struggling for status. But that’s the problem. Once guys receive power and status within the sector, they don’t drop their inappropriate language. They don’t change their behavior or call out others on how insidious it is. They let the same dynamics fester as though they’re just part of the hazing ritual.

For women who succeed in tech, the barrage of sexism remains. It just changes shape as we get older.

On Friday night, after reading the NYTimes article on tech industry harassment, I was deeply sad. Not because the stories were shocking — frankly, those incidents are minor compared to some of what I’ve seen. I was upset because stories like this typically polarize and prompt efforts to focus on individuals rather than the culture. There’s an assumption that these are one-off incidents. They’re not.

I appreciate that Dave and Chris owned up to their role in contributing to a hostile culture. I know that it’s painful to hear that something you said or did hurt someone else when you didn’t intend that to be the case. I hope that they’re going through a tremendous amount of soul-searching and self-reflection. I appreciate Chris’ willingness to take to Medium to effectively say “I screwed up.” Ideally, they will both come out of this willing to make amends and right their wrongs.

Unfortunately, most people don’t actually respond productively when they’re called out. Shaming can often backfire.

One of the reasons that most people don’t speak up is that it’s far more common for guys who are called out on their misdeeds to respond the way that Marc Canter appeared to do, by justifying his behavior and demonizing the woman who accused him of sexualizing her. Given my own experiences with his sexist commentary, I decided to tweet out in solidarity by publicly sharing how he repeatedly asked me for a threesome with his wife early on in my career. At the time, I was young and I was genuinely scared of him; I spent a lot of time and emotional energy avoiding him, and struggled with how to navigate him at various conferences. I wasn’t the only one who faced his lewd comments, often framed as being sex-positive even when they were an abuse of power. My guess is that Marc has no idea how many women he’s made feel uncomfortable, ashamed, and scared. The question is whether or not he will admit that to himself, let alone to others.

I’m not interested in calling people out for sadistic pleasure. I want to see the change that most women in tech long for. At its core, the tech industry is idealistic and dreamy, imagining innovations that could change the world. Yet, when it comes to self-reflexivity, tech is just as regressive as many other male-dominated sectors. Still, I fully admit that I hold it to a higher standard in no small part because of the widespread commitment in tech to change the world for the better, however flawed that fantastical idealism is.

Given this, what I want from men in tech boils down to four Rs: Recognition. Repentance. Respect. Reparation.

Recognition. I want to see everyone — men and women — recognize how contributing to a culture of sexism takes us down an unhealthy path, not only making tech inhospitable for women but also undermining the quality of innovation and enabling the creation of tech that does societal harm. I want men in particular to reflect on how the small things that they do and say that they self-narrate as part of the game can do real and lasting harm, regardless of what they intended or what status level they have within the sector. I want those who witness the misdeeds of others to understand that they’re contributing to the problem.

Repentance. I want guys in tech — and especially those founders and funders who hold the keys to others’ opportunity — to take a moment and think about those that they’ve hurt in their path to success and actively, intentionally, and voluntarily apologize and ask for forgiveness. I want them to reach out to someone they said something inappropriate to, someone whose life they made difficult and say “I’m sorry.”

Respect. I want to see a culture of respect actively nurtured and encouraged alongside a culture of competition. Respect requires acknowledging others’ struggles, appreciating each other’s strengths and weaknesses, and helping each other through hard times. Many of the old-timers in tech are nervous that tech culture is being subsumed by financialization. Part of resisting this transformation is putting respect front and center. Long-term success requires thinking holistically about society, not just focusing on current capitalization.

Reparation. Every guy out there who wants to see tech thrive owes it to the field to actively seek out and mentor, support, fund, open doors for, and otherwise empower women and people of color. No excuses, no self-justifications, no sexualized bullshit. Just behavior change. Plain and simple. If our sector is about placing bets, let’s bet on a better world. And let’s solve for social equity.

I have a lot of respect for the women who are telling their stories, but we owe it to them to listen to the culture that they’re describing. Sadly, there are so many more stories that are not yet told. I realize that these stories are more powerful when people are named. My only hope is that those who are risking the backlash to name names will not suffer for doing so. Ideally, those who are named will not try to self-justify but acknowledge and accept that they’ve caused pain. I strongly believe that changing the norms is the only path forward. So while I want to see people held accountable, I especially want to see the industry work towards encouraging and supporting behavior change. At the end of the day, we will not solve the systemic culture of sexism by trying to weed out bad people, but we can work towards rendering bad behavior permanently unacceptable.