
Why Parents Help Children Violate Facebook’s 13+ Rule

Announcing a new journal article: “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act’” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

“At what age should I let my child join Facebook?” This is a question that countless parents have asked my collaborators and me. Often, a second question follows: “I know that 13 is the minimum age to join Facebook, but is it really so bad that my 12-year-old is on the site?”

While parents are struggling to determine which social media sites are appropriate for their children, the government tries to help by regulating what data internet companies can collect about children without parental permission. Yet, as has been the case for the last decade, this often backfires. Many general-purpose communication platforms and social media sites restrict access to only those 13+ in response to a law meant to empower parents: the Children’s Online Privacy Protection Act (COPPA). This forces parents to make a difficult choice: help uphold the minimum age requirements and limit their children’s access to services that let kids connect with family and friends, OR help their children lie about their age to circumvent the age-based restrictions and eschew the protections that COPPA is meant to provide.

In order to understand how parents were approaching this dilemma, my collaborators — Eszter Hargittai (Northwestern University), Jason Schultz (University of California, Berkeley), and John Palfrey (Harvard University) — and I decided to survey parents. In many ways, we were responding to a flurry of studies (e.g., Pew’s) revealing that millions of U.S. children have violated Facebook’s Terms of Service and joined the site underage. These findings prompted outrage back in May as politicians blamed Facebook for failing to curb underage usage. Embedded in this furor was an assumption that by not strictly guarding its doors and keeping children out, Facebook was undermining parental authority and thumbing its nose at the law. Facebook responded by defending its practices — and highlighting how it regularly ejects children from its site. More controversially, Facebook’s founder Mark Zuckerberg openly questioned the value of COPPA in the first place.

While Facebook has often sparked anger over its cavalier attitudes towards user privacy, Zuckerberg’s challenge with regard to COPPA has merit. It’s imperative that we question the assumptions embedded in this policy. All too often, the public takes COPPA at face value and politicians angle to build new laws based on it without examining its efficacy.

Eszter, Jason, John, and I decided to focus on one core question: Does COPPA actually empower parents? To find out, we surveyed parents about their household practices with respect to social media and their attitudes towards age restrictions online. We are proud to release our findings today, in a new paper published at First Monday called “Why parents help their children lie to Facebook about age: Unintended consequences of the ‘Children’s Online Privacy Protection Act’.” From a national survey, conducted July 5-14, 2011, of 1,007 U.S. parents with children aged 10-14 living at home, we found:

  • Although Facebook’s minimum age is 13, parents of 13- and 14-year-olds report that, on average, their child joined Facebook at age 12.
  • More than half (55%) of parents of 12-year-olds report their child has a Facebook account, and most (82%) of these parents knew when their child signed up. Most (76%) also assisted their 12-year-old in creating the account.
  • A third (36%) of all parents surveyed reported that their child joined Facebook before the age of 13, and two-thirds of them (68%) helped their child create the account.
  • Just over half (53%) of parents surveyed think Facebook has a minimum age, and a third (35%) of these parents think that this is a recommendation and not a requirement.
  • Most (78%) parents think it is acceptable for their child to violate minimum age restrictions on online services.

The status quo is not working if large numbers of parents are helping their children lie to get access to online services. Parents do appear to be having conversations with their children, as COPPA intended. Yet, what does it mean if they’re doing so in order to violate the restrictions that COPPA engendered?

One reaction to our data might be that companies should not be allowed to restrict access to children on their sites. Unfortunately, getting the parental permission required by COPPA is technologically difficult, financially costly, and ethically problematic. Sites that target children take on this challenge, but often by excluding children whose parents lack the resources to pay for the service, whose parents lack credit cards, and whose parents refuse to provide extra data about their children in order to grant permission. The situation is even more complicated for children who are in abusive households, have absentee parents, or regularly experience shifts in guardianship. General-purpose sites, including communication platforms like Gmail and Skype and social media services like Facebook and Twitter, generally prefer to avoid the social, technical, economic, and free speech complications involved.

While there is merit to thinking about how to strengthen parent permission structures, focusing on this obscures the issues that COPPA is intended to address: data privacy and online safety. COPPA predates the rise of social media. Its architects never imagined a world where people would share massive quantities of data as a central part of participation. It no longer makes sense to focus on how data are collected; we must instead question how those data are used. Furthermore, while children may be an especially vulnerable population, they are not the only vulnerable population. Most adults have little sense of how their data are being stored, shared, and sold.

COPPA is a well-intentioned piece of legislation with unintended consequences for parents, educators, and the public writ large. It has stifled innovation for sites focused on children and its implementations have made parenting more challenging. Our data clearly show that parents are concerned about privacy and online safety. Many want the government to help, but they don’t want solutions that unintentionally restrict their children’s access. Instead, they want guidance and recommendations to help them make informed decisions. Parents often want their children to learn how to be responsible digital citizens. Allowing them access is often the first step.

Educators face a different set of issues. Those who want to help youth navigate commercial tools often encounter the complexities of age restrictions. Consider the 7th grade teacher whose students are heavy Facebook users. Should she admonish her students for being on Facebook underage? Or should she make sure that they understand how privacy settings work? Where does digital literacy fit in when what children are doing is in violation of websites’ Terms of Service?

At first blush, the issues surrounding COPPA may seem to only apply to technology companies and the government, but their implications extend much further. COPPA affects parenting, education, and issues surrounding youth rights. It affects those who care about free speech and those who are concerned about how violence shapes home life. It’s important that all who care about youth pay attention to these issues. They’re complex and messy, full of good intention and unintended consequences. But rather than reinforcing or extending a legal regime that produces age-based restrictions which parents actively circumvent, we need to step back and rethink the underlying goals behind COPPA and develop new ways of achieving them. This begins with a public conversation.

We are excited to release our new study in the hopes that it will contribute to that conversation. To read our complete findings and learn more about their implications for policy makers, see “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act’” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

To learn more about the Children’s Online Privacy Protection Act (COPPA), make sure to check out the Federal Trade Commission’s website.

(Versions of this post were originally written for the Huffington Post and for the Digital Media and Learning Blog.)

Image Credit: Tim Roe

The Unintended Consequences of Cyberbullying Rhetoric

We all know that teen bullying – both online and offline – has devastating consequences. Jamey Rodemeyer’s suicide is a tragedy. He was tormented for being gay. He knew he was being bullied, and he talked about it regularly. Online, he even wrote: “I always say how bullied I am, but no one listens. What do I have to do so people will listen to me?” That he could admit he was being tormented, that he asked for help, and that folks didn’t help him should be a big wake-up call. We have a problem. And that problem is that most of us adults don’t have the foggiest clue how to help youth address bullying.

It doesn’t take a tragedy to know that we need to find a way to combat bullying. Countless regulators and educators are desperate to do something – anything – to put an end to the victimization. But in their desperation to find a solution, they often turn a blind eye to both research and the voices of youth.

The canonical research definition of bullying comes from Dan Olweus and has three components:

  • Bullying is aggressive behavior that involves unwanted, negative actions.
  • Bullying involves a pattern of behavior repeated over time.
  • Bullying involves an imbalance of power or strength.

What Rodemeyer faced was clearly bullying, but a lot of the reciprocal relational aggression that teens experience online is not actually bullying. Still, in the public eye, these concepts are blurred, and so when parents and teachers and regulators talk about wanting to stop bullying, they talk about wanting to stop all forms of relational aggression too. The problem is that many teens do not – and, for good reasons, cannot – identify a lot of what they experience as bullying. Thus, all of the newfangled programs to stop bullying often miss the mark entirely. In a new paper that Alice Marwick and I co-authored – called “The Drama! Teen Conflict, Gossip, and Bullying in Networked Publics” – we analyzed the language of youth and realized that their use of the language of “drama” serves many purposes, not the least of which is to distance themselves from the perpetrator/victim rhetoric of bullying in order to save face and maintain agency.

For most teenagers, the language of bullying does not resonate. When teachers come in and give anti-bullying messages, it has little effect on most teens. Why? Because most teens are not willing to recognize themselves as a victim or as an aggressor. To do so would require them to recognize themselves as disempowered or abusive. They aren’t willing to go there. And when they are, they need support immediately. Yet, few teens have the support structures necessary to make their lives better. Rodemeyer is a case in point. Few schools have the resources to provide youth with the necessary psychological counseling to work through these issues. But if we want to help youth who are bullied, we need there to be infrastructure to help young people when they are willing to recognize themselves as victimized.

To complicate matters more, although school after school is scrambling to implement anti-bullying programs, no one is assessing the effectiveness of these programs. This is not to say that we don’t need education – we do. But we need the interventions to be tested. And my educated hunch is that we need to be focusing more on positive frames that use the language of youth rather than focusing on the negative.

I want to change the frame of our conversation because we need to change the frame if we’re going to help youth. I’ve spent the last seven years talking to youth about bullying and drama, and it nearly killed me when I realized that all of the effort that adults are putting into anti-bullying campaigns is falling on deaf ears and doing little to actually address what youth are experiencing. Even hugely moving narratives like “It Gets Better” aren’t enough when a teen can make a video for other teens and then kill himself because he’s unable to make it better in his own community.

In an effort to ground the bullying conversation, Alice Marwick and I just released a draft of our new paper: “The Drama! Teen Conflict, Gossip, and Bullying in Networked Publics.” We also co-authored a New York Times Op-Ed in the hopes of reaching a wider audience: “Why Cyberbullying Rhetoric Misses the Mark.” Please read these and send us feedback or criticism. We are in this to help the youth that we spend so much time with and we’re both deeply worried that adult rhetoric is going in the wrong direction and failing to realize why it’s counterproductive.

Image from Flickr by Brandon Christopher Warren


Six Provocations for Big Data

The era of “Big Data” has begun. Computer scientists, physicists, economists, mathematicians, political scientists, bio-informaticists, sociologists, and many others are clamoring for access to the massive quantities of information produced by and about people, things, and their interactions. Diverse groups argue about the potential benefits and costs of analyzing information from Twitter, Google, Verizon, 23andMe, Facebook, Wikipedia, and every space where large groups of people leave digital traces and deposit data. Significant questions emerge. Will large-scale analysis of DNA help cure diseases? Or will it usher in a new wave of medical inequality? Will data analytics help make people’s access to information more efficient and effective? Or will it be used to track protesters in the streets of major cities? Will it transform how we study human communication and culture, or narrow the palette of research options and alter what ‘research’ means? Some or all of the above?

Kate Crawford and I decided to sit down and interrogate some of the assumptions and biases embedded in the rhetoric surrounding “Big Data.” The resulting piece – “Six Provocations for Big Data” – offers a multidisciplinary social analysis of the phenomenon with the goal of sparking a conversation. The paper will be presented as a keynote address at the Oxford Internet Institute’s 10th Anniversary “A Decade in Internet Time” Symposium.

Feedback is more than welcome!

Guilt Through Algorithmic Association

You’re a 16-year-old Muslim kid in America. Say your name is Mohammad Abdullah. Your schoolmates are convinced that you’re a terrorist. They keep typing in Google queries like “is Mohammad Abdullah a terrorist?” and “Mohammad Abdullah al Qaeda.” Google’s search engine learns. All of a sudden, auto-complete starts suggesting terms like “Al Qaeda” as the next term in relation to your name. You know that colleges are looking up your name and you’re afraid of the impression that they might get based on that auto-complete. You are already getting hostile comments in your hometown, a decidedly anti-Muslim environment. You know that you have nothing to do with Al Qaeda, but Google gives the impression that you do. And people are drawing that conclusion. You write to Google but nothing comes of it. What do you do?

This is guilt through algorithmic association. And while this example is not a real case, I keep hearing about real cases. Cases where people are algorithmically associated with practices, organizations, and concepts that paint them in a problematic light even though there’s nothing on the web that associates them with that term. Cases where people are getting accused of affiliations that get produced by Google’s auto-complete. Reputation hits that stem from what people _search_ not what they _write_.
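To make the mechanism concrete, here is a deliberately naive sketch – in Python, and emphatically not Google’s actual system – of how a suggestion model that learns from raw query frequency can manufacture such an association. Every name and number here is hypothetical; the point is only that a handful of hostile searches, with no corroborating webpage anywhere, can dominate the suggestions for an uncommon name.

    from collections import Counter, defaultdict

    class NaiveAutocomplete:
        """Toy suggestion model: rank continuations by past query frequency."""

        def __init__(self):
            # Maps a query prefix to a tally of the words that followed it.
            self.followers = defaultdict(Counter)

        def observe(self, query):
            # Record every prefix -> next-word pair in one logged query.
            words = query.lower().split()
            for i in range(1, len(words)):
                prefix = " ".join(words[:i])
                self.followers[prefix][words[i]] += 1

        def suggest(self, prefix, k=3):
            # Return the k most frequent continuations of this prefix.
            return [w for w, _ in self.followers[prefix.lower()].most_common(k)]

    model = NaiveAutocomplete()
    for _ in range(40):  # hostile classmates searching over and over
        model.observe("mohammad abdullah al qaeda")
    for _ in range(5):   # the rest of the query log
        model.observe("mohammad abdullah soccer team")

    print(model.suggest("mohammad abdullah"))  # ['al', 'soccer'] -- the smear wins

Nothing in a model like this distinguishes malicious queries from sincere ones; frequency is the only signal, which is precisely what makes the resulting association so hard to contest.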

It’s one thing to be slandered by another person on a website, on a blog, in comments. It’s another to have your reputation slandered by computer algorithms. The algorithmic associations do reveal the attitudes and practices of people, but those people are invisible; all that’s visible is the product of the algorithm, without any context of how or why the search engine conveyed that information. What becomes visible is the data point of the algorithmic association. But what gets interpreted is the “fact” implied by said data point, and that gives an impression of guilt. The damage comes from creating the algorithmic association. It gets magnified by conveying it. This raises hard questions:

  1. What are the consequences of guilt through algorithmic association?
  2. What are the correction mechanisms?
  3. Who is accountable?
  4. What can or should be done?

Note: The image used here is Photoshopped. I did not use real examples so as to protect the reputations of people who told me their story.

Update: Guilt through algorithmic association is not constrained to Google. This is an issue for any and all systems that learn from people and convey collective “intelligence” back to users. All of the examples that people gave me involved Google because Google is the dominant search engine. I’m not blaming Google. Rather, I think that this is a serious issue for all of us in the tech industry to consider. And the questions that I’m asking are genuine questions, not rhetorical ones.

Exciting News: Me @ Microsoft Research + New York University

When I was finishing my PhD and starting to think about post-school plans, I made a list of my favorite university departments. At the top of the list was New York University’s “Media, Culture, and Communication” (MCC) department. I am in awe of their faculty and greatly admire the students I know who graduated from there. I decided that MCC was my dream department.

When I joined Microsoft Research, I had a bit of a pang of sadness over the fact that I was opting out of the formal academic job market before it opened, in part because I was really hoping that MCC would have a job opening. But I also realized that I’d be a fool not to take the MSR job. Working at Microsoft Research is a complete dream come true. I have enormous freedom, unbelievable support, and the opportunity to really create a community of researchers.

But then I started wondering… would there be any way to do both? Yes, this is a twisted thought coming from a workaholic, but it kept nagging at the back of my brain. Countless Microsoft Research researchers in Redmond have joint appointments at the University of Washington. And I’m already splitting time between New York and Boston for personal reasons and will be spending more time in New York in the future. So, maybe I could have my cake and eat it too…

One day, hanging out with Helen Nissenbaum, I mentioned that I lurved her department from the bottom of my heart. And, in an off-hand comment, I said something about how I would love love love to have a joint position at MCC. And somehow, what began as a side comment slowly blossomed into a flower when Marita Sturken – the MCC Chair – told me that she thought that this was a great idea. We started talking and negotiating and plotting and imagining. And, to my surprise and delight, Marita called to say that it was possible to create a joint position for me between MSR and MCC.

So, I am tickled pink to announce that I now have a joint appointment at NYU’s Media, Culture, and Communication department. I am joining the faculty as a Research Assistant Professor. I won’t be teaching any formal classes this year, although I’m looking forward to teaching in the future. In the meantime, I will be advising students and collaborating on research and getting involved in the department life. I will not be leaving Microsoft Research – I still don’t see why anyone would leave MSR. My primary affiliation will still be MSR and MSR will continue to be my academic home. But I’m also excited to have a joint appointment at NYU’s MCC that allows me to engage with the scholarly community and with students in new ways. And I’m really really really excited about this!

w000t!!!

I do not speak for my employer.

I don’t know whether to laugh or cry when people imply that when I make arguments, I’m speaking on behalf of Microsoft. Anyone who knows me knows that my opinions are my own. (This blog sez so too but no one ever seems to read that.) What I most appreciate about my employer is that they allow me to speak my mind, even when we disagree. This is what it means to have freedom as a researcher and it’s one of the reasons that I love love love Microsoft Research. I never ever speak on behalf of Microsoft, but I have zero clue why people desperately want to perpetuate this myth. This is what makes me want to cry.

What makes me want to laugh is the irony of folks thinking I speak on behalf of Microsoft when I am critiquing an industry-wide practice that is most prominent because of Google’s recent implementation. Yes, I work for Microsoft. But I used to work for Google on social products. Many of my friends – and my brother – work for Google. I also used to work for Bradley Horowitz (one of the folks in charge of Google Plus) when we were both at Yahoo! and I adore him to pieces. I have nothing but respect for the challenges involved in building products, but I also have no qualms about highlighting problematic corporate logic. My arguments are not coming from a point of hatred towards any company or individual, but stemming from a determination to speak up for those who are voiceless in many of these discussions and to provide a different perspective with which to understand the issues.

I write and critique decisions in the tech industry when I feel as though those decisions have unintended consequences for those being affected. I’m particularly passionate when what’s at stake has implications for equality. I recognize and respect the libertarian ethos that persists in the Valley, but I think that it’s critical that privileged folks understand the cultural logic of those who are not that privileged. And, as someone who has an obscene amount of privilege at this stage in the game, I’m committed to using my stature to draw attention to issues that affect people who are marginalized. And when I get pissed off about something, I rant. And that can be both good and bad. But I’ve found that my rants often make people think. That’s what motivates me to keep ranting.

Sometimes, what I say pisses people off. Sometimes, it sounds like I’m dissing particular products or people. Usually, though, I’m critiquing assumptions that persist in the tech industry and the policies that unfold because of those assumptions. And I recognize that those who don’t know me have a bad tendency to misinterpret what I’m saying. I struggle every time I write to do my darndest to be understandable to as many people as I can. And when I’m most visible, folks often think I’m saying the darndest things. But even though I don’t correct everyone, that doesn’t mean that it’s not frustrating to be taken out of context so frequently.

And so it goes… and so it goes…

“Oh, how I miss substituting the conclusion to confrontation with a kiss.”

Designing for Social Norms (or How Not to Create Angry Mobs)

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code. In thinking about social media systems, plenty of folks think about monetization. Likewise, as issues like privacy pop up, we regularly see legal regulation become a factor. And, of course, folks are always thinking about what the code enables or not. But it’s depressing to me how few people think about the power of social norms. In fact, social norms are usually only thought of as a regulatory process when things go terribly wrong. And then they’re out of control and reactionary and confusing to everyone around. We’ve seen this with privacy issues and we’re seeing this with the “real name” policy debates. As I read through the discussion that I provoked on this issue, I couldn’t help but think that we need a more critical conversation about the importance of designing with social norms in mind.

Good UX designers know that they have the power to shape certain kinds of social practices by how they design systems. And engineers often fail to give UX folks credit for the important work that they do. But designing the system itself is only a fraction of the design challenge when thinking about what unfolds. Social norms aren’t designed into the system. They don’t emerge by telling people how they should behave. And they don’t necessarily follow market logic. Social norms emerge as people – dare we say “users” – work out how a technology makes sense and fits into their lives. Social norms take hold as people bring their own personal values and beliefs to a system and help frame how future users can understand the system. And just as “first impressions matter” for social interactions, I cannot overstate the importance of early adopters. Early adopters configure the technology in critical ways and they play a central role in shaping the social norms that surround a particular system.

How a new social media system rolls out is of critical importance. Your understanding of a particular networked system will be heavily shaped by the people who introduce you to that system. When a system unfolds slowly, there’s room for the social norms to slowly bake, for people to work out what the norms should be. When a system unfolds quickly, there’s a whole lot of chaos in terms of social norms. Whenever a networked system unfolds, there are inevitably competing norms that arise from people who are disconnected from one another. (I can’t tell you how much I loved watching Friendster when the gay men, Burners, and bloggers were oblivious to one another.) Yet, the faster things move, the faster those collisions occur, and the more confusing it is for the norms to settle.

The “real name” culture on Facebook didn’t unfold because of the “real name” policy. It unfolded because the norms were set by early adopters and most people saw that and reacted accordingly. Likewise, the handle culture on MySpace unfolded because people saw what others did and reproduced those norms. When social dynamics are allowed to unfold organically, social norms are a stronger regulatory force than any formalized policy. At that point, you can often formalize the dominant social norms without too much pushback, particularly if you leave wiggle room. Yet, when you start with a heavy-handed regulatory policy that is not driven by social norms – as Google Plus did – the backlash is intense.

Think back to Friendster for a moment… Remember Fakesters? (I wrote about them here.) Friendster spent ridiculous amounts of time playing whack-a-mole, killing off “fake” accounts and pissing off some of the most influential of its userbase. The “Fakester genocide” prompted an amazing number of people to leave Friendster and head over to MySpace, most notably bands, all because they didn’t want to be configured by the company. The notion of Fakesters died down on MySpace, but the core practice – the ability for groups (bands) to have recognizable representations – ended up becoming the most central feature of MySpace.

People don’t like to be configured. They don’t like to be forcibly told how they should use a service. They don’t want to be told to behave like the designers intended them to be. Heavy-handed policies don’t make for good behavior; they make for pissed off users.

This doesn’t mean that you can’t or shouldn’t design to encourage certain behaviors. Of course you should. The whole point of design is to help create an environment where people engage in the most fruitful and healthy way possible. But designing a system to encourage the growth of healthy social norms is fundamentally different than coming in and forcefully telling people how they must behave. No one likes being spanked, especially not a crowd of opinionated adults.

Ironically, most people who were adopting Google Plus early on were using their real names, out of habit, out of understanding how they thought the service should work. A few weren’t. Most of those who weren’t were using a recognizable pseudonym, not even trying to trick anyone. Going after them was just plain stupid. It was an act of force and people felt disempowered. And they got pissed. And at this point, it’s no longer about whether or not the “real names” policy was a good idea in the first place; it’s now an act of oppression. Google Plus would’ve been ten bazillion times better off had they subtly encouraged the policy without making a big deal out of it, had they chosen to only enforce it in the most egregious situations. But now they’re stuck between a rock and a hard place. They either have to stick with their policy and deal with the angry mob or let go of their policy as a peace offering in the hopes that the anger will calm down. It didn’t have to be this way though and it wouldn’t have been had they thought more about encouraging the practices they wanted through design rather than through force.

Of course there’s a legitimate reason to want to encourage civil behavior online. And of course trolls wreak serious havoc on a social media system. But a “real names” policy doesn’t stop an unrepentant troll; it’s just another hurdle that the troll will love mounting. In my work with teens, I see textual abuse (“bullying”) every day among people who know exactly who each other is on Facebook. The identities of many trolls are known. But that doesn’t solve the problem. What matters is how the social situation is configured, the norms about what’s appropriate, and the mechanisms by which people can regulate them (through social shaming and/or technical intervention). A culture where people can build reputation through their online presence (whether “real” names or pseudonyms) goes a long way in combating trolls (although it is by no means a foolproof solution). But you don’t get that culture by force; you get it by encouraging the creation of healthy social norms.
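That design point lends itself to a small sketch. What follows is a hypothetical illustration in Python – the class, handles, thresholds, and weights are all made up for the sake of the example – of the idea that reputation can attach to any stable identifier, so a long-standing pseudonym can carry more earned trust than a throwaway “real” name:

    from dataclasses import dataclass

    @dataclass
    class Account:
        handle: str           # a stable pseudonym works as well as a legal name
        upvotes: int = 0      # community endorsement accrued over time
        abuse_flags: int = 0  # social shaming / moderation reports

        @property
        def reputation(self) -> int:
            # Standing is driven by behavior, not by the name on the account.
            # The weighting here is an arbitrary placeholder.
            return self.upvotes - 5 * self.abuse_flags

        def can_post_unmoderated(self) -> bool:
            # Gate privileges on earned reputation rather than on identity.
            return self.reputation >= 10

    veteran = Account(handle="Skud", upvotes=42, abuse_flags=1)
    troll = Account(handle="John Smith", upvotes=0, abuse_flags=12)

    print(veteran.can_post_unmoderated())  # True: the pseudonym has standing
    print(troll.can_post_unmoderated())    # False: the "real" name does not

In a scheme like this, a “real names” mandate buys you nothing that the behavioral signal doesn’t already provide, which is the point: accountability can be earned rather than imposed.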

Companies that build systems that people use have power. But they have to be very very very careful about how they assert that power. It’s really easy to come in and try to configure the user through force. It’s a lot harder to work diligently to design and build the ecosystem in which healthy norms emerge. Yet, the latter is of critical importance to the creation of a healthy community. Cuz you can’t get to a healthy community through force.

“Real Names” Policies Are an Abuse of Power

Everyone’s abuzz with the “nymwars,” mostly in response to Google Plus’ decision to enforce its “real names” policy. At first, Google Plus went on a deleting spree, killing off accounts that violated its policy. When the community reacted with outrage, Google Plus leaders tried to calm the anger by detailing their “new and improved” mechanism to enforce “real names” (without killing off accounts). This only sparked increased discussion about the value of pseudonymity. Dozens of blog posts have popped up with people expressing their support for pseudonymity and explaining their reasons. One of the posts, by Kirrily “Skud” Robert, included a list of explanations that came from people she polled, including:

  • “I am a high school teacher, privacy is of the utmost importance.”
  • “I have used this name/account in a work context, my entire family know this name and my friends know this name. It enables me to participate online without being subject to harassment that at one point in time lead to my employer having to change their number so that calls could get through.”
  • “I do not feel safe using my real name online as I have had people track me down from my online presence and had coworkers invade my private life.”
  • “I’ve been stalked. I’m a rape survivor. I am a government employee that is prohibited from using my IRL.”
  • “As a former victim of stalking that impacted my family I’ve used [my nickname] online for about 7 years.”
  • “[this name] is a pseudonym I use to protect myself. My web site can be rather controversial and it has been used against me once.”
  • “I started using [this name] to have at least a little layer of anonymity between me and people who act inappropriately/criminally. I think the ‘real names’ policy hurts women in particular.”
  • “I enjoy being part of a global and open conversation, but I don’t wish for my opinions to offend conservative and religious people I know or am related to. Also I don’t want my husband’s Govt career impacted by his opinionated wife, or for his staff to feel in any way uncomfortable because of my views.”
  • “I have privacy concerns for being stalked in the past. I’m not going to change my name for a google+ page. The price I might pay isn’t worth it.”
  • “We get death threats at the blog, so while I’m not all that concerned with, you know, sane people finding me. I just don’t overly share information and use a pen name.”
  • “This identity was used to protect my real identity as I am gay and my family live in a small village where if it were openly known that their son was gay they would have problems.”
  • “I go by pseudonym for safety reasons. Being female, I am wary of internet harassment.”

You’ll notice a theme here…

Another site has popped up called “My Name Is Me” where people vocalize their support for pseudonyms. What’s most striking is the list of people who are affected by “real names” policies, including abuse survivors, activists, LGBT people, women, and young people.

Over and over again, people keep pointing to Facebook as an example where “real names” policies work. This makes me laugh hysterically. One of the things that became patently clear to me in my fieldwork is that countless teens who signed up to Facebook late into the game chose to use pseudonyms or nicknames. What’s even more noticeable in my data is that an extremely high percentage of people of color used pseudonyms as compared to the white teens that I interviewed. Of course, this would make sense…

The people who most heavily rely on pseudonyms in online spaces are those who are most marginalized by systems of power. “Real names” policies aren’t empowering; they’re an authoritarian assertion of power over vulnerable people. These ideas and issues aren’t new (and I’ve even talked about this before), but what is new is that marginalized people are banding together and speaking out loudly. And thank goodness.

What’s funny to me is that people also don’t seem to understand the history of Facebook’s “real names” culture. When early adopters (first the elite college students…) embraced Facebook, it was a trusted community. They gave the name that they used in the context of college or high school or the corporation that they were a part of. They used the name that fit the network they joined Facebook with. The names they used weren’t necessarily their legal names; plenty of people chose Bill instead of William. But they were, for all intents and purposes, “real.” As the site grew larger, people had to grapple with new crowds being present, and discomfort emerged over the norms. But the norms were set, and people kept signing up and giving the name that they were most commonly known by. By the time celebrities showed up, Facebook wasn’t demanding that Lady Gaga call herself Stefani Germanotta, but of course, she had a “fan page” and was separate in the eyes of the crowd. Meanwhile, what many folks failed to notice is that countless black and Latino youth signed up to Facebook using handles. Most people don’t notice what black and Latino youth do online. Likewise, people from outside of the US started signing up to Facebook and using alternate names. Again, no one noticed, because names transliterated from Arabic or Malay, or containing phrases in Portuguese, weren’t particularly visible to the real-name enforcers. Real names are by no means universal on Facebook, but the importance of real names is a myth that Facebook likes to shill. And, for the most part, privileged white Americans use their real names on Facebook. So it “looks” right.

Then along comes Google Plus, thinking that it can just dictate a “real names” policy. Only, they made a huge mistake. They allowed the tech crowd to join within 48 hours of launching. The thing about the tech crowd is that it has a long history of nicks and handles and pseudonyms. And this crowd got to define the early social norms of the site, rather than being socialized into the norms set up by trusting college students who had joined a site that they thought was college-only. This was not a recipe for “real name” norm setting. Quite the opposite. Worse for Google… Tech folks are VERY happy to speak LOUDLY when they’re pissed off. So while countless black and Latino folks have been using nicks all over Facebook (just like they did on MySpace btw), they never loudly challenged Facebook’s policy. There was more of a “live and let live” approach to this. Not so lucky for Google and its name-bending community. Folks are now PISSED OFF.

Personally, I’m ecstatic to see this much outrage. And I’m really really glad to see seriously privileged people take up the issue, because while they are the least likely to actually be harmed by “real names” policies, they have the authority to be able to speak truth to power. And across the web, I’m seeing people highlight that this issue has more depth to it than fun names (and is a whole lot more complicated than boiling it down to being about anonymity, as Facebook’s Randi Zuckerberg foolishly did).

What’s at stake is people’s right to protect themselves, their right to actually maintain a form of control that gives them safety. If companies like Facebook and Google are actually committed to the safety of their users, they need to take these complaints seriously. Not everyone is safer by giving out their real name. Quite the opposite; many people are far LESS safe when they are identifiable. And those who are least safe are often those who are most vulnerable.

Likewise, the issue of reputation must be turned on its head when thinking about marginalized people. Folks point to the issue of people using pseudonyms to obscure their identity and, in theory, “protect” their reputation. The assumption baked into this is that the observer is qualified to actually assess someone’s reputation. All too often, and especially with marginalized people, the observer takes someone out of context and judges them inappropriately based on what they get online. Let me explain this in a concrete example that many of you have heard before. Years ago, I received a phone call from an Ivy League college admissions officer who wanted to accept a young black man from South Central Los Angeles into their college; the student had written an application about how he wanted to leave behind the gang-ridden community he came from, but the admissions officers had found his MySpace, which was filled with gang insignia. The question that was asked of me was “Why would he lie to us when we can tell the truth online?” Knowing that community, I was fairly certain that he was being honest with the college; he was also doing what it took to keep himself alive in his community. If he had used a pseudonym, the college wouldn’t have been able to pull data about him out of context and inappropriately judge him. But he didn’t, and the admissions officers thought that their frame mattered most. I really hope that he got into that school.

There is no universal context, no matter how many times geeks want to tell you that you can be one person to everyone at every point. But just because people are doing what it takes to be appropriate in different contexts, to protect their safety, and to make certain that they are not judged out of context, doesn’t mean that everyone is a huckster. Rather, people are responsibly and reasonably responding to the structural conditions of these new media. And there’s nothing acceptable about those who are most privileged and powerful telling those who aren’t that it’s OK for their safety to be undermined. And you don’t guarantee safety by stopping people from using pseudonyms, but you do undermine people’s safety by doing so.

Thus, from my perspective, enforcing “real names” policies in online spaces is an abuse of power.

The Unintended Consequences of Obsessing Over Consequences (or why to support youth risk-taking)

Developmental psychologists love to remind us that the frontal lobe isn’t fully developed until humans are in their mid-20s. The prefrontal cortex is responsible for our ability to assess the consequences of our decisions, our ability to understand how what we do will play out into the future. This is often used to explain why teens (and, increasingly, college-aged people) lack the cognitive ability to be wise. Following from this logic, there’s a belief that we must protect vulnerable young people from their own actions because they don’t understand the consequences.

This logic assumes that understanding future consequences is *better* than not understanding them. I’m not sure that I believe this to be true.

Certainly, when we send young people off to fight our wars, we don’t want them to think about the consequences of what they have to do to survive (and, thus, help us survive). It’s not that we want them to shoot first and ask questions later, but we don’t want them to overthink their survival instincts when they’re being shot at.

Reproduction is an interesting counter-example. There’s no doubt that teen moms do little in the way of thinking about the consequences of getting pregnant. But folks in their 30s spend an obscene amount of time thinking about what it means to reproduce. Intensive parenting is clearly the product of constantly thinking about consequences, but I’m not sure that it’s actually healthier for kids or parents. I would hypothesize that biology wins when we don’t overthink parenting, while the planet (as a delicate environmental ecosystem that can barely support the population) wins when we do overthink these things. Just a guess.

Creativity is another interesting area. We often talk about how older people are more rigid in their thinking. I love listening to mathematicians discuss whether or not someone who has not had a breakthrough insight in their 20s can have one in their 40s/50s. Certainly in the tech industry, we’re obsessed with youth. But our obsession in many ways is rooted in risk-taking, in not thinking too much about the future.

As I get older, I’m painfully aware of my brain getting more ‘conservative’ (not in a political sense). I am more strategic in my thinking, more judgmental of people who just try something radical. I spend a lot more time telling the little voice of fear and anxiety and neuroticism to STFU. I look back at my younger years and reflect on how stupid I was and then I laugh when I think about how well some of my more ridiculous ideas paid off. I find myself actually thinking about consequences before taking risks and then I get really annoyed at myself because I’ve always prided myself on my fly-by-the-seat-of-my-pants quality. In short, I can feel myself getting old and I think it’s really weird.

Most people judge from their current mental mindset, unable to remember a different mindset. Thus, I totally get why most people, if they’re undergoing the cognitive transition that I’ve watched myself go through, would see young people’s risk-taking as inherently horrible. Sure, old folks respect the outcomes of some youth who change the world. But since most people don’t become Mark Zuckerberg, there’s more pressure to protect (and, often, confine) youth than to encourage their radical risk-taking. And, of course, most risk-taking doesn’t result in a billion dollar valuation. Hell, most risk-taking has no chance of paying off. But it’s a weird, connected package. The same mindset that propelled me to do some seriously reckless, outright dangerous, and sometimes illegal things also prompted me to never say no to other institutional authorities in ways that allowed me to succeed professionally. This is why I don’t regret even the stupidest of things that I did as a youth. Of course, I’m also damn lucky that I never got caught.

I’m worried about our societal assumption that risk-taking without thinking of the consequences is an inherently bad thing. We need some radical thinking to solve many of the world’s biggest problems. And I don’t believe that it’s so easy to separate out what adults perceive as ‘good’ risk-taking from what they think is ‘bad’ risk-taking. But how many brilliant minds will we destroy by punishing their radical acts of defying authority? How many brilliant minds will we destroy by punishing them for ‘being stupid’? It’s easy to get caught up in a binary of ‘right’ and ‘wrong’ when all that you can think about is the consequences. But change has never happened when people simply play by the rules. You have to break the rules to create a better society. And I don’t think that it’s easy to do this when you’re always thinking about the consequences of your actions.

I’m not arguing for anarchy. I’m too old for that. But I am arguing that we should question our assumption that people are better off when they have the cognitive capacity to think through consequences. Or that society is better off when all individuals have that mental capability. From my perspective, there are definitely pros and cons to overthinking and while there are certainly cases where future-aware thought is helpful, there are also cases where it’s not. And I also think that there are some serious consequences of imprisoning youth until they grow up.

Anyhow, fun thoughts to munch on this weekend…