
Facebook Must Be Accountable to the Public

A pair of Gizmodo stories has prompted journalists to ask questions about Facebook’s power to manipulate political opinion in an already heated election year. If the claims are accurate, Facebook contractors have suppressed some conservative news, and their curatorial hand affects the Facebook Trending list more than the public realizes. Mark Zuckerberg took to his Facebook page yesterday to argue that Facebook does everything possible to be neutral and that there are significant procedures in place to minimize biased coverage. He also promised to look into the accusations.

Watercolor by John Orlando Parry, “A London Street Scene” 1835, in the Alfred Dunhill Collection.

As this conversation swirls around intentions and explicit manipulation, some significant issues are missing. First, all systems are biased. There is no such thing as neutrality when it comes to media. That has long been a fiction, one that traditional news media needs and insists on, even as scholars highlight that journalists reveal their biases through everything from small facial twitches to choice of frames and topics of interest. It’s also dangerous to assume that the “solution” is to make sure that “both” sides of an argument are heard equally. This is the source of tremendous conflict around how heated topics like climate change and evolution are covered. It is even more dangerous, however, to think that removing humans and relying more on algorithms and automation will remove this bias.

Recognizing bias and enabling processes to grapple with it must be part of any curatorial process, algorithmic or otherwise. As we move into the development of algorithmic models to shape editorial decisions and curation, we need to find a sophisticated way of grappling with the biases that shape development, training sets, quality assurance, and error correction, not to mention an explicit act of “human” judgment.

There never was neutrality, and there never will be.

This issue goes far beyond the Trending box in the corner of your Facebook profile, and this latest wave of concerns is only the tip of the iceberg around how powerful actors can affect or shape political discourse. What is of concern right now is not that human beings are playing a role in shaping the news — they always have — it is the veneer of objectivity provided by Facebook’s interface, the claims of neutrality enabled by the integration of algorithmic processes, and the assumption that what is prioritized reflects only the interests and actions of the users (the “public sphere”) and not those of Facebook, advertisers, or other powerful entities.

The key challenge that emerges out of this debate concerns accountability. In theory, news media is accountable to the public. Like neutrality, though, accountability is more of a desired goal than something that’s consistently realized. Even so, traditional news media has a host of processes in place to address the possibility of manipulation: ombudspeople, whistleblowers, public editors, and myriad alternative media organizations. Facebook and other technology companies have not, historically, been included in that conversation.

I have tremendous respect for Mark Zuckerberg, but I think his stance that Facebook will be neutral as long as he’s in charge is a dangerous one. This is what it means to be a benevolent dictator, and there are plenty of people around the world who disagree with his values, commitments, and logics. As a progressive American, I have a lot more in common with Mark than not, but I am painfully aware of the neoliberal American value systems that are baked into the very architecture of Facebook and our society as a whole.

Who Controls the Public Sphere in an Era of Algorithms?

In light of this public conversation, I’m delighted to announce that Data & Society has been developing a project that asks who controls the public sphere in an era of algorithms. As part of this process, we convened a workshop and have produced a series of documents that we think are valuable to the conversation:

These documents provide historical context, highlight how media has always been engaged in power struggles, showcase the challenges that new media face, and offer case studies that reveal the complexities going forward.

This conversation is by no means over. It is only just beginning. My hope is that we quickly leave the state of fear and start imagining mechanisms of accountability that we, as a society, can live with. Institutions like Facebook have tremendous power and they can wield that power for good or evil. But for society to function responsibly, there must be checks and balances regardless of the intentions of any one institution or its leader.

This work is a part of Data & Society’s developing Algorithms and Publics project, including a set of documents occasioned by the Who Controls the Public Sphere in an Era of Algorithms? workshop. More posts from workshop participants:

An Old Fogey’s Analysis of a Teenager’s View on Social Media

In the days that followed Andrew Watts’ “A Teenager’s View on Social Media (written by an actual teen)” post, dozens of people sent me the link.

Almost all of them work in the tech industry, and many of them are tech executives or venture capitalists. The general sentiment has been: “Look! Here’s an interesting kid who’s captured what kids these days are doing with social media!” Most don’t even ask for my interpretation, sending it to me as though it were gospel. I found myself growing uncomfortable and angry with the folks pointing me to the post, and so I feel the need to offer my perspective as someone who is not a teenager but who has thought about these issues extensively for years.

We’ve been down this path before. Andrew is not the first teen to speak as an “actual” teen and have his story picked up. Every few years, a (typically white male) teen with an interest in technology writes about technology among his peers on a popular tech platform and gets traction. Tons of conferences host teen panels, usually drawing on privileged teens in the community or related to the organizers. I’m not bothered by these teens’ comments; I’m bothered by the way they are interpreted and treated by the tech press and the digerati.

I’m a researcher. I’ve been studying American teens’ engagement with social media for over a decade. I wrote a book on the topic. I don’t speak on behalf of teens, but I do amplify their voices and try to make sense of the diversity of experiences teens have. I work hard to account for the biases in whose voices I have access to because I’m painfully aware that it’s hard to generalize about a population that’s roughly 16 million people strong. They are very diverse and, yet, journalists and entrepreneurs want to label them under one category and describe them as one thing.

Andrew is a very lucid writer and I completely trust his depiction of his peer group’s use of social media. He wrote a brilliant post about his life, his experiences, and his interpretations. His voice should be heard. And his candor is delightful to read. But his analysis cannot and should not be used to make claims about all teenagers. I don’t blame Andrew for this; I blame the readers — and especially tech elites and journalists — for their interpretation of Andrew’s post because they should know better by now. What he’s sharing is not indicative of all teens. More significantly, what he’s sharing reinforces existing biases in the tech industry and journalism that worry me tremendously.

His coverage of Twitter should raise a big red flag to anyone who has spent an iota of time paying attention to the news. Over the last six months, we’ve seen a phenomenal uptick in serious US-based activism by many youth in light of what took place in Ferguson. It’s hard to ignore Twitter’s role in this phenomenon, with hashtags like #blacklivesmatter and #IfTheyGunnedMeDown not only flowing from Twitter onto other social media platforms, but also getting serious coverage from major media. Andrew’s statement that “a lot of us simply do not understand the point of Twitter” should raise eyebrows, but it’s the rest of his description of Twitter that should serve as a stark reminder of Andrew’s position within the social media landscape.

Let me put this bluntly: teens’ use of social media is significantly shaped by race and class, geography and cultural background. Let me repeat that for emphasis.

Teens’ use of social media is significantly shaped by race and class, geography and cultural background.

The world of Twitter is many things, and what journalists and tech elites see of Twitter is not even remotely similar to what many of the teens that I study see, especially black and brown urban youth. For starters, their Twitter feed doesn’t have links; this is often shocking to journalists and digerati whose entire stream is filled with URLs. But I’m also bothered by Andrew’s depiction of Twitter users as being there first and foremost to “complain/express themselves.” While he offers other, more professional categorizations, it’s hard not to read this depiction in light of what I see in low-status communities and the ways that privileged folks interpret the types of expression that exist in these communities. When black and brown teens offer their perspective on the world using the language of their community, it is often derided as a complaint or dismissed as self-expression. I doubt that Andrew is trying to make an explicitly racist comment here, but I want to caution every reader out there that youth use of Twitter is often cast in a negative light precisely because of the heavy use by low-status black and brown youth.

Andrew’s depiction of his peers’ use of social media is a depiction of a segment of the population, notably the segment most like those in the tech industry. In other words, what the tech elite are seeing and sharing is what people like them would’ve been doing with social media X years ago. It resonates. But it is not a full portrait of today’s youth. And its uptake and interpretation by journalists and the tech elite whitewashes teens’ practices in deeply problematic ways.

I’m not saying he’s wrong; I’m saying his story is incomplete and the incompleteness is important. His commentary on Facebook is probably the most generalizable, if we’re talking about urban and suburban American youth. Of course, his comments shouldn’t be shocking to anyone at this point (as Andrew himself points out). Somehow, though, declarations of Facebook’s lack of emotional weight with teens continue to be front-page news. All that said, this does render invisible the cultural work of Facebook in rural areas and outside of the US.

Andrew is very open about where he stands. He’s very clear about his passion for technology (and his love of blogging on Medium should be a big ole hint to anyone who missed his byline). He’s also a college student and talks about his peers as being obviously on a path to college. But as readers, let’s not forget that only about half of US 19-year-olds are in college. He talks about WhatsApp being interesting when you go abroad, but the practice of “going abroad” is itself privileged, with less than a third of US citizens even holding passports. Furthermore, this renders invisible the ways in which many US-based youth use WhatsApp to communicate with family and friends who live outside of the US. Immigration isn’t part of his narrative.

I don’t for a second fault Andrew for not having a perspective beyond his peer group. But I do fault both the tech elite and journalists for not thinking critically through what he posted and presuming that a single person’s experience can speak on behalf of an entire generation. There’s a reason why researchers and organizations like Pew Research are doing the work that they do — they do so to make sure that we don’t forget about the populations that aren’t already in our networks. The fact that professionals prefer anecdotes from people like us over concerted efforts to understand a demographic as a whole is shameful. More importantly, it’s downright dangerous. It shapes what the tech industry builds and invests in, what gets promoted by journalists, and what gets legitimized by institutions of power. This is precisely why and how the tech industry is complicit in the increasing structural inequality that is plaguing our society.

This post was originally published to The Message at Medium on January 12, 2015

What does the Facebook experiment teach us?

I’m intrigued by the reaction that has unfolded around the Facebook “emotion contagion” study. (If you aren’t familiar with this, read this primer.) As others have pointed out, the practice of A/B testing content is quite common. And Facebook has a long history of experimenting on how it can influence people’s attitudes and practices, even in the realm of research. An earlier study showed that Facebook decisions could shape voters’ practices. But why is it that *this* study has sparked a firestorm?

In asking people about this, I’ve been given two dominant reasons:

  1. People’s emotional well-being is sacred.
  2. Research is different than marketing practices.

I don’t find either of these responses satisfying.

The Consequences of Facebook’s Experiment

Facebook’s research team is not truly independent of product. They have a license to do research and publish it, provided that it contributes to the positive development of the company. If Facebook knew that this research would spark the negative PR backlash, they never would’ve allowed it to go forward or be published. I can only imagine the ugliness of the fight inside the company now, but I’m confident that PR is demanding silence from researchers.

I do believe that the research was intended to be helpful to Facebook. So what was the intended positive contribution of this study? I get the sense from Adam Kramer’s comments that the goal was to determine whether content sentiment could affect people’s emotional response after being on Facebook. In other words, given that Facebook wants to keep people on Facebook, if people came away from Facebook feeling sadder, presumably they’d not want to come back to Facebook again. Thus, it’s in Facebook’s best interest to leave people feeling happier. And this study suggests that the sentiment of the content people see influences this. One applied take-away for product, then, is to downplay negative content. Presumably this is better for users and better for Facebook.

We can debate all day long as to whether or not this is what that study actually shows, but let’s work with this for a second. Let’s say that pre-study Facebook showed 1 negative post for every 3 positive and now, because of this study, Facebook shows 1 negative post for every 10 positive ones. If that’s the case, was the one-week treatment worth the outcome for longer-term content exposure? Who gets to make that decision?
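To see the scale of that trade-off, here’s a back-of-the-envelope sketch using those purely hypothetical ratios; the 100-posts-a-day feed volume and the one-year horizon are my own assumptions for illustration, not figures from Facebook or the study:

```python
# Illustrative arithmetic only: the 1:3 and 1:10 ratios are the hypothetical
# numbers from the paragraph above; the feed volume and time horizon are
# assumed for the sake of the sketch, not figures from Facebook or PNAS.

POSTS_PER_DAY = 100  # assumed number of posts one user sees per day

def negative_posts_per_day(neg, pos, total=POSTS_PER_DAY):
    """Expected number of negative posts seen per day at a neg:pos mix."""
    return total * neg / (neg + pos)

before = negative_posts_per_day(1, 3)    # ~25 negative posts/day (1:3 mix)
after = negative_posts_per_day(1, 10)    # ~9 negative posts/day (1:10 mix)

# Compare one week of experimental treatment against a year of the adjusted feed.
print(f"Fewer negative posts seen over a year: {(before - after) * 365:,.0f}")
```

If the post-study feed really did shift like that, the change in what people are exposed to week after week would dwarf anything that happened during the one-week treatment, which is exactly why the question of who gets to make that call matters.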

Folks keep talking about all of the potential harm that could’ve come from the study – the possibility of suicides, the mental health consequences. But what about the potential harm of negative content on Facebook more generally? Even if we believe that there were subtle negative costs to those who received the treatment, the ongoing harm from negative content on Facebook in every week other than that one-week experiment must be greater. And how do we account for the positive benefits to users if Facebook increased positive treatments en masse as a result of this study? Of course, the problem is that Facebook is a black box. We don’t know what they did with this study. The only thing we know is what is published in PNAS and that ain’t much.

Of course, if Facebook did make the content that users see more positive, should we simply be happy? What would it mean that you’re more likely to see announcements from your friends when they are celebrating a new child or a fun night on the town, but less likely to see their posts when they’re offering depressive missives or angsting over a relationship in shambles? If Alice is happier when she is oblivious to Bob’s pain because Facebook chooses to keep that from her, are we willing to sacrifice Bob’s need for support and validation? This is the hard ethical choice at the crux of any decision about what content to show. And the reality is that Facebook is making these choices every day without oversight, transparency, or informed consent.

Algorithmic Manipulation of Attention and Emotions

Facebook actively alters the content you see. Most people focus on the practice of marketing, but most of what Facebook’s algorithms do involves curating content to provide you with what they think you want to see. Facebook algorithmically determines which of your friends’ posts you see. They don’t do this for marketing reasons. They do this because they want you to want to come back to the site day after day. They want you to be happy. They don’t want you to be overwhelmed. Their everyday algorithms are meant to manipulate your emotions. What factors go into this? We don’t know.

Facebook is not alone in algorithmically predicting what content you wish to see. Any recommendation system or curatorial system prioritizes some content over others. But let’s compare what we glean from this study with standard practice. Most sites, from major news media to social media, have some algorithm that shows you the content that people click on the most. This is what drives media entities to produce listicles, flashy headlines, and car-crash news stories. What do you think garners more traffic – a detailed analysis of what’s happening in Syria or 29 pictures of the cutest members of the animal kingdom? Part of what media learned long ago is that fear and salacious gossip sell papers. 4chan taught us that grotesque imagery and cute kittens work too. What this means online is that stories about child abductions, dangerous islands filled with snakes, and celebrity sex tape scandals are often the most clicked on, retweeted, favorited, etc. So an entire industry has emerged to produce crappy clickbait content under the banner of “news.”
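A toy version of that curation logic (not any platform’s actual algorithm, just the generic “rank by what gets clicked” pattern, with made-up numbers) makes the dynamic plain:

```python
# A toy popularity ranker: not any real platform's algorithm, just the
# generic "show what gets clicked" logic described above, with made-up numbers.

stories = [
    {"headline": "Detailed analysis of what's happening in Syria", "clicks": 1_200, "views": 80_000},
    {"headline": "29 cutest members of the animal kingdom", "clicks": 9_500, "views": 80_000},
    {"headline": "Celebrity scandal you won't believe", "clicks": 7_800, "views": 80_000},
]

def click_through_rate(story):
    """Fraction of people who clicked after seeing the headline."""
    return story["clicks"] / story["views"]

# Rank purely by click-through rate: the curator optimizes for whatever
# people click on, regardless of its psychological toll.
for story in sorted(stories, key=click_through_rate, reverse=True):
    print(f"{click_through_rate(story):.2%}  {story['headline']}")
```

Rank purely on clicks and the cute-animal listicle beats the Syria analysis every time; no one has to intend anything sinister for the feed to end up optimized for whatever provokes a click.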

Guess what? When people are surrounded by fear-mongering news media, they get anxious. They fear the wrong things. Moral panics emerge. And yet, we as a society believe that it’s totally acceptable for news media – and its clickbait brethren – to manipulate people’s emotions through the headlines they produce and the content they cover. And we generally accept that algorithmic curators are well within their rights to prioritize heavily clicked content over other material, regardless of the psychological toll on individuals or society. What makes Facebook’s practice different? (Other than the fact that the media wouldn’t hold itself accountable for its own manipulative practices…)

Somehow, shrugging our shoulders and saying that we promoted content because it was popular is acceptable because those actors don’t voice that their intention is to manipulate your emotions so that you keep viewing their reporting and advertisements. And it’s also acceptable to manipulate people for advertising because that’s just business. But when researchers admit that they’re trying to learn if they can manipulate people’s emotions, they’re shunned. What this suggests is that the practice is acceptable, but admitting the intention and being transparent about the process is not.

But Research is Different!!

As this debate has unfolded, whenever people point out that these business practices are commonplace, folks respond by highlighting that research or science is different. What unfolds is a highbrow notion about the purity of research and its exclusive claims on ethical standards.

Do I think that we need to have a serious conversation about informed consent? Absolutely. Do I think that we need to have a serious conversation about the ethical decisions companies make with user data? Absolutely. But I do not believe that this conversation should ever apply just to that which is categorized under “research.” Nor do I believe that academe necessarily provides a gold standard.

Academe has many problems that need to be accounted for. Researchers are incentivized to figure out how to get through IRBs rather than to think critically and collectively about the ethics of their research protocols. IRBs are incentivized to protect the university rather than truly work out an ethical framework for these issues. Journals relish corporate datasets even when replicability is impossible. And for that matter, even in a post-paper era, journals have ridiculous word count limits that demotivate researchers from spelling out all of the gory details of their methods. But there are also broader structural issues. Academe is so stupidly competitive and peer review is so much of a game that researchers have little incentive to share their studies-in-progress with their peers for true feedback and critique. And the status games of academe reward those who get access to private coffers of data while prompting those who don’t to chastise those who do. And there’s generally no incentive for companies to play nice with researchers unless it helps their prestige, hiring opportunities, or product.

IRBs are an abysmal mechanism for actually accounting for ethics in research. By and large, they’re structured to make certain that the university will not be liable. Ethics aren’t a checklist. Nor are they a universal. Navigating ethics involves a process of working through the benefits and costs of a research act and making a conscientious decision about how to move forward. Reasonable people differ on what they think is ethical. And disciplines have different standards for how to navigate ethics. But we’ve trained an entire generation of scholars that ethics equals “that which gets past the IRB” which is a travesty. We need researchers to systematically think about how their practices alter the world in ways that benefit and harm people. We need ethics to not just be tacked on, but to be an integral part of how *everyone* thinks about what they study, build, and do.

There’s a lot of research that has serious consequences for the people who are part of the study. I think about the work that some of my colleagues do with child victims of sexual abuse. Getting children to talk about these awful experiences can take quite a psychological toll. Yet, better understanding what they experienced has huge benefits for society. So we make our trade-offs and we do research that can have consequences. But what warms my heart is how my colleagues work hard to help those children by providing counseling immediately following the interview (and, in some cases, follow-up counseling). They think long and hard about each question they ask, and how they go about asking it. And yet most IRBs wouldn’t let them do this work because no university wants to touch anything that involves kids and sexual abuse. Doing research involves trade-offs, and finding an ethical path forward requires effort and risk.

It’s far too easy to say “informed consent” and then not take responsibility for the costs of the research process, just as it’s far too easy to point to an IRB as proof of ethical thought. For any study that involves manipulation – common in economics, psychology, and other social science disciplines – people are only so informed about what they’re getting themselves into. You may think that you know what you’re consenting to, but do you? And then there are studies like discrimination audit studies in which we purposefully don’t inform people that they’re part of a study. So what are the right trade-offs? When is it OK to eschew consent altogether? What does it mean to truly be informed? When is being informed not enough? These aren’t easy questions and there aren’t easy answers.

I’m not necessarily saying that Facebook made the right trade-offs with this study, but I think it’s disingenuous for scholars to react as though research is only acceptable with IRB approval plus informed consent. Of course, a huge part of what’s at stake has to do with the fact that what counts as a contract legally is not the same as consent. Most people haven’t meaningfully consented to all of Facebook’s terms of service. They’ve agreed to a contract because they feel as though they have no other choice. And this really upsets people.

A Different Theory

The more I read people’s reactions to this study, the more that I’ve started to think that the outrage has nothing to do with the study at all. There is a growing amount of negative sentiment towards Facebook and other companies that collect and use data about people. In short, there’s anger at the practice of big data. This paper provided ammunition for people’s anger because it’s so hard to talk about harm in the abstract.

For better or worse, people imagine that Facebook is offered by a benevolent dictator, that the site is there to enable people to better connect with others. In some senses, this is true. But Facebook is also a company. And a public company for that matter. It has to find ways to become more profitable with each passing quarter. This means that it designs its algorithms not just to market to you directly but to convince you to keep coming back over and over again. People have an abstract notion of how that operates, but they don’t really know, or even want to know. They just want the hot dog to taste good. Whether it’s couched as research or operations, people don’t want to think that they’re being manipulated. So when they find out what soylent green is made of, they’re outraged. This study isn’t really what’s at stake. What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.

I get the anger. I personally loathe Facebook and I have for a long time, even as I appreciate and study its importance in people’s lives. But on a personal level, I hate the fact that Facebook thinks it’s better than me at deciding which of my friends’ posts I should see. I hate that I have no meaningful mechanism of control on the site. And I am painfully aware of how my sporadic use of the site has confused their algorithms so much that what I see in my newsfeed is complete garbage. And I resent the fact that because I barely use the site, the only way that I could actually get a message out to friends is to pay to have it posted. My minimal use has made me an algorithmic pariah and if I weren’t technologically savvy enough to know better, I would feel as though I’ve been shunned by my friends rather than simply deemed unworthy by an algorithm. I also refuse to play the game to make myself look good before the altar of the algorithm. And every time I’m forced to deal with Facebook, I can’t help but resent its manipulations.

There’s also a lot that I dislike about the company and its practices. At the same time, I’m glad that they’ve started working with researchers and started publishing their findings. I think that we need more transparency in the algorithmic work done by these kinds of systems and their willingness to publish has been one of the few ways that we’ve gleaned insight into what’s going on. Of course, I also suspect that the angry reaction from this study will prompt them to clamp down on allowing researchers to be remotely public. My gut says that they will naively respond to this situation as though the practice of research is what makes them vulnerable rather than their practices as a company as a whole. Beyond what this means for researchers, I’m concerned about what increased silence will mean for a public who has no clue of what’s being done with their data, who will think that no new report of terrible misdeeds means that Facebook has stopped manipulating data.

Information companies aren’t the same as pharmaceuticals. They don’t need to do clinical trials before they put a product on the market. They can psychologically manipulate their users all they want without being remotely public about exactly what they’re doing. And as the public, we can only guess what the black box is doing.

There’s a lot that needs to be reformed here. We need to figure out how to have a meaningful conversation about corporate ethics, regardless of whether it’s couched as research or not. But it’s not as simple as saying that a lack of a corporate IRB or a lack of gold-standard “informed consent” means that a practice is unethical. Almost all manipulations by these companies occur without either one of these. And they go unchecked because they aren’t published or public.

Ethical oversight isn’t easy and I don’t have a quick and dirty solution to how it should be implemented. But I do have a few ideas. For starters, I’d like to see any company that manipulates user data create an ethics board. Not an IRB that approves research studies, but an ethics board that has visibility into all proprietary algorithms that could affect users. For public companies, this could be done through the ethics committee of the Board of Directors. But rather than simply consisting of board members, I think that it should consist of scholars and users. I also think that there needs to be a mechanism for whistleblowing regarding ethics from within companies because I’ve found that many employees of companies like Facebook are quite concerned by certain algorithmic decisions, but feel as though there’s no path to responsibly report concerns without going fully public. This wouldn’t solve all of the problems, nor am I convinced that most companies would do so voluntarily, but it is certainly something to consider. More than anything, I want to see users have the ability to meaningfully influence what’s being done with their data and I’d love to see a way for their voices to be represented in these processes.

I’m glad that this study has prompted an intense debate among scholars and the public, but I fear that it’s turned into a simplistic attack on Facebook over this particular study rather than a nuanced debate over how we create meaningful ethical oversight in research and practice. The lines between research and practice are always blurred and information companies like Facebook make this increasingly salient. No one benefits by drawing lines in the sand. We need to address the problem more holistically. And, in the meantime, we need to hold companies accountable for how they manipulate people across the board, regardless of whether or not it’s couched as research. If we focus too much on this study, we’ll lose track of the broader issues at stake.

Can someone explain WhatsApp’s valuation to me?

Unless you were off the internet yesterday, it’s old news that WhatsApp was purchased by Facebook for a gobsmacking $16B + $3B in employee payouts. And the founder got a board seat. I’ve been mulling over this since the news came out and I can’t get past my initial reaction: WTF?

Messaging apps are *huge* and there’s little doubt that WhatsApp is the premier player in this scene. Other services – GroupMe, Kik, WeChat, Line, Viber – still have huge user numbers, but nothing like WhatsApp (although some of them have even more sophisticated use cases). 450M users and growing is no joke. And I have no doubt that WhatsApp will continue on its meteoric rise, although, as Facebook knows all too well, there are only so many people on the planet and only so many of them have technology in their pockets (even if it’s a larger number than those who have bulky sized computers).

Unlike other social media genres, messaging apps emerged in response to the pure stupidity and selfishness of another genre: carrier-driven SMS. These messaging apps solve four very real problems:

  • Carriers charge a stupidly high price for text messaging (especially photo shares) and haven’t meaningfully lowered that rate in years.
  • Carriers gouge customers who want to send texts across international borders.
  • Carriers often require special packages for sending group messages and don’t inform their customers when they fail to receive a group message.
  • Carriers have never bothered innovating around this cash cow of theirs.

So props to companies building messaging apps for seeing an opportunity to route around carrier stupidity.

I also get why Facebook would want to buy WhatsApp. They want to be the company through which consumers send all social messages, all images, all chats, etc. They want to be the central social graph. And they’ve never managed to get people as passionate about communicating through their phone app as other apps have, particularly in the US. So good on them for buying Instagram and allowing its trajectory to continue skyrocketing. That acquisition made sense to me, even if the price was high, because the investment in a photo-sharing app based on a stream, a social graph, and a mechanism for getting feedback is huge. People don’t want to lose those comments, likes, and photos.

But I must be stupid because I just can’t add up the numbers to understand the valuation of WhatsApp. The personal investment in the app isn’t nearly as high. The photos get downloaded to your phone, and the historical chats don’t necessarily need to stick around (and disappear entirely if a child accidentally hard resets your phone, as I learned last week). The monetization play of $.99/year after the first year is a good thing and not too onerous for most users (although I’d be curious what kind of app switching happens at that point for the younger set or folks from more impoverished regions). But that doesn’t add up to $19B + a board seat. I don’t see how advertising would work without driving users out to a different service. Sure, there are some e-commerce plays that would be interesting and that other services have been experimenting with. But is that enough? Or is the plan to make a play that guarantees that no VC will invest in any competitors so that all of those companies wither and die while WhatsApp sits by patiently and then makes a move when it’s clearly the only one left standing? And if that’s the play, then what about the carriers? When will they wake up and think for 5 seconds about how their greed is eroding one of their cash cows?

What am I missing? There has to be more to this play than I’m seeing. Or is Facebook just that desperate?

(Originally posted at LinkedIn. More comments there.)

Keeping Teens ‘Private’ on Facebook Won’t Protect Them


We’re afraid of and afraid for teenagers. And nothing brings out this dualism more than discussions of how and when teens should be allowed to participate in public life.

Last week, Facebook made changes to teens’ content-sharing options. They introduced the opportunity for those ages 13 to 17 to share their updates and images with everyone and not just with their friends. Until this change, teens could not post their content publicly even though adults could. When minors choose to make their content public, they are given a notice and a reminder to make it very clear that this material will be shared publicly. “Public” is never the default for teens; they must choose to make their content public, and they must affirm that this is what they intended at the point at which they choose to publish.

Representatives of parenting organizations have responded to this change negatively, arguing that this puts children more at risk. And even though the Pew Internet & American Life Project has found that teens are quite attentive to their privacy, and many other popular sites allow teens to post publicly (e.g. Twitter, YouTube, Tumblr), privacy advocates are arguing that Facebook’s decision to give teens choices suggests that the company is undermining teens’ privacy.

But why should youth not be allowed to participate in public life? Do paternalistic, age-specific technology barriers really protect or benefit teens?

One of the most crucial aspects of coming of age is learning how to navigate public life. The teenage years are precisely when people transition from being a child to being an adult. There is no magic serum that teens can drink on their 18th birthday to immediately mature and understand the world around them. Instead, adolescents must be exposed to — and allowed to participate in — public life while surrounded by adults who can help them navigate complex situations with grace. They must learn to be a part of society, and to do so, they must be allowed to participate.

Most teens no longer see Facebook as a private place. They befriend anyone they’ve ever met, from summer-camp pals to coaches at universities they wish to attend. Yet because Facebook hasn’t allowed youth to contribute to public discourse through the site, there’s an assumption that the site is more private than it is. Facebook’s decision to allow teens to participate in public isn’t about suddenly exposing youth; it’s about giving them an option to treat the site as being as public as it often is in practice.

Rather than trying to protect teens from all fears and risks that we can imagine, let’s instead imagine ways of integrating them constructively into public life. The key to doing so is not to create technologies that reinforce limitations but to provide teens and parents with the mechanisms and information needed to make healthy decisions. Some young people may be ready to start navigating broad audiences at 13; others are not ready until they are much older. But it should not be up to technology companies to determine when teens are old enough to have their voices heard publicly. Parents should be allowed to work with their children to help them navigate public spaces as they see fit. And all of us should be working hard to inform our younger citizens about the responsibilities and challenges of being a part of public life. I commend Facebook for giving teens the option and working hard to inform them of the significance of their choices.

(Originally written for TIME Magazine)

Is Facebook Destroying the American College Experience?

Sitting with a group of graduating high school seniors last summer, I listened as the conversation turned to college roommates. Although headed off to different schools, they had a similar experience of learning their roommate assignment and immediately turning to Facebook to investigate that person. Some had already begun developing deep, mediated friendships while others had already asked for roommate transfers. Beyond roommates, all had used Facebook to find other newly minted freshmen, building relationships long before they set foot on campus.

At first blush, this seems like a win for students. Going off to college can be a scary proposition, full of uncertainty, particularly about social matters. Why not get a head start making friends from the safety of your parents’ house?

What most students (and parents) fail to realize is that the success of the American college system has less to do with the quality of the formal education than it does with the social engineering project that is quietly enacted behind the scenes each year. Roommates are assigned to connect incoming students with students of different backgrounds. Dorms are organized to mix the cultural diversity that exists on campus. Early campus activities are designed to help people encounter people whose approach to the world is different from theirs. This process has a lot of value because it means that students develop an appreciation for difference and build meaningful relationships that will play a significant role for years to come. The friendships and connections that form on campuses shape future job opportunities and help create communities that change the future. We hear about famous college roommates as exemplars. Heck, Facebook itself was created by a group of Harvard roommates. But the more basic story is how people learn to appreciate difference, often by suffering through the challenges of entering college together.

When pre-frosh turn to Facebook before arriving on campus, they do so to find other people who share their interests, values, and background. As such, they begin a self-segregation process that results in increased “homophily” on campuses. Homophily is a sociological concept that refers to the notion that birds of a feather stick together. In other words, teens inadvertently undermine the collegiate social engineering project of creating diverse connections through common experiences. Furthermore, because Facebook enables them to keep in touch with friends from high school, college freshmen spend extensive time maintaining old ties rather than building new ones. They lose out on one of the most glorious benefits of the American collegiate system: the ability to diversify their networks.

Facebook is not itself the problem. The issue stems from how youth use Facebook and the desire that many youth have to focus on building connections to people who think like they do. Building friendships with people who have different political, cultural, and religious beliefs is hard. Getting to know people whose life stories seem so foreign is hard. And yet, such relationship building across lines of difference can also be tremendously transformative.

To complicate matters more, parents and high school teachers have beaten into today’s teens’ heads that internet strangers are dangerous. As such, even when teens turn to Facebook or other services to find future college friends, they are skittish around people who make them uncomfortable because they’ve been socialized into being wary of anyone they talk with. The fear-mongering around strangers plays a subtle but powerful role in discouraging teens from doing the disorienting work of getting to know someone truly unfamiliar.

It’s high time we recognize that college isn’t just about formalized learning and skills training, but also a socialization process with significant implications for the future. The social networks that youth build in college have long-lasting implications for youth’s future prospects. One of the reasons that the American college experience is so valuable is because it often produces diverse networks that enable future opportunities. This is also precisely what makes elite colleges elite; the networks that are built through these institutions end up shaping many aspects of power. When less privileged youth get to know children of powerful families, new pathways of opportunity and tolerance are created. But when youth use Facebook to maintain existing insular networks, the potential for increased structural inequity is great.

Photo by Daniel Borman

This post was originally written for LinkedIn. Visit there for additional comments.

Why Parents Help Children Violate Facebook’s 13+ Rule

Announcing new journal article: “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, First Monday.

“At what age should I let my child join Facebook?” This is a question that countless parents have asked my collaborators and me. Often, it’s followed by the following: “I know that 13 is the minimum age to join Facebook, but is it really so bad that my 12-year-old is on the site?”

While parents are struggling to determine what social media sites are appropriate for their children, the government tries to help parents by regulating what data internet companies can collect about children without parental permission. Yet, as has been the case for the last decade, this often backfires. Many general-purpose communication platforms and social media sites restrict access to those 13 and older in response to a law meant to empower parents: the Children’s Online Privacy Protection Act (COPPA). This forces parents to make a difficult choice: help uphold the minimum age requirements and limit their children’s access to services that let kids connect with family and friends, OR help their children lie about their age to circumvent the age-based restrictions and eschew the protections that COPPA is meant to provide.

In order to understand how parents were approaching this dilemma, my collaborators — Eszter Hargittai (Northwestern University), Jason Schultz (University of California, Berkeley), and John Palfrey (Harvard University) — and I decided to survey parents. In many ways, we were responding to a flurry of studies (e.g. Pew’s) that revealed that millions of U.S. children have violated Facebook’s Terms of Service and joined the site underage. These findings prompted outrage back in May as politicians blamed Facebook for failing to curb underage usage. Embedded in this furor was an assumption that by not strictly guarding its doors and keeping children out, Facebook was undermining parental authority and thumbing its nose at the law. Facebook responded by defending its practices — and highlighting how it regularly ejects children from its site. More controversially, Facebook’s founder Mark Zuckerberg openly questioned the value of COPPA in the first place.

While Facebook has often sparked anger over its cavalier attitudes towards user privacy, Zuckerberg’s challenge with regard to COPPA has merit. It’s imperative that we question the assumptions embedded in this policy. All too often, the public takes COPPA at face-value and politicians angle to build new laws based on it without examining its efficacy.

Eszter, Jason, John, and I decided to focus on one core question: Does COPPA actually empower parents? To find out, we surveyed parents about their household practices with respect to social media and their attitudes towards age restrictions online. We are proud to release our findings today in a new paper published at First Monday called “Why parents help their children lie to Facebook about age: Unintended consequences of the ‘Children’s Online Privacy Protection Act’.” From a national survey of 1,007 U.S. parents with children ages 10-14 living with them, conducted July 5-14, 2011, we found:

  • Although Facebook’s minimum age is 13, parents of 13- and 14-year-olds report that, on average, their child joined Facebook at age 12.
  • Over half (55%) of parents of 12-year-olds report their child has a Facebook account, and most (82%) of these parents knew when their child signed up. Most (76%) also assisted their 12-year-old in creating the account.
  • A third (36%) of all parents surveyed reported that their child joined Facebook before the age of 13, and two-thirds of them (68%) helped their child create the account.
  • Half (53%) of parents surveyed think Facebook has a minimum age and a third (35%) of these parents think that this is a recommendation and not a requirement.
  • Most (78%) parents think it is acceptable for their child to violate minimum age restrictions on online services.

The status quo is not working if large numbers of parents are helping their children lie to get access to online services. Parents do appear to be having conversations with their children, as COPPA intended. Yet, what does it mean if they’re doing so in order to violate the restrictions that COPPA engendered?

One reaction to our data might be that companies should not be allowed to restrict access to children on their sites. Unfortunately, getting the parental permission required by COPPA is technologically difficult, financially costly, and ethically problematic. Sites that target children take on this challenge, but often by excluding children whose parents lack resources to pay for the service, those who lack credit cards, and those who refuse to provide extra data about their children in order to offer permission. The situation is even more complicated for children who are in abusive households, have absentee parents, or regularly experience shifts in guardianship. General-purpose sites, including communication platforms like Gmail and Skype and social media services like Facebook and Twitter, generally prefer to avoid the social, technical, economic, and free speech complications involved.

While there is merit to thinking about how to strengthen parent permission structures, focusing on this obscures the issues that COPPA is intended to address: data privacy and online safety. COPPA predates the rise of social media. Its architects never imagined a world where people would share massive quantities of data as a central part of participation. It no longer makes sense to focus on how data are collected; we must instead question how those data are used. Furthermore, while children may be an especially vulnerable population, they are not the only vulnerable population. Most adults have little sense of how their data are being stored, shared, and sold.

COPPA is a well-intentioned piece of legislation with unintended consequences for parents, educators, and the public writ large. It has stifled innovation for sites focused on children and its implementations have made parenting more challenging. Our data clearly show that parents are concerned about privacy and online safety. Many want the government to help, but they don’t want solutions that unintentionally restrict their children’s access. Instead, they want guidance and recommendations to help them make informed decisions. Parents often want their children to learn how to be responsible digital citizens. Allowing them access is often the first step.

Educators face a different set of issues. Those who want to help youth navigate commercial tools often encounter the complexities of age restrictions. Consider the 7th grade teacher whose students are heavy Facebook users. Should she admonish her students for being on Facebook underage? Or should she make sure that they understand how privacy settings work? Where does digital literacy fit in when what children are doing is in violation of websites’ Terms of Service?

At first blush, the issues surrounding COPPA may seem to only apply to technology companies and the government, but their implications extend much further. COPPA affects parenting, education, and issues surrounding youth rights. It affects those who care about free speech and those who are concerned about how violence shapes home life. It’s important that all who care about youth pay attention to these issues. They’re complex and messy, full of good intention and unintended consequences. But rather than reinforcing or extending a legal regime that produces age-based restrictions which parents actively circumvent, we need to step back and rethink the underlying goals behind COPPA and develop new ways of achieving them. This begins with a public conversation.

We are excited to release our new study in the hopes that it will contribute to that conversation. To read our complete findings and learn more about their implications for policy makers, see “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

To learn more about the Children’s Online Privacy Protection Act (COPPA), make sure to check out the Federal Trade Commission’s website.

(Versions of this post were originally written for the Huffington Post and for the Digital Media and Learning Blog.)

Image Credit: Tim Roe

Designing for Social Norms (or How Not to Create Angry Mobs)

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code. In thinking about social media systems, plenty of folks think about monetization. Likewise, as issues like privacy pop up, we regularly see legal regulation become a factor. And, of course, folks are always thinking about what the code enables or not. But it’s depressing to me how few people think about the power of social norms. In fact, social norms are usually only thought of as a regulatory process when things go terribly wrong. And then they’re out of control and reactionary and confusing to everyone around. We’ve seen this with privacy issues and we’re seeing this with the “real name” policy debates. As I read through the discussion that I provoked on this issue, I couldn’t help but think that we need a more critical conversation about the importance of designing with social norms in mind.

Good UX designers know that they have the power to shape certain kinds of social practices by how they design systems. And engineers often fail to give UX folks credit for the important work that they do. But designing the system itself is only a fraction of the design challenge when thinking about what unfolds. Social norms aren’t designed into the system. They don’t emerge by telling people how they should behave. And they don’t necessarily follow market logic. Social norms emerge as people – dare we say “users” – work out how a technology makes sense and fits into their lives. Social norms take hold as people bring their own personal values and beliefs to a system and help frame how future users can understand the system. And just as “first impressions matter” for social interactions, I cannot overstate the importance of early adopters. Early adopters configure the technology in critical ways and they play a central role in shaping the social norms that surround a particular system.

How a new social media system rolls out is of critical importance. Your understanding of a particular networked system will be heavily shaped by the people who introduce you to that system. When a system unfolds slowly, there’s room for the social norms to slowly bake, for people to work out what the norms should be. When a system unfolds quickly, there’s a whole lot of chaos in terms of social norms. Whenever a networked system unfolds, there are inevitably competing norms that arise from people who are disconnected from one another. (I can’t tell you how much I loved watching Friendster when the gay men, Burners, and bloggers were oblivious to one another.) Yet, the faster things move, the faster those collisions occur, and the more confusing it is for the norms to settle.

The “real name” culture on Facebook didn’t unfold because of the “real name” policy. It unfolded because the norms were set by early adopters and most people saw that and reacted accordingly. Likewise, the handle culture on MySpace unfolded because people saw what others did and reproduced those norms. When social dynamics are allowed to unfold organically, social norms are a stronger regulatory force than any formalized policy. At that point, you can often formalize the dominant social norms without too much pushback, particularly if you leave wiggle room. Yet, when you start with a heavy-handed regulatory policy that is not driven by social norms – as Google Plus did – the backlash is intense.

Think back to Friendster for a moment… Remember Fakesters? (I wrote about them here.) Friendster spent ridiculous amounts of time playing whack-a-mole, killing off “fake” accounts and pissing off some of the most influential of its userbase. The “Fakester genocide” prompted an amazing number of people to leave Friendster and head over to MySpace, most notably bands, all because they didn’t want to be configured by the company. The notion of Fakesters died down on MySpace, but the core practice – the ability for groups (bands) to have recognizable representations – ended up being the most central feature of MySpace.

People don’t like to be configured. They don’t like to be forcibly told how they should use a service. They don’t want to be told to behave like the designers intended them to be. Heavy-handed policies don’t make for good behavior; they make for pissed off users.

This doesn’t mean that you can’t or shouldn’t design to encourage certain behaviors. Of course you should. The whole point of design is to help create an environment where people engage in the most fruitful and healthy way possible. But designing a system to encourage the growth of healthy social norms is fundamentally different than coming in and forcefully telling people how they must behave. No one likes being spanked, especially not a crowd of opinionated adults.

Ironically, most people who were adopting Google Plus early on were using their real names, out of habit, out of understanding how they thought the service should work. A few weren’t. Most of those who weren’t were using a recognizable pseudonym, not even trying to trick anyone. Going after them was just plain stupid. It was an act of force and people felt disempowered. And they got pissed. And at this point, it’s no longer about whether or not the “real names” policy was a good idea in the first place; it’s now an act of oppression. Google Plus would’ve been ten bazillion times better off had they subtly encouraged the policy without making a big deal out of it, had they chosen to only enforce it in the most egregious situations. But now they’re stuck between a rock and a hard place. They either have to stick with their policy and deal with the angry mob or let go of their policy as a peace offering in the hopes that the anger will calm down. It didn’t have to be this way though and it wouldn’t have been had they thought more about encouraging the practices they wanted through design rather than through force.

Of course there’s a legitimate reason to want to encourage civil behavior online. And of course trolls wreak serious havoc on a social media system. But a “real names” policy doesn’t stop an unrepentant troll; it’s just another hurdle that the troll will love mounting. In my work with teens, I see textual abuse (“bullying”) every day among people who know one another’s real identities on Facebook. The identities of many trolls are known. But that doesn’t solve the problem. What matters is how the social situation is configured, the norms about what’s appropriate, and the mechanisms by which people can regulate those norms (through social shaming and/or technical intervention). A culture where people can build reputation through their online presence (whether under “real” names or pseudonyms) goes a long way in combating trolls (although it is by no means a foolproof solution). But you don’t get that culture by force; you get it by encouraging the creation of healthy social norms.

Companies that build systems that people use have power. But they have to be very very very careful about how they assert that power. It’s really easy to come in and try to configure the user through force. It’s a lot harder to work diligently to design and build the ecosystem in which healthy norms emerge. Yet, the latter is of critical importance to the creation of a healthy community. Cuz you can’t get to a healthy community through force.

“Real Names” Policies Are an Abuse of Power

Everyone’s abuzz with the “nymwars,” mostly in response to Google Plus’ decision to enforce its “real names” policy. At first, Google Plus went on a deleting spree, killing off accounts that violated the policy. When the community reacted with outrage, Google Plus leaders tried to calm the anger by detailing their “new and improved” mechanism for enforcing “real names” (without killing off accounts). This only sparked increased discussion about the value of pseudonymity. Dozens of blog posts have popped up in which people express their support for pseudonymity and explain their reasons. One of the posts, by Kirrily “Skud” Robert, included a list of explanations from people she polled, including:

  • “I am a high school teacher, privacy is of the utmost importance.”
  • “I have used this name/account in a work context, my entire family know this name and my friends know this name. It enables me to participate online without being subject to harassment that at one point in time lead to my employer having to change their number so that calls could get through.”
  • “I do not feel safe using my real name online as I have had people track me down from my online presence and had coworkers invade my private life.”
  • “I’ve been stalked. I’m a rape survivor. I am a government employee that is prohibited from using my IRL.”
  • “As a former victim of stalking that impacted my family I’ve used [my nickname] online for about 7 years.”
  • “[this name] is a pseudonym I use to protect myself. My web site can be rather controversial and it has been used against me once.”
  • “I started using [this name] to have at least a little layer of anonymity between me and people who act inappropriately/criminally. I think the ‘real names’ policy hurts women in particular.”
  • “I enjoy being part of a global and open conversation, but I don’t wish for my opinions to offend conservative and religious people I know or am related to. Also I don’t want my husband’s Govt career impacted by his opinionated wife, or for his staff to feel in any way uncomfortable because of my views.”
  • “I have privacy concerns for being stalked in the past. I’m not going to change my name for a google+ page. The price I might pay isn’t worth it.”
  • “We get death threats at the blog, so while I’m not all that concerned with, you know, sane people finding me. I just don’t overly share information and use a pen name.”
  • “This identity was used to protect my real identity as I am gay and my family live in a small village where if it were openly known that their son was gay they would have problems.”
  • “I go by pseudonym for safety reasons. Being female, I am wary of internet harassment.”

You’ll notice a theme here…

Another site has popped up called “My Name Is Me” where people vocalize their support for pseudonyms. What’s most striking is the list of people who are affected by “real names” policies, including abuse survivors, activists, LGBT people, women, and young people.

Over and over again, people keep pointing to Facebook as an example where “real names” policies work. This makes me laugh hysterically. One of the things that became patently clear to me in my fieldwork is that countless teens who signed up to Facebook late into the game chose to use pseudonyms or nicknames. What’s even more noticeable in my data is that an extremely high percentage of people of color used pseudonyms as compared to the white teens that I interviewed. Of course, this would make sense…

The people who most heavily rely on pseudonyms in online spaces are those who are most marginalized by systems of power. “Real names” policies aren’t empowering; they’re an authoritarian assertion of power over vulnerable people. These ideas and issues aren’t new (and I’ve even talked about this before), but what is new is that marginalized people are banding together and speaking out loudly. And thank goodness.

What’s funny to me is that people also don’t seem to understand the history of Facebook’s “real names” culture. When early adopters (first the elite college students…) embraced Facebook, it was a trusted community. They gave the name that they used in the context of college or high school or the corporation they were a part of. They used the name that fit the network they joined Facebook with. The names they used weren’t necessarily their legal names; plenty of people chose Bill instead of William. But they were, for all intents and purposes, “real.” As the site grew larger, people had to grapple with new crowds being present, and discomfort emerged over the norms. But the norms were set, and people kept signing up and giving the name that they were most commonly known by. By the time celebrities arrived, Facebook wasn’t demanding that Lady Gaga call herself Stefani Germanotta, but of course, she had a “fan page” and was separate in the eyes of the crowd. Meanwhile, what many folks failed to notice is that countless black and Latino youth signed up to Facebook using handles. Most people don’t notice what black and Latino youth do online. Likewise, people from outside of the US started signing up to Facebook using alternate names. Again, no one noticed, because names transliterated from Arabic or Malay, or containing phrases in Portuguese, weren’t particularly visible to the real-name enforcers. Real names are by no means universal on Facebook; the importance of real names is a myth that Facebook likes to shill. And, for the most part, privileged white Americans use their real names on Facebook. So it “looks” right.

Then along comes Google Plus, thinking that it can just dictate a “real names” policy. Only, they made a huge mistake. They allowed the tech crowd to join within 48 hours of launching. The thing about the tech crowd is that it has a long history of nicks and handles and pseudonyms. And this crowd got to define the early social norms of the site, rather than being socialized into the norms set up by trusting college students who had joined a site that they thought was college-only. This was not a recipe for “real name” norm setting. Quite the opposite. Worse for Google… Tech folks are VERY happy to speak LOUDLY when they’re pissed off. So while countless black and Latino folks have been using nicks all over Facebook (just like they did on MySpace btw), they never loudly challenged Facebook’s policy. There was more of a “live and let live” approach to this. Not so lucky for Google and its name-bending community. Folks are now PISSED OFF.

Personally, I’m ecstatic to see this much outrage. And I’m really really glad to see seriously privileged people take up the issue, because while they are the least likely to actually be harmed by “real names” policies, they have the authority to speak truth to power. And across the web, I’m seeing people highlight that this issue has more depth to it than fun names (and is a whole lot more complicated than boiling it down to anonymity, as Facebook’s Randi Zuckerberg foolishly did).

What’s at stake is people’s right to protect themselves, their right to maintain a form of control that gives them safety. If companies like Facebook and Google are actually committed to the safety of their users, they need to take these complaints seriously. Not everyone is safer giving out their real name. Quite the opposite; many people are far LESS safe when they are identifiable. And those who are least safe are often those who are most vulnerable.

Likewise, the issue of reputation must be turned on its head when thinking about marginalized people. Folks point to the issue of people using pseudonyms to obscure their identity and, in theory, “protect” their reputation. The assumption baked into this is that the observer is qualified to actually assess someone’s reputation. All too often, and especially with marginalized people, the observer takes someone out of context and judges them inappropriately based on what they find online. Let me explain this with a concrete example that many of you have heard before. Years ago, I received a phone call from an Ivy League college admissions officer who wanted to accept a young black man from South Central LA into their college; the student had written an application about how he wanted to leave behind the gang-ridden community he came from, but the admissions officers had found his MySpace profile, which was filled with gang insignia. The question that was asked of me was “Why would he lie to us when we can tell the truth online?” Knowing that community, I was fairly certain that he was being honest with the college; he was also doing what it took to keep himself alive in his community. If he had used a pseudonym, the college wouldn’t have been able to pull data about him out of context and judge him inappropriately. But he hadn’t, and the admissions officers thought that their frame mattered most. I really hope that he got into that school.

There is no universal context, no matter how many times geeks want to tell you that you can be one person to everyone at every point. But just because people are doing what it takes to be appropriate in different contexts, to protect their safety, and to make certain that they are not judged out of context, doesn’t mean that everyone is a huckster. Rather, people are responsibly and reasonably responding to the structural conditions of these new media. And there’s nothing acceptable about those who are most privileged and powerful telling those who aren’t that it’s OK for their safety to be undermined. And you don’t guarantee safety by stopping people from using pseudonyms, but you do undermine people’s safety by doing so.

Thus, from my perspective, enforcing “real names” policies in online spaces is an abuse of power.

Risk Reduction Strategies on Facebook

Sometimes, when I’m in the field, I find teens who have strategies for managing their online presence that are odd at first blush but make complete sense when you understand the context in which they operate. These teens use innovative approaches to leverage the technology to meet personal goals. Let me explain two that caught my attention this week.

Mikalah uses Facebook, but when she goes to log out, she deactivates her Facebook account. She knows that this doesn’t delete the account – that’s the point. She knows that when she logs back in, she’ll be able to reactivate the account and have all of her friend connections back. But when she’s not logged in, no one can post messages on her wall, send her messages privately, or browse her content. When she’s logged in, they can do all of that. And she can delete anything that she doesn’t like. Michael Ducker called this practice “super-logoff” when he noticed a group of gay adult men doing the exact same thing.

Mikalah is not trying to get rid of her data or piss off her friends. And she’s not. What she’s trying to do is minimize risk when she’s not present to actually address it. For the longest time, scholars have talked about online profiles as digital bodies that are left behind to do work while the agent is absent. In many ways, deactivation is a way of not letting the digital body stick around when the person is not present. This is a great risk-reduction strategy if you’re worried about people who might look and misinterpret. Or about people who might post something that would get you into trouble. Mikalah’s been there and isn’t looking to get into any more trouble. But she wants to be a part of Facebook when it makes sense and not risk the possibility that people will be snooping when she’s not around. It’s a lot easier to deactivate every day than it is to change your privacy settings every day. More importantly, through deactivation, you’re not searchable when you’re not around. You really are invisible except when you’re there. And when you’re there, your friends know it, which is great. What Mikalah does gives her the ability to let Facebook be useful to her when she’s present but not live on when she’s not.

Shamika doesn’t deactivate her Facebook profile, but she does delete every wall message, status update, and Like shortly after it’s posted. She’ll post a status update and leave it there until she’s ready to post the next one or until she’s done with it. Then she’ll delete it from her profile. When she’s done reading a friend’s comment on her page, she’ll delete it. She’ll leave a Like up for a few days for her friends to see and then delete it. When I asked her why she was deleting this content, she looked at me incredulously and told me “too much drama.” Pushing further, she talked about how people were nosy and how it was too easy to get into trouble over things you wrote a while back that you couldn’t even remember posting, let alone remember what they were all about. It was better to keep everything clean and in the moment. If it’s relevant now, it belongs on Facebook; the old stuff is no longer relevant, so it doesn’t belong on Facebook. Her narrative has nothing to do with adults or with Facebook as a data retention agent. She’s concerned about how her postings will get her into unexpected trouble with her peers in an environment where saying the wrong thing always results in a fight. She’s trying to stay out of fights because fights mean suspensions, and she’s had enough of those. So for her, this is one of many avoidance strategies. The less she has out there for a jealous peer to misinterpret, the better.

I asked Shamika why she bothered with Facebook in the first place, given that she sent over 1,200 text messages a day. Once again, she looked at me incredulously, pointing out that there’s no way that she’d give just anyone her cell phone number. Texting was for close friends who respected her, while Facebook was necessary for being part of her school social life. And besides, she liked being able to touch base with people from her former schools or reach out to someone from school whom she didn’t know well. Facebook is a lighter-touch communication structure, and that’s really important to her. But it doesn’t need to be persistent to be useful.

Both of these girls live in high-risk situations. Their lives aren’t easy and they’re just trying to have fun. But they want to have fun with as little trouble as possible. They don’t want people in their business but they’re fully aware that people are nosy. They’re very guarded in general; getting them to open up even a teensy bit during the interview was hard enough. Given the schools that they’re at, they’ve probably seen far more trouble than they’re letting on. Some of it was obvious in their stories. Accounts of fights breaking out in classes, stories of classes where teachers simply have no control over what goes on in the room and have given up teaching, discussions of moving from school to school to school. These girls have limited literacy but their street smarts are strong. And Facebook is another street where you’ve got to always be watching your back.

Related tweets:

  • @tremblebot: My students talk abt this call it “whitewashing” or “whitewalling.” Takes forever for initial scrub then easy to stay on top of.
  • @techsoc: College students too! Altho their issue is more peers & partners. One spent 1 hr a day deleting everything BF might be jealous of
  • @futurescape: I know someone who deactivated account all festivals, important occasions for her so that people cannot leave comments etc