Tag Archives: privacy

Debating Privacy in a Networked World for the WSJ

Earlier this week, the Wall Street Journal posted excerpts from a debate between me, Stewart Baker, Jeff Jarvis, and Chris Soghoian on privacy. In preparation for the piece, they had us respond to a series of questions. Jeff posted the full text of his responses here. Now it’s my turn. Here are the questions that I was asked and my responses.

Part 1:

Question: How much should people care about privacy? (400 words)

People should – and do – care deeply about privacy. But privacy is not simply the control of information. Rather, privacy is the ability to assert control over a social situation. This requires that people have agency in their environment and that they are able to understand any given social situation so as to adjust how they present themselves and determine what information they share. Privacy violations occur when people have their agency undermined or lack relevant information in a social setting that’s needed to act or adjust accordingly. Privacy is not protected by complex privacy settings that create what Alessandro Acquisti calls “the illusion of control.” Rather, it’s protected when people are able to fully understand the social environment in which they are operating and have the protections necessary to maintain agency.

Social media has prompted a radical shift. We’ve moved from a world that is “private-by-default, public-through-effort” to one that is “public-by-default, private-with-effort.” Most of our conversations in a face-to-face setting are too mundane for anyone to bother recording and publicizing. They stay relatively private simply because there’s no need or desire to make them public. Online, social technologies encourage broad sharing and thus, participating on sites like Facebook or Twitter means sharing to large audiences. When people interact casually online, they share the mundane. They aren’t publicizing; they’re socializing. While socializing, people have no interest in going through the efforts required by digital technologies to make their pithy conversations more private. When things truly matter, they leverage complex social and technical strategies to maintain privacy.

The strategies that people use to assert privacy in social media are diverse and complex, but the most notable approach involves limiting access to meaning while making content publicly accessible. I’m in awe of the countless teens I’ve met who use song lyrics, pronouns, and community references to encode meaning into publicly accessible content. If you don’t know who the Lions are or don’t know what happened Friday night or don’t know why a reference to Rihanna’s latest hit might be funny, you can’t interpret the meaning of the message. This is privacy in action.

The reason that we must care about privacy, especially in a democracy, is that it’s about human agency. To systematically undermine people’s privacy – or allow others to do so – is to deprive people of freedom and liberty.

Part 2:

Question: What is the harm in not being able to control our social contexts? Do we suffer because we have to develop codes to communicate on social networks? Or are we forced offline because of our inability to develop codes? (200 words)

Social situations are not one-size-fits-all. How a man acts with his toddler son is different from how he interacts with his business partner, not because he’s trying to hide something but because what’s appropriate in each situation differs. Rolling on the floor might provoke a giggle from his toddler, but it would be strange behavior in a business meeting. When contexts collide, people must choose what’s appropriate. Often, they present themselves in a way that’s as inoffensive to as many people as possible (and particularly those with high social status), which often makes for a bored and irritable toddler.

Social media is one big context collapse, but it’s not fun to behave as though being online is a perpetual job interview. Thus, many people lower their guards and try to signal what context they want to be in, hoping others will follow suit. When that’s not enough, they encode their messages to be only relevant to a narrower audience. This is neither good nor bad; it’s simply how people are learning to manage their lives in a networked world where they cannot assume strict boundaries between distinct contexts. Lacking spatial separation, people construct context through language and interaction.

Part 3:

Question: Jeff and Stewart seem to be arguing that privacy advocates have too much power and that they should be reined in for the good of society. What do you think of that view? Is the status quo protecting privacy enough? Do we need more laws? What kind of laws? Or different social norms? In particular, I would like to hear what you think should be done to prevent turning the Internet into one long job interview, as you described. If you had one or two examples of types of usages that you think should be limited, that would be perfect. (300 words)

When it comes to creating a society in which both privacy and public life can flourish, there are no easy answers. Laws can protect, but they can also hinder. Technologies can empower, but they can also expose. I respect my esteemed colleagues’ views, but I am also concerned about what it means to have a conversation among experts. Decisions about privacy – and public life – in a networked age are being made by people who have immense social, political, and/or economic power, often at the expense of those who are less privileged. We must engender a public conversation about these issues rather than leaving them in the hands of experts.

There are significant pros and cons to all social, legal, economic, and technological decisions. Balancing individual desires with the goals of the collective is daunting. Mediated life forces us to face serious compromises and hard choices. Privacy is a value that’s dear to many people, precisely because openness is a privilege. Systems must respect privacy, but there’s no easy mechanism to inscribe this value into code or law. Thus, we must publicly grapple with these issues and put pressure on decision-makers and systems-builders to remember that their choices have consequences.

We must also switch the conversation from being about one of data collection to being one about data usage. This involves drawing on the language of abuse, violence, and victimization to think about what happens when people’s willingness to share is twisted to do them harm. Just as we have models for differentiating sex between consenting partners and rape, so too must we construct models that separate usage that’s empowering from that which strips people of their freedoms and opportunities. For example, refusing health insurance based on search queries may make economic sense, but the social costs are far too great. Focusing on usage requires understanding who is doing what to whom and for what purposes. Limiting data collection may be structurally easier, but it doesn’t address the tensions between privacy and public-ness with which people are struggling.

Part 4:

Question: Jeff makes the point that we’re overemphasizing privacy at the expense of all the public benefits delivered by new online services. What do you think of that view? Do you think privacy is being sufficiently protected?

I think that positioning privacy and public-ness in opposition is a false dichotomy. People want privacy *and* they want to be able to participate in public. This is why I think it’s important to emphasize that privacy is not about controlling information, but about having agency and the ability to control a social situation. People want to share and they gain a lot from sharing. But that’s different than saying that people want to be exposed by others. Agency matters.

From my perspective, protecting privacy is about making certain that people have the agency they need to make informed decisions about how they engage in public. I do not think that we’ve done enough here. That said, I am opposed to approaches that protect people by disempowering them or by taking away their agency. I want to see approaches that force powerful entities to be transparent about their data practices. And I want to see approaches that put restrictions on how data can be used to harm people. For example, people should have the ability to share their medical experiences without being afraid of losing their health insurance. The answer is not to silence consumers from sharing their experiences, but rather to limit what insurers can do with information that they can access.

Question: Jeff says that young people are “likely the worst-served sector of society online”? What do you think of that? Do youth-targeted privacy safeguards prevent them from taking advantage of the benefits of the online world? Do the young have special privacy issues, and do they deserve special protections?

I _completely_ agree with Jeff on this point. In our efforts to protect youth, we often exclude them from public life. Nowhere is this more visible than with respect to the Children’s Online Privacy Protection Act (COPPA). This well-intended law was meant to empower parents. Yet, in practice, it has prompted companies to ban any child under the age of 13 from joining general-purpose communication services and participating on social media platforms. In other words, COPPA has inadvertently locked children out of being legitimate users of Facebook, Gmail, Skype, and similar services. Interestingly, many parents help their children circumvent age restrictions. Is this a win? I don’t think so.

I don’t believe that privacy protections focused on children make any sense. Yes, children are a vulnerable population, but they’re not the only vulnerable population. Can you imagine excluding senile adults from participating on Facebook because they don’t know when they’re being manipulated? We need to develop structures that support all people while also making sure that protection does not equal exclusion.

Thanks to Julia Angwin for keeping us on task!

Why Parents Help Children Violate Facebook’s 13+ Rule

Announcing new journal article: “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, First Monday.

“At what age should I let my child join Facebook?” This is a question that countless parents have asked my collaborators and me. Often, it’s followed by the following: “I know that 13 is the minimum age to join Facebook, but is it really so bad that my 12-year-old is on the site?”

While parents are struggling to determine what social media sites are appropriate for their children, the government tries to help parents by regulating what data internet companies can collect about children without parental permission. Yet, as has been the case for the last decade, this often backfires. Many general-purpose communication platforms and social media sites restrict access to only those 13+ in response to a law meant to empower parents: the Children’s Online Privacy Protection Act (COPPA). This forces parents to make a difficult choice: help uphold the minimum age requirements and limit their children’s access to services that let kids connect with family and friends OR help their children lie about their age to circumvent the age-based restrictions and eschew the protections that COPPA is meant to provide.

In order to understand how parents were approaching this dilemma, my collaborators — Eszter Hargittai (Northwestern University), Jason Schultz (University of California, Berkeley), John Palfrey (Harvard University) — and I decided to survey parents. In many ways, we were responding to a flurry of studies (e.g. Pew’s) that revealed that millions of U.S. children have violated Facebook’s Terms of Service and joined the site underage. These findings prompted outrage back in May as politicians blamed Facebook for failing to curb underage usage. Embedded in this furor was an assumption that by not strictly guarding its doors and keeping children out, Facebook was undermining parental authority and thumbing its nose at the law. Facebook responded by defending its practices — and highlighting how it regularly ejects children from its site. More controversially, Facebook’s founder Mark Zuckerberg openly questioned the value of COPPA in the first place.

While Facebook has often sparked anger over its cavalier attitudes towards user privacy, Zuckerberg’s challenge with regard to COPPA has merit. It’s imperative that we question the assumptions embedded in this policy. All too often, the public takes COPPA at face-value and politicians angle to build new laws based on it without examining its efficacy.

Eszter, Jason, John, and I decided to focus on one core question: Does COPPA actually empower parents? In order to do so, we surveyed parents about their household practices with respect to social media and their attitudes towards age restrictions online. We are proud to release our findings today, in a new paper published at First Monday called “Why parents help their children lie to Facebook about age: Unintended consequences of the ‘Children’s Online Privacy Protection Act’.” From a national sample of 1,007 U.S. parents with children between the ages of 10 and 14 living with them, surveyed July 5-14, 2011, we found:

  • Although Facebook’s minimum age is 13, parents of 13- and 14-year-olds report that, on average, their child joined Facebook at age 12.
  • Half (55%) of parents of 12-year-olds report their child has a Facebook account, and most (82%) of these parents knew when their child signed up. Most (76%) also assisted their 12-year-old in creating the account.
  • A third (36%) of all parents surveyed reported that their child joined Facebook before the age of 13, and two-thirds of them (68%) helped their child create the account.
  • Half (53%) of parents surveyed think Facebook has a minimum age and a third (35%) of these parents think that this is a recommendation and not a requirement.
  • Most (78%) parents think it is acceptable for their child to violate minimum age restrictions on online services.

The status quo is not working if large numbers of parents are helping their children lie to get access to online services. Parents do appear to be having conversations with their children, as COPPA intended. Yet, what does it mean if they’re doing so in order to violate the restrictions that COPPA engendered?

One reaction to our data might be that companies should not be allowed to restrict access to children on their sites. Unfortunately, getting the parental permission required by COPPA is technologically difficult, financially costly, and ethically problematic. Sites that target children take on this challenge, but often by excluding children whose parents lack resources to pay for the service, those who lack credit cards, and those who refuse to provide extra data about their children in order to offer permission. The situation is even more complicated for children who are in abusive households, have absentee parents, or regularly experience shifts in guardianship. General-purpose sites, including communication platforms like Gmail and Skype and social media services like Facebook and Twitter, generally prefer to avoid the social, technical, economic, and free speech complications involved.

While there is merit to thinking about how to strengthen parent permission structures, focusing on this obscures the issues that COPPA is intended to address: data privacy and online safety. COPPA predates the rise of social media. Its architects never imagined a world where people would share massive quantities of data as a central part of participation. It no longer makes sense to focus on how data are collected; we must instead question how those data are used. Furthermore, while children may be an especially vulnerable population, they are not the only vulnerable population. Most adults have little sense of how their data are being stored, shared, and sold.

COPPA is a well-intentioned piece of legislation with unintended consequences for parents, educators, and the public writ large. It has stifled innovation for sites focused on children and its implementation has made parenting more challenging. Our data clearly show that parents are concerned about privacy and online safety. Many want the government to help, but they don’t want solutions that unintentionally restrict their children’s access. Instead, they want guidance and recommendations to help them make informed decisions. Parents often want their children to learn how to be responsible digital citizens. Allowing them access is often the first step.

Educators face a different set of issues. Those who want to help youth navigate commercial tools often encounter the complexities of age restrictions. Consider the 7th grade teacher whose students are heavy Facebook users. Should she admonish her students for being on Facebook underage? Or should she make sure that they understand how privacy settings work? Where does digital literacy fit in when what children are doing is in violation of websites’ Terms of Service?

At first blush, the issues surrounding COPPA may seem to only apply to technology companies and the government, but their implications extend much further. COPPA affects parenting, education, and issues surrounding youth rights. It affects those who care about free speech and those who are concerned about how violence shapes home life. It’s important that all who care about youth pay attention to these issues. They’re complex and messy, full of good intention and unintended consequences. But rather than reinforcing or extending a legal regime that produces age-based restrictions which parents actively circumvent, we need to step back and rethink the underlying goals behind COPPA and develop new ways of achieving them. This begins with a public conversation.

We are excited to release our new study in the hopes that it will contribute to that conversation. To read our complete findings and learn more about their implications for policy makers, see “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act'” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

To learn more about the Children’s Online Privacy Protection Act (COPPA), make sure to check out the Federal Trade Commission’s website.

(Versions of this post were originally written for the Huffington Post and for the Digital Media and Learning Blog.)

Image Credit: Tim Roe

“Real Names” Policies Are an Abuse of Power

Everyone’s abuzz with the “nymwars,” mostly in response to Google Plus’ decision to enforce its “real names” policy. At first, Google Plus went on a deleting spree, killing off accounts that violated its policy. When the community reacted with outrage, Google Plus leaders tried to calm the anger by detailing their “new and improved” mechanism to enforce “real names” (without killing off accounts). This only sparked increased discussion about the value of pseudonymity. Dozens of blog posts have popped up with people expressing their support for pseudonymity and explaining their reasons. One of the posts, by Kirrily “Skud” Robert, included a list of explanations that came from people she polled, including:

  • “I am a high school teacher, privacy is of the utmost importance.”
  • “I have used this name/account in a work context, my entire family know this name and my friends know this name. It enables me to participate online without being subject to harassment that at one point in time lead to my employer having to change their number so that calls could get through.”
  • “I do not feel safe using my real name online as I have had people track me down from my online presence and had coworkers invade my private life.”
  • “I’ve been stalked. I’m a rape survivor. I am a government employee that is prohibited from using my IRL.”
  • “As a former victim of stalking that impacted my family I’ve used [my nickname] online for about 7 years.”
  • “[this name] is a pseudonym I use to protect myself. My web site can be rather controversial and it has been used against me once.”
  • “I started using [this name] to have at least a little layer of anonymity between me and people who act inappropriately/criminally. I think the ‘real names’ policy hurts women in particular.”
  • “I enjoy being part of a global and open conversation, but I don’t wish for my opinions to offend conservative and religious people I know or am related to. Also I don’t want my husband’s Govt career impacted by his opinionated wife, or for his staff to feel in any way uncomfortable because of my views.”
  • “I have privacy concerns for being stalked in the past. I’m not going to change my name for a google+ page. The price I might pay isn’t worth it.”
  • “We get death threats at the blog, so while I’m not all that concerned with, you know, sane people finding me. I just don’t overly share information and use a pen name.”
  • “This identity was used to protect my real identity as I am gay and my family live in a small village where if it were openly known that their son was gay they would have problems.”
  • “I go by pseudonym for safety reasons. Being female, I am wary of internet harassment.”

You’ll notice a theme here…

Another site has popped up called “My Name Is Me” where people vocalize their support for pseudonyms. What’s most striking is the list of people who are affected by “real names” policies, including abuse survivors, activists, LGBT people, women, and young people.

Over and over again, people keep pointing to Facebook as an example where “real names” policies work. This makes me laugh hysterically. One of the things that became patently clear to me in my fieldwork is that countless teens who signed up to Facebook late into the game chose to use pseudonyms or nicknames. What’s even more noticeable in my data is that an extremely high percentage of people of color used pseudonyms as compared to the white teens that I interviewed. Of course, this would make sense…

The people who most heavily rely on pseudonyms in online spaces are those who are most marginalized by systems of power. “Real names” policies aren’t empowering; they’re an authoritarian assertion of power over vulnerable people. These ideas and issues aren’t new (and I’ve even talked about this before), but what is new is that marginalized people are banding together and speaking out loudly. And thank goodness.

What’s funny to me is that people also don’t seem to understand the history of Facebook’s “real names” culture. When early adopters (first the elite college students…) embraced Facebook, it was a trusted community. They gave the name that they used in the context of college or high school or the corporation that they were a part of. They used the name that fit into the network that they joined Facebook with. The names they used weren’t necessarily their legal names; plenty of people chose Bill instead of William. But they were, for all intents and purposes, “real.” As the site grew larger, people had to grapple with new crowds being present and discomfort emerged over the norms. But the norms were set and people kept signing up and giving the name that they were most commonly known by. By the time celebrities kicked in, Facebook wasn’t demanding that Lady Gaga call herself Stefani Germanotta, but of course, she had a “fan page” and was separate in the eyes of the crowd. Meanwhile, what many folks failed to notice is that countless black and Latino youth signed up to Facebook using handles. Most people don’t notice what black and Latino youth do online. Likewise, people from outside of the US started signing up to Facebook and using alternate names. Again, no one noticed because names transliterated from Arabic or Malaysian or containing phrases in Portuguese weren’t particularly visible to the real name enforcers. Real names are by no means universal on Facebook, but the importance of real names is a myth that Facebook likes to shill out. And, for the most part, privileged white Americans use their real name on Facebook. So it “looks” right.

Then along comes Google Plus, thinking that it can just dictate a “real names” policy. Only, they made a huge mistake. They allowed the tech crowd to join within 48 hours of launching. The thing about the tech crowd is that it has a long history of nicks and handles and pseudonyms. And this crowd got to define the early social norms of the site, rather than being socialized into the norms set up by trusting college students who had joined a site that they thought was college-only. This was not a recipe for “real name” norm setting. Quite the opposite. Worse for Google… Tech folks are VERY happy to speak LOUDLY when they’re pissed off. So while countless black and Latino folks have been using nicks all over Facebook (just like they did on MySpace btw), they never loudly challenged Facebook’s policy. There was more of a “live and let live” approach to this. Not so lucky for Google and its name-bending community. Folks are now PISSED OFF.

Personally, I’m ecstatic to see this much outrage. And I’m really really glad to see seriously privileged people take up the issue, because while they are the least likely to actually be harmed by “real names” policies, they have the authority to be able to speak truth to power. And across the web, I’m seeing people highlight that this issue has more depth to it than fun names (and is a whole lot more complicated than boiling it down to being about anonymity, as Facebook’s Randi Zuckerberg foolishly did).

What’s at stake is people’s right to protect themselves, their right to actually maintain a form of control that gives them safety. If companies like Facebook and Google are actually committed to the safety of their users, they need to take these complaints seriously. Not everyone is safer by giving out their real name. Quite the opposite; many people are far LESS safe when they are identifiable. And those who are least safe are often those who are most vulnerable.

Likewise, the issue of reputation must be turned on its head when thinking about marginalized people. Folks point to the issue of people using pseudonyms to obscure their identity and, in theory, “protect” their reputation. The assumption baked into this is that the observer is qualified to actually assess someone’s reputation. All too often, and especially with marginalized people, the observer takes someone out of context and judges them inappropriately based on what they get online. Let me explain this in a concrete example that many of you have heard before. Years ago, I received a phone call from an Ivy League college admissions officer who wanted to accept a young black man from South Central in LA into their college; the student had written an application about how he wanted to leave behind the gang-ridden community he came from, but the admissions officers had found his MySpace which was filled with gang insignia. The question that was asked of me was “Why would he lie to us when we can tell the truth online?” Knowing that community, I was fairly certain that he was being honest with the college; he was also doing what it took to keep himself alive in his community. If he had used a pseudonym, the college wouldn’t have been able to get data out of context about him and inappropriately judge him. But they didn’t. They thought that their frame mattered most. I really hope that he got into that school.

There is no universal context, no matter how many times geeks want to tell you that you can be one person to everyone at every point. But just because people are doing what it takes to be appropriate in different contexts, to protect their safety, and to make certain that they are not judged out of context, doesn’t mean that everyone is a huckster. Rather, people are responsibly and reasonably responding to the structural conditions of these new media. And there’s nothing acceptable about those who are most privileged and powerful telling those who aren’t that it’s OK for their safety to be undermined. And you don’t guarantee safety by stopping people from using pseudonyms, but you do undermine people’s safety by doing so.

Thus, from my perspective, enforcing “real names” policies in online spaces is an abuse of power.

“Networked Privacy” (my PDF talk)

Our contemporary ideas about privacy are often shaped by legal discourse that emphasizes the notion of “individual harm.” Furthermore, when we think about privacy in online contexts, the American neoliberal frame and the techno-libertarian frame once again force us to really think about the individual. In my talk at Personal Democracy Forum this year, I decided to address some of the issues of “networked privacy” precisely because I think that we need to start thinking about how privacy fits into a social context. Even with respect to the individual frame, what others say/do about us affects our privacy. And yet, more importantly, all of the issues of privacy end up having a broader set of social implications.

Anyhow, I’m very much at the beginning of thinking through these ideas, but in the meantime, I took a first pass at PDF. A crib of the talk that I gave at the conference is available here: “Networked Privacy”

Photo Credit: Collin Key

How Teens Understand Privacy

In the fall, Alice Marwick and I went into the field to understand teens’ privacy attitudes and practices. We’ve blogged some of our thinking since then but we’re currently working on turning our thinking into a full-length article. We are lucky enough to be able to workshop our ideas at an upcoming scholarly meeting (PLSC), but we also wanted to share our work-in-progress with the public since we both know that there are all sorts of folks out there who have a lot of knowledge about this domain but with whom we don’t have the privilege of regularly interacting.

“Social Privacy in Networked Publics: Teens’ Attitudes, Practices, and Strategies”
by danah boyd and Alice Marwick

Please understand that this is an unfinished work-in-progress article, complete with all sorts of bugs that we will need to address before we submit it for publication. But… we would certainly love feedback, critiques, and suggestions for how to improve it. Given the highly interdisciplinary nature of this kind of research, it’s also quite likely that we’re missing out on all sorts of prior work that was done in this space so we’d love to also hear about any articles that we should’ve read by now. Or any thoughts you might have that might advance/complicate our thinking.

Regardless, hopefully you’ll enjoy the piece!

Risk Reduction Strategies on Facebook

Sometimes, when I’m in the field, I find teens who have strategies for managing their online presence that are odd at first blush but make complete sense when you understand the context in which they operate. These teens use innovative approaches to leverage the technology to meet personal goals. Let me explain two that caught my attention this week.

Mikalah uses Facebook but when she goes to log out, she deactivates her Facebook account. She knows that this doesn’t delete the account – that’s the point. She knows that when she logs back in, she’ll be able to reactivate the account and have all of her friend connections back. But when she’s not logged in, no one can post messages on her wall or send her messages privately or browse her content. But when she’s logged in, they can do all of that. And she can delete anything that she doesn’t like. Michael Ducker called this practice “super-logoff” when he noticed a group of gay male adults doing the exact same thing.

Mikalah is not trying to get rid of her data or piss off her friends. And she’s not. What she’s trying to do is minimize risk when she’s not present to actually address it. For the longest time, scholars have talked about online profiles as digital bodies that are left behind to do work while the agents themselves are absent. In many ways, deactivation is a way of not letting the digital body stick around when the person is not present. This is a great risk reduction strategy if you’re worried about people who might look and misinterpret. Or people who might post something that would get you into trouble. Mikalah’s been there and isn’t looking to get into any more trouble. But she wants to be a part of Facebook when it makes sense and not risk the possibility that people will be snooping when she’s not around. It’s a lot easier to deactivate every day than it is to change your privacy settings every day. More importantly, through deactivation, you’re not searchable when you’re not around. You really are invisible except when you’re there. And when you’re there, your friends know it, which is great. What Mikalah does gives her the ability to let Facebook be useful to her when she’s present but not live on when she’s not.

Shamika doesn’t deactivate her Facebook profile but she does delete every wall message, status update, and Like shortly after it’s posted. She’ll post a status update and leave it there until she’s ready to post the next one or until she’s done with it. Then she’ll delete it from her profile. When she’s done reading a friend’s comment on her page, she’ll delete it. She’ll leave a Like up for a few days for her friends to see and then delete it. When I asked her why she was deleting this content, she looked at me incredulously and told me “too much drama.” Pushing further, she talked about how people were nosy and it was too easy to get into trouble for the things you wrote a while back that you couldn’t even remember posting let alone remember what it was all about. It was better to keep everything clean and in the moment. If it’s relevant now, it belongs on Facebook, but the old stuff is no longer relevant so it doesn’t belong on Facebook. Her narrative has nothing to do with adults or with Facebook as a data retention agent. She’s concerned about how her postings will get her into unexpected trouble with her peers in an environment where saying the wrong thing always results in a fight. She’s trying to stay out of fights because fights mean suspensions and she’s had enough of those. So for her, it’s one of many avoidance strategies. The less she has out there for a jealous peer to misinterpret, the better.

I asked Shamika why she bothered with Facebook in the first place, given that she sent over 1200 text messages a day. Once again, she looked at me incredulously, pointing out that there’s no way that she’d give just anyone her cell phone number. Texting was for close friends that respected her while Facebook was necessary to be a part of her school social life. And besides, she liked being able to touch base with people from her former schools or reach out to someone from school that she didn’t know well. Facebook is a lighter touch communication structure and that’s really important to her. But it doesn’t need to be persistent to be useful.

Both of these girls live in high-risk situations. Their lives aren’t easy and they’re just trying to have fun. But they want to have fun with as little trouble as possible. They don’t want people in their business but they’re fully aware that people are nosy. They’re very guarded in general; getting them to open up even a teensy bit during the interview was hard enough. Given the schools that they’re at, they’ve probably seen far more trouble than they’re letting on. Some of it was obvious in their stories. Accounts of fights breaking out in classes, stories of classes where teachers simply have no control over what goes on in the room and have given up teaching, discussions of moving from school to school to school. These girls have limited literacy but their street smarts are strong. And Facebook is another street where you’ve got to always be watching your back.

Related tweets:

  • @tremblebot: My students talk abt this call it “whitewashing” or “whitewalling.” Takes forever for initial scrub then easy to stay on top of.
  • @techsoc: College students too! Altho their issue is more peers & partners. One spent 1 hr a day deleting everything BF might be jealous of
  • @futurescape: I know someone who deactivated account all festivals, important occasions for her so that people cannot leave comments etc

Regulating the Use of Social Media Data

If you were to walk into my office, I’d have a pretty decent sense of your gender, your age, your race, and other identity markers. My knowledge wouldn’t be perfect, but it would give me plenty of information that I could use to discriminate against you if I felt like it. The law doesn’t prohibit me from “collecting” this information in a job interview nor does it say that discrimination is acceptable if you “shared” this information with me. That’s good news given that faking what’s written on your body is bloody hard. What the law does is regulate how this information can be used by me, the theoretical employer. This doesn’t put an end to all discrimination – plenty of people are discriminated against based on what’s written on their bodies – but it does provide you with legal rights if you think you were discriminated against and it forces the employer to think twice about hiring practices.

The Internet has made it possible for you to create digital bodies that reflect a whole lot more than your demographics. Your online profiles convey a lot about you, but that content is produced in a context. And, more often than not, that context has nothing to do with employment. This creates an interesting conundrum. Should employers have the right to discriminate against you because of your Facebook profile? One might argue that they should because such a profile reflects your “character” or your priorities or your public presence. Personally, I think that’s just code for discriminating against you because you’re not like me, the theoretical employer.

Of course, it’s a tough call. Hiring is hard. We’re always looking for better ways to judge someone and goddess knows that an interview plus resume is rarely the best way to assess whether or not there’s a “good fit.” It’s far too tempting to jump on the Internet and try to figure out who someone is based on what we can dredge up online. This might be reasonable if only we were reasonable judges of people’s signaling or remotely good at assessing them in context. Cuz it’s a whole lot harder to assess someone’s professional sensibilities by their social activities if they come from a world different than our own.

Given this, I was fascinated to learn that the German government is proposing legislation that would put restrictions on what Internet content employers could use when recruiting.

A decade ago, all of our legal approaches to the Internet focused on what data online companies could collect. This makes sense if you think of the Internet as a broadcast medium. But then along came the mainstreamification of social media and user-generated content. People are sharing content left, right, and center as part of their daily sociable practices. They’re sharing as if the Internet is a social place, not a professional place. More accurately, they’re sharing in a setting where there’s no clear delineation of social and professional spheres. Since social media became popular, folks have continuously talked about how we need to teach people to not share what might cause them professional consternation. Those warnings haven’t worked. And for good reason. What’s professionally questionable to one may be perfectly appropriate to another. Or the social gain one sees might outweigh the professional risks. Or, more simply, people may just be naive.

I’m sick of hearing about how the onus should be entirely on the person doing the sharing. There are darn good reasons why people share information and just because you can dig it up doesn’t mean that it’s ethical to use it. So I’m delighted by the German move, if for no other reason than to highlight that we need to rethink our regulatory approaches. I strongly believe that we need to spend more time talking about how information is being used and less time talking about how stupid people are for sharing it in the first place.

Social Steganography: Learning to Hide in Plain Sight

[Posted originally to the Digital Media & Learning blog.]

Carmen and her mother are close. As far as Carmen’s concerned, she has nothing to hide from her mother so she’s happy to have her mom as her ‘friend’ on Facebook. Of course, Carmen’s mom doesn’t always understand the social protocols on Facebook and Carmen sometimes gets frustrated. She hates that her mom comments on nearly every post, because it “scares everyone away…Everyone kind of disappears after the mom post…It’s just uncool having your mom all over your wall. That’s just lame.” Still, she knows that her mom means well and she sometimes uses this pattern to her advantage. While Carmen welcomes her mother’s presence, she also knows her mother overreacts. In order to avoid a freak out, Carmen will avoid posting things that have a high likelihood of mother misinterpretation. This can make communication tricky at times and Carmen must work to write in ways that are interpreted differently by different people.

When Carmen broke up with her boyfriend, she “wasn’t in the happiest state.” The breakup happened while she was on a school trip and her mother was already nervous. Initially, Carmen was going to mark the breakup with lyrics from a song that she had been listening to, but then she realized that the lyrics were quite depressing and worried that if her mom read them, she’d “have a heart attack and think that something is wrong.” She decided not to post the lyrics. Instead, she posted lyrics from Monty Python’s “Always Look on the Bright Side of Life.” This strategy was effective. Her mother wrote her a note saying that she seemed happy which made her laugh. But her closest friends knew that this song appears in the movie when the characters are about to be killed. They reached out to her immediately to see how she was really feeling.

Privacy in a public age

Carmen is engaging in social steganography. She’s hiding information in plain sight, creating a message that can be read in one way by those who aren’t in the know and read differently by those who are. She’s communicating to different audiences simultaneously, relying on specific cultural awareness to provide the right interpretive lens. While she’s focused primarily on separating her mother from her friends, her message is also meaningless to broader audiences who have no idea that she had just broken up with her boyfriend. As far as they’re concerned, Carmen just posted an interesting lyric.

Social steganography is one privacy tactic teens take when engaging in semi-public forums like Facebook. While adults have worked diligently to exclude people through privacy settings, many teenagers have been unable to exclude certain classes of adults – namely their parents – for quite some time. For this reason, they’ve had to develop new techniques to speak to their friends fully aware that their parents are overhearing. Social steganography is one of the most common techniques that teens employ. They do this because they care about privacy, they care about misinterpretation, they care about segmented communications strategies. And they know that technical tools for restricting access don’t trump parental demands to gain access. So they find new ways of getting around limitations. And, in doing so, reconstruct age-old practices.

Ancient methods

Steganography is an ancient technique where people hide messages in plain sight. Invisible ink, tattoos under hair on messengers, and messages embedded in pictures are just a few ways in which steganography is employed. Cryptographers are obsessed with steganography, in part because it’s hardest to decode a message when you don’t know where to look. This is precisely why spy movies LOVE steganography. Of course, average people have also employed techniques of hiding in plain sight for a long time, hiding information in everyday communication, knowing that it’ll only be interpreted by some. Children love employing codes and adults generally pretend as though they can’t understand pig Latin or uncover the messages that children hide using invisible ink pens purchased from toy stores. Yet, as children grow up, they get more mature about their messaging, realizing that language has multiple layers and, with it, multiple meanings. They often learn this by being misinterpreted.
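To make the “messages embedded in pictures” idea concrete, here is a minimal sketch of least-significant-bit (LSB) steganography, one common digital version of the technique. This is an illustrative toy, not anything from the research above: the `pixels` list is a hypothetical stand-in for flat 8-bit image data that a real image library would supply, and the function names are my own.

```python
def embed(pixels, message):
    """Hide each bit of the message in the lowest bit of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in message.encode() for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("message too long for this carrier")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # overwrite only the lowest bit
    return stego

def extract(pixels, length):
    """Recover `length` bytes of hidden message from the lowest bits."""
    out = bytearray()
    for i in range(length):
        byte = 0
        for value in pixels[i * 8:(i + 1) * 8]:
            byte = (byte << 1) | (value & 1)
        out.append(byte)
    return out.decode()

carrier = [200, 131, 54, 90] * 20  # stand-in for real pixel data
hidden = embed(carrier, "hi")
```

The point of the sketch is the same as Carmen’s lyric: each pixel changes by at most one brightness level, so anyone who doesn’t know where to look sees an ordinary image, while someone in the know can read the message right back out.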

What fascinates me is that teens are taking these strategies into the digital spaces, recognizing multiple audiences and the challenges of persistence, and working to speak in layers. They are not always successful. And things that are meant to mean one thing are often misinterpreted in all sorts of wrong ways. But that doesn’t mean teens aren’t experimenting and learning. In fact, I’d expect that they’re learning more nuanced ways of managing privacy than any of us adults. Why? Because they have to. The more they live in public, the more I expect them to hide in plain sight.

Image credit: Jon McGovern

(This work is licensed under a Creative Commons Attribution 3.0 Unported License.)

Facebook privacy settings: Who cares?

Eszter Hargittai and I just published a new article in First Monday entitled: “Facebook privacy settings: Who cares?”

Abstract: With over 500 million users, the decisions that Facebook makes about its privacy settings have the potential to influence many people. While its changes in this domain have often prompted privacy advocates and news media to critique the company, Facebook has continued to attract more users to its service. This raises a question about whether or not Facebook’s changes in privacy approaches matter and, if so, to whom. This paper examines the attitudes and practices of a cohort of 18- and 19-year-olds surveyed in 2009 and again in 2010 about Facebook’s privacy settings. Our results challenge widespread assumptions that youth do not care about and are not engaged with navigating privacy. We find that, while not universal, modifications to privacy settings have increased during a year in which Facebook’s approach to privacy was hotly contested. We also find that both frequency and type of Facebook use as well as Internet skill are correlated with making modifications to privacy settings. In contrast, we observe few gender differences in how young adults approach their Facebook privacy settings, which is notable given that gender differences exist in so many other domains online. We discuss the possible reasons for our findings and their implications.

We look forward to your comments!

How COPPA Fails Parents, Educators, Youth

Ever wonder why youth have to be over 13 to create an account on Facebook or Gmail or Skype? It has nothing to do with safety.

In 1998, the U.S. Congress enacted the Children’s Online Privacy Protection Act (COPPA) with the best of intentions. They wanted to make certain that corporations could not collect or sell data about children under the age of 13 without parental permission, so they created a requirement to check age and get parental permission for those under 13. Most companies took one look at COPPA and decided that the process of getting parental consent was far too onerous so they simply required all participants to be at least 13 years of age. The notifications that say “You must be 13 years or older to use this service” and the pull-down menus that don’t allow you to indicate that you’re under 13 have nothing to do with whether or not a website is appropriate for a child; it has to do with whether or not the company thinks that it’s worth the effort to seek parental permission.

COPPA is currently being discussed by the Federal Trade Commission and the US Senate. Most of the conversation focuses on whether or not companies are abiding by the ruling and whether or not the age should be upped to 18. What is not being discussed is the effectiveness of this legislation or what it means to American families (let alone families in other countries who are affected by it). In trying to understand COPPA’s impact, my research led me to conclude four things:

  1. Parents and youth believe that age requirements are designed to protect their safety, rather than their privacy.
  2. Parents want their children to have access to social media services to communicate with extended family members.
  3. Parents teach children to lie about their age to circumvent age limitations.
  4. Parents believe that age restrictions take away their parental choice.

How the Public Interprets COPPA-Prompted Age Restrictions

Most parents and youth believe that the age requirements that they encounter when signing up to various websites are equivalent to a safety warning. They interpret this limitation as: “This site is not suitable for children under the age of 13.” While this might be true, that’s not actually what the age restriction is about. Not only does COPPA fail to inform parents about the appropriateness of a particular site, but parental misinterpretations of the age restrictions mean that few are aware that this stems from an attempt to protect privacy.

While many parents do not believe that social network sites like Facebook and MySpace are suitable for young children, they often want their children to have access to other services that have age restrictions (email, instant messaging, video services, etc.). Often, parents cite that these tools enable children to connect with extended family; Skype is especially important to immigrant parents who have extended family outside of the US. Grandparents were most frequently cited as the reason why parents created accounts for their young children. Many parents will create accounts for children even before they are literate because the value of connecting children to family outweighs the age restriction. When parents encourage their children to use these services, they send a conflicting message that their kids eventually learn: ignore some age limitations but not others.

By middle school, communication tools and social network sites are quite popular among tweens who pressure their parents for permission to get access to accounts on these services because they want to communicate with their classmates, church friends, and friends who have moved away. Although parents in the wealthiest and most educated segments of society often forbid their children from signing up to social network sites until they turn 13, most parents support their children’s desires to acquire email and IM, precisely because of familial use. To join, tweens consistently lie about their age when asked to provide it. When I interviewed teens about who taught them to lie, the overwhelming answer was parents. I interviewed parents who consistently admitted to helping their children circumvent the age restriction by teaching them that they needed to choose a birth year that would make them over 13. Even in households where an older sibling or friend was the educator, parents knew their children had email, IM, and social network site accounts. Interestingly, in households where parents forbid Facebook but allow email, kids have started noting the hypocritical stance of their parents. That’s not a good outcome of this misinterpretation.

When I asked parents about how they felt about the age restrictions presented by social websites, parents had one of two responses. When referencing social network sites, parents stated that they felt that the restrictions were justified because younger children were too immature to handle the challenges of social network sites. Yet, when discussing sites and services that they did not believe were risky environments or that they felt were important for family communication, parents often felt as though the limitations were unnecessarily restrictive. Those who interpreted the restriction as a maturity rating did not understand why the sites required age confirmation. Some other parents felt as though the websites were trying to tell them how to parent. Some were particularly outraged by what they felt was a paternalistic attitude by websites, making statements like: “Who are they to tell me how to be a good parent?”

Across the board, parents and youth misinterpret the age requirements that emerged from the implementation of COPPA. Except for the most educated and technologically savvy, they are completely unaware that these restrictions have anything to do with privacy. More problematically, the conflicting ways in which parents address some age restrictions and not others send a dangerous message.

Policy Literacy and the Future of COPPA

There’s another issue here that’s not regularly addressed. COPPA affects educators and social services in counterintuitive ways. While non-commercial services are not required to abide by COPPA, there are plenty of commercial education and health services out there who are seeking to help youth. Parental permission might be viable for an organization working to help kids learn arithmetic through online tutoring, but it is completely untenable when we’re thinking about suicide hotlines, LGBT programs, and mental health programs. (Keep in mind that many hospitals are for-profit even if their free websites are out there for general help.)

COPPA is well-intended but its implementation and cultural uptake have been a failure. The key to making COPPA work is not to make it stricter or to force the technology companies to be better at confirming that the kids on their site are not underage. Not only is this technologically infeasible without violating privacy at an even greater level, doing so would fail to recognize what’s actually happening on the ground. Parents want to be able to parent, to be able to decide what services are appropriate for their children. At the same time, we shouldn’t forget that not all parents are present and we don’t want to shut teens out of crucial media spaces because their parents are absent, as would often be the case if we upped the age to 18. The key to improving COPPA is to go back to the table and think about how children’s data is being used, whether it’s collected implicitly or explicitly.

In order for the underlying intentions of COPPA to work, we need both information literacy and policy literacy. We need to find ways to help digital citizens understand how their information is being used, what rights they have, and how the policies that exist affect their lives. If parents and educators don’t understand that the 13 limitation is about privacy, COPPA will continue to fail. It’s time that parents and educators learned more about COPPA and started sharing their own perspective, asking Congress to do a better job of addressing the privacy issues without taking away their rights to parent and educate. And without marginalizing those who aren’t fortunate enough to have engaged parents by their side.

John Palfrey, Urs Gasser, and I submitted a statement to the FTC and Senate called “How COPPA, as Implemented, is Misinterpreted by the Public: A Research Perspective.” To learn more about COPPA or submit your own letter to the FTC and Senate, go to the FTC website.

This post was originally posted at the DML Central blog.

Image Credit: WarzauWynn