Four Essays Addressing Risky Behaviors and Online Safety

At Harvard’s Berkman Center, John Palfrey, Urs Gasser, and I have been co-directing the Youth and Media Policy Working Group Initiative to investigate the role that policy can play in addressing core issues involving youth and media. John has been leading the Privacy, Publicity, and Reputation track; Urs has been managing the Youth Created Content and Information Quality track; and I have been coordinating the Risky Behaviors and Online Safety track. We’ll have a lot of different pieces coming out over the next few months that stem from this work. Today, I’m pleased to share four important essays that emerged from the work we’ve been doing in the Risky Behaviors and Online Safety track:

“Moving Beyond One Size Fits All With Digital Citizenship” by Matt Levinson and Deb Socia

This essay addresses some of the challenges that educators face when trying to address online safety and digital citizenship in the classroom.

“Evaluating Online Safety Programs” by Tobit Emmens and Andy Phippen

This essay discusses the importance of evaluating interventions as they are implemented so as to avoid dangerous unintended consequences, using work in suicide prevention as a backdrop.

“The Future of Internet Safety Education: Critical Lessons from Four Decades of Youth Drug Abuse Prevention” by Lisa M. Jones

This essay contextualizes contemporary internet safety programs in light of work done in the drug abuse prevention domain to highlight best practices for implementing interventions.

“Online Safety: Why Research is Important” by David Finkelhor, Janis Wolak, and Kimberly J. Mitchell

This essay examines the role that research can and should play in shaping policy.

These four essays provide crucial background information for understanding the challenges of implementing education and public health interventions in the area of online safety. I hope you will read them because they are truly mind-expanding pieces.

“for the lolz”: 4chan is hacking the attention economy

(Newbie note: If you have never heard of 4chan, start with the Wikipedia entry and not the website itself. The site tends to offend many adults’ sensibilities. As one of my friends put it, loving LOLcats or rickrolling as outputs is like loving a tasty hamburger; visiting 4chan is like visiting the meat factory. At some point, it’d probably help to visit the meat factory, but that might make you go vegetarian.)

Over the last year, 4chan has gone from complete obscurity to being recognized by mainstream media as something of significance. Perhaps it was moot’s appearance at the top of the TIME 100 list. More likely, it was moot’s TED talk on anonymity that tipped it all over. At TED, moot – otherwise known as Chris Poole – revealed a more “legitimate” side of an underground site typically known to outsiders as the cesspool of the internet. And in doing so, he marked himself as one of the more articulate, thoughtful, and entertaining community leaders on the web. In short, he was someone that adults could embrace, even if his site scared the shit out of them.

Amidst all of this, 4chan has “popped.” Journalists and academics are clamoring to discuss and analyze 4chan. At first, it was all about discussing whether or not this community of 9.5 million mostly young, mostly male internet people was evil or brilliant. Lately, the obsession focuses on anonymity, signaling that Chris’ TED talk set the frame for public discourse about 4chan. Both of these are certainly interesting topics. 4chan has created some of the most lovable memes on the internet but /b/tards have also been some of the most nefarious trolls and griefers on the web. And anonymity is a really complex topic that can’t be boiled down to a question of accountability in light of whether or not the anonymous commentator is seen as evil or brilliant. And while I could write a long essay on how the anonymity that people seek on the web counters the ways in which identifiability on the web far exceeds any identifiability that ever existed offline, that’s not the point of this post. Instead, what I want to claim is that 4chan is next-gen hacker culture. And that it should be appreciated (and vilified) on those terms.

I grew up in a community of hackers at the tail end of the security hacking days. Many of my friends in high school prided themselves on their phreaking skills or in their ability to break into high-end security systems. While some were truly gifted technical geniuses, few were true crackers bent on destroying systems with malicious intentions. Most of my friends simply wanted to see what they could do. And mostly, the hacking that was taking place was really mundane, leveraging people’s stupidity in using “admin/admin” as their username/password combo to leave little love notes and easter eggs. Of course, there were consequences. One of my friends was forbidden from using the internet throughout high school while another ended up doing time in the Navy’s security system in lieu of the alternatives. I was not connected to the 31337 hackers that were central to the security hacking era, but I grew up on the margins in ways that allowed me to appreciate their technical prowess (and to want to be Angelina Jolie a few years later).

Depending on where you sit, security hackers are vilified or adored, recognized for the havoc that they wreaked and for really challenging systems to be much more secure. As a community, they were the underground of the 80s and 90s. Yet, today, former hackers are some of the most powerful people in the tech industry. Some hackers had truly malicious intentions while others were engaged in a series of acts that can best be understood through a popular 4chan phrase: “for the lolz.” It was entertaining to see what one could do. And while most of those who were in it for the lolz had no political agenda, the resultant acts of the security hackers ended up being deeply political, ended up really shaping the development of technological systems.

I would argue that 4chan is ground zero of a new generation of hackers – those who are bent on hacking the attention economy. While the security hackers were attacking the security economy at the center of power and authority in the pre-web days, these attention hackers are highlighting how manipulatable information flows are. They are showing that Top 100 lists can be gamed and that entertaining content can reach mass popularity without having any commercial intentions (regardless of whether or not someone decided to commercialize it on the other side). Their antics force people to think about status and power and they encourage folks to laugh at anything that takes itself too seriously. The mindset is deeply familiar to me and it doesn’t surprise me when I learn that old hacker types get a warm fuzzy feeling thinking about 4chan even if trolls and griefers annoy the hell out of them. In a mediated environment where marketers are taking over, there’s something subversively entertaining about betting on the anarchist subculture. Cuz, really, at the end of the day, many old skool hackers weren’t entirely thrilled to realize that mainstreamification of net culture meant that mainstream culture would dominate net culture. For us geeks, freaks, and queers who embraced the internet as a savior, mainstreamification has meant a new form of disempowerment.

As with security hackers, the attention hackers that are popping up today are a mixed bag. It’s easy to love the cultural ethos and despise some of the individuals or the individual acts. In recognizing the cultural power of the community represented by 4chan, I don’t mean to justify some of the truly hateful things that some individuals have done. But I am willing to laugh off some of the stupidity and find humor in the antics while also rejecting certain acts. I’m willing to lament the fact that it’s been 20 years and underground hacking culture is still mostly white and mostly male while also being stoked to see a new underground subculture emerge. Of course, it doesn’t look like it’ll be underground for long… And I can’t say that I’m too thrilled for every mom and pop and average teen to know about 4chan (which is precisely why I haven’t blogged about it before). But I do think that there’s something important about those invested in hacking the attention economy. And I do hope that we always have people around us reminding us to never take the internets too seriously.

Update: Yes, I know the more commonly accepted spelling of lolz is lulz. (The full phrase should also be: “I did it for the lulz.”) I can’t explain why I prefer lolz but I always have and there are others out there who use this variant as well. Lulz highlights the negativity (since it’s loling at someone else’s expense) while lolz focus on generalized laughter, not always hurtful laughter. I prefer to think of things in this frame. YMMV.

(Translated into Russian by Mikhail Karpov)


How COPPA Fails Parents, Educators, Youth

Ever wonder why youth have to be over 13 to create an account on Facebook or Gmail or Skype? It has nothing to do with safety.

In 1998, the U.S. Congress enacted the Children’s Online Privacy Protection Act (COPPA) with the best of intentions. They wanted to make certain that corporations could not collect or sell data about children under the age of 13 without parental permission, so they created a requirement to check age and get parental permission for those under 13. Most companies took one look at COPPA and decided that the process of getting parental consent was far too onerous so they simply required all participants to be at least 13 years of age. The notifications that say “You must be 13 years or older to use this service” and the pull-down menus that don’t allow you to indicate that you’re under 13 have nothing to do with whether or not a website is appropriate for a child; they have to do with whether or not the company thinks that it’s worth the effort to seek parental permission.

COPPA is currently being discussed by the Federal Trade Commission and the US Senate. Most of the conversation focuses on whether or not companies are abiding by the ruling and whether or not the age should be upped to 18. What is not being discussed is the effectiveness of this legislation or what it means to American families (let alone families in other countries who are affected by it). In trying to understand COPPA’s impact, my research led me to conclude four things:

  1. Parents and youth believe that age requirements are designed to protect their safety, rather than their privacy.
  2. Parents want their children to have access to social media services to communicate with extended family members.
  3. Parents teach children to lie about their age to circumvent age limitations.
  4. Parents believe that age restrictions take away their parental choice.

How the Public Interprets COPPA-Prompted Age Restrictions

Most parents and youth believe that the age requirements that they encounter when signing up to various websites are equivalent to a safety warning. They interpret this limitation as: “This site is not suitable for children under the age of 13.” While this might be true, that’s not actually what the age restriction is about. Not only does COPPA fail to inform parents about the appropriateness of a particular site, but parental misinterpretations of the age restrictions mean that few are aware that this stems from an attempt to protect privacy.

While many parents do not believe that social network sites like Facebook and MySpace are suitable for young children, they often want their children to have access to other services that have age restrictions (email, instant messaging, video services, etc.). Often, parents cite that these tools enable children to connect with extended family; Skype is especially important to immigrant parents who have extended family outside of the US. Grandparents were most frequently cited as the reason why parents created accounts for their young children. Many parents will create accounts for children even before they are literate because the value of connecting children to family outweighs the age restriction. When parents encourage their children to use these services, they send a conflicting message that their kids eventually learn: ignore some age limitations but not others.

By middle school, communication tools and social network sites are quite popular among tweens who pressure their parents for permission to get access to accounts on these services because they want to communicate with their classmates, church friends, and friends who have moved away. Although parents in the wealthiest and most educated segments of society often forbid their children from signing up to social network sites until they turn 13, most parents support their children’s desires to acquire email and IM, precisely because of familial use. To join, tweens consistently lie about their age when asked to provide it. When I interviewed teens about who taught them to lie, the overwhelming answer was parents. I interviewed parents who consistently admitted to helping their children circumvent the age restriction by teaching them that they needed to choose a birth year that would make them over 13. Even in households where an older sibling or friend was the educator, parents knew their children had email, IM, and social network site accounts. Interestingly, in households where parents forbid Facebook but allow email, kids have started noting the hypocritical stance of their parents. That’s not a good outcome of this misinterpretation.

When I asked parents about how they felt about the age restrictions presented by social websites, parents had one of two responses. When referencing social network sites, parents stated that they felt that the restrictions were justified because younger children were too immature to handle the challenges of social network sites. Yet, when discussing sites and services that they did not believe were risky environments or that they felt were important for family communication, parents often felt as though the limitations were unnecessarily restrictive. Those who interpreted the restriction as a maturity rating did not understand why the sites required age confirmation. Some other parents felt as though the websites were trying to tell them how to parent. Some were particularly outraged by what they felt was a paternalistic attitude by websites, making statements like: “Who are they to tell me how to be a good parent?”

Across the board, parents and youth misinterpret the age requirements that emerged from the implementation of COPPA. Except for the most educated and technologically savvy, they are completely unaware that these restrictions have anything to do with privacy. More problematically, the conflicting ways in which parents address some age restrictions and not others send a dangerous message.

Policy Literacy and the Future of COPPA

There’s another issue here that’s not regularly addressed. COPPA affects educators and social services in counterintuitive ways. While non-commercial services are not required to abide by COPPA, there are plenty of commercial education and health services out there who are seeking to help youth. Parental permission might be viable for an organization working to help kids learn arithmetic through online tutoring, but it is completely untenable when we’re thinking about suicide hotlines, LGBT programs, and mental health programs. (Keep in mind that many hospitals are for-profit even if their free websites are out there for general help.)

COPPA is well-intended but its implementation and cultural uptake have been a failure. The key to making COPPA work is not to make it stricter or to force technology companies to be better at confirming that the kids on their sites are not underage. Not only is this technologically infeasible without violating privacy at an even greater level, doing so would fail to recognize what’s actually happening on the ground. Parents want to be able to parent, to be able to decide what services are appropriate for their children. At the same time, we shouldn’t forget that not all parents are present and we don’t want to shut teens out of crucial media spaces because their parents are absent, as would often be the case if we upped the age to 18. The key to improving COPPA is to go back to the table and think about how children’s data is being used, whether it’s collected implicitly or explicitly.

In order for the underlying intentions of COPPA to work, we need both information literacy and policy literacy. We need to find ways to help digital citizens understand how their information is being used, what rights they have, and how the policies that exist affect their lives. If parents and educators don’t understand that the 13 limitation is about privacy, COPPA will continue to fail. It’s time that parents and educators learned more about COPPA and started sharing their own perspectives, asking Congress to do a better job of addressing the privacy issues without taking away their rights to parent and educate. And without marginalizing those who aren’t fortunate enough to have engaged parents by their side.

John Palfrey, Urs Gasser, and I submitted a statement to the FTC and Senate called “How COPPA, as Implemented, is Misinterpreted by the Public: A Research Perspective.” To learn more about COPPA or submit your own letter to the FTC and Senate, go to the FTC website.

This post was originally posted at the DML Central blog.

Image Credit: WarzauWynn

i can haz housesitting tool pls?

Dear enterprising developers of the world, I have a request:

I travel a lot. I prefer staying in apartments to staying in hotels. But I hate imposing on friends and, frankly, crashing on couches isn’t as fun as it used to be. When I’m lucky, I randomly learn that a friend is out of town and I have the opportunity to housesit. And when I’m lucky, I randomly learn that someone I trust is in Boston when I’m not and can get them to catsit/housesit. Cuz I’m always begging for housesitters. But there has to be a better way of getting this information in our world of interconnectedness.

I want an application that lets me announce to my friends when I’m out of town and my apartment is vacant or when I need a housesitter. And I want to know when people that I know are out of town and would welcome me to housesit/catsit/plantsit. As wonderful as couchsurfing.com and airbnb are, they don’t serve my needs. I don’t want the burden of having to socialize with strangers (or, realistically, friends) when I travel for work nor do I typically want to stay at strangers’ places (or have strangers stay at my place). I want an easy way to trade apartments with people that I already know. And I want to know when people’s homes are vacant, not when I’m welcome to crash at their place.

I want to be able to create a calendar of when my place is empty and see when my friends’ places are empty. I want to be able to indicate who I trust to see my empty apartment calendar. I want to be able to pivot based on location and see when there’s a specific need (like catsitting). It’d be great to message out to friends when I have a catsitting need. I don’t want to make my calendar available to just anyone that I know so opening it up to anyone in my Facebook social graph isn’t the solution. I want it to be really easy for my friends to indicate when they’re gone so that it’s not a crazy burden to keep the calendar updated.
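For any developer tempted to take this on, here’s a minimal sketch in Python of the data model I’m describing. Every name and structure here is hypothetical – just one way to represent per-user trust lists, vacancy calendars, and the location/need filtering above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Vacancy:
    host: str
    start: date
    end: date
    location: str
    needs: set = field(default_factory=set)  # e.g. {"catsitting", "plantsitting"}

@dataclass
class User:
    name: str
    trusted: set = field(default_factory=set)      # friends allowed to see my calendar
    vacancies: list = field(default_factory=list)  # my announced empty-apartment windows

def visible_vacancies(viewer, users, location=None, need=None):
    """Return the vacancies that `viewer` is trusted to see,
    optionally filtered by city or by a specific need."""
    results = []
    for user in users:
        if viewer not in user.trusted:
            continue  # this user hasn't shared their calendar with viewer
        for v in user.vacancies:
            if location is not None and v.location != location:
                continue
            if need is not None and need not in v.needs:
                continue
            results.append(v)
    return results
```

The key design point is that trust is an explicit per-user list rather than the whole social graph, which is exactly why “open it to all my Facebook friends” wouldn’t work here.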

Perhaps there’s a tool out there that would meet my needs that I don’t know about and, if so, please tell me. But so far, I haven’t been able to find one. And I don’t think that it would be that hard to build; a minimalist tool would be good enough. I’m not sure that such an application is actually monetizable because no money is being exchanged, but perhaps travel ads could make it profitable.

Anyhow, I throw this desire to the coding wolves in the hopes that someone might make it a reality. I would be eternally grateful.

ktxby

Deception + fear + humiliation != education

I hate fear-based approaches to education. I grew up on the “this is your brain on drugs” messages and watched classmates go from being afraid of drugs to trying marijuana to deciding that all of the messages about drugs were idiotic. (Crystal meth and marijuana shouldn’t be in the same category.) Much to my frustration, adults keep turning to fear to “educate” the kids with complete disregard to the unintended consequences of this approach. Sometimes, it’s even worse. I recently received an email from a friend of mine (Chloe Cockburn) discussing an issue brought before the ACLU. She gave me permission to share this with you:

A campus police officer has been offering programs about the dangers inherent in using the internet to middle and high school assemblies. As part of her presentation she displays pictures that students have posted on their Facebook pages. The idea is to demonstrate that anyone can have access to this information, so be careful. She gains access to the students’ Facebook pages by creating false profiles claiming to be a student at the school and asking to be “friended”, evidently in violation of Facebook policy.

An ACLU affiliate received a complaint from a student at a small rural high school. The entire assembly was shown a photo of her holding a beer. The picture was not on the complainant’s Facebook page, but on one belonging to a friend of hers, who allowed access to the bogus profile created by the police officer. The complainant was not “punished” as the plaintiff above was, but she was humiliated, and she is afraid that she will not get some local scholarship aid as a result.

So here we have a police officer intentionally violating Facebook’s policy and creating a deceptive profile to entrap teenagers and humiliate them to “teach them a lesson”??? Unethical acts + deception + fear + humiliation != education. This. Makes. Me. Want. To. Scream.

“Transparency is Not Enough”

At Gov2.0 this week, I gave a talk on the importance of information literacy when addressing transparency of government data:

“Transparency is Not Enough”

I address everything from registered sex offenders to what happens when politicians don’t like data to the complexities of interpretation. In doing so, I make three key points:

  1. Information is power, but interpretation is more powerful
  2. Data taken out of context can have unintended consequences
  3. Transparency alone is not the great equalizer

My talk is also available on YouTube if you prefer to listen to a different version of the same message (since my crib is what I intended to say and the video is what actually came out of my mouth).

Pew Research confirms that youth care about their reputation

In today’s discussions about privacy, “youth don’t care about privacy” is an irritating but popular myth. Embedded in this rhetoric is the belief that youth are reckless risk-takers who don’t care about the consequences of their actions. This couldn’t be further from the truth.

In my own work, I’ve found that teenagers care deeply about privacy in that they care about knowing how information flows and wanting influence over it. They care deeply about their reputation and leverage the tools available to help shape who they are. Of course, reputation and privacy always come back to audience. And audience is where we continuously misunderstand teenagers. They want to make sure that people they respect or admire think highly of them. But this doesn’t always mean that they care about how YOU think about them. So a teenager may be willing to sully their reputation as their parents see it if it gives them street cred that makes them cool amongst their peers. This is why reputation is so messy. There’s no universal reputation, no universal self-presentation. It’s always about audience.

The teenagers that I first started interviewing in 2004 are now young adults. Many are in college or in the army and their views on their reputation have matured. How they think about privacy and information flow has also matured. They’re thinking about a broader world. At the same time, they’re doing so having developed an understanding of these challenges through their engagement with social media. Are their ideas about these technologies perfect? Of course not. But they’re a whole lot more nuanced than those of most adults that I talk with.

Earlier today, Pew Research Center’s Internet and American Life Project released a report entitled “Reputation Management and Social Media” which includes a slew of data that might seem counter-intuitive to adults who have really skewed mythical views of youth and young adults. They found that young adults are more actively engaged in managing what they share online than older adults. In fact, 71% of the 18-29s interviewed in August-September of 2009 who use social network sites reported having changed their privacy settings (vs. 55% of those 50-64). Think about that. This was before Time Magazine put privacy on their front page.

Now, let’s be clear… Young adults are actively engaged in managing their reputation but they’re not always successful. The tools are confusing and companies continue to expose them without them understanding what’s happening. But the fact that they go out of their way to try to shape their information is important. It signals very clearly that young adults care deeply about information flow and reputation.

Reputation matters. This is why Pew found that 47% of 18-29s delete comments made by others on their profiles (vs. 29% of 30-49s and 26% of 50-64s). Likewise, 41% of them remove their name from photos (vs. 24% of 30-49s and 18% of 50-64s). While Pew didn’t collect data on those under 18, I’d expect that this age-wise trend would continue into that age bracket. Much of this is because of digital literacy – the younger folks understand the controls better than the older folks AND they understand the implications better. We spend a lot more time telling teenagers and young adults that there are consequences to reputation when information is put up online than we do listening to ourselves. This is also because, as always, youth are learning the hard way. As Pew notes, young adults have made mistakes that they regret. They’ve also seen their friends make mistakes that they regret. All of this leads to greater consciousness about these issues and a deeper level of engagement.

As always, this Pew report is filled to the brim with useful information that gives us a sense of what’s going on. Here are some of my favorite bullet points:

  • Young adults are still more likely than older users to say they limit the amount of information available about them online.
  • Those who know more, worry more. And those who express concern are twice as likely to say they take steps to limit the amount of information available about them online.
  • The most visible and engaged internet users are also most active in limiting the information connected to their names online.
  • The more you see footprints left by others, the more likely you are to limit your own.
  • Those who take steps to limit the information about them online are less likely to post comments online using their real name.
  • More than half of social networking users (56%) have “unfriended” others in their network.
  • Just because we’re friends doesn’t mean I’m listening: 41% of social networking users say they filter updates posted by some of their friends.
  • Young adult users of social networking sites report the lowest levels of trust in them.

This Pew report does more than inform us about privacy and reputation issues. Its data sends an important message: We need more literacy about these issues. Ironically, I think that the best thing that’s going to come about because of Facebook’s ongoing screw-ups is an increased awareness of privacy issues. When youth see that they can do one of two things with their interests: delete them or make them publicly visible to everyone, they’re going to think twice. Sure, many will still make a lot of that content publicly accessible. And others will be very angry at Facebook for not giving them a meaningful choice. But this is going to force people to think about these issues. And the more people think about it, the more they actively try to control what’s going on. (Of course, we need Facebook to stop taking controls away from people, but that’s a different story.)

Pew’s report also counters a lot of myths that I’ve been hearing. For example, the desire for anonymity isn’t dead. Facebook tends to proudly announce that its users are completely honest about their names. Guess what? Many youth don’t trust Facebook. And they’re not providing them with real names either. Just take a look at this screen shot that I grabbed from a publicly accessible Facebook profile. This image isn’t doctored and while some of the names reflect real ones, there’s a lot of obscuring going on.

If you care about youth, if you care about issues of privacy and reputation, PLEASE read the Pew report. It is an example of brilliant research and tremendous reporting.

Quitting Facebook is pointless; challenging them to do better is not

I’ve been critiquing moves made by Facebook for a long time and I’m pretty used to them being misinterpreted. When I lamented the development of the News Feed, many people believed that I thought that the technology was a failure and that it wouldn’t be popular. This was patently untrue. I was bothered by it precisely because I knew that it would be popular, precisely because people love to gossip and learn about others, often to their own detriment. It was hugely disruptive and, when it launched, users lacked the controls necessary to really manage the situation effectively. Facebook responded with controls and people were able to find a way of engaging with Facebook with the News Feed as a given. But people were harmed in the transition.

Last week, I offered two different critiques of the moves made by Facebook, following up on my SXSW talk. Both have been misinterpreted in fascinating ways. Even news agencies are publishing statements like: “Microsoft wants Facebook to be regulated as a utility.” WTF? Seriously? Le sigh. (For the record, I’m not speaking on behalf of my employer nor do I want regulation; I think that it’s inevitable and I think that we need to contend with it. Oh, and I don’t think that the regulation that we’ll see will at all resemble the ways in which utilities are regulated. I was talking about utilities because that’s how Facebook frames itself. But clearly, most folks missed that.) Misinterpretations are frustrating because they make me feel as though I’m doing a bad job of communicating what I think is important. For this, I apologize to all of you. I will try to do better.

With this backdrop in mind, I want to enumerate six beliefs that I have that I want to flesh out in this post in light of discussions about how “everyone” is leaving Facebook:

  1. I do not believe that people will (or should) leave Facebook because of privacy issues.
  2. I do not believe that the tech elites who are publicly leaving Facebook will affect the company’s numbers; they are unrepresentative and were not central users in the first place.
  3. I do not believe that an alternative will emerge in the next 2-5 years that will “replace” Facebook in any meaningful sense.
  4. I believe that Facebook will get regulated and I would like to see an open discussion of what this means and what form this takes.
  5. I believe that a significant minority of users are at risk because of decisions Facebook has made and I think that those of us who aren’t owe it to those who are to work through these issues.
  6. I believe that Facebook needs to start a public dialogue with users and those who are concerned ASAP (and Elliot Schrage’s Q&A doesn’t count).

As I stated in my last post, I think that Facebook plays a central role in the lives of many and I think that it is unreasonable for anyone to argue that they should “just leave” if they’re not happy. This is like saying that people should just leave their apartments if they’re not happy with their landlord or just leave their spouse because they’re not happy with a decision or just leave their job if they’re not happy with their boss. Life is more complicated than a series of simplified choices and we are always making calculated decisions, balancing costs and benefits. We stay with our jobs, apartments, and spouses even when things get messy because we hope to rectify problems. And those with the most to gain from Facebook are the least likely to leave, even if they also have the most to lose.

In the last few weeks, a handful of well known digerati have proudly announced that they’ve departed from Facebook. Most of these individuals weren’t that engaged in Facebook as users in the first place. I say this as someone who would lose very little (outside of research knowledge) from leaving. I am not a representative user. I barely share on the site for a whole host of personal and professional reasons. (And because I don’t have a life.) None of my friends would miss me if I did leave. In fact, they’d probably be grateful for the disappearance of my tweets. That means that me deciding to leave will have pretty much no impact on the network. This is true for many of the people who I’ve watched depart. At best, they’re content broadcasters. But people have other ways of consuming their broadcasting. So their departure is meaningless. These are not the people that Facebook is worried about losing.

People will not leave Facebook en masse, even if a new site were to emerge. Realistically, if that were enough, they could go to MySpace or Orkut or Friendster or Tribe. But they won’t. And not just because those sites are no longer “cool.” They won’t because they’ve invested in Facebook and they’re still hoping that Facebook will get its act together. Changing services is costly, just like moving apartments or changing jobs or breaking up in general. The deeper the relationship, the harder it is to simply walk away. And the relationship that Facebook has built with many of its users is very very very deep. When transition costs are high, people work hard to change the situation so that they don’t have to transition. This is why people are complaining, this is why they are speaking up. And it’s really important that those in power listen to what it is that people are upset about. The worst thing that those in power can do is ignore what’s going on, waiting for it to go away. This is a bad idea, not because people will walk away, but because they will look to greater authorities of power to push back. This is why Facebook’s failure to address what’s going on invites regulation.

Facebook has gotten quite accustomed to upset users. In “The Facebook Effect,” David Kirkpatrick outlines how Facebook came to expect that every little tweak would set off an internal rebellion. He documented how most of the members of the group “I AUTOMATICALLY HATE THE NEW FACEBOOK HOME PAGE” were employees of Facebook whose frustration with user rebellion was summed up by the group’s description: “I HATE CHANGE AND EVERYTHING ASSOCIATED WITH IT. I WANT EVERYTHING TO REMAIN STATIC THROUGHOUT MY ENTIRE LIFE.” Kirkpatrick quotes Zuckerberg as saying, “The biggest thing is going to be leading the user base through the changes that need to continue to happen… Whenever we roll out any major product there’s some sort of backlash.” Unfortunately, Facebook has become so numb to user complaints that it doesn’t see the different flavors of them any longer.

What’s happening around privacy is not simply user backlash. In fact, users are far less upset about what’s going on than most of us privileged techno-elites. Why? Because even with the New York Times writing article after article, most users have no idea what’s happening. I’m reminded of this every time that I sit down with someone who doesn’t run in my tech circles. And I’m reminded that they care every time I sit down and walk them through their privacy settings. The disconnect between average users and the elite is what makes this situation different, what makes this issue messier. Because the issue comes down to corporate transparency, informed consent, and choice. As long as users believe that their content is private and have no idea how public it is, they won’t take to the streets. A disappearance of publicity for these issues is to Facebook’s advantage. But it’s not to users’ advantage. Which is precisely why I think that it’s important that the techno-elite and the bloggers and the journalists keep covering this topic. Because it’s important that more people are aware of what’s going on. Unfortunately, of course, we also have to contend with the fact that most people being screwed don’t speak English and have no idea this conversation is even happening. Especially when privacy features are only explained in English.

In documenting Zuckerberg’s attitudes about transparency, Kirkpatrick sheds light on one of the weaknesses of his philosophy: Zuckerberg doesn’t know how to reconcile the positive (and in his head inevitable) outcomes of transparency with the possible challenges of surveillance. As is typical in the American tech world, most of the conversation about surveillance centers on the government. But Kirkpatrick highlights another outcome of surveillance with a throwaway example that sends shivers down my spine: “When a father in Saudi Arabia caught his daughter interacting with men on Facebook, he killed her.” This is precisely the kind of unintended consequence that motivates me to speak loudly even though I’m privileged enough to not face these risks. Statistically, death is an unlikely outcome of surveillance. But there are many other kinds of side effects that are more common and also disturbing: losing one’s job, losing one’s health insurance, losing one’s parental rights, losing one’s relationships, etc. Sometimes, these losses will be because visibility makes someone more accountable. But sometimes this will occur because of misinterpretation and/or overreaction. And the examples keep on coming.

I am all in favor of people building what they believe to be alternatives to Facebook. I even invested in Diaspora because I’m curious what will come of that system. But I don’t believe that Diaspora is a Facebook killer. I do believe that there is a potential for Diaspora to do something interesting that will play a different role in the ecosystem and I look forward to seeing what they develop. I’m also curious about the future of peer-to-peer systems in light of the move towards the cloud, but I’m not convinced that decentralization is a panacea to all of our contemporary woes. Realistically, I don’t think that most users around the globe will find a peer-to-peer solution worth the hassle. The cost/benefit analysis isn’t in their favor. I’m also patently afraid that a system like Diaspora will be quickly leveraged for child pornography and other more problematic uses that tend to emerge when there isn’t a centralized control system. But innovation is important and I’m excited that a group of deeply passionate developers are being given a chance to see what they can pull off. And maybe it’ll be even more fabulous than we can possibly imagine, but I’d bet a lot of money that it won’t put a dent into Facebook. Alternatives aren’t the point.

Facebook has embedded itself pretty deeply into the ecosystem, into the hearts and minds of average people. They love the technology, but they’re not necessarily prepared for where the company is taking them. And while I’m all in favor of giving users the choice to embrace the opportunities and potential of being highly visible, of being a part of a transparent society, I’m not OK with throwing them off the boat just to see if they can swim. Fundamentally, my disagreement with Facebook’s approach to these matters is a philosophical one. Do I want to create more empathy, more tolerance in a global era? Of course. But I’m not convinced that sudden exposure to the world at large gets people there and I genuinely fear the possible backlash that can emerge. I’m not convinced that this won’t enhance a type of extremism that is manifesting around the globe as we speak.

Screaming about the end of Facebook is futile. And I think that folks are wasting a lot of energy telling others to quit or boycott to send a message. Doing so will do no such thing. It’ll just make us technophiles look like we’re living on a different planet. Which we are. Instead, I think that we should all be working to help people understand what’s going on. I love using Reclaim Privacy to walk through privacy settings with people. While you’re helping your family and friends understand their settings, talk to them and record their stories. I want to hear average people’s stories, their fears, their passions. I want to hear what privacy means to them and why they care about it. I want to hear about the upside and downside of visibility and the challenges introduced by exposure. And I want folks inside Facebook to listen. Not because this is another user rebellion, but because Facebook’s decisions shape the dynamics of so many people’s lives. And we need to help make those voices heard.

I also want us techno-elites to think hard and deep about the role that regulation may play and what the consequences may be for all of us. In thinking about regulation, always keep Larry Lessig’s arguments in “Code” in mind. Larry argued that there are four points of regulation for all change: the market, the law, social norms, and architecture (or code). Facebook’s argument is that social norms have changed so dramatically that what they’re doing with code aligns with the people (and conveniently the market). I would argue that they’re misreading social norms but there’s no doubt that the market and code work in their favor. This is precisely why I think that law will get involved and I believe that legal regulators don’t share Facebook’s attitudes about social norms. This is not a question of if but a question of when, in what form, and at what cost. And I think that all of us who are living and breathing this space should speak up about how we think this should play out because if we just pretend like it won’t happen, not only are we fooling ourselves, but we’re missing an opportunity to shape the future.

I realize that Elliot Schrage attempted to communicate with the public through his NYTimes responses. And I believe that he failed. But I’m still confused about why Zuckerberg isn’t engaging publicly about these issues. (A letter to Robert Scoble doesn’t count.) In each major shitstorm, we eventually got a blog post from Zuckerberg outlining his views. Why haven’t we received one of those? Why is the company so silent on these matters? In inviting the users to vote on the changes to the Terms of Service, Facebook mapped out the possibility of networked engagement, of inviting passionate users to speak back and actively listening. This was a huge success for Facebook. Why aren’t they doing this now? I find the silence to be quite eerie. I cannot imagine that Facebook isn’t listening. So, Facebook, if you are listening, please start a dialogue with the public. Please be transparent if you’re asking us to be. And please start now, not when you’ve got a new set of features ready.

Regardless of how the digerati feel about Facebook, millions of average people are deeply wedded to the site. They won’t leave because the cost/benefit ratio is still in their favor. But that doesn’t mean that they aren’t suffering because of decisions being made about them and for them. What’s at stake now is not whether or not Facebook will become passe, but whether or not Facebook will become evil. I think that we owe it to the users to challenge Facebook to live up to a higher standard, regardless of what we as individuals may gain or lose from their choices. And we owe it to ourselves to make sure that everyone is informed and actively engaged in a discussion about the future of privacy. Zuckerberg is right: “Given that the world is moving towards more sharing of information, making sure that it happens in a bottom-up way, with people inputting their information themselves and having control over how their information interacts with the system, as opposed to a centralized way, through it being tracked in some surveillance system. I think it’s critical for the world.” Now, let’s hold him to it.

Update: Let me be clear… Anyone who wants to leave Facebook is more than welcome to do so. Participation is about choice. But to assume that there will be a mass departure is naive. And to assume that a personal boycott will have a huge impact is also naive. But if it’s not working for you personally, leave. And if you don’t think it’s healthy for your friends to participate, encourage them to leave too. Just don’t expect a mass exodus to fix the problems that we’re facing.

Update: Mark Zuckerberg wrote an op-ed in the Washington Post reiterating their goals and saying that changes will be coming. I wish he would’ve apologized for December or made any allusions to the fact that people were exposed or that they simply can’t turn off all that is now public. It’s not just about simplifying the available controls.

Facebook is a utility; utilities get regulated

From day one, Mark Zuckerberg wanted Facebook to become a social utility. He succeeded. Facebook is now a utility for many. The problem with utilities is that they get regulated.

Yesterday, I ranted about Facebook and “radical transparency.” Lots of people wrote to thank me for saying what I said. And so I looked many of them up. Most were on Facebook. I wrote back to some, asking why they were still on Facebook if they disagreed with where the company was going. The narrative was consistent: they felt as though they needed to be there. For work, for personal reasons, because they got to connect with someone there that they couldn’t connect with elsewhere. Nancy Baym did a phenomenal job of explaining this dynamic in her post on Thursday: “Why, despite myself, I am not leaving Facebook. Yet.”

Every day I look with admiration and envy on my friends who have left. I’ve also watched sadly as several have returned. And I note above all that very few of my friends, who by nature of our professional connections are probably more attuned to these issues than most, have left. I don’t like supporting Facebook at all. But I do.

And here is why: they provide a platform through which I gain real value. I actually like the people I went to school with. I know that even if I write down all their email addresses, we are not going to stay in touch and recreate the community we’ve built on Facebook. I like my colleagues who work elsewhere, and I know that we have mailing lists and Twitter, but I also know that without Facebook I won’t be in touch with their daily lives as I’ve been these last few years. I like the people I’ve met briefly or hope I’ll meet soon, and I know that Facebook remains our best way to keep in touch without the effort we would probably not take of engaging in sustained one-to-one communication.

The private emails I received in response to my query expressed the same sentiment. People felt they needed to stay put, regardless of what Facebook chose to do. Those working at Facebook should be proud: they’ve truly provided a service that people feel is an essential part of their lives, one that they need more than want. That’s the fundamental nature of a utility. They succeeded at their mission.

Throughout Kirkpatrick’s “The Facebook Effect”, Zuckerberg and his comrades are quoted repeatedly as believing that Facebook is different because it’s a social utility. This language is precisely what’s used in the “About Facebook” on Facebook’s Press Room page. Facebook never wanted to be a social network site; it wanted to be a social utility. Thus, it shouldn’t surprise anyone that Facebook functions as a utility.

And yet, people continue to be surprised. Partially, this is Facebook’s fault. They know that people want to hear that they have a “choice” and most people don’t think choice when they think utility. Thus, I wasn’t surprised that Elliot Schrage’s fumbling responses in the NYTimes emphasized choice, not utility: “Joining Facebook is a conscious choice by vast numbers of people who have stepped forward deliberately and intentionally to connect and share… If you’re not comfortable sharing, don’t.”

In my post yesterday, I emphasized that what’s at stake with Facebook today is not about privacy or publicity but informed consent and choice. Facebook speaks of itself as a utility while also telling people they have a choice. But there’s a conflict here. We know this conflict deeply in the United States. When it comes to utilities like water, power, sewage, Internet, etc., I am constantly told that I have a choice. But like hell I’d choose Comcast if I had a choice. Still, I subscribe to Comcast. Begrudgingly. Because the “choice” I have is Internet or no Internet.

I hate all of the utilities in my life. Venomous hatred. And because they’re monopolies, they feel no need to make me appreciate them. Cuz they know that I’m not going to give up water, power, sewage, or the Internet out of spite. Nor will most people give up Facebook, regardless of how much they grow to hate them.

Your gut reaction might be to tell me that Facebook is not a utility. You’re wrong. People’s language reflects that people are depending on Facebook just like they depended on the Internet a decade ago. Facebook may not be at the scale of the Internet (or the Internet at the scale of electricity), but that doesn’t mean that it’s not angling to be a utility or quickly becoming one. Don’t forget: we spent how many years being told that the Internet wasn’t a utility, wasn’t a necessity… now we’re spending what kind of money trying to get universal broadband out there without pissing off the monopolistic beasts because we like to pretend that choice and utility can sit easily together. And because we’re afraid to regulate.

And here’s where we get to the meat of why Facebook being a utility matters. Utilities get regulated. Less in the United States than in any other part of the world. Here, we like to pretend that capitalism works with utilities. We like to “de-regulate” utilities to create “choice” while continuing to threaten regulation when the companies appear too monopolistic. It’s the American Nightmare. But generally speaking, it works, and we survive without our choices and without that much regulation. We can argue about whether or not regulation makes things cheaper or more expensive, but we can’t argue about whether or not regulators are involved with utilities: they are always watching them because they matter to the people.

The problem with Facebook is that it’s becoming an international utility, not one neatly situated in the United States. It’s quite popular in Canada and Europe, two regions that LOVE to regulate their utilities. This might start out being about privacy, but, if we’re not careful, regulation is going to go a lot deeper than that. Even in the States, we’ll see regulation, but it won’t look the same as what we see in Europe and Canada. I find James Grimmelmann’s argument that we think about privacy as product safety to be an intriguing frame. I’d expect to see a whole lot more coming down the line in this regard. And Facebook knows it. Why else would they bring in a former Bush regulator to defend its privacy practices?

Thus far, in the world of privacy, when a company overplays its hand, people flip out, governments threaten regulation, and companies back off. This is not what’s happening with Facebook. Why? Because they know people won’t leave and Facebook doesn’t think that regulators matter. In our public discourse, we keep talking about the former and ignoring the latter. We can talk about alternatives to Facebook until we’re blue in the face and we can point to the handful of people who are leaving as “proof” that Facebook will decline, but that’s because we’re fooling ourselves. If Facebook is a utility – and I strongly believe it is – the handful of people who are building cabins in the woods to get away from the evil utility companies are irrelevant in light of all of the people who will suck up and deal with the utility to live in the city. This is going to come down to regulation, whether we like it or not.

The problem is that we in the tech industry don’t like regulation. Not because we’re evil but because we know that regulation tends to make a mess of things. We like the threat of regulation and we hope that it will keep things at bay without actually requiring stupidity. So somehow, the social norm has been to push as far as possible and then pull back quickly when regulatory threats emerge. Of course, there have been exceptions. And I work for one of them. Two decades ago, Microsoft was as arrogant as they come and they didn’t balk at the threat of regulation. As a result, the company spent years mired in regulatory hell. And being painted as evil. The company still lives with that weight and the guilt wrt the company’s historical hubris is palpable throughout the industry.

I cannot imagine that Facebook wants to be regulated, but I fear that it thinks that it won’t be. There’s cockiness in the air. Personally, I don’t care whether or not Facebook alone gets regulated, but regulation’s impact tends to extend much further than one company. And I worry about what kinds of regulation we’ll see. Don’t get me wrong: I think that regulators will come in with the best of intentions; they often (but not always) do. I just think that what they decide will have unintended consequences that are far more harmful than helpful and this makes me angry at Facebook for playing chicken with them. I’m not a libertarian but I’ve come to respect libertarian fears of government regulation because regulation often does backfire in some of the most frustrating ways. (A few weeks ago, I wrote a letter to be included in the COPPA hearings outlining why the intention behind COPPA was great and the result dreadful.) The difference is that I’m not so against regulation as to not welcome it when people are being screwed. And sadly, I think that we’re getting there. I just wish that Facebook would’ve taken a more responsible path so that we wouldn’t have to deal with what’s coming. And I wish that they’d realize that the people they’re screwing are those who are most vulnerable already, those whose voices they’ll never hear if they don’t make an effort.

When Facebook introduced the News Feed and received a backlash from its users, Zuckerberg’s first blog post was to tell everyone to calm down. When they didn’t, new features were introduced to help them navigate the system. Facebook was willing to talk to its users, to negotiate with them, to make a deal. Perhaps this was because they were all American college students, a population that early Facebook understood. Still, when I saw the backlash emerging this time, I was waiting and watching for an open dialogue to emerge. Instead, we got PR mumblings in the NYTimes telling people they were stupid and blog posts on “Gross National Happiness.” I’m sure that Facebook’s numbers are as high as ever and so they’re convinced that this will blow over, that users will just adjust. I bet they think that this is just American techies screaming up a storm for fun. And while more people are searching to find how to delete their account, most will not. And Facebook rightfully knows that. But what’s next is not about whether or not there’s enough user revolt to make Facebook turn back. There won’t be. What’s next is how this emergent utility gets regulated. Cuz sadly, I doubt that anything else is going to stop them in their tracks. And I think that regulators know that.

Update: I probably should’ve titled this “Facebook is trying to be a utility; utilities get regulated” but I chopped it because that was too long. What’s at stake is not whether or not we can agree that Facebook is a utility, but whether or not regulation will come into play. There’s no doubt that Facebook wants to be a utility, sees itself as a utility. So even if we don’t see them as a utility, the fact that they do matters. As does the fact that some people are using it with that attitude. I’d give up my water company (or Comcast) if a better alternative came along too. When people feel as though they are wedded to something because of its utilitarian value, the company providing it can change but the infrastructure is there for good.  Rather than arguing about the details of what counts as a utility, let’s move past that to think about what it means that regulation is coming.

Facebook and “radical transparency” (a rant)

At SXSW, I decided to talk about privacy because I thought that it would be the most important issue of the year. My prediction turned out to be more accurate than I could have imagined. For the last month, I’ve watched as conversations about privacy went from being the topic of the tech elite to a conversation that’s pervasive. The press coverage is overwhelming – filled with infographics and a concerted effort by journalists to make sense of and communicate what seems to be a moving target. I commend them for doing so.

My SXSW talk used a bunch of different case studies but folks focused on two: Google and Facebook. After my talk, I received numerous emails from folks at Google, including the PM in charge of Buzz. The tenor was consistent, effectively: “we fucked up, we’re trying to fix it, please help us.” What startled me was the radio silence from Facebook, although a close friend of mine told me that Randi Zuckerberg had heard it and effectively responded with a big ole ::gulp:: My SXSW critique concerned their decision in December, an irresponsible move that I felt put users at risk. I wasn’t prepared for how they were going to leverage that data only a few months later.

As most of you know, Facebook has been struggling to explain its privacy-related decisions for the last month while simultaneously dealing with frightening security issues. If you’re not a techie, I’d encourage you to start poking around. The NYTimes is doing an amazing job keeping up with the story, as are TechCrunch, Mashable, and InsideFacebook. The short version… People are cranky. Facebook thinks that it’s just weirdo tech elites like me who are pissed off. They’re standing firm and trying to justify why what they’re doing is good for everyone. Their attitude has triggered the panic button amongst regulators and all sorts of regulators are starting to sniff around. Facebook hired an ex-Bush regulator to manage this. No one is quite sure what is happening but Jason Calacanis thinks that Facebook has overplayed its hand. Meanwhile, security problems mean that even more content has been exposed, including email addresses, IP addresses (your location), and full chat logs. This has only upped the panic amongst those who can imagine worst case scenarios. Like the idea that someone out there is slowly piecing together IP addresses (location) and full names and contact information. A powerful database, and not one that anyone would be too happy to be floating around.

Amidst all of what’s going on, everyone is anxiously awaiting David Kirkpatrick’s soon-to-be-released “The Facebook Effect,” which basically outlines the early days of the company. Throughout the book, Kirkpatrick sheds light on why we’re where we are today without even realizing where we’d be. Consider these two quotes from Zuckerberg:

  • “We always thought people would share more if we didn’t let them do whatever they wanted, because it gave them some order.” – Zuckerberg, 2004
  • “You have one identity… The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly… Having two identities for yourself is an example of a lack of integrity” – Zuckerberg, 2009

In trying to be a neutral reporter, Kirkpatrick doesn’t critically interrogate the language that Zuckerberg or other executives use. At times, he questions them, pointing to how they might make people’s lives challenging. But he undermines his own critiques by accepting Zuckerberg’s premise that the tides they are a turning. For example, he states that “The older you are, the more likely you are to find Facebook’s exposure of personal information intrusive and excessive.” Interestingly, rock solid non-marketing data is about to be released to refute this point. Youth are actually much more concerned about exposure than adults these days. Why? Probably because they get it. And it’s why they’re using fake names and trying to go on the DL (down-low).

With this backdrop in mind, I want to talk about a concept that Kirkpatrick suggests is core to Facebook: “radical transparency.” In short, Kirkpatrick argues that Zuckerberg believes that people will be better off if they make themselves transparent. Not only that, society will be better off. (We’ll ignore the fact that Facebook’s purse strings may be better off too.) My encounters with Zuckerberg lead me to believe that he genuinely believes this, he genuinely believes that society will be better off if people make themselves transparent. And given his trajectory, he probably believes that more and more people want to expose themselves. Silicon Valley is filled with people engaged in self-branding, making a name for themselves by being exhibitionists. It doesn’t surprise me that Scoble wants to expose himself; he’s always the first to engage in a mass collection on social network sites, happy to be more-public-than-thou. Sometimes, too public. But that’s his choice. The problem is that not everyone wants to be along for the ride.

Jeff Jarvis gets at the core issue with his post “Confusing *a* public with *the* public”. As I’ve said time and time again, people do want to engage in public, but not the same public that includes all of you. Jarvis relies on Habermas, but the right way to read this is through the ideas of Michael Warner’s “Publics and Counterpublics”. Facebook was originally a counterpublic, a public that people turned to because they didn’t like the publics that they had access to. What’s happening now is ripping the public that was created to shreds and people’s discomfort stems from that.

What I find most fascinating in all of the discussions of transparency is the lack of transparency by Facebook itself. Sure, it would be nice to see executives use the same privacy settings that they determine are the acceptable defaults. And it would be nice to know what they’re saying when they’re meeting. But that’s not the kind of transparency I mean. I mean transparency in interface design.

A while back, I was talking with a teenage girl about her privacy settings and noticed that she had made lots of content available to friends-of-friends. I asked her if she made her content available to her mother. She responded with, “of course not!” I had noticed that she had listed her aunt as a friend of hers and so I surfed with her to her aunt’s page and pointed out that her mother was a friend of her aunt, thus a friend-of-a-friend. She was horrified. It had never dawned on her that her mother might be included in that grouping.
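The aunt example boils down to simple set arithmetic: a “friends-of-friends” audience is the union of every friend’s own friend list, which is exactly why it includes people you never added. Here is a minimal sketch of that computation; all names and friend lists are hypothetical, and real Facebook audience logic is of course far more involved:

```python
# Hypothetical friend lists: the teen added her aunt, the aunt added mom.
friends = {
    "teen": {"aunt", "classmate"},
    "aunt": {"teen", "mom"},          # aunt is friends with mom
    "classmate": {"teen", "stranger"},
}

def friends_of_friends(person):
    """Everyone within two hops -- the actual 'friends-of-friends' audience."""
    audience = set(friends[person])
    for friend in friends[person]:
        # Each friend contributes their entire friend list to the audience.
        audience |= friends.get(friend, set())
    audience.discard(person)  # you aren't part of your own audience
    return audience

print(sorted(friends_of_friends("teen")))
# → ['aunt', 'classmate', 'mom', 'stranger']
```

Mom and a total stranger both land in the audience even though the teen never friended either of them — the mental model of “people my friends would bring to dinner” and the set-union reality diverge immediately.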

Over and over again, I find that people’s mental model of who can see what doesn’t match up with reality. People think “everyone” includes everyone who searches for them on Facebook. They never imagine that “everyone” includes every third party sucking up data for goddess only knows what purpose. They think that if they lock down everything in the settings that they see, that they’re completely locked down. They don’t get that their friends lists, interests, likes, primary photo, affiliations, and other content is publicly accessible.

If Facebook wanted radical transparency, they could communicate to users every single person and entity who can see their content. They could notify them when the content is accessed by a partner. They could show them everyone who is included in “friends-of-friends” (or at least a number of people). They hide behind lists because people’s abstractions allow them to share more. When people think “friends-of-friends,” they don’t think about all of the types of people that their friends might link to; they think of the people that their friends would bring to a dinner party if they were to host it. When they think of everyone, they think of individual people who might have an interest in them, not third-party services who want to monetize or redistribute their data. Users have no sense of how their data is being used, and Facebook is not radically transparent about what that data is used for. Quite the opposite. Convolution works. It keeps the press out.

The battle that is underway is not a battle over the future of privacy and publicity. It’s a battle over choice and informed consent. It’s unfolding because people are being duped, tricked, coerced, and confused into doing things where they don’t understand the consequences. Facebook keeps saying that it gives users choices, but that is completely unfair. It gives users the illusion of choice and hides the details away from them “for their own good.”

I have no problem with Scoble being as public as he’d like to be. And I do think it’s unfortunate that Facebook never gave him that choice. I’m not that public, but I’m darn close. And I use Twitter and a whole host of other services to be quite visible. The key to addressing this problem is not to ask “public or private?” but to ask how we can make certain people are 1) informed; 2) have the right to choose; and 3) are consenting without being deceived. I’d be a whole lot less pissed off if people had had to opt in in December. Or if they could’ve retained the right to keep their friends lists, affiliations, interests, likes, and other content as private as they were when they first opted into Facebook. Slowly disintegrating the social context without choice isn’t consent; it’s trickery.

What pisses me off the most are the numbers of people who feel trapped. Not because they don’t have another choice. (Technically, they do.) But because they feel like they don’t. They have invested time, energy, and resources into building Facebook into what it is. They don’t trust the service, are concerned about it, and are just hoping the problems will go away. It pains me how many people are living like ostriches. If we don’t look, it doesn’t exist, right?? This isn’t good for society. Forcing people into being exposed isn’t good for society. Outing people isn’t good for society; turning people into mini-celebrities isn’t good for society. It isn’t good for individuals either. The psychological harm can be great. Just think of how many “heroes” have killed themselves following the high levels of publicity they received.

Zuckerberg and gang may think that they know what’s best for society, for individuals, but I violently disagree. I think that they know what’s best for the privileged class. And I’m terrified of the consequences that these moves are having for those who don’t live in the lap of luxury. I say this as someone who is privileged, someone who has profited at every turn by being visible. But also as someone who has seen the costs and pushed through the consequences with a lot of help and support. Being publicly visible isn’t always easy, and it’s not always fun. And I don’t think that anyone should go through what I’ve gone through without making a choice to do it. So I’m angry. Very angry. Angry that some people aren’t being given that choice, angry that they don’t know what’s going on, angry that it’s become OK in my industry to expose people. I think that it’s high time that we take into consideration those whose lives aren’t nearly as privileged as ours, those who aren’t choosing to take the risks that we take, those who can’t afford to. This isn’t about liberals vs. libertarians; it’s about monkeys vs. robots.

if you’re not angry / you’re just stupid / or you don’t care
how else can you react / when you know / something’s so unfair
the men of the hour / can kill half the world in war
make them slaves to a super power / and let them die poor

– Ani DiFranco, “Out of Range”

(Also posted at Blogher)

(Translated to Italian by orangeek)