
Quitting Facebook is pointless; challenging them to do better is not

I’ve been critiquing moves made by Facebook for a long time and I’m pretty used to them being misinterpreted. When I lamented the development of the News Feed, many people believed that I thought that the technology was a failure and that it wouldn’t be popular. This was patently untrue. I was bothered by it precisely because I knew that it would be popular, precisely because people love to gossip and learn about others, often to their own detriment. It was hugely disruptive and, when it launched, users lacked the controls necessary to really manage the situation effectively. Facebook responded with controls and people were able to find a way of engaging with Facebook with the News Feed as a given. But people were harmed in the transition.

Last week, I offered two different critiques of the moves made by Facebook, following up on my SXSW talk. Both have been misinterpreted in fascinating ways. Even news agencies are publishing statements like: “Microsoft wants Facebook to be regulated as a utility.” WTF? Seriously? Le sigh. (For the record, I’m not speaking on behalf of my employer nor do I want regulation; I think that it’s inevitable and I think that we need to contend with it. Oh, and I don’t think that the regulation that we’ll see will at all resemble the ways in which utilities are regulated. I was talking about utilities because that’s how Facebook frames itself. But clearly, most folks missed that.) Misinterpretations are frustrating because they make me feel as though I’m doing a bad job of communicating what I think is important. For this, I apologize to all of you. I will try to do better.

With this backdrop in mind, I want to enumerate six beliefs that I have that I want to flesh out in this post in light of discussions about how “everyone” is leaving Facebook:

  1. I do not believe that people will (or should) leave Facebook because of privacy issues.
  2. I do not believe that the tech elites who are publicly leaving Facebook will affect the company’s numbers; they are unrepresentative and were not central users in the first place.
  3. I do not believe that an alternative will emerge in the next 2-5 years that will “replace” Facebook in any meaningful sense.
  4. I believe that Facebook will get regulated and I would like to see an open discussion of what this means and what form this takes.
  5. I believe that a significant minority of users are at risk because of decisions Facebook has made and I think that those of us who aren’t owe it to those who are to work through these issues.
  6. I believe that Facebook needs to start a public dialogue with users and those who are concerned ASAP (and Elliot Schrage’s Q&A doesn’t count).

As I stated in my last post, I think that Facebook plays a central role in the lives of many and I think that it is unreasonable for anyone to argue that they should “just leave” if they’re not happy. This is like saying that people should just leave their apartments if they’re not happy with their landlord or just leave their spouse because they’re not happy with a decision or just leave their job if they’re not happy with their boss. Life is more complicated than a series of simplified choices and we are always making calculated decisions, balancing costs and benefits. We stay with our jobs, apartments, and spouses even when things get messy because we hope to rectify problems. And those with the most to gain from Facebook are the least likely to leave, even if they also have the most to lose.

In the last few weeks, a handful of well known digerati have proudly announced that they’ve departed from Facebook. Most of these individuals weren’t that engaged in Facebook as users in the first place. I say this as someone who would lose very little (outside of research knowledge) from leaving. I am not a representative user. I barely share on the site for a whole host of personal and professional reasons. (And because I don’t have a life.) None of my friends would miss me if I did leave. In fact, they’d probably be grateful for the disappearance of my tweets. That means that me deciding to leave will have pretty much no impact on the network. This is true for many of the people who I’ve watched depart. At best, they’re content broadcasters. But people have other ways of consuming their broadcasting. So their departure is meaningless. These are not the people that Facebook is worried about losing.

People will not leave Facebook en masse, even if a new site were to emerge. Realistically, if that were enough, they could go to MySpace or Orkut or Friendster or Tribe. But they won’t. And not just because those sites are no longer “cool.” They won’t because they’ve invested in Facebook and they’re still hoping that Facebook will get its act together. Changing services is costly, just like moving apartments or changing jobs or breaking up in general. The deeper the relationship, the harder it is to simply walk away. And the relationship that Facebook has built with many of its users is very very very deep. When transition costs are high, people work hard to change the situation so that they don’t have to transition. This is why people are complaining, this is why they are speaking up. And it’s really important that those in power listen to what it is that people are upset about. The worst thing that those in power can do is ignore what’s going on, waiting for it to go away. This is a bad idea, not because people will walk away, but because they will look to greater authorities of power to push back. This is why Facebook’s failure to address what’s going on invites regulation.

Facebook has gotten quite accustomed to upset users. In “The Facebook Effect,” David Kirkpatrick outlines how Facebook came to expect that every little tweak would set off an internal rebellion. He documented how most of the members of the group “I AUTOMATICALLY HATE THE NEW FACEBOOK HOME PAGE” were employees of Facebook whose frustration with user rebellion was summed up by the group’s description: “I HATE CHANGE AND EVERYTHING ASSOCIATED WITH IT. I WANT EVERYTHING TO REMAIN STATIC THROUGHOUT MY ENTIRE LIFE.” Kirkpatrick quotes Zuckerberg as saying, “The biggest thing is going to be leading the user base through the changes that need to continue to happen… Whenever we roll out any major product there’s some sort of backlash.” Unfortunately, Facebook has become so numb to user complaints that it doesn’t see the different flavors of them any longer.

What’s happening around privacy is not simply user backlash. In fact, users are far less upset about what’s going on than most of us privileged techno-elites. Why? Because even with the New York Times writing article after article, most users have no idea what’s happening. I’m reminded of this every time that I sit down with someone who doesn’t run in my tech circles. And I’m reminded that they care every time I sit down and walk them through their privacy settings. The disconnect between average users and the elite is what makes this situation different, what makes this issue messier. Because the issue comes down to corporate transparency, informed consent, and choice. As long as users believe that their content is private and have no idea how public it is, they won’t take to the streets. If publicity around these issues disappears, that’s to Facebook’s advantage. But it’s not to users’ advantage. Which is precisely why I think that it’s important that the techno-elite and the bloggers and the journalists keep covering this topic. Because it’s important that more people are aware of what’s going on. Unfortunately, of course, we also have to contend with the fact that most people being screwed don’t speak English and have no idea this conversation is even happening. Especially when privacy features are only explained in English.

In documenting Zuckerberg’s attitudes about transparency, Kirkpatrick sheds light on one of the weaknesses of his philosophy: Zuckerberg doesn’t know how to resolve the positive (and in his head inevitable) outcomes of transparency with the possible challenges of surveillance. As is typical in the American tech world, most of the conversation about surveillance centers on the government. But Kirkpatrick highlights another outcome of surveillance with a throwaway example that sends shivers down my spine: “When a father in Saudi Arabia caught his daughter interacting with men on Facebook, he killed her.” This is precisely the kind of unintended consequence that motivates me to speak loudly even though I’m privileged enough to not face these risks. Statistically, death is an unlikely outcome of surveillance. But there are many other kinds of side effects that are more common and also disturbing: losing one’s job, losing one’s health insurance, losing one’s parental rights, losing one’s relationships, etc. Sometimes, these losses will be because visibility makes someone more accountable. But sometimes this will occur because of misinterpretation and/or overreaction. And the examples keep on coming.

I am all in favor of people building what they believe to be alternatives to Facebook. I even invested in Diaspora because I’m curious what will come of that system. But I don’t believe that Diaspora is a Facebook killer. I do believe that there is a potential for Diaspora to do something interesting that will play a different role in the ecosystem and I look forward to seeing what they develop. I’m also curious about the future of peer-to-peer systems in light of the move towards the cloud, but I’m not convinced that decentralization is a panacea to all of our contemporary woes. Realistically, I don’t think that most users around the globe will find a peer-to-peer solution worth the hassle. The cost/benefit analysis isn’t in their favor. I’m also patently afraid that a system like Diaspora will be quickly leveraged for child pornography and other more problematic uses that tend to emerge when there isn’t a centralized control system. But innovation is important and I’m excited that a group of deeply passionate developers are being given a chance to see what they can pull off. And maybe it’ll be even more fabulous than we can possibly imagine, but I’d bet a lot of money that it won’t put a dent into Facebook. Alternatives aren’t the point.

Facebook has embedded itself pretty deeply into the ecosystem, into the hearts and minds of average people. They love the technology, but they’re not necessarily prepared for where the company is taking them. And while I’m all in favor of giving users the choice to embrace the opportunities and potential of being highly visible, of being a part of a transparent society, I’m not OK with throwing them off the boat just to see if they can swim. Fundamentally, my disagreement with Facebook’s approach to these matters is a philosophical one. Do I want to create more empathy, more tolerance in a global era? Of course. But I’m not convinced that sudden exposure to the world at large gets people there and I genuinely fear the possible backlash that can emerge. I’m not convinced that this won’t enhance a type of extremism that is manifesting around the globe as we speak.

Screaming about the end of Facebook is futile. And I think that folks are wasting a lot of energy telling others to quit or boycott to send a message. Doing so will do no such thing. It’ll just make us technophiles look like we’re living on a different planet. Which we are. Instead, I think that we should all be working to help people understand what’s going on. I love using Reclaim Privacy to walk through privacy settings with people. While you’re helping your family and friends understand their settings, talk to them and record their stories. I want to hear average people’s stories, their fears, their passions. I want to hear what privacy means to them and why they care about it. I want to hear about the upside and downside of visibility and the challenges introduced by exposure. And I want folks inside Facebook to listen. Not because this is another user rebellion, but because Facebook’s decisions shape the dynamics of so many people’s lives. And we need to help make those voices heard.

I also want us techno-elites to think hard and deep about the role that regulation may play and what the consequences may be for all of us. In thinking about regulation, always keep Larry Lessig’s arguments in “Code” in mind. Larry argued that there are four points of regulation for all change: the market, the law, social norms, and architecture (or code). Facebook’s argument is that social norms have changed so dramatically that what they’re doing with code aligns with the people (and conveniently the market). I would argue that they’re misreading social norms but there’s no doubt that the market and code work in their favor. This is precisely why I think that law will get involved and I believe that legal regulators don’t share Facebook’s attitudes about social norms. This is not a question of if but a question of when, in what form, and at what cost. And I think that all of us who are living and breathing this space should speak up about how we think this should play out because if we just pretend like it won’t happen, not only are we fooling ourselves, but we’re missing an opportunity to shape the future.

I realize that Elliot Schrage attempted to communicate with the public through his NYTimes responses. And I believe that he failed. But I’m still confused about why Zuckerberg isn’t engaging publicly about these issues. (A letter to Robert Scoble doesn’t count.) In each major shitstorm, we eventually got a blog post from Zuckerberg outlining his views. Why haven’t we received one of those? Why is the company so silent on these matters? In inviting the users to vote on the changes to the Terms of Service, Facebook mapped out the possibility of networked engagement, of inviting passionate users to speak back and actively listening. This was a huge success for Facebook. Why aren’t they doing this now? I find the silence to be quite eerie. I cannot imagine that Facebook isn’t listening. So, Facebook, if you are listening, please start a dialogue with the public. Please be transparent if you’re asking us to be. And please start now, not when you’ve got a new set of features ready.

Regardless of how the digerati feel about Facebook, millions of average people are deeply wedded to the site. They won’t leave because the cost/benefit ratio is still in their favor. But that doesn’t mean that they aren’t suffering because of decisions being made about them and for them. What’s at stake now is not whether or not Facebook will become passe, but whether or not Facebook will become evil. I think that we owe it to the users to challenge Facebook to live up to a higher standard, regardless of what we as individuals may gain or lose from their choices. And we owe it to ourselves to make sure that everyone is informed and actively engaged in a discussion about the future of privacy. Zuckerberg is right: “Given that the world is moving towards more sharing of information, making sure that it happens in a bottom-up way, with people inputting their information themselves and having control over how their information interacts with the system, as opposed to a centralized way, through it being tracked in some surveillance system. I think it’s critical for the world.” Now, let’s hold him to it.

Update: Let me be clear… Anyone who wants to leave Facebook is more than welcome to do so. Participation is about choice. But to assume that there will be a mass departure is naive. And to assume that a personal boycott will have a huge impact is also naive. But if it’s not working for you personally, leave. And if you don’t think it’s healthy for your friends to participate, encourage them to leave too. Just don’t expect a mass exodus to fix the problems that we’re facing.

Update: Mark Zuckerberg wrote an op-ed in the Washington Post reiterating their goals and saying that changes will be coming. I wish he would’ve apologized for December or made any allusions to the fact that people were exposed or that they simply can’t turn off all that is now public. It’s not just about simplifying the available controls.


Facebook is a utility; utilities get regulated

From day one, Mark Zuckerberg wanted Facebook to become a social utility. He succeeded. Facebook is now a utility for many. The problem with utilities is that they get regulated.

Yesterday, I ranted about Facebook and “radical transparency.” Lots of people wrote to thank me for saying what I said. And so I looked many of them up. Most were on Facebook. I wrote back to some, asking why they were still on Facebook if they disagreed with where the company was going. The narrative was consistent: they felt as though they needed to be there. For work, for personal reasons, because they got to connect with someone there that they couldn’t connect with elsewhere. Nancy Baym did a phenomenal job of explaining this dynamic in her post on Thursday: “Why, despite myself, I am not leaving Facebook. Yet.”

Every day, I look with admiration and envy on my friends who have left. I’ve also watched sadly as several have returned. And I note above all that very few of my friends, who by nature of our professional connections are probably more attuned to these issues than most, have left. I don’t like supporting Facebook at all. But I do.

And here is why: they provide a platform through which I gain real value. I actually like the people I went to school with. I know that even if I write down all their email addresses, we are not going to stay in touch and recapture the community we’ve recreated on Facebook. I like my colleagues who work elsewhere, and I know that we have mailing lists and Twitter, but I also know that without Facebook I won’t be in touch with their daily lives as I’ve been these last few years. I like the people I’ve met briefly or hope I’ll meet soon, and I know that Facebook remains our best way to keep in touch without the effort we would probably not take of engaging in sustained one-to-one communication.

The emails that I received privately in response to my query expressed the same sentiment. People felt they needed to stay put, regardless of what Facebook chose to do. Those working at Facebook should be proud: they’ve truly provided a service that people feel is an essential part of their lives, one that they need more than want. That’s the fundamental nature of a utility. They succeeded at their mission.

Throughout Kirkpatrick’s “The Facebook Effect”, Zuckerberg and his comrades are quoted repeatedly as believing that Facebook is different because it’s a social utility. This language is precisely what’s used in the “About Facebook” on Facebook’s Press Room page. Facebook never wanted to be a social network site; it wanted to be a social utility. Thus, it shouldn’t surprise anyone that Facebook functions as a utility.

And yet, people continue to be surprised. Partially, this is Facebook’s fault. They know that people want to hear that they have a “choice” and most people don’t think choice when they think utility. Thus, I wasn’t surprised that Elliot Schrage’s fumbling responses in the NYTimes emphasized choice, not utility: “Joining Facebook is a conscious choice by vast numbers of people who have stepped forward deliberately and intentionally to connect and share… If you’re not comfortable sharing, don’t.”

In my post yesterday, I emphasized that what’s at stake with Facebook today is not about privacy or publicity but informed consent and choice. Facebook speaks of itself as a utility while also telling people they have a choice. But there’s a conflict here. We know this conflict deeply in the United States. When it comes to utilities like water, power, sewage, Internet, etc., I am constantly told that I have a choice. But like hell I’d choose Comcast if I had a choice. Still, I subscribe to Comcast. Begrudgingly. Because the “choice” I have is Internet or no Internet.

I hate all of the utilities in my life. Venomous hatred. And because they’re monopolies, they feel no need to make me appreciate them. Cuz they know that I’m not going to give up water, power, sewage, or the Internet out of spite. Nor will most people give up Facebook, regardless of how much they grow to hate them.

Your gut reaction might be to tell me that Facebook is not a utility. You’re wrong. People’s language reflects that people are depending on Facebook just like they depended on the Internet a decade ago. Facebook may not be at the scale of the Internet (or the Internet at the scale of electricity), but that doesn’t mean that it’s not angling to be a utility or quickly becoming one. Don’t forget: we spent how many years being told that the Internet wasn’t a utility, wasn’t a necessity… now we’re spending what kind of money trying to get universal broadband out there without pissing off the monopolistic beasts because we like to pretend that choice and utility can sit easily together. And because we’re afraid to regulate.

And here’s where we get to the meat of why Facebook being a utility matters. Utilities get regulated. Less in the United States than in any other part of the world. Here, we like to pretend that capitalism works with utilities. We like to “de-regulate” utilities to create “choice” while continuing to threaten regulation when the companies appear too monopolistic. It’s the American Nightmare. But generally speaking, it works, and we survive without our choices and without that much regulation. We can argue about whether or not regulation makes things cheaper or more expensive, but we can’t argue about whether or not regulators are involved with utilities: they are always watching them because they matter to the people.

The problem with Facebook is that it’s becoming an international utility, not one neatly situated in the United States. It’s quite popular in Canada and Europe, two regions that LOVE to regulate their utilities. This might start out being about privacy, but, if we’re not careful, regulation is going to go a lot deeper than that. Even in the States, we’ll see regulation, but it won’t look the same as what we see in Europe and Canada. I find James Grimmelmann’s argument that we think about privacy as product safety to be an intriguing frame. I’d expect to see a whole lot more coming down the line in this regard. And Facebook knows it. Why else would they bring in a former Bush regulator to defend its privacy practices?

Thus far, in the world of privacy, when a company oversteps its bounds, people flip out, governments threaten regulation, and companies back off. This is not what’s happening with Facebook. Why? Because they know people won’t leave and Facebook doesn’t think that regulators matter. In our public discourse, we keep talking about the former and ignoring the latter. We can talk about alternatives to Facebook until we’re blue in the face and we can point to the handful of people who are leaving as “proof” that Facebook will decline, but that’s because we’re fooling ourselves. If Facebook is a utility – and I strongly believe it is – the handful of people who are building cabins in the woods to get away from the evil utility companies are irrelevant in light of all of the people who will suck up and deal with the utility to live in the city. This is going to come down to regulation, whether we like it or not.

The problem is that we in the tech industry don’t like regulation. Not because we’re evil but because we know that regulation tends to make a mess of things. We like the threat of regulation and we hope that it will keep things at bay without actually requiring stupidity. So somehow, the social norm has been to push as far as possible and then pull back quickly when regulatory threats emerge. Of course, there have been exceptions. And I work for one of them. Two decades ago, Microsoft was as arrogant as they come and they didn’t balk at the threat of regulation. As a result, the company spent years mired in regulatory hell. And being painted as evil. The company still lives with that weight, and the guilt with respect to the company’s historical hubris is palpable throughout the industry.

I cannot imagine that Facebook wants to be regulated, but I fear that it thinks that it won’t be. There’s cockiness in the air. Personally, I don’t care whether or not Facebook alone gets regulated, but regulation’s impact tends to extend much further than one company. And I worry about what kinds of regulation we’ll see. Don’t get me wrong: I think that regulators will come in with the best of intentions; they often (but not always) do. I just think that what they decide will have unintended consequences that are far more harmful than helpful and this makes me angry at Facebook for playing chicken with them. I’m not a libertarian but I’ve come to respect libertarian fears of government regulation because regulation often does backfire in some of the most frustrating ways. (A few weeks ago, I wrote a letter to be included in the COPPA hearings outlining why the intention behind COPPA was great and the result dreadful.) The difference is that I’m not so against regulation as to not welcome it when people are being screwed. And sadly, I think that we’re getting there. I just wish that Facebook would’ve taken a more responsible path so that we wouldn’t have to deal with what’s coming. And I wish that they’d realize that the people they’re screwing are those who are most vulnerable already, those whose voices they’ll never hear if they don’t make an effort.

When Facebook introduced the News Feed and received a backlash from its users, Zuckerberg’s first blog post was to tell everyone to calm down. When they didn’t, new features were introduced to help them navigate the system. Facebook was willing to talk to its users, to negotiate with them, to make a deal. Perhaps this was because they were all American college students, a population that early Facebook understood. Still, when I saw the backlash emerging this time, I was waiting and watching for an open dialogue to emerge. Instead, we got PR mumblings in the NYTimes telling people they were stupid and blog posts on “Gross National Happiness.” I’m sure that Facebook’s numbers are as high as ever and so they’re convinced that this will blow over, that users will just adjust. I bet they think that this is just American techies screaming up a storm for fun. And while more people are searching to find how to delete their account, most will not. And Facebook rightfully knows that. But what’s next is not about whether or not there’s enough user revolt to make Facebook turn back. There won’t be. What’s next is how this emergent utility gets regulated. Cuz sadly, I doubt that anything else is going to stop them in their tracks. And I think that regulators know that.

Update: I probably should’ve titled this “Facebook is trying to be a utility; utilities get regulated” but I chopped it because that was too long. What’s at stake is not whether or not we can agree that Facebook is a utility, but whether or not regulation will come into play. There’s no doubt that Facebook wants to be a utility, sees itself as a utility. So even if we don’t see them as a utility, the fact that they do matters. As does the fact that some people are using it with that attitude. I’d give up my water company (or Comcast) if a better alternative came along too. When people feel as though they are wedded to something because of its utilitarian value, the company providing it can change but the infrastructure is there for good.  Rather than arguing about the details of what counts as a utility, let’s move past that to think about what it means that regulation is coming.

Facebook and “radical transparency” (a rant)

At SXSW, I decided to talk about privacy because I thought that it would be the most important issue of the year. I was more accurate than my wildest dreams. For the last month, I’ve watched as conversations about privacy went from being the topic of the tech elite to a conversation that’s pervasive. The press coverage is overwhelming – filled with infographics and a concerted effort by journalists to make sense of and communicate what seems to be a moving target. I commend them for doing so.

My SXSW talk used a bunch of different case studies but folks focused on two: Google and Facebook. After my talk, I received numerous emails from folks at Google, including the PM in charge of Buzz. The tenor was consistent, effectively: “we fucked up, we’re trying to fix it, please help us.” What startled me was the radio silence from Facebook, although a close friend of mine told me that Randi Zuckerberg had heard it and effectively responded with a big ole ::gulp:: My SXSW critique concerned their decision in December, an irresponsible move that I felt put users at risk. I wasn’t prepared for how they were going to leverage that data only a few months later.

As most of you know, Facebook has been struggling to explain its privacy-related decisions for the last month while simultaneously dealing with frightening security issues. If you’re not a techie, I’d encourage you to start poking around. The NYTimes is doing an amazing job keeping up with the story, as is TechCrunch, Mashable, and InsideFacebook. The short version… People are cranky. Facebook thinks that it’s just weirdo tech elites like me who are pissed off. They’re standing firm and trying to justify why what they’re doing is good for everyone. Their attitude has triggered the panic button amongst regulators and all sorts of regulators are starting to sniff around. Facebook hired an ex-Bush regulator to manage this. No one is quite sure what is happening but Jason Calacanis thinks that Facebook has overplayed its hand. Meanwhile, security problems mean that even more content has been exposed, including email addresses, IP addresses (your location), and full chat logs. This has only upped the panic amongst those who can imagine worst case scenarios. Like the idea that someone out there is slowly piecing together IP addresses (location) and full names and contact information. A powerful database, and not one that anyone would be too happy to be floating around.

Amidst all of what’s going on, everyone is anxiously awaiting David Kirkpatrick’s soon-to-be-released “The Facebook Effect,” which basically outlines the early days of the company. Throughout the book, Kirkpatrick sheds light on why we’re where we are today, even though few of us realized where we’d end up. Consider these two quotes from Zuckerberg:

  • “We always thought people would share more if we didn’t let them do whatever they wanted, because it gave them some order.” – Zuckerberg, 2004
  • “You have one identity… The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly… Having two identities for yourself is an example of a lack of integrity” – Zuckerberg, 2009

In trying to be a neutral reporter, Kirkpatrick doesn’t critically interrogate the language that Zuckerberg or other executives use. At times, he questions them, pointing to how they might make people’s lives challenging. But he undermines his own critiques by accepting Zuckerberg’s premise that the tides they are a turning. For example, he states that “The older you are, the more likely you are to find Facebook’s exposure of personal information intrusive and excessive.” Interestingly, rock solid non-marketing data is about to be released to refute this point. Youth are actually much more concerned about exposure than adults these days. Why? Probably because they get it. And it’s why they’re using fake names and trying to go on the DL (down-low).

With this backdrop in mind, I want to talk about a concept that Kirkpatrick suggests is core to Facebook: “radical transparency.” In short, Kirkpatrick argues that Zuckerberg believes that people will be better off if they make themselves transparent. Not only that, society will be better off. (We’ll ignore the fact that Facebook’s purse strings may be better off too.) My encounters with Zuckerberg lead me to believe that he genuinely believes this, he genuinely believes that society will be better off if people make themselves transparent. And given his trajectory, he probably believes that more and more people want to expose themselves. Silicon Valley is filled with people engaged in self-branding, making a name for themselves by being exhibitionists. It doesn’t surprise me that Scoble wants to expose himself; he’s always the first to engage in a mass collection on social network sites, happy to be more-public-than-thou. Sometimes, too public. But that’s his choice. The problem is that not everyone wants to be along for the ride.

Jeff Jarvis gets at the core issue with his post “Confusing *a* public with *the* public”. As I’ve said time and time again, people do want to engage in public, but not the same public that includes all of you. Jarvis relies on Habermas, but the right way to read this is through the ideas of Michael Warner’s “Publics and Counterpublics”. Facebook was originally a counterpublic, a public that people turned to because they didn’t like the publics that they had access to. What’s happening now is ripping the public that was created to shreds and people’s discomfort stems from that.

What I find most fascinating in all of the discussions of transparency is the lack of transparency by Facebook itself. Sure, it would be nice to see executives use the same privacy settings that they determine are the acceptable defaults. And it would be nice to know what they’re saying when they’re meeting. But that’s not the kind of transparency I mean. I mean transparency in interface design.

A while back, I was talking with a teenage girl about her privacy settings and noticed that she had made lots of content available to friends-of-friends. I asked her if she made her content available to her mother. She responded with, “of course not!” I had noticed that she had listed her aunt as a friend of hers and so I surfed with her to her aunt’s page and pointed out that her mother was a friend of her aunt, thus a friend-of-a-friend. She was horrified. It had never dawned on her that her mother might be included in that grouping.

Over and over again, I find that people’s mental model of who can see what doesn’t match up with reality. People think “everyone” includes everyone who searches for them on Facebook. They never imagine that “everyone” includes every third party sucking up data for goddess only knows what purpose. They think that if they lock down everything in the settings that they see, that they’re completely locked down. They don’t get that their friends lists, interests, likes, primary photo, affiliations, and other content is publicly accessible.

If Facebook wanted radical transparency, they could communicate to users every single person and entity who can see their content. They could notify them when the content is accessed by a partner. They could show them who all is included in “friends-of-friends” (or at least a number of people). They hide behind lists because people’s abstractions allow them to share more. When people think “friends-of-friends” they don’t think about all of the types of people that their friends might link to; they think of the people that their friends would bring to a dinner party if they were to host it. When they think of everyone, they think of individual people who might have an interest in them, not 3rd party services who want to monetize or redistribute their data. Users have no sense of how their data is being used and Facebook is not radically transparent about what that data is used for. Quite the opposite. Convolution works. It keeps the press out.

The battle that is underway is not a battle over the future of privacy and publicity. It’s a battle over choice and informed consent. It’s unfolding because people are being duped, tricked, coerced, and confused into doing things where they don’t understand the consequences. Facebook keeps saying that it gives users choices, but that is completely unfair. It gives users the illusion of choice and hides the details away from them “for their own good.”

I have no problem with Scoble being as public as he’d like to be. And I do think it’s unfortunate that Facebook never gave him that choice. I’m not that public, but I’m darn close. And I use Twitter and a whole host of other services to be quite visible. The key to addressing this problem is not to say “public or private?” but to ask how we can make certain that people 1) are informed; 2) have the right to choose; and 3) are consenting without being deceived. I’d be a whole lot less pissed off if people had to opt-in in December. Or if they could’ve retained the right to keep their friends lists, affiliations, interests, likes, and other content as private as they had when they first opted into Facebook. Slowly disintegrating the social context without choice isn’t consent; it’s trickery.

What pisses me off the most are the numbers of people who feel trapped. Not because they don’t have another choice. (Technically, they do.) But because they feel like they don’t. They have invested time, energy, resources, into building Facebook into what it is. They don’t trust the service, are concerned about it, and are just hoping the problems will go away. It pains me how many people are living like ostriches. If we don’t look, it doesn’t exist, right?? This isn’t good for society. Forcing people into being exposed isn’t good for society. Outing people isn’t good for society. Turning people into mini-celebrities isn’t good for society. It isn’t good for individuals either. The psychological harm can be great. Just think of how many “heroes” have killed themselves following the high levels of publicity they received.

Zuckerberg and gang may think that they know what’s best for society, for individuals, but I violently disagree. I think that they know what’s best for the privileged class. And I’m terrified of the consequences that these moves are having for those who don’t live in the lap of luxury. I say this as someone who is privileged, someone who has profited at every turn by being visible. But also as someone who has seen the costs and pushed through the consequences with a lot of help and support. Being publicly visible isn’t always easy, it’s not always fun. And I don’t think that anyone should go through what I’ve gone through without making a choice to do it. So I’m angry. Very angry. Angry that some people aren’t being given that choice, angry that they don’t know what’s going on, angry that it’s become OK in my industry to expose people. I think that it’s high time that we take into consideration those whose lives aren’t nearly as privileged as ours, those who aren’t choosing to take the risks that we take, those who can’t afford to. This isn’t about liberals vs. libertarians; it’s about monkeys vs. robots.

if you’re not angry / you’re just stupid / or you don’t care
how else can you react / when you know / something’s so unfair
the men of the hour / can kill half the world in war
make them slaves to a super power / and let them die poor

– Ani Difranco, Out of Range

(Also posted at Blogher)

(Translated to Italian by orangeek)

Facebook’s move ain’t about changes in privacy norms

When I learned that Mark Zuckerberg effectively argued that ‘the age of privacy is over’ (read: ReadWriteWeb), I wanted to scream. Actually, I did. And still am. The logic goes something like this:

  • People I knew didn’t used to like to be public.
  • Now “everyone” is being public.
  • Ergo, privacy is dead.

This isn’t new. This is the exact same logic that made me want to scream a decade ago when folks used David Brin to justify a transparent society. Privacy is dead, get over it. Right? Wrong!

Privacy isn’t a technological binary that you turn off and on. Privacy is about having control of a situation. It’s about controlling what information flows where and adjusting measures of trust when things flow in unexpected ways. It’s about creating certainty so that we can act appropriately. People still care about privacy because they care about control. Sure, many teens repeatedly tell me “public by default, private when necessary” but this doesn’t suggest that privacy is declining; it suggests that publicity has value and, more importantly, that folks are very conscious about when something is private and want it to remain so. When the default is private, you have to think about making something public. When the default is public, you become very aware of privacy. And thus, I would suspect, people are more conscious of privacy now than ever. Because not everyone wants to share everything with everyone else all the time.

Let’s take this scenario for a moment. Bob trusts Alice. Bob tells Alice something that he doesn’t want anyone else to know and he tells her not to tell anyone. Alice tells everyone at school because she believes she can gain social stature from it. Bob is hurt and embarrassed. His trust in Alice diminishes. Bob now has two choices. He can break up with Alice, tell the world that Alice is evil, and be perpetually horribly hurt. Or he can take what he learned and manipulate Alice. Next time something bugs him, he’ll tell Alice precisely because he wants everyone to know. And if he wants to guarantee that it’ll spread, he’ll tell her not to tell anyone.

Facebook isn’t in the business of protecting Bob. Facebook is in the business of becoming Alice. Facebook is perfectly content to break Bob’s trust because it knows that Bob can’t totally run away from it. They’re still stuck in the same school together. But, more importantly, Facebook *WANTS* Bob to twist Facebook around and tell it stuff that it’ll spread to everyone. And it’s fine if Bob stops telling Facebook the most intimate stuff, as long as Bob keeps telling Facebook stuff that it can use to gain social stature.

Why? No one makes money off of creating private communities in an era of “free.” It’s in Facebook’s economic interest to force people into being public, even if a few people break up with Facebook in the process. Of course, it’s in Facebook’s interest to maintain some semblance of trust, some appearance of being a trustworthy enterprise. I mean, if they were total bastards, they would’ve just turned everyone’s content public automatically without asking. Instead, they asked in a way that no one would ever figure out what’s going on and voila, lots of folks are producing content that is more public than they even realize. Maybe then they’ll get used to it and accept it, right? Worked with the newsfeed, right? Of course, some legal folks got in the way and now they can’t be that forceful about making people public but, guess what, I can see a lot of people’s content out there who I’m pretty certain don’t think that I can.

Public-ness has always been a privilege. For a long time, only a chosen few got to be public figures. Now we’ve changed the equation and anyone can theoretically be public, can theoretically be seen by millions. So it mustn’t be a privilege anymore, eh? Not quite. There are still huge social costs to being public, social costs that geeks in Silicon Valley don’t have to account for. Not everyone gets to show up to work whenever they feel like it wearing whatever they’d like and expect a phatty paycheck. Not everyone has the opportunity to be whoever they want in public and demand that everyone else just cope. I know there are lots of folks out there who think that we should force everyone into the public so that we can create a culture where that IS the norm. Not only do I think that this is unreasonable, but I don’t think that this is truly what we want. The same Silicon Valley tycoons who want to push everyone into the public don’t want their kids to know that their teachers are sexual beings, even when their sexuality is as vanilla as it gets. Should we even begin to talk about the marginalized populations out there?

Recently, I gave a talk on the complications of visibility through social media. Power is critical in thinking through these issues. The privileged folks don’t have to worry so much about people who hold power over them observing them online. That’s the very definition of privilege. But most everyone else does. And forcing people into the public eye doesn’t dismantle the structures of privilege, the structures of power. What pisses me off is that it reinforces them. The privileged get more privileged, gaining from being exposed. And those struggling to keep their lives together are forced to create walls that are constantly torn down around them. The teacher, the abused woman, the poor kid living in the ghetto and trying to get out. How do we take them into consideration when we build systems that expose people?

Don’t get me wrong – folks have the right to enter the public stage. As long as we realize that this ain’t always pretty. I will never forget the teen girl who thought that her only chance out was to put up mostly naked photos online in the hopes that some talent agency would find her. All I could think of was the pimp who would.

There isn’t some radical shift in norms taking place. What’s changing is the opportunity to be public and the potential gain from doing so. Reality TV anyone? People are willing to put themselves out there when they can gain from it. But this doesn’t mean that everyone suddenly wants to be always in public. And it doesn’t mean that folks who live their lives in public don’t value privacy. The best way to maintain privacy as a public figure is to give folks the impression that everything about you is in public.

If we’re building a public stage, we need to give people the ability to protect themselves, the ability to face the consequences honestly. We cannot hide behind rhetoric of how everyone is public just because everyone we know in our privileged circles is walking confidently into the public sphere and assuming no risk. And we can’t justify our decisions as being simply about changing norms when the economic incentives are all around. I’m with Marshall on this one: Facebook’s decision is an economic one, not a social norms one. And that scares the bejesus out of me.

People care deeply about privacy, especially those who are most at risk of the consequences of losing it. Let us not forget about them. It kills me when the bottom line justifies social oppression. Is that really what the social media industry is about?


Read also:

is Facebook for old people?

In Atlanta, I met a shy quiet 14-year-old girl that I’ll call Kaitlyn. She wasn’t particularly interested in talking to me, but she answered my questions diligently. She said that she was on both MySpace and Facebook, but quickly started talking about MySpace as the place where she gathered with her friends. At some point, I asked her if her friends also gathered on Facebook and her face took on a combination of puzzlement and horror before she exclaimed, “Facebook is for old people!” Of course, Kaitlyn still uses Facebook to communicate with her mother, aunt, cousins in Kentucky, and other family members.

Cross-town, I met up with Connor, a well-spoken 17-year-old who is more than comfortable in sharing his opinions with me. His manner of speaking and attitude means that he would’ve fit into Eckert’s “jock” category even though he plays no sport. In fact, Connor is more interested in gadgetry (Macs to be precise), but that no longer has the same geek ring as it once did. Connor tells me about how Facebook is the new thing that everyone is using and that, while he prefers MySpace, he now primarily logs into Facebook. His girlfriend deleted her MySpace profile and most of his friends now spend their time on Facebook. In fact, he can’t think of anyone at school who still actively uses MySpace. Connor is also aware of the presence of adults on Facebook. He messages with his mother and his youth pastor on Facebook and he waxes eloquent about how he thinks that Facebook is just as popular among adults as it is among teens. He believes that the reason that people switched to Facebook was because it was more “mature.”

These two narratives reflect different views about the salience of age in social network site participation. At one level, we can simply read Kaitlyn as rebellious, anti-authoritarian. Yet, that doesn’t quite work. Kaitlyn is not rebelling against her parents or teachers; she simply doesn’t see why interacting with them alongside her friends would make any sense whatsoever. She sees her world as starkly age segregated and she sees this as completely normal. Connor, on the other hand, sees the integration of adults and peers as a natural part of growing up. The difference in their ages is part of the story – Connor is two grades ahead of Kaitlyn.

Yet, there’s another important factor here. These teens come from very different demographics. Both teenagers are white and live in the deep south, but they are from different socioeconomic backgrounds and their public schools have quite different characters. Kaitlyn’s family income is near the median of Atlanta while Connor comes from a family that is better-off. Both have had many different opportunities afforded to them by loving and deeply involved parents. The biggest differences in their lives stem from their friend groups and the schools that they attend.

Connor was lamenting the presence of filters in his school (coupled with the sign in the computer lab that warned of punishment if anyone was caught on MySpace). I asked him why his school was strict and he responded by telling me that it was because they were the best school and they had standards. I asked him what made it the best school and he first started by saying that it was because they were strict and kept people in line, but then reversed course. He told me that in Atlanta, most schools are 60% or more black but his school was only 30% black. And then he noted that this was changing, almost with a sense of sadness. Kaitlyn, on the other hand, was proud of the fact that her school was very racially diverse. She did complain that it was big, so big in fact that they had created separate “schools” (think: Harry Potter) and that she was in the school that was primarily for honors kids but that this meant that she didn’t see all of her friends all the time. But she valued the different types of people who attended. These differences are reflected in their friend groups – Connor’s friends are almost entirely white and well-off while at least half of Kaitlyn’s friends are black and most of her friends are neither well-off nor poor.

Both Kaitlyn and Connor follow the crowd when it comes to social media and their instincts reflect more than just their own beliefs; they reflect what is normative among their cohort.

So going back to the question of age and maturity – why do these dynamics of race and socioeconomic factors matter? One argument made about the differences between teens from wealthy and poor environments is that wealthy teens are much more likely to integrate with adults than teens from poorer backgrounds. (There are obviously exceptions on all sides.) Now, Connor is not exceedingly wealthy and Kaitlyn is not poor, but I can’t help but wonder how much of what they’re reflecting is part of that more general trend.

Will Kaitlyn begin to embrace adults alongside her peers in a few years? Perhaps, but I doubt it. Might their differences be simply a personality thing? Perhaps, but I saw these dynamics occur across many other pairings of teens with similar differences and similarities.

Regardless of whether or not this factor explains the differences between these teens, I can’t help but wonder the significance of teens’ willingness to interact with known adults on social network sites. There’s nothing worse than demanding that teens accept adults in their peer space, but there’s a lot to be said for teens who embrace adults there, especially non-custodial adults like youth pastors and “cool” teachers. I strongly believe that the healthiest environment we can create online is one where teens and trusted adults interact seamlessly. To the degree that this is not modeled elsewhere in society, I worry.

“Facebook and academic performance: Reconciling a media sensation with data”

In mid-April, journalists heard about a student poster at the American Educational Research Association’s annual meeting called “A Description of Facebook Use and Academic Performance Among Undergraduate and Graduate Students.” The poster suggested that Facebook use might be related to lower academic achievement in college and graduate school. As the media picked this up (most likely without reading more than the abstract), a new story emerged: Facebook is the cause of poor grades in school. Unhappy with what was panning out, Eszter Hargittai penned a blog post at Crooked Timber to critique the situation: “ZOMG! Facebook use and student grades.”

Move forward a few weeks… Josh Pasek, eian more, and Eszter Hargittai just published an article at First Monday on this issue: “Facebook and academic performance: Reconciling a media sensation with data.” In this article, they examine three different datasets that contradict the claims made by the AERA poster and concluded that the AERA findings could not be reproduced.

Indeed, if anything, Facebook use is more common among individuals with higher grades. We also examined how changes in academic performance in the nationally representative sample related to Facebook use and found that Facebook users were no different from non-users.

The samples used in this First Monday article include a large sample of undergraduates at a diverse undergraduate institution, a nationally representative cross-sectional sample of American youth, and a longitudinal panel of American youth. There are also scholars elsewhere that have data that contradict the AERA poster’s claims. Quoting from an email from Sam Gosling (a professor of psych at UT-Austin):

I teach a big intro psych class every year and my co-teacher and I always do a bunch of surveys, questionnaires, etc. and ask the class various questions….in 2007 we asked the class how often they check FB…the options were “never” “less than once a week” “once a day” “2-5 times a day” and “6 or more times a day”….I knew we had that so I ran a quick correlation between that variable and the overall class score….the correlation was .12, which was not statistically significant, but is in the direction of showing the people who check their FB more often got higher grades…note that was computed over only 149 of the students…I probably do have data on a larger number but that was what matched up in my hasty data merge to see what we’d find.
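For readers who haven’t run one, a quick correlation like the one Gosling describes is a one-liner to compute. Here’s a sketch in Python with made-up numbers (the actual survey data isn’t public, so both lists below are purely illustrative):

```python
import math

def pearson(xs, ys):
    """Pearson's r between two equal-length lists of numbers."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical coding of the survey options:
# 0 = "never" ... 4 = "6 or more times a day"; scores out of 100.
checks = [0, 1, 2, 2, 3, 4, 1, 3]
scores = [72, 75, 80, 78, 85, 88, 70, 83]
print(round(pearson(checks, scores), 2))
```

A positive r (like Gosling’s .12) just means the two move together; whether it’s statistically significant depends on the sample size, which is why he flags that it was computed over only 149 students.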

Given the way that these things typically turn out, I doubt that many journalists will be clamoring to scream, “We were wrong! Facebook doesn’t cause bad grades!” This is a sad reality of media sensationalism. Unfortunately for all of us, when scholars (or students) disseminate findings based on poor methodology that reinforce myths that the media wants to propagate, they get picked up even if they are patently untrue and can be disproved through multiple alternative data sets. Even though I doubt this article will make it into mainstream media, I hope that some of you will take the time to make it clear to those around you that the media coverage of this story was patently ridiculous and unfounded. Or at least start by reading the article: “Facebook and academic performance: Reconciling a media sensation with data.”

Note: The author of the AERA poster, Aryn Karpinski, also published a commentary in First Monday this month: “A response to reconciling a media sensation with data” where she makes it clear that her study was exploratory and that she wanted to place it at AERA to start a conversation with scholars, not to attract media en masse. She then continues on to critique the critique of her work.

using Facebook while ill

Yesterday, I received an email about one person’s Facebook usage that I felt the urge to share:

A little over 6 months ago, my stepmom was diagnosed with ovarian cancer. She is doing alright now, but during her chemotherapy she was isolated from friends and family due to a compromised immune system. She could still see people, but had to keep human interactions to a minimum. During that time, Facebook became this way for her to communicate and interact with the world. Being able to see pictures of friends and family and receiving comments would brighten her day. It was really amazing how she was able to adopt this technology temporarily and how valuable it became to her. As her life has returned to normal, she has had less time for Facebook. Originally, one of her friends had helped her create the profile, but it wasn’t part of her normal life. So now that things are more “normal”, she has talked about how it is hard to maintain her Facebook relationships.

In following up with the son, he shared an additional element with me that is also important: “Even though she is older, she has friends that are college age that she knew through her religious activities. So most of the people she was talking to were of college age. But as the technology becomes more pervasive among older generations, I could totally see being able to communicate with a broader range of friends.”

What I find so compelling about this account is that it is a reminder that in-person encounters are not always possible or ideal. Geography isn’t the only limiting factor. I’m always intrigued to hear stories of people with disabilities using the Internet to build connections that were otherwise impossible for them. Likewise, it’s astonishing the role that the Internet plays in helping people who are ill.

I’m also reminded of all of the awkwardness that occurs when illness gets in the way of friendship and the role that technology can play. In this case, the woman is unable to see her friends frequently. But there’s another layer here. When someone’s sick, the topic is always hanging in the air. In some cases, it’s always the topic of conversation. In others, it’s a difficult subject to broach. Back when I was studying blogging, I spoke with an HIV+ man who told me that he started blogging so he could let his friends know about his health. He had found that there was no comfortable way for them to ask in social settings. “Can you pass the ketchup? Oh, and how are your T cells?” didn’t quite work. Likewise, there was no good way for him to bring it up without creating awkward moments. So he decided to anonymously blog about his illness. His friends could get a sense of how he was doing and he could share it and everyone could look when it was most appropriate for them and their in-person interactions could have a more sane cadence. One huge challenge in being sick is figuring out how to participate “normally” in social settings. Mediated interactions can often be quite valuable in this regard.

There are many other important nuggets in this account. Technology’s value is often dependent on where one’s at in their life. Inter-generational relationships can be enhanced through these tools. Social awareness can be tremendously fulfilling (and should not be seen as purely vacuous). I don’t want to go into a proper analysis here, but hopefully this story makes you think.

Anyhow, I like being reminded of how these tools fit into people’s lives in different ways and I thought maybe you would too. Oh, and if you have a story of your own to share, I’m all ears.

Putting Privacy Settings in the Context of Use (in Facebook and elsewhere)

A few days ago, Gilad’s eyes opened wide and he called me over to look at his computer. He was on Facebook and he had just discovered a privacy loophole. He had maximized his newsfeed to get as many photo-related bits as possible. As a result, he was regularly informed when his Friends commented on other people’s photos, including photos of people with whom he was not Friends or in the same network. This is all fine and well. Yet, he found that he could click on those photos and, from there, see the entire photo albums of Friends-of-Friends. Once one of his Friends was tagged in one of those albums, he could see the whole album, even if he couldn’t see the whole profile of the person who owned the album. This gave him a delirious amount of joy because he felt as though he could see photos not intended for him… and he liked it.

There are multiple explanations for what is happening. This may indeed be a bug on Facebook’s part. It’s more likely a result of people allowing photos tagged of them to be visible to Friends of Friends through the overly complex privacy settings that even Gilad didn’t know about. Either way, Gilad felt as though he was seeing photos not intended for him. Likewise, I’d bank money that his kid sister’s Friends did not think that tagging those photos with her name would make the whole album available to her brother.

Facebook’s privacy settings are the most flexible and the most confusing privacy settings in the industry. Over and over again, I interview teens (and adults) who think that they’ve set their privacy settings to do one thing and are shocked (and sometimes horrified) to learn that their privacy settings do something else. Furthermore, because of things like tagged photos, people are often unaware of the visibility of content that they did not directly contribute. People continue to get themselves into trouble because they lack the control that they think they have. And this ain’t just about teenagers. Teachers/professors – are you _sure_ that the photos that your friends post and tag with your name aren’t visible to your students? Parents – I know many of you joined to snoop on your kids… now that your high school mates are joining, are your kids snooping on you? Power dynamics are a bitch, whether you’re 16 or 40.

Why are privacy settings still an abstract process removed from the context of the content itself? Privacy settings shouldn’t just be about control; they should be about the combination of awareness, context, and control. You should understand the visibility of an act during the moment of the act itself and whenever you are accessing the tracings of the act.

Tech developers… I implore you… put privacy information into the context of the content itself. When I post a photo in my album, let me see a list of EVERYONE who can view that photo. When I look at a photo on someone’s profile, let me see everyone else who can view that photo before I go to write a comment. You don’t get people to understand the scale of visibility by tweaking a few privacy settings every few months and having no idea what “Friends of Friends” actually means. If you have that setting on and you go to post a photo and realize that it will be visible to 5,000 people including 10 ex-lovers, you’re going to think twice. Or you’re going to change your privacy settings.

In an ideal world where complex access control wouldn’t destroy a database, I would argue that you should be able to edit the list of people who can see a particular artifact at the time of upload. Thus, if I posted a photo and saw that it was visible to 100 people, I could manually go through and remove 10 of those people without having to create a specific group that is everyone but the unwanted people. I know that this is a database disaster so I can’t ask for it… yet. Y’all should make large-n combinatorial functions computationally feasible eventually, right? ::wink:: In the meantime, let me at least see the visibility level and have the ability to adjust my broad settings in the context of use.
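To make the point concrete, here’s a minimal sketch (in Python, with a toy in-memory friend graph and hypothetical names throughout – this is not how Facebook actually computes anything) of what “show me everyone who can see this artifact” could look like: friends-of-friends as a two-hop set computation, with per-item removals as a simple set difference.

```python
def friends_of_friends(graph, user):
    """Everyone within two hops of `user` in the friend graph, excluding the user."""
    visible = set(graph.get(user, set()))
    for friend in graph.get(user, set()):
        visible |= graph.get(friend, set())
    visible.discard(user)
    return visible

def audience(graph, owner, excluded=frozenset()):
    """Per-photo audience: friends-of-friends minus any manually removed people."""
    return friends_of_friends(graph, owner) - set(excluded)

# Toy graph echoing the aunt story above: mom is a friend-of-a-friend.
graph = {
    "teen": {"aunt", "friend_a"},
    "aunt": {"teen", "mom"},
    "friend_a": {"teen", "friend_b"},
}

print(sorted(audience(graph, "teen")))
# mom appears even though the teen never friended her
print(sorted(audience(graph, "teen", excluded={"mom"})))
# removing one person is just a set difference, no special group needed
```

The set difference in `audience` is exactly the “remove 10 of those 100 people” operation described above; the hard part at scale isn’t the math, it’s storing and querying per-artifact exceptions efficiently.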

Frankly… I don’t understand why tech companies aren’t doing this. Is it because you don’t want users to realize how visible their content is? Is it because your relational databases are directed and this is annoying to compute? Or is there some other reason that I can’t think of? But seriously, if you want to stop the social disasters that stem from people fucking up their privacy settings, why not put it into context? Why not let them grok how visible their acts are by providing a feedback loop that’ll let them see what’s going on? Please tell me why this is not a rational approach!

In the meantime… for everyone else… have you looked at your privacy settings lately? Did you really want your profile coming up first when people search for your name in Google? Did you really want those photos tagged with your name to be visible to friends-of-friends? Or your status updates visible to everyone in all of your networks? Think about it. Look at your settings. Do your expectations match with what those settings say?


Facebook and TechCrunch: the costs of technological determinism and configuring users

When Nicole and I were trying to decide what term to use and how to define it, we struggled with the many misinterpretations of social networking sites. “We chose not to employ the term “networking” for two reasons: emphasis and scope. ‘Networking’ emphasizes relationship initiation, often between strangers. While networking is possible on these sites, it is not the primary practice on many of them, nor is it what differentiates them from other forms of computer-mediated communication (CMC).” To our frustration, online dating sites and community forums and other such sites were all getting lumped into the frame “social networking sites.” To clarify, we purposely employed “social network site” to emphasize that what makes this genre of social media unique is the way that it allows people to publicly articulate (and leverage) their social network. It’s a small shift, but a significant one. Some people leverage their network to engage in networking, but many don’t. We wanted to account for this and really scope out what made a specific genre of social media unique.

Folks thought we were crazy. I can’t tell you how many tech folks have told me that no one thinks that “social networking sites” implies that people meet new people. Yet, the moment I walk into any public audience where non-tech parents are present, I’m confronted about how the whole purpose of these sites is to help strangers meet, no? It’s been clear to me for a long time that there’s a divide in understanding when the term “social networking site” is employed. And that has tremendous ramifications for how people engage with these sites and how they are politicized (and regulated).

Well, this kerfuffle isn’t over. Today, TechCrunch reported a brewing controversy over an application that encourages the collecting of Friends. An email sent from Facebook to a user states:

Please note that Facebook accounts are meant for authentic usage only. This means that we expect accounts to reflect mainly “real-world” contacts (i.e. your family, schoolmates, co-workers, etc.), rather than mainly “internet-only” contacts. As stated on our home page, Facebook is a social utility that connects you with the people around you, not a “social networking site”. It is meant to help reinforce pre-existing social connections, not build large groups of new ones. If this is in direct contrast to what you expected as legitimate Facebook usage, I apologize for any confusion. This is simply the intention behind the site.

TechCrunch responds by noting that people do connect to people that they don’t know and gives an example of a public figure in the tech world who has mostly connected to people he doesn’t know personally. My co-author Nicole takes up this issue to point out that data shows that most (but obviously not all) users are not engaged in mass connecting to strangers. Fred Stutzman takes this in a different direction by emphasizing that a corporate mantra doesn’t necessarily dictate practice. Later, TechCrunch posted an update from Facebook:

To simplify this a bit, users on Facebook cannot have more than one account and creating another account for the purpose of playing this game violates our Terms of Use. We recognize and appreciate that each person uses Facebook based on their own interests and preferences and are happy to see people meeting new friends on Facebook. To ensure users are comfortable on the site and not burdened by unsolicited contact, we encourage users to add people that reflect their real-world connections and create trusted networks.

Putting these pieces together, we should collectively experience a massive wave of deja vu. Feel the wave, feel it… cuz you know where we saw this issue before? Friendster. Let’s back up.

Nicole is 100% correct that people primarily use Facebook (and MySpace and Friendster) to interact with people they already know. We know this and that’s why we agree that the term “social networking site” is a bit of a red herring. Labeling is simply political and we believed that it’s better to label a genre in a way that best reflects the practices taking place rather than use a term that signals something that is not dominant. (This is particularly important when, as in the case of these sites, the term is used to create cultural misinformation so as to add fire to a moral panic.)

That said, the categorical term that we use to label a particular site or genre of social media does NOT determine practice. The intentions of the designers do NOT determine practice. The demand of the company does NOT determine practice. In science and technology studies (STS), we have a term for this foolish worldview – it’s called “technological determinism” and calling someone a “technological determinist” is an insult. Unfortunately, far too often, companies take on this reductionist role and expect that the technology will determine practice.

A different approach is the “social construction of technology” (see: Bijker, Hughes & Pinch). SCOT argues that technologies shape people and people shape technologies. Practices are not determined by technology, but are driven by how people incorporate technology into their lives. Technologies are then shaped and reshaped to meet people’s needs and desires. In essence, technologies and people evolve together.

When companies and users fail to hold the same worldview, companies typically make one of two moves. Either they roll with user practice and try to encourage the good and shape the bad. In other words, they adopt principles that connect with SCOT. Or they try to demand that users behave exactly as they think they should. This latter approach is often labeled “configuring the users” (see: Grint & Woolgar). Needless to say, configuring users has a bad rap. This means that the companies are trying to demand that users fit into their box and punish them when they construct the technology in ways other than designed.

I dealt with these issues before with Friendster. [See Etech 2004 talk and None of this is Real article.] I also talked about how Friendster made an ass of themselves by acting like arrogant dictators of practice and how other companies could learn from this [See: Friendster vs. MySpace essay and Etech 2006 talk].

So how does this apply to this situation? Facebook is undoubtedly first and foremost about pre-existing networks. As a company, Facebook has every right to stop whatever behaviors it does or does not like. Banning applications that promote collecting is fair game. That said, there are costs to placing restrictions on desired practice, particularly if it results in stopping a large number (or influential group) of people from using the system in ways that they think are best. In other words, if their “intention behind the site” and what others “expected as legitimate Facebook usage” are in great conflict, there’s a problem. What is particularly interesting is that they then move on to say that “accounts that are used solely for the purpose of applications are in violation of their TOS” as if this automatically implies non-authentic usage. This is quite fascinating because I’m sure that plenty of legitimate users created accounts for this. I know people who created accounts for Causes or to play Scrabulous (RIP). Upon clarification, they take a different tack to say that users “cannot have more than one account.” It’s not clear whether the person who was deleted indeed had multiple accounts, but there are plenty of people with only one account who for all intents and purposes engage in the practice of collecting.

Of course, I’ve always found the TOS restriction against multiple accounts quite dubious. Back in the day, when I was obsessed with structural holes, I did a lot of research on people who held multiple accounts. I was fascinated when I started meeting gay men in Europe who had different SIM cards so that they could decide whether to answer their phone as “gay” or “straight.” I know soooo many people who break this TOS for very legitimate reasons involving the potential cost of context collisions. Teachers who have a teacher-friendly profile and a personal one, local politicians and micro-celebrities who have a public profile (not page) and one for their close friends, professionals who have a profile for their college buddies and one for their more presentable side, etc. Still, it is a TOS item.

Yet, the idea that collecting only occurs through a game is preposterous. I know many folks who collect… micro-celebrities who feel awkward saying no to fans, teenage boys who are hoping to get as many cute girls to notice them as possible, college students running for student government who want to get the attention of as many peers as possible, etc. Hell, as I talk about in Friends, Friendsters, and MySpace Top 8, there are all sorts of reasons why people engage in collecting, not the least of which has to do with status.

OK, so they don’t like collecting and multiple accounts and Apps that encourage them. That’s their right and they can boot folks. But I find it interesting that there’s no room for dialogue or recourse: “Unfortunately, I will not be able to reactivate your account… this decision is final.” That’s where things get very very nasty. People put time and effort into creating a profile in a walled garden and then with the click of a few keys, the company can disappear you in a matter of moments with no opportunity for recourse for failing to abide by its terms and, more significantly, the “intention behind the site.” That’s where Friendster got itself into MASSIVE trouble in their games of whack-a-mole during the “Fakester genocide.” Configuring users, pointing to the TOS to justify deletion, and going after anyone who sees the site differently is a recipe for uh-oh.

Of course, lots of folks have been disappeared from Facebook already. You can piss off a lot of people who lack connections and power, but when you piss off the wrong people, you’ve got a PR nightmare on your hands. And, like it or not, with a blog read by millions, Michael Arrington and his connections are the wrong folks to piss off.

Facebook’s “opt-out” precedent

I’ve been watching the public outcry over Facebook’s Beacon (social ads) program with great interest. For those who managed to miss this, Facebook introduced a new feature called Beacon. Whenever you visit one of their partners’ sites, some of your actions are automagically sent to Facebook and published on your News Feed. The list of actions is unknown, although through experimentation folks have learned that they include writing reviews on Yelp, renting movies from Blockbuster, and buying things on certain sites. Some partners were listed in the press release. When a Beacon-worthy action takes place, a pop-up appears in the bottom right, allowing you to opt out. If you miss it, you are automatically opted in. There was no universal opt-out, although they’ve now implemented one (privacy – external websites – don’t allow any websites). Furthermore, even if you opt out of having that bit blasted to the News Feed, it doesn’t stop sponsors from sending it to Facebook.

MoveOn started a petition, bloggers cried foul, and the media did a 180, going from calling Facebook the privacy savior to the privacy destroyer. Amidst the outrage, Facebook was also declared a Grinch when unassuming users failed to opt out and had their gifts broadcast to the recipients, thereby ruining Christmas. Privacy scholar Michael Zimmer also pointed out that the feature was peculiarly named, because beacons give warning when danger is about to take place. Not surprisingly, the company was forced to adjust. Zuckerberg apologized and additional features were provided to let people manage Beacon. While this appeases some, not all are satiated. StopBadware argues that Facebook does not go far enough and New York Law School Professor James Grimmelmann argues that Beacon is illegal under the Video Privacy Protection Act.

For all of the repentance by Facebook, what really bugs me is that this is the third time that Facebook has violated people’s sense of privacy in a problematic way. I documented the first incident – the introduction of the News Feed – in an essay called “Facebook’s Privacy Trainwreck.” In this incident, there were no privacy adjustments until public outcry. The second incident went primarily unnoticed. Back in September, Facebook quietly began making public search listings available to search engines. This means that users’ primary photos are cached alongside their name and networks on Google. Once again, it was an opt-out structure, although finding the opt-out is tricky. Under privacy settings, under search, there is a question of “Which Facebook users can find me in search?” If you choose “everyone,” that includes search engines, not just Facebook users. The third incident is Beacon.

In each incident, Facebook pushed the boundaries of privacy a bit further and, when public outcry took place, retreated just a wee bit to make people feel more comfortable. In other words, this is “slippery slope” software development. Given what I’ve learned from interviewing teens and college students over the years, they have *no* idea that these changes are taking place (until an incident occurs). Most don’t even realize that adding the geographic network makes them visible to thousands if not millions. They don’t know how to navigate the privacy settings and they don’t understand the implications. In other words, defaults are EVERYTHING.

Like most companies, Facebook probably chose the “opt-out” path instead of the “opt-in” path because they knew that most users would not opt in. Even if they thought the feature was purrrfect, most wouldn’t opt in because they would never know of the feature. Who reads the fine print of a website notice? This is exactly why opt-out approaches are dangerous. People don’t know what they’ve been opted into by default. They trust companies and once they trust those companies, they are at their mercy.
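The arithmetic of defaults is worth making concrete. Here is a toy simulation, with made-up numbers (the 10% settings-visit rate and user names are illustrative assumptions, not Facebook data), showing how the same feature and the same user choices produce wildly different enrollment depending only on the default:

```python
def enrolled(users, default_on, touched_settings, chose_on):
    """Who ends up in the feature: users who actively chose it,
    plus (under an opt-out default) everyone who never looked."""
    result = set()
    for u in users:
        if u in touched_settings:
            if u in chose_on:
                result.add(u)
        elif default_on:  # everyone else silently gets the default
            result.add(u)
    return result

users = {f"user{i}" for i in range(100)}
touched = {f"user{i}" for i in range(10)}   # assume 10% ever open settings
chose_on = {"user0", "user1"}               # of those, 2 actually want it

opt_out = enrolled(users, default_on=True,
                   touched_settings=touched, chose_on=chose_on)
opt_in = enrolled(users, default_on=False,
                  touched_settings=touched, chose_on=chose_on)
# opt-out enrolls 92 of 100 users; opt-in enrolls 2.
```

The 90 users who never visit the settings page are the entire difference, which is the point: the default, not user preference, determines the outcome.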

Most lofty bloggers and technologists argue that if people are given the choice, that’s good enough. The argument is that people should inform themselves and suffer the consequences if they don’t. In other words, no sympathy for “dumb kids.” I object to this line of reasoning. Most people do not have the time or inclination to follow the fine print of every institution and website that they participate in, nor do I think that they should be required to. This is not simply a matter of contracts that they sign, but normative social infrastructure. Companies should be required to do their best to maintain the normative sense of privacy and require that users opt-in to changes that alter that normative sense. In other words, what is the reasonable expectation for privacy on the site and does this new feature change that? Of course, I also understand that this would piss companies off because they make lots of money by manipulating and altering everyday users’ naiveté and sense of norms. Still, I think that the default should be “opt-in” and “opt-out” should only be used in situations that would protect users (i.e., a feature that would limit users’ visibility).

I kinda suspect that Facebook loses very little when there is public outrage. They gain a lot of free press and by taking a step back after taking 10 steps forward, they end up looking like the good guy, even when nine steps forward is still a dreadful end result. This is how “slippery slopes” work and why they are so effective in political circles. Most people will never realize how much of their data has been exposed to so many different companies and people. They will still believe that Facebook is far more private than other social network sites (even though this is patently untrue). And, unless there is a large lawsuit or new legislation introduced, I suspect that Facebook will continue to push the edges when it comes to user privacy.

Lots of companies are looking at Facebook’s success and trying to figure out how to duplicate it. Bigger companies are watching to see what they can get away with so that they too can take that path. Issues of privacy are going to get ickier and ickier, especially once we’re talking about mobile phones and location-based information. As Alison wrote in her previous post on respecting digital privacy, users are likely to act incautiously by default. Thus, what does it mean that we’re solidifying the precedent that “opt-out” is AOK?