Category Archives: privacy

Quitting Facebook is pointless; challenging them to do better is not

I’ve been critiquing moves made by Facebook for a long time and I’m pretty used to them being misinterpreted. When I lamented the development of the News Feed, many people believed that I thought that the technology was a failure and that it wouldn’t be popular. This was patently untrue. I was bothered by it precisely because I knew that it would be popular, precisely because people love to gossip and learn about others, often to their own detriment. It was hugely disruptive and, when it launched, users lacked the controls necessary to really manage the situation effectively. Facebook responded with controls and people were able to find a way of engaging with Facebook with the News Feed as a given. But people were harmed in the transition.

Last week, I offered two different critiques of the moves made by Facebook, following up on my SXSW talk. Both have been misinterpreted in fascinating ways. Even news agencies are publishing statements like: “Microsoft wants Facebook to be regulated as a utility.” WTF? Seriously? Le sigh. (For the record, I’m not speaking on behalf of my employer nor do I want regulation; I think that it’s inevitable and I think that we need to contend with it. Oh, and I don’t think that the regulation that we’ll see will at all resemble the ways in which utilities are regulated. I was talking about utilities because that’s how Facebook frames itself. But clearly, most folks missed that.) Misinterpretations are frustrating because they make me feel as though I’m doing a bad job of communicating what I think is important. For this, I apologize to all of you. I will try to do better.

With this backdrop in mind, I want to enumerate six beliefs that I have that I want to flesh out in this post in light of discussions about how “everyone” is leaving Facebook:

  1. I do not believe that people will (or should) leave Facebook because of privacy issues.
  2. I do not believe that the tech elites who are publicly leaving Facebook will have any effect on the company’s numbers; they are unrepresentative and were not central users in the first place.
  3. I do not believe that an alternative will emerge in the next 2-5 years that will “replace” Facebook in any meaningful sense.
  4. I believe that Facebook will get regulated and I would like to see an open discussion of what this means and what form this takes.
  5. I believe that a significant minority of users are at risk because of decisions Facebook has made and I think that those of us who aren’t owe it to those who are to work through these issues.
  6. I believe that Facebook needs to start a public dialogue with users and those who are concerned ASAP (and Elliot Schrage’s Q&A doesn’t count).

As I stated in my last post, I think that Facebook plays a central role in the lives of many and I think that it is unreasonable for anyone to argue that they should “just leave” if they’re not happy. This is like saying that people should just leave their apartments if they’re not happy with their landlord or just leave their spouse because they’re not happy with a decision or just leave their job if they’re not happy with their boss. Life is more complicated than a series of simplified choices and we are always making calculated decisions, balancing costs and benefits. We stay with our jobs, apartments, and spouses even when things get messy because we hope to rectify problems. And those with the most to gain from Facebook are the least likely to leave, even if they also have the most to lose.

In the last few weeks, a handful of well known digerati have proudly announced that they’ve departed from Facebook. Most of these individuals weren’t that engaged in Facebook as users in the first place. I say this as someone who would lose very little (outside of research knowledge) from leaving. I am not a representative user. I barely share on the site for a whole host of personal and professional reasons. (And because I don’t have a life.) None of my friends would miss me if I did leave. In fact, they’d probably be grateful for the disappearance of my tweets. That means that me deciding to leave will have pretty much no impact on the network. This is true for many of the people who I’ve watched depart. At best, they’re content broadcasters. But people have other ways of consuming their broadcasting. So their departure is meaningless. These are not the people that Facebook is worried about losing.

People will not leave Facebook en masse, even if a new site were to emerge. Realistically, if that were enough, they could go to MySpace or Orkut or Friendster or Tribe. But they won’t. And not just because those sites are no longer “cool.” They won’t because they’ve invested in Facebook and they’re still hoping that Facebook will get its act together. Changing services is costly, just like moving apartments or changing jobs or breaking up in general. The deeper the relationship, the harder it is to simply walk away. And the relationship that Facebook has built with many of its users is very very very deep. When transition costs are high, people work hard to change the situation so that they don’t have to transition. This is why people are complaining, this is why they are speaking up. And it’s really important that those in power listen to what it is that people are upset about. The worst thing that those in power can do is ignore what’s going on, waiting for it to go away. This is a bad idea, not because people will walk away, but because they will look to greater authorities of power to push back. This is why Facebook’s failure to address what’s going on invites regulation.

Facebook has gotten quite accustomed to upset users. In “The Facebook Effect,” David Kirkpatrick outlines how Facebook came to expect that every little tweak would set off an internal rebellion. He documented how most of the members of the group “I AUTOMATICALLY HATE THE NEW FACEBOOK HOME PAGE” were employees of Facebook whose frustration with user rebellion was summed up by the group’s description: “I HATE CHANGE AND EVERYTHING ASSOCIATED WITH IT. I WANT EVERYTHING TO REMAIN STATIC THROUGHOUT MY ENTIRE LIFE.” Kirkpatrick quotes Zuckerberg as saying, “The biggest thing is going to be leading the user base through the changes that need to continue to happen… Whenever we roll out any major product there’s some sort of backlash.” Unfortunately, Facebook has become so numb to user complaints that it doesn’t see the different flavors of them any longer.

What’s happening around privacy is not simply user backlash. In fact, users are far less upset about what’s going on than most of us privileged techno-elites. Why? Because even with the New York Times writing article after article, most users have no idea what’s happening. I’m reminded of this every time that I sit down with someone who doesn’t run in my tech circles. And I’m reminded that they care every time I sit down and walk them through their privacy settings. The disconnect between average users and the elite is what makes this situation different, what makes this issue messier. Because the issue comes down to corporate transparency, informed consent, and choice. As long as users believe that their content is private and have no idea how public it is, they won’t take to the streets. A disappearance of publicity for these issues is to Facebook’s advantage. But it’s not to users’ advantage. Which is precisely why I think that it’s important that the techno-elite and the bloggers and the journalists keep covering this topic. Because it’s important that more people are aware of what’s going on. Unfortunately, of course, we also have to contend with the fact that most people being screwed don’t speak English and have no idea this conversation is even happening. Especially when privacy features are only explained in English.

In documenting Zuckerberg’s attitudes about transparency, Kirkpatrick sheds light on one of the weaknesses of his philosophy: Zuckerberg doesn’t know how to reconcile the positive (and in his head inevitable) outcomes of transparency with the possible challenges of surveillance. As is typical in the American tech world, most of the conversation about surveillance centers on the government. But Kirkpatrick highlights another outcome of surveillance with a throwaway example that sends shivers down my spine: “When a father in Saudi Arabia caught his daughter interacting with men on Facebook, he killed her.” This is precisely the kind of unintended consequence that motivates me to speak loudly even though I’m privileged enough to not face these risks. Statistically, death is an unlikely outcome of surveillance. But there are many other kinds of side effects that are more common and also disturbing: losing one’s job, losing one’s health insurance, losing one’s parental rights, losing one’s relationships, etc. Sometimes, these losses will be because visibility makes someone more accountable. But sometimes this will occur because of misinterpretation and/or overreaction. And the examples keep on coming.

I am all in favor of people building what they believe to be alternatives to Facebook. I even invested in Diaspora because I’m curious what will come of that system. But I don’t believe that Diaspora is a Facebook killer. I do believe that there is a potential for Diaspora to do something interesting that will play a different role in the ecosystem and I look forward to seeing what they develop. I’m also curious about the future of peer-to-peer systems in light of the move towards the cloud, but I’m not convinced that decentralization is a panacea to all of our contemporary woes. Realistically, I don’t think that most users around the globe will find a peer-to-peer solution worth the hassle. The cost/benefit analysis isn’t in their favor. I’m also patently afraid that a system like Diaspora will be quickly leveraged for child pornography and other more problematic uses that tend to emerge when there isn’t a centralized control system. But innovation is important and I’m excited that a group of deeply passionate developers are being given a chance to see what they can pull off. And maybe it’ll be even more fabulous than we can possibly imagine, but I’d bet a lot of money that it won’t put a dent into Facebook. Alternatives aren’t the point.

Facebook has embedded itself pretty deeply into the ecosystem, into the hearts and minds of average people. They love the technology, but they’re not necessarily prepared for where the company is taking them. And while I’m all in favor of giving users the choice to embrace the opportunities and potential of being highly visible, of being a part of a transparent society, I’m not OK with throwing them off the boat just to see if they can swim. Fundamentally, my disagreement with Facebook’s approach to these matters is a philosophical one. Do I want to create more empathy, more tolerance in a global era? Of course. But I’m not convinced that sudden exposure to the world at large gets people there and I genuinely fear the possible backlash that can emerge. I’m not convinced that this won’t enhance a type of extremism that is manifesting around the globe as we speak.

Screaming about the end of Facebook is futile. And I think that folks are wasting a lot of energy telling others to quit or boycott to send a message. Doing so will do no such thing. It’ll just make us technophiles look like we’re living on a different planet. Which we are. Instead, I think that we should all be working to help people understand what’s going on. I love using Reclaim Privacy to walk through privacy settings with people. While you’re helping your family and friends understand their settings, talk to them and record their stories. I want to hear average people’s stories, their fears, their passions. I want to hear what privacy means to them and why they care about it. I want to hear about the upside and downside of visibility and the challenges introduced by exposure. And I want folks inside Facebook to listen. Not because this is another user rebellion, but because Facebook’s decisions shape the dynamics of so many people’s lives. And we need to help make those voices heard.

I also want us techno-elites to think hard and deep about the role that regulation may play and what the consequences may be for all of us. In thinking about regulation, always keep Larry Lessig’s arguments in “Code” in mind. Larry argued that there are four points of regulation for all change: the market, the law, social norms, and architecture (or code). Facebook’s argument is that social norms have changed so dramatically that what they’re doing with code aligns with the people (and conveniently the market). I would argue that they’re misreading social norms but there’s no doubt that the market and code work in their favor. This is precisely why I think that law will get involved and I believe that legal regulators don’t share Facebook’s attitudes about social norms. This is not a question of if but a question of when, in what form, and at what cost. And I think that all of us who are living and breathing this space should speak up about how we think this should play out because if we just pretend like it won’t happen, not only are we fooling ourselves, but we’re missing an opportunity to shape the future.

I realize that Elliot Schrage attempted to communicate with the public through his NYTimes responses. And I believe that he failed. But I’m still confused about why Zuckerberg isn’t engaging publicly about these issues. (A letter to Robert Scoble doesn’t count.) In each major shitstorm, we eventually got a blog post from Zuckerberg outlining his views. Why haven’t we received one of those? Why is the company so silent on these matters? In inviting the users to vote on the changes to the Terms of Service, Facebook mapped out the possibility of networked engagement, of inviting passionate users to speak back and actively listening. This was a huge success for Facebook. Why aren’t they doing this now? I find the silence to be quite eerie. I cannot imagine that Facebook isn’t listening. So, Facebook, if you are listening, please start a dialogue with the public. Please be transparent if you’re asking us to be. And please start now, not when you’ve got a new set of features ready.

Regardless of how the digerati feel about Facebook, millions of average people are deeply wedded to the site. They won’t leave because the cost/benefit ratio is still in their favor. But that doesn’t mean that they aren’t suffering because of decisions being made about them and for them. What’s at stake now is not whether or not Facebook will become passe, but whether or not Facebook will become evil. I think that we owe it to the users to challenge Facebook to live up to a higher standard, regardless of what we as individuals may gain or lose from their choices. And we owe it to ourselves to make sure that everyone is informed and actively engaged in a discussion about the future of privacy. Zuckerberg is right: “Given that the world is moving towards more sharing of information, making sure that it happens in a bottom-up way, with people inputting their information themselves and having control over how their information interacts with the system, as opposed to a centralized way, through it being tracked in some surveillance system. I think it’s critical for the world.” Now, let’s hold him to it.

Update: Let me be clear… Anyone who wants to leave Facebook is more than welcome to do so. Participation is about choice. But to assume that there will be a mass departure is naive. And to assume that a personal boycott will have a huge impact is also naive. But if it’s not working for you personally, leave. And if you don’t think it’s healthy for your friends to participate, encourage them to leave too. Just don’t expect a mass exodus to fix the problems that we’re facing.

Update: Mark Zuckerberg wrote an op-ed in the Washington Post reiterating their goals and saying that changes will be coming. I wish he would’ve apologized for December or made any allusions to the fact that people were exposed or that they simply can’t turn off all that is now public. It’s not just about simplifying the available controls.


Facebook is a utility; utilities get regulated

From day one, Mark Zuckerberg wanted Facebook to become a social utility. He succeeded. Facebook is now a utility for many. The problem with utilities is that they get regulated.

Yesterday, I ranted about Facebook and “radical transparency.” Lots of people wrote to thank me for saying what I said. And so I looked many of them up. Most were on Facebook. I wrote back to some, asking why they were still on Facebook if they disagreed with where the company was going. The narrative was consistent: they felt as though they needed to be there. For work, for personal reasons, because they got to connect with someone there that they couldn’t connect with elsewhere. Nancy Baym did a phenomenal job of explaining this dynamic in her post on Thursday: “Why, despite myself, I am not leaving Facebook. Yet.”

Every day. I look with admiration and envy on my friends who have left. I’ve also watched sadly as several have returned. And I note above all that very few of my friends, who by nature of our professional connections are probably more attuned to these issues than most, have left. I don’t like supporting Facebook at all. But I do.

And here is why: they provide a platform through which I gain real value. I actually like the people I went to school with. I know that even if I write down all their email addresses, we are not going to stay in touch and recapture the recreated community we’ve built on Facebook. I like my colleagues who work elsewhere, and I know that we have mailing lists and Twitter, but I also know that without Facebook I won’t be in touch with their daily lives as I’ve been these last few years. I like the people I’ve met briefly or hope I’ll meet soon, and I know that Facebook remains our best way to keep in touch without the effort we would probably not take of engaging in sustained one-to-one communication.

The emails that I received privately in response to my query expressed the same sentiment. People felt they needed to stay put, regardless of what Facebook chose to do. Those working at Facebook should be proud: they’ve truly provided a service that people feel is an essential part of their lives, one that they need more than want. That’s the fundamental nature of a utility. They succeeded at their mission.

Throughout Kirkpatrick’s “The Facebook Effect”, Zuckerberg and his comrades are quoted repeatedly as believing that Facebook is different because it’s a social utility. This language is precisely what’s used in the “About Facebook” section of Facebook’s Press Room page. Facebook never wanted to be a social network site; it wanted to be a social utility. Thus, it shouldn’t surprise anyone that Facebook functions as a utility.

And yet, people continue to be surprised. Partially, this is Facebook’s fault. They know that people want to hear that they have a “choice” and most people don’t think choice when they think utility. Thus, I wasn’t surprised that Elliot Schrage’s fumbling responses in the NYTimes emphasized choice, not utility: “Joining Facebook is a conscious choice by vast numbers of people who have stepped forward deliberately and intentionally to connect and share… If you’re not comfortable sharing, don’t.”

In my post yesterday, I emphasized that what’s at stake with Facebook today is not about privacy or publicity but informed consent and choice. Facebook speaks of itself as a utility while also telling people they have a choice. But there’s a conflict here. We know this conflict deeply in the United States. When it comes to utilities like water, power, sewage, Internet, etc., I am constantly told that I have a choice. But like hell I’d choose Comcast if I had a choice. Still, I subscribe to Comcast. Begrudgingly. Because the “choice” I have is Internet or no Internet.

I hate all of the utilities in my life. Venomous hatred. And because they’re monopolies, they feel no need to make me appreciate them. Cuz they know that I’m not going to give up water, power, sewage, or the Internet out of spite. Nor will most people give up Facebook, regardless of how much they grow to hate them.

Your gut reaction might be to tell me that Facebook is not a utility. You’re wrong. People’s language reflects that people are depending on Facebook just like they depended on the Internet a decade ago. Facebook may not be at the scale of the Internet (or the Internet at the scale of electricity), but that doesn’t mean that it’s not angling to be a utility or quickly becoming one. Don’t forget: we spent how many years being told that the Internet wasn’t a utility, wasn’t a necessity… now we’re spending what kind of money trying to get universal broadband out there without pissing off the monopolistic beasts because we like to pretend that choice and utility can sit easily together. And because we’re afraid to regulate.

And here’s where we get to the meat of why Facebook being a utility matters. Utilities get regulated. Less in the United States than in any other part of the world. Here, we like to pretend that capitalism works with utilities. We like to “de-regulate” utilities to create “choice” while continuing to threaten regulation when the companies appear too monopolistic. It’s the American Nightmare. But generally speaking, it works, and we survive without our choices and without that much regulation. We can argue about whether or not regulation makes things cheaper or more expensive, but we can’t argue about whether or not regulators are involved with utilities: they are always watching them because they matter to the people.

The problem with Facebook is that it’s becoming an international utility, not one neatly situated in the United States. It’s quite popular in Canada and Europe, two regions that LOVE to regulate their utilities. This might start out being about privacy, but, if we’re not careful, regulation is going to go a lot deeper than that. Even in the States, we’ll see regulation, but it won’t look the same as what we see in Europe and Canada. I find James Grimmelmann’s argument that we think about privacy as product safety to be an intriguing frame. I’d expect to see a whole lot more coming down the line in this regard. And Facebook knows it. Why else would they bring in a former Bush regulator to defend its privacy practices?

Thus far, in the world of privacy, when a company oversteps its bounds, people flip out, governments threaten regulation, and companies back off. This is not what’s happening with Facebook. Why? Because they know people won’t leave and Facebook doesn’t think that regulators matter. In our public discourse, we keep talking about the former and ignoring the latter. We can talk about alternatives to Facebook until we’re blue in the face and we can point to the handful of people who are leaving as “proof” that Facebook will decline, but that’s because we’re fooling ourselves. If Facebook is a utility – and I strongly believe it is – the handful of people who are building cabins in the woods to get away from the evil utility companies are irrelevant in light of all of the people who will suck up and deal with the utility to live in the city. This is going to come down to regulation, whether we like it or not.

The problem is that we in the tech industry don’t like regulation. Not because we’re evil but because we know that regulation tends to make a mess of things. We like the threat of regulation and we hope that it will keep things at bay without actually requiring stupidity. So somehow, the social norm has been to push as far as possible and then pull back quickly when regulatory threats emerge. Of course, there have been exceptions. And I work for one of them. Two decades ago, Microsoft was as arrogant as they come and they didn’t balk at the threat of regulation. As a result, the company spent years mired in regulatory hell. And being painted as evil. The company still lives with that weight and the guilt wrt the company’s historical hubris is palpable throughout the industry.

I cannot imagine that Facebook wants to be regulated, but I fear that it thinks that it won’t be. There’s cockiness in the air. Personally, I don’t care whether or not Facebook alone gets regulated, but regulation’s impact tends to extend much further than one company. And I worry about what kinds of regulation we’ll see. Don’t get me wrong: I think that regulators will come in with the best of intentions; they often (but not always) do. I just think that what they decide will have unintended consequences that are far more harmful than helpful and this makes me angry at Facebook for playing chicken with them. I’m not a libertarian but I’ve come to respect libertarian fears of government regulation because regulation often does backfire in some of the most frustrating ways. (A few weeks ago, I wrote a letter to be included in the COPPA hearings outlining why the intention behind COPPA was great and the result dreadful.) The difference is that I’m not so against regulation as to not welcome it when people are being screwed. And sadly, I think that we’re getting there. I just wish that Facebook would’ve taken a more responsible path so that we wouldn’t have to deal with what’s coming. And I wish that they’d realize that the people they’re screwing are those who are most vulnerable already, those whose voices they’ll never hear if they don’t make an effort.

When Facebook introduced the News Feed and received a backlash from its users, Zuckerberg’s first blog post was to tell everyone to calm down. When they didn’t, new features were introduced to help them navigate the system. Facebook was willing to talk to its users, to negotiate with them, to make a deal. Perhaps this was because they were all American college students, a population that early Facebook understood. Still, when I saw the backlash emerging this time, I was waiting and watching for an open dialogue to emerge. Instead, we got PR mumblings in the NYTimes telling people they were stupid and blog posts on “Gross National Happiness.” I’m sure that Facebook’s numbers are as high as ever and so they’re convinced that this will blow over, that users will just adjust. I bet they think that this is just American techies screaming up a storm for fun. And while more people are searching to find how to delete their account, most will not. And Facebook rightfully knows that. But what’s next is not about whether or not there’s enough user revolt to make Facebook turn back. There won’t be. What’s next is how this emergent utility gets regulated. Cuz sadly, I doubt that anything else is going to stop them in their tracks. And I think that regulators know that.

Update: I probably should’ve titled this “Facebook is trying to be a utility; utilities get regulated” but I chopped it because that was too long. What’s at stake is not whether or not we can agree that Facebook is a utility, but whether or not regulation will come into play. There’s no doubt that Facebook wants to be a utility, sees itself as a utility. So even if we don’t see them as a utility, the fact that they do matters. As does the fact that some people are using it with that attitude. I’d give up my water company (or Comcast) if a better alternative came along too. When people feel as though they are wedded to something because of its utilitarian value, the company providing it can change but the infrastructure is there for good.  Rather than arguing about the details of what counts as a utility, let’s move past that to think about what it means that regulation is coming.

Facebook and “radical transparency” (a rant)

At SXSW, I decided to talk about privacy because I thought that it would be the most important issue of the year. I was more accurate than my wildest dreams. For the last month, I’ve watched as conversations about privacy went from being the topic of the tech elite to a conversation that’s pervasive. The press coverage is overwhelming – filled with infographics and a concerted effort by journalists to make sense of and communicate what seems to be a moving target. I commend them for doing so.

My SXSW talk used a bunch of different case studies but folks focused on two: Google and Facebook. After my talk, I received numerous emails from folks at Google, including the PM in charge of Buzz. The tenor was consistent, effectively: “we fucked up, we’re trying to fix it, please help us.” What startled me was the radio silence from Facebook, although a close friend of mine told me that Randi Zuckerberg had heard it and effectively responded with a big ole ::gulp:: My SXSW critique concerned their decision in December, an irresponsible move that I felt put users at risk. I wasn’t prepared for how they were going to leverage that data only a few months later.

As most of you know, Facebook has been struggling to explain its privacy-related decisions for the last month while simultaneously dealing with frightening security issues. If you’re not a techie, I’d encourage you to start poking around. The NYTimes is doing an amazing job keeping up with the story, as are TechCrunch, Mashable, and InsideFacebook. The short version… People are cranky. Facebook thinks that it’s just weirdo tech elites like me who are pissed off. They’re standing firm and trying to justify why what they’re doing is good for everyone. Their attitude has triggered the panic button amongst regulators and all sorts of regulators are starting to sniff around. Facebook hired an ex-Bush regulator to manage this. No one is quite sure what is happening but Jason Calacanis thinks that Facebook has overplayed its hand. Meanwhile, security problems mean that even more content has been exposed, including email addresses, IP addresses (your location), and full chat logs. This has only upped the panic amongst those who can imagine worst case scenarios. Like the idea that someone out there is slowly piecing together IP addresses (location) and full names and contact information. A powerful database, and not one that anyone would be too happy to have floating around.

Amidst all of what’s going on, everyone is anxiously awaiting David Kirkpatrick’s soon-to-be-released “The Facebook Effect,” which basically outlines the early days of the company. Throughout the book, Kirkpatrick sheds light on how we got to where we are today without realizing where we were headed. Consider these two quotes from Zuckerberg:

  • “We always thought people would share more if we didn’t let them do whatever they wanted, because it gave them some order.” – Zuckerberg, 2004
  • “You have one identity… The days of you having a different image for your work friends or co-workers and for the other people you know are probably coming to an end pretty quickly… Having two identities for yourself is an example of a lack of integrity” – Zuckerberg, 2009

In trying to be a neutral reporter, Kirkpatrick doesn’t critically interrogate the language that Zuckerberg or other executives use. At times, he questions them, pointing to how they might make people’s lives challenging. But he undermines his own critiques by accepting Zuckerberg’s premise that the tides they are a turning. For example, he states that “The older you are, the more likely you are to find Facebook’s exposure of personal information intrusive and excessive.” Interestingly, rock solid non-marketing data is about to be released to refute this point. Youth are actually much more concerned about exposure than adults these days. Why? Probably because they get it. And it’s why they’re using fake names and trying to go on the DL (down-low).

With this backdrop in mind, I want to talk about a concept that Kirkpatrick suggests is core to Facebook: “radical transparency.” In short, Kirkpatrick argues that Zuckerberg believes that people will be better off if they make themselves transparent. Not only that, society will be better off. (We’ll ignore the fact that Facebook’s purse strings may be better off too.) My encounters with Zuckerberg lead me to believe that he genuinely believes this, he genuinely believes that society will be better off if people make themselves transparent. And given his trajectory, he probably believes that more and more people want to expose themselves. Silicon Valley is filled with people engaged in self-branding, making a name for themselves by being exhibitionists. It doesn’t surprise me that Scoble wants to expose himself; he’s always the first to engage in a mass collection on social network sites, happy to be more-public-than-thou. Sometimes, too public. But that’s his choice. The problem is that not everyone wants to be along for the ride.

Jeff Jarvis gets at the core issue with his post “Confusing *a* public with *the* public”. As I’ve said time and time again, people do want to engage in public, but not the same public that includes all of you. Jarvis relies on Habermas, but the right way to read this is through the ideas of Michael Warner’s “Publics and Counterpublics”. Facebook was originally a counterpublic, a public that people turned to because they didn’t like the publics that they had access to. What’s happening now is ripping the public that was created to shreds and people’s discomfort stems from that.

What I find most fascinating in all of the discussions of transparency is the lack of transparency by Facebook itself. Sure, it would be nice to see executives use the same privacy settings that they determine are the acceptable defaults. And it would be nice to know what they’re saying when they’re meeting. But that’s not the kind of transparency I mean. I mean transparency in interface design.

A while back, I was talking with a teenage girl about her privacy settings and noticed that she had made lots of content available to friends-of-friends. I asked her if she made her content available to her mother. She responded with, “of course not!” I had noticed that she had listed her aunt as a friend of hers and so I surfed with her to her aunt’s page and pointed out that her mother was a friend of her aunt, thus a friend-of-a-friend. She was horrified. It had never dawned on her that her mother might be included in that grouping.

Over and over again, I find that people’s mental model of who can see what doesn’t match up with reality. People think “everyone” includes everyone who searches for them on Facebook. They never imagine that “everyone” includes every third party sucking up data for goddess only knows what purpose. They think that if they lock down everything in the settings that they see, that they’re completely locked down. They don’t get that their friends lists, interests, likes, primary photo, affiliations, and other content are publicly accessible.

If Facebook wanted radical transparency, they could communicate to users every single person and entity who can see their content. They could notify them when the content is accessed by a partner. They could show them who all is included in “friends-of-friends” (or at least a number of people). They hide behind lists because people’s abstractions allow them to share more. When people think “friends-of-friends” they don’t think about all of the types of people that their friends might link to; they think of the people that their friends would bring to a dinner party if they were to host it. When they think of everyone, they think of individual people who might have an interest in them, not 3rd party services who want to monetize or redistribute their data. Users have no sense of how their data is being used and Facebook is not radically transparent about what that data is used for. Quite the opposite. Convolution works. It keeps the press out.
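To make that abstraction concrete, here’s a toy sketch of the kind of audience computation I have in mind. It’s my own illustration, not anything Facebook exposes; the names and the little friend graph are invented, but it shows how one aunt on a friend list quietly pulls a mother into “friends-of-friends”:

```python
# Toy illustration (not Facebook's code): the "friends-of-friends" audience is
# the union of all of your friends' friend lists, so it is almost always far
# larger than the handful of people users picture.
friend_graph = {
    "teen": {"aunt", "classmate_a", "classmate_b"},
    "aunt": {"teen", "mom", "neighbor"},
    "classmate_a": {"teen", "classmate_b"},
    "classmate_b": {"teen", "classmate_a", "older_cousin"},
}

def friends_of_friends_audience(user, graph):
    audience = set(graph.get(user, set()))       # direct friends can always see it
    for friend in graph.get(user, set()):
        audience |= graph.get(friend, set())     # ...plus everyone each friend links to
    audience.discard(user)                       # you aren't your own audience
    return audience

print(sorted(friends_of_friends_audience("teen", friend_graph)))
# ['aunt', 'classmate_a', 'classmate_b', 'mom', 'neighbor', 'older_cousin']
# "mom" is in the audience even though the teen never friended her.
```

Even on this tiny made-up graph the audience doubles; on a real network with hundreds of friends it balloons into the thousands, which is exactly the gap between people’s mental model and reality.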

The battle that is underway is not a battle over the future of privacy and publicity. It’s a battle over choice and informed consent. It’s unfolding because people are being duped, tricked, coerced, and confused into doing things where they don’t understand the consequences. Facebook keeps saying that it gives users choices, but that is completely unfair. It gives users the illusion of choice and hides the details away from them “for their own good.”

I have no problem with Scoble being as public as he’d like to be. And I do think it’s unfortunate that Facebook never gave him that choice. I’m not that public, but I’m darn close. And I use Twitter and a whole host of other services to be quite visible. The key to addressing this problem is not to say “public or private?” but to ask how we can make certain that people are 1) informed; 2) have the right to choose; and 3) are consenting without being deceived. I’d be a whole lot less pissed off if people had to opt-in in December. Or if they could’ve retained the right to keep their friends lists, affiliations, interests, likes, and other content as private as they had when they first opted into Facebook. Slowly disintegrating the social context without choice isn’t consent; it’s trickery.

What pisses me off the most are the numbers of people who feel trapped. Not because they don’t have another choice. (Technically, they do.) But because they feel like they don’t. They have invested time, energy, and resources into building Facebook into what it is. They don’t trust the service, are concerned about it, and are just hoping the problems will go away. It pains me how many people are living like ostriches. If we don’t look, it doesn’t exist, right?? This isn’t good for society. Forcing people into being exposed isn’t good for society. Outing people isn’t good for society; turning people into mini-celebrities isn’t good for society. It isn’t good for individuals either. The psychological harm can be great. Just think of how many “heroes” have killed themselves following the high levels of publicity they received.

Zuckerberg and gang may think that they know what’s best for society, for individuals, but I violently disagree. I think that they know what’s best for the privileged class. And I’m terrified of the consequences that these moves are having for those who don’t live in the lap of luxury. I say this as someone who is privileged, someone who has profited at every turn by being visible. But also as someone who has seen the costs and pushed through the consequences with a lot of help and support. Being publicly visible isn’t always easy, it’s not always fun. And I don’t think that anyone should go through what I’ve gone through without making a choice to do it. So I’m angry. Very angry. Angry that some people aren’t being given that choice, angry that they don’t know what’s going on, angry that it’s become OK in my industry to expose people. I think that it’s high time that we take into consideration those whose lives aren’t nearly as privileged as ours, those who aren’t choosing to take the risks that we take, those who can’t afford to. This isn’t about liberals vs. libertarians; it’s about monkeys vs. robots.

if you’re not angry / you’re just stupid / or you don’t care
how else can you react / when you know / something’s so unfair
the men of the hour / can kill half the world in war
make them slaves to a super power / and let them die poor

– Ani Difranco, Out of Range

(Also posted at Blogher)

(Translated to Italian by orangeek)

“Privacy and Publicity in the Context of Big Data”

I gave today’s opening keynote at the WWW Conference in Raleigh, North Carolina. My talk was about methodological and ethical issues involved in the study of Big Data, focusing heavily on privacy issues in light of public data. The first third focuses on four important arguments: 1) Bigger Data are Not Always Better Data; 2) Not All Data are Created Equal; 3) What and Why are Different Questions; 4) Be Careful of Your Interpretations. I then move on to argue that “Just because data is accessible doesn’t mean that using it is ethical,” providing a series of different ways of looking at how people think about privacy and publicity. I conclude by critiquing Facebook’s approach to privacy, from News Feed to Social Plugins/Instant Personalizer.

Privacy and Publicity in the Context of Big Data

Please enjoy!!

Speaking about Privacy and Publicity

Yesterday I gave the opening keynote at SXSW to over 5000 people (OMG, that room was huuuuuuuge). My talk was about privacy and publicity and I spent a lot of time pushing back against the notion that “privacy is dead.” In some ways, the talk is a call to arms, an invitation for people to rethink their models of privacy so that we can collectively build a society we want to live in. As with many of my other talks, I wrote this one out so that I could share it with any of you who weren’t able to join me in Austin:

Making Sense of Privacy and Publicity

My hope is that this talk will also get you to think about these issues. I realize that this is a provocative argument and I would LOVE any and all feedback that you might be willing to share. I’m especially fond of folks who disagree with me. And I think that this topic requires some debating.

For those of you who are still in Austin, have a fantastic rest of SXSW! w00t!

The Future of Reputation: Gossip, Rumor, and Privacy on the Internet

When I was last in DC, I had lunch with Daniel Solove and we were talking about book publishing. He had been thinking of making his book downloadable under Creative Commons and I was like DO IT DO IT! This is the kind of book that is sooo relevant to so many different audiences who would never hear about it through traditional advertising. My thought is that if it were available online, it could whet folks’ appetites before buying it (cuz printing it out is painful and reading it online is not wonderful either and your Kindle doesn’t support PDFs). Introducing…

The Future of Reputation: Gossip, Rumor, and Privacy on the Internet

This book examines the darker side of personal expression and communication online, looking at some of the social costs of what I’m always rambling on about as “persistence, searchability, replicability, and invisible audiences.” Our reputation is one of our greatest assets. What happens when our own acts or the acts of others sully that? What role does the technology play in enabling or stopping that? How should the law modernize its approach to privacy and slander to address the networked world?

While this book is written by a professor, it’s written in an extremely accessible manner and should be devoured by parents, marketers, technologists, teachers, HR professionals, policy makers, and anyone else who might have a stake in the world of reputation. I also found excerpts helpful for students who are trying to make sense of the costs of their practices. Oh, and it’s a fun read.

If you hate reading from the screen, just go and buy the book. The author and his publisher will thank you.

(Oh, and go Yale University Press! You’re batting well in the CC/open-access publishing baseball game!)

a google horror story: what happens when you are disappeared

Earlier this week, an acquaintance of mine found himself trapped in a Kafka-esque nightmare, a nightmare that should make all of us stop and think. He wants to remain anonymous so let’s call him Bob. Bob was an early adopter of all things Google. His account was linked to all sorts of Google services. Gmail was the most important thing to him – he’d been using it for four years and all of his email (a.k.a. “his life”) was there. Bob also managed a large community in Orkut, used Google’s calendaring service, and had accounts on many of their different properties.

Earlier this week, Bob received a notice that there was a spam problem in his Orkut community. The message was in English and it looked legitimate and so he clicked on it. He didn’t realize that he’d fallen into a phisher’s net until it was too late. His account was hijacked for god-knows-what-purposes until his account was blocked and deleted. He contacted Google’s customer service and their response basically boiled down to “that sucks, we can’t restore anything, sign up for a new account.” Boom! No more email, no more calendar, no more Orkut, no more gChat history, no more Blogger, no more anything connected to his Google account.

::gasp:: My heart threatens to attack my throat at the mere idea of losing four years worth of email. ::shudder:: Or what if this blog disappeared? Like, OMG. {insert horror film music here}

Luckily, Bob is well-connected. His friends in high places forwarded his story to powerful people inside Google. Today, his account was restored. While such a restoration should provide a sigh of relief, it’s also a bit disconcerting. What if Bob hadn’t been so well connected? What other kinds of damage can phishers do to people who have so many of their key tools linked together under a common account?

Most tech companies blame phishing victims. Basically, the general sentiment is that if people weren’t so stupid, there wouldn’t be a problem. Yet, there is great research on Why Phishing Works that shows that even sophisticated users can be deceived. While education is important, it is unrealistic to expect all users to keep up with the developments of scammers’ deceptive techniques. Consider the story of Clementine, a 13-year-old citizen of Gaia Online who fell victim to a phishing attack and had her account deleted without recourse. Once again, Clementine’s saving grace was that she had connections, but it took a long time and she was written out of her primary social space in the meantime.

When companies host all of your data and have the ability to delete you and it at-will, all sorts of nightmarish science fiction futures are possible. This is the other side of the “identity theft” nightmare where the companies thieve and destroy individuals’ identities. What are these companies’ responsibilities? Who is overseeing them? What kind of regulation is necessary?

There’s also a flip-side to this story. Google was able to restore his account because they kept everything on backup servers. In this case, Bob didn’t want to have all of his content deleted. But what if he had deleted it himself and expected it to be deleted permanently? Who should have the right to recall his data and under what circumstances? I find it particularly haunting that there is no way to delete your Facebook account. You can only “deactivate” it, but you can reactivate it at any time and everything will come right back. What if you don’t want to go down on Facebook’s permanent record?

These are the issues that worry all sorts of privacy and identity types. They are the cornerstone of books like Daniel Solove’s The Digital Person and Simson Garfinkel’s Database Nation. Yet, as with identity theft, few people stop to think about data loss until it happens to them. But perhaps we should. How would you feel if the company hosting your email suddenly decided to disappear you? Or if Facebook/MySpace/Flickr/Xanga/etc. decided to delete your account right now? (There are plenty of examples of this one too. For example, many celebrities have found their accounts obliterated because company reps think that they’re fake. And then there was Friendster…) Imagine if you had no path of recourse. Talk about disempowering!

In thinking about this, your first response should be to back up your data. (And grumble loudly about all of the places where this isn’t possible.) But what’s your second step? What kind of legislation is necessary to address this? What kind of data recovery (or non-recovery) policies should companies have?
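For the email piece at least, backing up doesn’t require anything exotic. Here’s a minimal sketch of what it can look like, assuming your provider supports IMAP (the hostname and address below are placeholders, and services like Gmail may require you to enable IMAP access or use an app-specific password first):

```python
import imaplib
import os
from getpass import getpass

# Minimal sketch: copy every message in a mailbox to local .eml files over IMAP,
# so that losing the account doesn't mean losing the archive. The hostname and
# address are placeholders -- adjust them for your own provider.
HOST = "imap.example.com"   # e.g. your provider's IMAP server (assumption)
USER = "you@example.com"

def backup_mailbox(host, user, mailbox="INBOX", out_dir="mail_backup"):
    os.makedirs(out_dir, exist_ok=True)
    conn = imaplib.IMAP4_SSL(host)
    conn.login(user, getpass("IMAP password: "))
    conn.select(mailbox, readonly=True)            # readonly: don't touch flags
    _, data = conn.search(None, "ALL")             # every message ID in the mailbox
    for num in data[0].split():
        _, msg_data = conn.fetch(num, "(RFC822)")  # fetch the full raw message
        with open(os.path.join(out_dir, f"{num.decode()}.eml"), "wb") as f:
            f.write(msg_data[0][1])                # one raw RFC822 file per message
    conn.logout()

if __name__ == "__main__":
    backup_mailbox(HOST, USER)
```

It’s crude, but a folder of .eml files on your own disk is something no hijacked account or customer-service shrug can take away from you; the grumbling above is for all the services where even this much isn’t possible.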

Update: Check out this case of a guy being banished from Facebook for reasons that the company refuses to explain to him (in a Kafka-esque nightmare). This is particularly intriguing given that the company is trying to make Facebook a universal platform. If Facebook becomes a platform, what rights to due process do users have?

adults’ views on privacy (new PEW report)

PEW has a new report out on adults and privacy: Digital Footprints. It’s a solid report on the state of adults’ perception of privacy wrt the internet. Of course, what amuses me is that adults are saying one thing and doing another.

Adults are more likely than teens to have public profiles on SNSs. 60% of adults are not worried about how much information is available about them online. (Of course, young adults are more likely than older adults to believe it would be “very difficult” for someone to locate or contact them.) 61% of adults do not bother to limit the amount of information that can be found about them (including many who are purportedly worried).

In other words, adults (and presumably there are parents in this group) are telling teens to be careful online and restrict what information they put up there while they themselves are doing little to protect their own data.

This reminds me of adults who tell their kids never to meet strangers online under any circumstances and then proceed to use online dating sites and, rather than meet in public places, choose to go to the stranger’s private residence. Adults need to think about safety too – it’s not a story of binaries. The safe and practical approach is somewhere between abstinence and uber risky behavior.

Both adults and children need to learn how to negotiate safety and privacy in a meaningful and nuanced way. Adults need to socialize young people into conscientious participation online, wrt both privacy and safety. You cannot simply wait until teens are 18 and then flip the switch and say GO! This has dreadful and dangerous consequences.

Anyhow, I’m not doing justice to the PEW report. Read it yourself. It’s quite interesting and there’s great data and it’s well situated.

SNS visibility norms (a response to Scoble)

A few days ago, I lamented the tech crowd’s Facebook fetish. Scoble raised the bar by responding to all of my nitpicks. Now, it’s my turn again. Tehehe.

I think that Scoble summed it all up perfectly with this:

“But what I don’t understand is why so much of the tech crowd who lament Walled Gardens worship Facebook.” Because there isn’t anything better. It’s like why we are so gaga over the iPhone. The iPhone is locked up tight and doesn’t let us play. But it is so superior to the alternatives that we’ll put up with all the walls.

He’s totally right. And what he’s really saying is that I should recognize and accept the hypocrisy within the tech crowd. They will happily say one thing loudly, but if the cool new glittery toy that they want has major failings, they’ll bite – hook, line, and sinker. I’m not convinced that FB is “so superior to the alternatives,” but I totally see how it plays into the values and aesthetics of the tech crowd. Maybe we should start calling FB (and other tech toys) “Precious”? And then we can run around in demented voices saying “One tool to rule them all!” ::giggle:: (OK, that’s probably not funny, but it’s late and I’m entertaining myself here.)

Anyhow… what I really want to address is a realization wrt visibility that I had while reading Scoble. In writing my earlier post, I was thinking primarily of teens when I was talking about visibility. Scoble points out that he really WANTS to be super visible, searchable on Google, etc. And he references the career-minded college students who will relish said visibility. This made me think about the different factors at play when it comes to visibility on social network sites.

MySpace started out as PUBLIC PUBLIC PUBLIC. They only added privacy features when they welcomed 14 and 15-year olds and for a while, you had to lie and say you were 14 to get a private profile. While the teen crowd was not using MySpace as a hyperpublic platform, artists were. They wanted to be as public as possible, to get as many fans as possible, to SEE and BE SEEN. This wasn’t just the story of musicians… even semi-porn divas like Forbidden and Tila were all about being hyperpublic and there were certainly teens who thought they’d be the next American Idol or Top Model by being found on MySpace. There are folks who want to leverage the platform to be the object of everyone’s gaze. As it expanded, MySpace received unbelievable pressure to add privacy options, to protect its users (both young and old). Even though a MS Friends-only profile is about as private as you can get, MySpace is constantly shat on for being dangerous because of exposure.

Facebook differentiated itself by being private, often irritatingly so. Hell, in the beginning Harvard kids couldn’t interact with their friends at Yale, but that quickly changed. Teens and their parents worship Facebook for its privacy structures, often not realizing that joining the “Los Angeles” network is not exactly private. For college students and high school students, the school and location network are really meaningful and totally viable structural boundaries for sociability. Yet, the 25+ crowd doesn’t really live in the same network boundaries. I’m constantly shifting between LA and SF as my city network. When I interview teens, 80%+ of their FB network is from their high school. Only 8% of my network is from Berkeley and the largest network (San Francisco) only comprises 17% of my network. Networks don’t work for the highly-mobile 25+ crowd because they don’t live in pre-defined networks. (For once, I’m an example!)

The interesting thing is that Scoble wants to make Facebook do what MySpace does. He wants to be a micro-celeb with a bazillion friends/fans and he wants to interact with all of them. And he wants to do it on Facebook because he sees that as more his space than MySpace, even though the other is set up for that. (I can’t really see the porn-Scoble or the emo-Scoble, but it sure would be funny.) He’s bumping up against the fact that Facebook was designed to be closed, to be intimate, to be tight. It was what made its early adopters value it. And now, for whatever reason, Facebook has decided to move in the direction of MySpace – slowly tiptoeing to being a very public service.

It makes sense to attract those who want to be public, but how public can they go without affecting those who relish the closed-ness? For the most part, Facebook has been immune from privacy-related attacks from the Attorneys General and press. They’ve been touted as the “right” solution. Can people who want to be private live alongside those who want to be PUBLIC? How are boundaries going to be negotiated? It seems to me that this all comes back to context and context is really getting cloudy here. It seems to me that there might be two totally different sets of expectations emerging without an in-between solution. And I have a sneaking suspicion that the “solution” is to push people into accepting being public.

I feel the need to address folks’ response that it’s all about the privacy settings. Someone out there has to have public data on how frequently people change settings vs. staying with the defaults. (I’ve seen plenty of private reports on this, but don’t know of any that I can cite.) Let’s just say that defaults matter. Very few people change the defaults. They are more likely to shift their behavior (or leave a site) than change the defaults. Thus, a move to force people to “opt-out” is not only about dictating the social expectations, but also setting people up to face the costs of those defaults, even if they don’t really want to. I don’t really understand why Facebook decided to make public search opt-out. OK, I do get it, but I don’t like it. Those who want to be PUBLIC are more likely to change settings than those who chose Facebook for its perceived privacy. Why did Facebook go from default-to-privacy-protection to default-to-exposure? I guess I know the answer to this… it’s all about philosophy. Unfortunately, it’s not a philosophy that most of the teens I interviewed or their parents share. But this type of exposure is far more insidious and potentially harmful than the privacy trainwreck I documented earlier.

I think that one of the reasons that the tech crowd lurves Facebook is because they both want the “transparent society.” This is the philosophy that information dissemination can only be beneficial and that people should not seek to hide things. Embedded in this are unstated issues of privilege and normative views. It’s OK to be transparent when you look like everyone else, but go ask the gay Christian living in an Arab state how he feels about being transparent about his social world. Fleshing out a critique of the transparent society requires a different post, but I’m starting to get the sinking feeling that we’re all part of a transparent society experiment and my discomfort stems from a deep concern about who all is going to get washed up in that tsunami. The goal doesn’t seem to be about helping people maintain privacy; it seems more like pushing them to accept a world where they are constantly aware of everyone around them. Hmm…

controlling your public appearance

In the last month, I’ve received almost a dozen panicked emails from people who had commented on my blog at one point or another and were horrified to find that their comment was at the top of Google’s search for their name. In each case, I have respectfully edited the comment to use an anonymous name. I prefer not to remove these comments because this leaves holes in my blog, especially when others’ comments are based on those earlier comments. Unfortunately, most of these people do not understand how Google’s cache works and write back in a rage that it’s not fixed. I politely try to inform them that Google’s cache can take months to update and I cannot do anything to speed this up.

When people bitch about MySpace and Facebook being walled gardens, one of the positive things that I offer in return is, “at least those teens’ profiles aren’t in Google’s cache.” With Facebook’s opt-out decision, this is no longer the case. As I mentioned yesterday, I’m a bit terrified of what this might mean long-term.

As a teenager, I was petrified of my mother finding my Usenet posts. It’s not that I said much on Usenet that would’ve upset her (although the Bad Religion tirades are a wee bit embarrassing), but I didn’t want her to see my political or topical commentaries. (Sidenote: I left the sexuality exploration discussions for IRC which ::crossing fingers:: weren’t recorded.) I used various handles, most of which are not findable by anyone other than my brother (and even he can’t find all of them). That’s not to say that there’s not a lot of embarrassing material online – I’ve been blogging for over ten years and I’ve definitely posted things that would be dredged up if I were to run for office.

The best thing about being an active blogger is that stuff gets buried by repetitive blogging. My new stuff goes to the top of the search engines, my old stuff fades away. And we have a name for anyone who goes out of their way to find that old stuff: stalker. And we don’t really wanna work for, date, or befriend genuine stalkers. If it’s public, but not easy to find, it’s creepy that you went out of your way to find it. (I’m fascinated by the creeps… and journalists… who go through courthouses and other public records places to dredge up tax records, legal motions, housing details, etc. It’s all public, but c’mon now…)

We’ve all heard that privacy is dead, but you can still control your public appearance and it’s really critical that you start doing so. Don’t whimper about how Google is destroying your reputation. Take control!

So here are some suggestions, for adults and teenagers:

  • Create a public Internet identity. I strongly recommend blogging, but even a homepage will do. Have a genuine all-accessible identity online that you’re cool with grandma and your boss reading. Don’t make it uber drab, but do provide context for who you are, what you do, what you’re passionate about, etc. Think of it as a digital body and dress it up as if it were going into a job interview. Blogging is especially good because you can keep updating your identity over time in a way that shows how you think. It’s much easier to get a sense of someone through their commentary on public affairs or life around them than through a static page.
  • Say NO! to Facebook’s public search option. Click “privacy” – “search.” Under “Who can find my public search listing outside of Facebook?” uncheck both boxes. Be proactive about this. You might not think you care now, but having your Facebook profile at the top of a search for your name might not be what you want when you’re looking for a job.
  • Expect unexpected audiences. Your profile on Facebook and MySpace might be “private” but when you join the Los Angeles Network or when you accept someone who knows someone, you might find that the audience viewing your profile is not who you expected. Are you prepared for this? Make sure that profile says what you want it to say, even to those you don’t expect. If you want to be a porn diva and make it in Hollywood, put up that slutty photo, but if you want to be a lawyer, you might regret that photo a few years from now. Of course, I’m sure there are porn stars who later became lawyers, just like there are actors who became governors.
  • Write blog comments as though you’re writing your own blog. The more popular a blog, the more likely the comments from that blog are to show up high on Google’s lists. If you write inflammatory shit on those blogs just to piss people off, it will come back to haunt you. (It depresses me that a huge chunk of the comments on BoingBoing’s new comment system are extremely negative.) Personally, I don’t think that you should be anonymous on a blog. I think that you should stand by your name, but write articulately. And blog on your own blog so that your own posts, not your comments elsewhere, are what show up at the top.
  • Treat video and audio just like text. Right now, video and audio aren’t searchable, but they will be. Don’t think that you can say or do anything you want on a video and it will never come up. That Neo-Nazi video you made and put up on YouTube cuz you thought it was funny will eventually be searchable and associated with your name. Are you really ready for that to appear at the top of a Google ego search?

(If you have other suggestions, add them to the comments.)

But above all else, seriously, create a public Internet identity, maintain it, link to it, build it, love it, hug it, and call it George. I can’t tell you how important this is. I used to say that a LinkedIn profile would do, but now that they’re so locked down for people who don’t pay, they don’t provide that value any more. If you don’t want to go through the hassle of registering a domain and figuring out HTML, just make a Blogspot account and make the Title your name. But keep it up-to-date so that when people want to look up who you are, they’re going to see that page and go, “wow, she’s really interesting.”

Yesterday, I was talking about this uber smart college frosh to one of my colleagues. His name is about as generic as it gets and he shares it with a few celebs – “Sam Jackson” – so I wasn’t expecting much when I threw his name into Google. Much to my pleasure, his college blog comes up as #4 on Google. Here is a newly minted college freshman who put together a blog about applying to college when he was in high school, has commented on others’ blogs in an articulate and engaging manner, and is genuinely actively engaged in thinking about the world around him. He’s attracted the attention of all sorts of folks and I have no doubt that people who wish to hire him (or admit him) have looked at this blog to get a sense of who he is. He makes it clear that he understands this medium and how to present himself accordingly. Hell, I intend to hire him precisely because he gets it.

Carefully crafting and cautiously managing one’s public image is a critical aspect of living in a mediated public world. Every advice column I’ve read warns people of the dangers of living online. I think that this is idiotic. People need to embrace the world we live in and learn to work within its framework. Don’t panic about being public – embrace it and handle it with elegance.

[PS: I’ve said a lot of this before in the Harvard Business Review.]