Guilt Through Algorithmic Association

You’re a 16-year-old Muslim kid in America. Say your name is Mohammad Abdullah. Your schoolmates are convinced that you’re a terrorist. They keep typing Google queries like “is Mohammad Abdullah a terrorist?” and “Mohammad Abdullah al Qaeda.” Google’s search engine learns. All of a sudden, auto-complete starts suggesting terms like “Al Qaeda” as the next term in relation to your name. You know that colleges are looking up your name and you’re afraid of the impression that they might get based on that auto-complete. You are already getting hostile comments in your hometown, a decidedly anti-Muslim environment. You know that you have nothing to do with Al Qaeda, but Google gives the impression that you do. And people are drawing that conclusion. You write to Google but nothing comes of it. What do you do?

This is guilt through algorithmic association. And while this example is not a real case, I keep hearing about real cases. Cases where people are algorithmically associated with practices, organizations, and concepts that paint them in a problematic light even though there’s nothing on the web that associates them with those terms. Cases where people are accused of affiliations produced by Google’s auto-complete. Reputation hits that stem from what people _search_, not what they _write_.

It’s one thing to be slandered by another person on a website, on a blog, in comments. It’s another to have your reputation slandered by computer algorithms. The algorithmic associations do reveal the attitudes and practices of people, but those people are invisible; all that’s visible is the product of the algorithm, without any context of how or why the search engine conveyed that information. What becomes visible is the data point of the algorithmic association. But what gets interpreted is the “fact” implied by said data point, and that gives an impression of guilt. The damage comes from creating the algorithmic association. It gets magnified by conveying it.
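The mechanics behind this are worth making concrete. Here is a minimal sketch (my own illustration, not Google’s actual system, and all names and queries in it are hypothetical) of how a purely frequency-based autocomplete learns associations from what people type, with no notion of whether the association is true:

```python
from collections import Counter, defaultdict

class NaiveAutocomplete:
    """Toy frequency-based autocomplete: suggests the words most often
    typed after a given prefix in the query log. It ranks by popularity
    alone; truth never enters into it."""

    def __init__(self):
        # Maps each prefix string to a Counter of next words seen after it.
        self.continuations = defaultdict(Counter)

    def observe(self, query):
        # Record every (prefix -> next word) pair contained in a query.
        words = query.lower().split()
        for i in range(1, len(words)):
            prefix = " ".join(words[:i])
            self.continuations[prefix][words[i]] += 1

    def suggest(self, prefix, n=3):
        # Return the n most frequent continuations for this prefix.
        return [w for w, _ in self.continuations[prefix.lower()].most_common(n)]

ac = NaiveAutocomplete()
# A handful of hostile searches is enough to dominate the suggestions.
for _ in range(5):
    ac.observe("john doe terrorist")
ac.observe("john doe soccer")

print(ac.suggest("john doe"))  # ['terrorist', 'soccer']
```

Five malicious queries outvote everything else, and the slur becomes the top suggestion for anyone who types the name. Real systems are far more sophisticated, but the underlying dynamic is the same: the algorithm reflects what people search, and the searcher’s prejudice becomes the subject’s reputation.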

  1. What are the consequences of guilt through algorithmic association?
  2. What are the correction mechanisms?
  3. Who is accountable?
  4. What can or should be done?

Note: The image used here is Photoshopped. I did not use real examples so as to protect the reputations of people who told me their story.

Update: Guilt through algorithmic association is not constrained to Google. This is an issue for any and all systems that learn from people and convey collective “intelligence” back to users. All of the examples that I was given from people involved Google because Google is the dominant search engine. I’m not blaming Google. Rather, I think that this is a serious issue for all of us in the tech industry to consider. And the questions that I’m asking are genuine questions, not rhetorical ones.

Exciting News: Me @ Microsoft Research + New York University

When I was finishing my PhD and starting to think about post-school plans, I made a list of my favorite university departments. At the top of the list was New York University’s “Media, Culture, and Communication” (MCC) department. I am in awe of their faculty and greatly admire the students I know who graduated from there. I decided that MCC was my dream department.

When I joined Microsoft Research, I had a bit of a pang of sadness over the fact that I was opting out of the formal academic job market before it opened, in part because I was really hoping that MCC would have a job opening. But I also realized that I’d be a fool not to take the MSR job. Working at Microsoft Research is a complete dream come true. I have enormous freedom, unbelievable support, and the opportunity to really create a community of researchers.

But then I started wondering… would there be any way to do both? Yes, this is a twisted thought coming from a workaholic, but it kept nagging at the back of my brain. Countless Microsoft Research faculty in Redmond have joint appointments at the University of Washington. And I’m already splitting time between New York and Boston for personal reasons and will be spending more time in New York in the future. So, maybe I could have my cake and eat it too…

One day, hanging out with Helen Nissenbaum, I mentioned that I lurved her department from the bottom of my heart. And, in an off-hand comment, I said something about how I would love love love to have a joint position at MCC. And somehow, what began as a side comment, slowly blossomed into a flower when Marita Sturken – the MCC Chair – told me that she thought that this was a great idea. We started talking and negotiating and plotting and imagining. And, to my surprise and delight, Marita called to say that it was possible to create a joint position for me between MSR and MCC.

So, I am tickled pink to announce that I now have a joint appointment at NYU’s Media, Culture, and Communication department. I am joining the faculty as a Research Assistant Professor. I won’t be teaching any formal classes this year, although I’m looking forward to teaching in the future. In the meantime, I will be advising students and collaborating on research and getting involved in the department life. I will not be leaving Microsoft Research – I still don’t see why anyone would leave MSR. My primary affiliation will still be MSR and MSR will continue to be my academic home. But I’m also excited to have a joint appointment at NYU’s MCC that allows me to engage with the scholarly community and with students in new ways. And I’m really really really excited about this!

w000t!!!

I do not speak for my employer.

I don’t know whether to laugh or cry when people imply that when I make arguments, I’m speaking on behalf of Microsoft. Anyone who knows me knows that my opinions are my own. (This blog sez so too but no one ever seems to read that.) What I most appreciate about my employer is that they allow me to speak my mind, even when we disagree. This is what it means to have freedom as a researcher and it’s one of the reasons that I love love love Microsoft Research. I never ever speak on behalf of Microsoft but I have zero clue why people desperately want to perpetuate this myth. This is what makes me want to cry.

What makes me want to laugh is the irony of folks thinking I speak on behalf of Microsoft when I am critiquing an industry-wide practice that is most prominent because of Google’s recent implementation. Yes, I work for Microsoft. But I used to work for Google on social products. Many of my friends – and my brother – work for Google. I also used to work for Bradley Horowitz (one of the folks in charge of Google Plus) when we were both at Yahoo! and I adore him to pieces. I have nothing but respect for the challenges involved in building products, but I also have no qualms about highlighting problematic corporate logic. My arguments are not coming from a point of hatred towards any company or individual, but stemming from a determination to speak up for those who are voiceless in many of these discussions and to provide a different perspective with which to understand the issues.

I write and critique decisions in the tech industry when I feel as though those decisions have unintended consequences for those being affected. I’m particularly passionate when what’s at stake has implications for equality. I recognize and respect the libertarian ethos that persists in the Valley, but I think that it’s critical that privileged folks understand the cultural logic of those who are not that privileged. And, as someone who has an obscene amount of privilege at this stage in the game, I’m committed to using my stature to draw attention to issues that affect people who are marginalized. And when I get pissed off about something, I rant. And that can be both good and bad. But I’ve found that my rants often make people think. That’s what motivates me to keep ranting.

Sometimes, what I say pisses people off. Sometimes, it sounds like I’m dissing particular products or people. Usually, though, I’m critiquing assumptions that persist in the tech industry and the policies that unfold because of those assumptions. And I recognize that those who don’t know me have a bad tendency to misinterpret what I’m saying. I struggle every time I write to do my darndest to be understandable to as many people as I can. And when I’m most visible, folks often think I’m saying the darndest things. But even though I don’t correct everyone, that doesn’t mean that it’s not frustrating to be taken out of context so frequently.

And so it goes… and so it goes…

“Oh, how I miss substituting the conclusion to confrontation with a kiss.”

Designing for Social Norms (or How Not to Create Angry Mobs)

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code. In thinking about social media systems, plenty of folks think about monetization. Likewise, as issues like privacy pop up, we regularly see legal regulation become a factor. And, of course, folks are always thinking about what the code enables or not. But it’s depressing to me how few people think about the power of social norms. In fact, social norms are usually only thought of as a regulatory process when things go terribly wrong. And then they’re out of control and reactionary and confusing to everyone around. We’ve seen this with privacy issues and we’re seeing this with the “real name” policy debates. As I read through the discussion that I provoked on this issue, I couldn’t help but think that we need a more critical conversation about the importance of designing with social norms in mind.

Good UX designers know that they have the power to shape certain kinds of social practices by how they design systems. And engineers often fail to give UX folks credit for the important work that they do. But designing the system itself is only a fraction of the design challenge when thinking about what unfolds. Social norms aren’t designed into the system. They don’t emerge by telling people how they should behave. And they don’t necessarily follow market logic. Social norms emerge as people – dare we say “users” – work out how a technology makes sense and fits into their lives. Social norms take hold as people bring their own personal values and beliefs to a system and help frame how future users can understand the system. And just as “first impressions matter” for social interactions, I cannot overstate the importance of early adopters. Early adopters configure the technology in critical ways and they play a central role in shaping the social norms that surround a particular system.

How a new social media system rolls out is of critical importance. Your understanding of a particular networked system will be heavily shaped by the people who introduce you to that system. When a system unfolds slowly, there’s room for the social norms to slowly bake, for people to work out what the norms should be. When a system unfolds quickly, there’s a whole lot of chaos in terms of social norms. Whenever a networked system unfolds, there are inevitably competing norms that arise from people who are disconnected from one another. (I can’t tell you how much I loved watching Friendster when the gay men, Burners, and bloggers were oblivious to one another.) Yet, the faster things move, the faster those collisions occur, and the more confusing it is for the norms to settle.

The “real name” culture on Facebook didn’t unfold because of the “real name” policy. It unfolded because the norms were set by early adopters and most people saw that and reacted accordingly. Likewise, the handle culture on MySpace unfolded because people saw what others did and reproduced those norms. When social dynamics are allowed to unfold organically, social norms are a stronger regulatory force than any formalized policy. At that point, you can often formalize the dominant social norms without too much pushback, particularly if you leave wiggle room. Yet, when you start with a heavy-handed regulatory policy that is not driven by social norms – as Google Plus did – the backlash is intense.

Think back to Friendster for a moment… Remember Fakester? (I wrote about them here.) Friendster spent ridiculous amounts of time playing whack-a-mole, killing off “fake” accounts and pissing off some of the most influential of its userbase. The “Fakester genocide” prompted an amazing number of people to leave Friendster and head over to MySpace, most notably bands, all because they didn’t want to be configured by the company. The notion of Fakesters died down on MySpace, but the practice at its core – the ability for groups (bands) to have recognizable representations – ended up being the most central feature of MySpace.

People don’t like to be configured. They don’t like to be forcibly told how they should use a service. They don’t want to be told to behave like the designers intended them to be. Heavy-handed policies don’t make for good behavior; they make for pissed off users.

This doesn’t mean that you can’t or shouldn’t design to encourage certain behaviors. Of course you should. The whole point of design is to help create an environment where people engage in the most fruitful and healthy way possible. But designing a system to encourage the growth of healthy social norms is fundamentally different than coming in and forcefully telling people how they must behave. No one likes being spanked, especially not a crowd of opinionated adults.

Ironically, most people who were adopting Google Plus early on were using their real names, out of habit, out of understanding how they thought the service should work. A few weren’t. Most of those who weren’t were using a recognizable pseudonym, not even trying to trick anyone. Going after them was just plain stupid. It was an act of force and people felt disempowered. And they got pissed. And at this point, it’s no longer about whether or not the “real names” policy was a good idea in the first place; it’s now an act of oppression. Google Plus would’ve been ten bazillion times better off had they subtly encouraged the policy without making a big deal out of it, had they chosen to only enforce it in the most egregious situations. But now they’re stuck between a rock and a hard place. They either have to stick with their policy and deal with the angry mob or let go of their policy as a peace offering in the hopes that the anger will calm down. It didn’t have to be this way though and it wouldn’t have been had they thought more about encouraging the practices they wanted through design rather than through force.

Of course there’s a legitimate reason to want to encourage civil behavior online. And of course trolls wreak serious havoc on a social media system. But a “real names” policy doesn’t stop an unrepentant troll; it’s just another hurdle that the troll will love mounting. In my work with teens, I see textual abuse (“bullying”) every day among people who know exactly who each other is on Facebook. The identities of many trolls are known. But that doesn’t solve the problem. What matters is how the social situation is configured, the norms about what’s appropriate, and the mechanisms by which people can regulate them (through social shaming and/or technical intervention). A culture where people can build reputation through their online presence (whether “real” names or pseudonyms) goes a long way in combating trolls (although it is by no means a foolproof solution). But you don’t get that culture by force; you get it by encouraging the creation of healthy social norms.

Companies that build systems that people use have power. But they have to be very very very careful about how they assert that power. It’s really easy to come in and try to configure the user through force. It’s a lot harder to work diligently to design and build the ecosystem in which healthy norms emerge. Yet, the latter is of critical importance to the creation of a healthy community. Cuz you can’t get to a healthy community through force.

“Real Names” Policies Are an Abuse of Power

Everyone’s abuzz with the “nymwars,” mostly in response to Google Plus’ decision to enforce its “real names” policy. At first, Google Plus went on a deleting spree, killing off accounts that violated its policy. When the community reacted with outrage, Google Plus leaders tried to calm the anger by detailing their “new and improved” mechanism to enforce “real names” (without killing off accounts). This only sparked increased discussion about the value of pseudonymity. Dozens of blog posts have popped up with people expressing their support for pseudonymity and explaining their reasons. One of the posts, by Kirrily “Skud” Robert included a list of explanations that came from people she polled, including:

  • “I am a high school teacher, privacy is of the utmost importance.”
  • “I have used this name/account in a work context, my entire family know this name and my friends know this name. It enables me to participate online without being subject to harassment that at one point in time lead to my employer having to change their number so that calls could get through.”
  • “I do not feel safe using my real name online as I have had people track me down from my online presence and had coworkers invade my private life.”
  • “I’ve been stalked. I’m a rape survivor. I am a government employee that is prohibited from using my IRL.”
  • “As a former victim of stalking that impacted my family I’ve used [my nickname] online for about 7 years.”
  • “[this name] is a pseudonym I use to protect myself. My web site can be rather controversial and it has been used against me once.”
  • “I started using [this name] to have at least a little layer of anonymity between me and people who act inappropriately/criminally. I think the ‘real names’ policy hurts women in particular.”
  • “I enjoy being part of a global and open conversation, but I don’t wish for my opinions to offend conservative and religious people I know or am related to. Also I don’t want my husband’s Govt career impacted by his opinionated wife, or for his staff to feel in any way uncomfortable because of my views.”
  • “I have privacy concerns for being stalked in the past. I’m not going to change my name for a google+ page. The price I might pay isn’t worth it.”
  • “We get death threats at the blog, so while I’m not all that concerned with, you know, sane people finding me. I just don’t overly share information and use a pen name.”
  • “This identity was used to protect my real identity as I am gay and my family live in a small village where if it were openly known that their son was gay they would have problems.”
  • “I go by pseudonym for safety reasons. Being female, I am wary of internet harassment.”

You’ll notice a theme here…

Another site has popped up called “My Name Is Me” where people vocalize their support for pseudonyms. What’s most striking is the list of people who are affected by “real names” policies, including abuse survivors, activists, LGBT people, women, and young people.

Over and over again, people keep pointing to Facebook as an example where “real names” policies work. This makes me laugh hysterically. One of the things that became patently clear to me in my fieldwork is that countless teens who signed up to Facebook late into the game chose to use pseudonyms or nicknames. What’s even more noticeable in my data is that an extremely high percentage of people of color used pseudonyms as compared to the white teens that I interviewed. Of course, this would make sense…

The people who most heavily rely on pseudonyms in online spaces are those who are most marginalized by systems of power. “Real names” policies aren’t empowering; they’re an authoritarian assertion of power over vulnerable people. These ideas and issues aren’t new (and I’ve even talked about this before), but what is new is that marginalized people are banding together and speaking out loudly. And thank goodness.

What’s funny to me is that people also don’t seem to understand the history of Facebook’s “real names” culture. When early adopters (first the elite college students…) embraced Facebook, it was a trusted community. They gave the name that they used in the context of college or high school or the corporation that they were a part of. They used the name that fit into the network that they joined Facebook with. The names they used weren’t necessarily their legal names; plenty of people chose Bill instead of William. But they were, for all intents and purposes, “real.” As the site grew larger, people had to grapple with new crowds being present and discomfort emerged over the norms. But the norms were set and people kept signing up and giving the name that they were most commonly known by. By the time celebrities kicked in, Facebook wasn’t demanding that Lady Gaga call herself Stefani Germanotta, but of course, she had a “fan page” and was separate in the eyes of the crowd. Meanwhile, what many folks failed to notice is that countless black and Latino youth signed up to Facebook using handles. Most people don’t notice what black and Latino youth do online. Likewise, people from outside of the US started signing up to Facebook and using alternate names. Again, no one noticed because names transliterated from Arabic or Malaysian or containing phrases in Portuguese weren’t particularly visible to the real name enforcers. Real names are by no means universal on Facebook, but the importance of real names is a myth that Facebook likes to shill out. And, for the most part, privileged white Americans use their real name on Facebook. So it “looks” right.

Then along comes Google Plus, thinking that it can just dictate a “real names” policy. Only, they made a huge mistake. They allowed the tech crowd to join within 48 hours of launching. The thing about the tech crowd is that it has a long history of nicks and handles and pseudonyms. And this crowd got to define the early social norms of the site, rather than being socialized into the norms set up by trusting college students who had joined a site that they thought was college-only. This was not a recipe for “real name” norm setting. Quite the opposite. Worse for Google… Tech folks are VERY happy to speak LOUDLY when they’re pissed off. So while countless black and Latino folks have been using nicks all over Facebook (just like they did on MySpace btw), they never loudly challenged Facebook’s policy. There was more of a “live and let live” approach to this. Not so lucky for Google and its name-bending community. Folks are now PISSED OFF.

Personally, I’m ecstatic to see this much outrage. And I’m really really glad to see seriously privileged people take up the issue, because while they are the least likely to actually be harmed by “real names” policies, they have the authority to be able to speak truth to power. And across the web, I’m seeing people highlight that this issue has more depth to it than fun names (and is a whole lot more complicated than boiling it down to being about anonymity, as Facebook’s Randi Zuckerberg foolishly did).

What’s at stake is people’s right to protect themselves, their right to actually maintain a form of control that gives them safety. If companies like Facebook and Google are actually committed to the safety of their users, they need to take these complaints seriously. Not everyone is safer by giving out their real name. Quite the opposite; many people are far LESS safe when they are identifiable. And those who are least safe are often those who are most vulnerable.

Likewise, the issue of reputation must be turned on its head when thinking about marginalized people. Folks point to the issue of people using pseudonyms to obscure their identity and, in theory, “protect” their reputation. The assumption baked into this is that the observer is qualified to actually assess someone’s reputation. All too often, and especially with marginalized people, the observer takes someone out of context and judges them inappropriately based on what they get online. Let me explain this in a concrete example that many of you have heard before. Years ago, I received a phone call from an Ivy League college admissions officer who wanted to accept a young black man from South Central in LA into their college; the student had written an application about how he wanted to leave behind the gang-ridden community he came from, but the admissions officers had found his MySpace which was filled with gang insignia. The question that was asked of me was “Why would he lie to us when we can tell the truth online?” Knowing that community, I was fairly certain that he was being honest with the college; he was also doing what it took to keep himself alive in his community. If he had used a pseudonym, the college wouldn’t have been able to get data out of context about him and inappropriately judge him. But they didn’t. They thought that their frame mattered most. I really hope that he got into that school.

There is no universal context, no matter how many times geeks want to tell you that you can be one person to everyone at every point. But just because people are doing what it takes to be appropriate in different contexts, to protect their safety, and to make certain that they are not judged out of context, doesn’t mean that everyone is a huckster. Rather, people are responsibly and reasonably responding to the structural conditions of these new media. And there’s nothing acceptable about those who are most privileged and powerful telling those who aren’t that it’s OK for their safety to be undermined. And you don’t guarantee safety by stopping people from using pseudonyms, but you do undermine people’s safety by doing so.

Thus, from my perspective, enforcing “real names” policies in online spaces is an abuse of power.

The Unintended Consequences of Obsessing Over Consequences (or why to support youth risk-taking)

Developmental psychologists love to remind us that the frontal lobe isn’t fully developed until humans are in their mid-20s. The prefrontal cortex is responsible for our ability to assess the consequences of our decisions, our ability to understand how what we do will play out into the future. This is often used to explain why teens (and, increasingly, college-aged people) lack the cognitive ability to be wise. Following from this logic, there’s a belief that we must protect vulnerable young people from their actions because they don’t understand the consequences.

This logic assumes that understanding future consequences is *better* than not understanding them. I’m not sure that I believe this to be true.

Certainly, when we send young people off to fight our wars, we don’t want them to think about the consequences of what they have to do to survive (and, thus, help us survive). It’s not that we want them to shoot first and ask questions later, but we don’t want them to overthink their survival instincts when they’re being shot at.

Reproduction is an interesting counter-example. There’s no doubt that teen moms do little in the way of thinking about the consequences of getting pregnant. But folks in their 30s spend an obscene amount of time thinking about what it means to reproduce. Intensive parenting is clearly the product of constantly thinking about consequences, but I’m not sure that it’s actually healthier for kids or parents. I would hypothesize that biology wins when we don’t overthink parenting while the planet (as a delicate environmental ecosystem that can barely support the population) wins when we do overthink these things. Just a guess.

Creativity is another interesting area. We often talk about how older people are more rigid in their thinking. I love listening to mathematicians discuss whether or not someone who has not had a breakthrough insight in their 20s can have one in their 40s/50s. Certainly in the tech industry, we’re obsessed with youth. But our obsession in many ways is rooted in risk-taking, in not thinking too much about the future.

As I get older, I’m painfully aware of my brain getting more ‘conservative’ (not in a political sense). I am more strategic in my thinking, more judgmental of people who just try something radical. I spend a lot more time telling the little voice of fear and anxiety and neuroticism to STFU. I look back at my younger years and reflect on how stupid I was and then I laugh when I think about how well some of my more ridiculous ideas paid off. I find myself actually thinking about consequences before taking risks and then I get really annoyed at myself because I’ve always prided myself on my fly-by-the-seat-of-my-pants quality. In short, I can feel myself getting old and I think it’s really weird.

Most people judge from their current mental mindset, unable to remember a different mindset. Thus, I totally get why most people, if they’re undergoing the cognitive transition that I’ve watched myself do, would see young people’s risk-taking as inherently horrible. Sure, old folks respect the outcomes of some youth who change the world. But since most people don’t become Mark Zuckerberg, there’s more pressure to protect (and, often, confine) youth than to encourage their radical risk-taking. And, of course, most risk-taking doesn’t result in a billion dollar valuation. Hell, most risk-taking has no chance of paying off. But it’s a weird, connected package. The same mindset that propelled me to do some seriously reckless, outright dangerous, and sometimes illegal things also prompted me to never say no to other institutional authorities in ways that allowed me to succeed professionally. This is why I don’t regret even the stupidest of things that I did as a youth. Of course, I’m also damn lucky that I never got caught.

I’m worried about our societal assumption that risk-taking without thinking of the consequences is an inherently bad thing. We need some radical thinking to solve many of the world’s biggest problems. And I don’t believe that it’s so easy to separate out what adults perceive as ‘good’ risk-taking from what they think is ‘bad’ risk-taking. But how many brilliant minds will we destroy by punishing their radical acts of defying authority? How many brilliant minds will we destroy by punishing them for ‘being stupid’? It’s easy to get caught up in a binary of ‘right’ and ‘wrong’ when all that you can think about is the consequences. But change has never happened when people simply play by the rules. You have to break the rules to create a better society. And I don’t think that it’s easy to do this when you’re always thinking about the consequences of your actions.

I’m not arguing for anarchy. I’m too old for that. But I am arguing that we should question our assumption that people are better off when they have the cognitive capacity to think through consequences. Or that society is better off when all individuals have that mental capability. From my perspective, there are definitely pros and cons to overthinking and while there are certainly cases where future-aware thought is helpful, there are also cases where it’s not. And I also think that there are some serious consequences of imprisoning youth until they grow up.

Anyhow, fun thoughts to munch on this weekend…

“Teen Sexting and Its Impact on the Tech Industry” (my RWW talk)

In a cultural context where Congressman Anthony Weiner foolishly published salacious content on Twitter, it’s hard to ignore sexting as a cultural phenomenon. Countless adults send sexually explicit content to one another, either as acts of flirtation or more explicit sex acts. And yet, when teenagers do so, new issues emerge. Teen sexting gets complicated, especially when images or videos are involved, because it butts up against child pornography laws. Unfortunately, teens have been arrested on child pornography charges for taking or sharing images of themselves or their peers.

Teen sexting isn’t just an issue for parents, teens, and the law; it’s also a challenge for the tech industry. Because technology companies are required by law to work diligently to combat child pornography, sexting creates new challenges for them. In this talk for the Read Write Web 2WAY conference, I outline some of the challenges that the tech industry faces with respect to teen sexting. I also invite those in the tech industry to engage with this issue, whether out of goodwill, monetary interest, or fear of legal liability.

“Teen Sexting and Its Impact on the Tech Industry”

“Networked Privacy” (my PDF talk)

Our contemporary ideas about privacy are often shaped by legal discourse that emphasizes the notion of “individual harm.” Furthermore, when we think about privacy in online contexts, the American neoliberal frame and the techno-libertarian frame once again center the individual. In my talk at Personal Democracy Forum this year, I decided to address some of the issues of “networked privacy” precisely because I think that we need to start thinking about how privacy fits into a social context. Even within the individual frame, what others say and do about us affects our privacy. And yet, more importantly, all of these issues of privacy end up having a broader set of social implications.

Anyhow, I’m very much at the beginning of thinking through these ideas, but in the meantime, I took a first pass at PDF. A crib of the talk that I gave at the conference is available here: “Networked Privacy”

Photo Credit: Collin Key

Publicity and the Culture of Celebritization

In this month’s “Rolling Stone,” the magazine published an article called “Kiki Kannibal: The Girl Who Played With Fire”. The article tells the story of a 14-year-old teen in Florida who used MySpace to create a digital persona that attracted a lot of attention. An insecure and awkward teenager, Kirsten used MySpace to perform a confident, sexy persona named Kiki, sharing artistic photos that revealed a lot of skin. Not surprisingly, her sexy digital persona attracted a lot of attention – good, bad, and ugly. On one hand, she loved the validation; on the other, the stalking and personal attacks got increasingly severe and scary. This article raises all sorts of issues beyond attention, including sexual victimization (by a mentally unstable 18-year-old whom she was dating), parental engagement (her parents encouraged her online participation as a depression-reducing strategy), and exploitation (by websites that profit off of drama). The story itself is actually quite complex, messy, and peculiar. It’s also quite clear that there’s a whole lot more to the story than what the journalist was able to pack into an article. Before Rolling Stone shut down the comments, people who said that they knew Kiki were commenting that Kiki was more a bully than a victim, and the self-declared mother of her now-dead ex was saying some fairly inflammatory things about Kiki. But in some senses, the details matter less than the overarching messy portrait. There are many fascinating aspects of this story and the ways in which it complicates how we think about teens and digital activities, but I want to drill down into the social factors involved in celebritization.

Part 1: Everyday Participation in the Attention Economy

As information swirls all around us, we have begun to build an attention economy where the value of a piece of content is driven by how much attention it can attract and sustain. It’s all about eyeballs, especially when advertising is involved. Countless social media consultants are swarming around Web2.0, trying to help organizations increase their status and profitability in the attention economy. But the attention economy doesn’t just affect the monetization of web properties; it’s increasingly shaping how people interact with one another.

Teens’ desire for attention is not new. Teens have always looked for attention and validation from others – parents, peers, and high-status individuals. And just as many in business argue that there’s no such thing as bad publicity, there are plenty of teens who believe that there’s no such thing as bad attention. The notion of an “attention whore” predates the internet. Likewise, the notion that a child might “act out” is recognized as being a call for attention. And it’s important to highlight that the gendered aspects of these tropes are reinforced online.

So what happens when a teen who is predisposed to seeking attention gets access to the tools of the attention economy? Needless to say, we see both exciting and horrifying events play out. We see teens like Tavi Gevinson, who propelled her interest in fashion into a full-blown career before the age of 14. And we see countless teens replicating the trainwreck activities of Britney Spears, Lindsay Lohan, and other celebrities. When teens leverage social media to propel themselves into the spotlight, they fully (and with reckless abandon) engage in a set of practices that Terri Senft and Alice Marwick talk about as micro-celebrity. They work to manage their impressions, cultivate attention, and interact in ways that will increase their fame and social status.

Like it or not, the culture that we live in is saturated with narratives of celebrity success and celebrity failure. It’s downright hard to avoid Charlie Sheen’s meltdown or Kate Middleton’s wedding. With the rise of reality TV and the unfolding of major social media, individuals have felt closer than ever to the possibilities of celebrity. Celebrity becomes a correlate to a perfect life: money, designer clothes, and adulation. What being a ‘celebrity’ actually means gets discarded; fame becomes an end in itself, with the assumption that fame equals all things awesome despite copious examples to the contrary. So teens only hold on to the positive aspects, hoping for the benefits of becoming famous and ignoring the consequences.

Kiki’s story is all about the celebritization of everyday life. She leveraged social media – and her image as a sexy young woman – to capture widespread attention. She turned herself into a commodity, and commoditized her popularity through her jewelry store. And the more attention she captured, the more she faced both the benefits and the costs of celebrity. Like her more famous peers – folks like Miley Cyrus and Demi Lovato – Kiki attracted both fans and haters. But what’s different is that Lovato’s fame has come with a fortune and a whole lot of handlers; Kiki lacked the resources to handle the onslaught and never made it big enough to offset the costs of weathering the fame.

To complicate matters more, Kiki is engaged in a set of attention-seeking practices that make most adults nervous. She’s not capturing massive attention through being a clean-cut geek, like Rebecca Black (of “Friday” fame). And she’s not the center of a controversy for violating social norms, like Alexandra Wallace (of “Asians in the Library” fame). Instead, she’s using her image and her creativity to express herself as a sexy young woman in a sex-saturated society that hypocritically loves sexualized imagery and deplores young women who engage in it. This leaves her in an awkward and tenuous position – she’s successful at attracting attention but, because it’s outside of the media machine and involves the tropes of underage sexuality, she’s also under attack and not defended.

Part 2: The Toxicity of Fame

In reading Kiki’s story, it’s hard not to wonder why she doesn’t just walk away from her digital persona. Sure, her digital business – which depends on her celebritization – would falter (and there are some interesting implications here with respect to her family’s limited resources). But if the cruelty is so psychologically harmful, is it really worth it?

For years, I worked for V-Day, an organization committed to ending violence against women and girls. I’ve met countless women who continually return to abusive relationships because the highs make the lows worth it. My own experiences required me to constantly push back against the voice in my head that said: “this time will be different, right?” Again, it comes down to attention. Being showered with love feels so good that it’s often easy to forget the bad days. There are countless reasons why those in abusive relationships stay in them, but one thing’s clear: walking away from an abusive relationship is never easy.

Life in the spotlight can easily take the form of an abusive relationship. When the attention is good, it’s really good and it feels really good. And when the attention fades, people can feel lonely and anxious, desperate for more, even if it’s negative attention. But when the attention gets negative, things can easily spiral out of control. There are countless examples of celebrities for whom fame is a toxic substance. There’s always a cost to the attention. Herein lies a challenge… is the fame worth it?

Personally, I’ve always struggled with this. I have the great fortune of being highly visible, and the rewards of my micro-fame have been tremendous. But I can’t say it’s been easy or that it’s always fun. It’s hard to stomach photoshopped images and cruel comments. And people aren’t always kind when they think that I don’t deserve the attention I’ve gotten. But, all told, I have it pretty good. I’m confident in who I am; I have a successful career; and my haters aren’t that vicious. But I’ve managed to achieve enough attention to be wary of it and to appreciate how toxic and cruel a substance it can be. And there are certainly times when I find myself slipping into a pattern that I know all too well, where my relationship with the internet follows the same cycles as some of my more abusive relationships.

I’m not convinced that people of any age are well equipped to handle fame, let alone the cruel cycles that can come with it. Certainly, experience makes it easier to stomach, but Charlie Sheen’s meltdown should make it crystal clear that it’s not just teen girls who struggle with the spotlight. The thing about youth is that they often crave celebrity a lot more than adults. And their mental images of what it means are often distorted. So many teens that I interviewed over the years have talked about fame as freedom, failing to recognize the constraints that come with those golden handcuffs.

Kiki is living in a whirlwind of fame, attention, and commodification. She’s turned herself into someone to watch and those who are watching are asserting power over her in deeply problematic ways. She’s created a digital icon but her audience has objectified her, failing to recognize or value the person behind the icon. This puts her in a peculiar place with limited control. And that would be the precise location of celebrity.

I suspect we are going to see more and more stories of individuals who have wanted celebrity to greater or lesser degrees and who are sucked into a tumultuous relationship with fame. And I suspect the public will swing between feeling sympathy for their plight and blaming them for putting themselves out there. This is certainly the case with respect to those brought into the public eye through reality TV and tabloid magazines, and even those who are fired from jobs or kicked out of school for what they post on Facebook or Twitter. But I fear that our collective objectification of very visible people is also going to get much worse as more individuals come to prominence online. And many of those for whom the worst vitriol is reserved are young women, especially those who transgress the social boundaries of what it means to be nice or sexually appropriate. Internet commenting makes it easy to spew venom toward those in the crosshairs of celebrity, but we should recognize that this isn’t simply a position to be envied. Just because people benefit from being visible doesn’t mean that they have the wherewithal to stomach the attacks. At the same time, just because celebrity is an option doesn’t mean that it’s a healthy one.

Widespread celebritization is the flipside of the “attention economy” coin and I think that we have a lot of deep thinking to do about the implications of both of these. Both are already rattling society in unexpected ways and I’m not convinced that we have the social, psychological, or cultural infrastructure to manage what will unfold. Some people will become famous or rich. Others will commit suicide or drown attempting to swim in these rocky waves. This doesn’t mean that we should blockade the technologies that are emerging, but it’s high time that we start reflecting on the societal values that are getting magnified by them.

[This post wouldn’t have been possible without the help of Alice Marwick, Mary Gray, and Mike Ananny.]