just because we can, doesn’t mean we should

Learning to moderate desires and balance consequences is a sign of maturity. I could eat only chocolate for all of my meals, but it doesn’t mean that I should. If I choose to do so anyhow, I might be forced to face consequences that I will not like. “Just because I can doesn’t mean I should” is a decision dilemma, and it doesn’t just apply to personal decisions. On a nation-state level, think about the Cold War. Just because we could nuke Russia doesn’t mean that we should’ve. But, like most selfish children, our nation-state thought that it would be infinitely fun to sit on the edge of that decision regardless of the external stress that it caused. We managed to grow up and grow out of that stage (although I would argue that our current leadership regressed us back to infancy).

I am worried about the tech industry rhetoric around exposing user data and connections. This is another case of a decision dilemma concerning capability and responsibility. I said this ages ago wrt Facebook’s News Feed, but it is once again relevant with Google’s Social Graph API announcement. In both cases, the sentiment is that this is already public data and the service is only making access easier and more efficient for the end user. I totally get where Mark and Brad are coming from with this. I deeply respect both of them, but I also think that they live in a land of privilege where the consequences that they face when being exposed are relatively minor. In other words, they can eat meals of only chocolate because they aren’t diabetic.

Tim O’Reilly argues that social graph visibility is akin to a pain reflex. Like many in the tech industry, he argues that we have a moral responsibility to eliminate “security by obscurity” so that people aren’t shocked when they are suddenly exposed. He thinks that forcing people to be exposed is a step in the right direction. He draws a parallel to illness, suggesting that people will develop antibodies to handle the consequences. I respectfully disagree. Or rather, I think that this is a valid argument to make from the POV of the extremely healthy (a.k.a. privileged). As someone who is not so “healthy,” I’m not jumping up and down at the idea of being in the camp who dies because the healthy think that infecting society with viruses to see who survives is a good idea. I’m also not so stoked to prepare for a situation where a huge chunk of society is chronically ill because of these experiments. What really bothers me is that the geeks get to make the decisions without any perspective from those who will be marginalized in the process.

Being socially exposed is AOK when you hold a lot of privilege, when people cannot hold meaningful power over you, or when you can route around such efforts. Such is the life of most of the tech geeks living in Silicon Valley. But I spend all of my time with teenagers, one of the most vulnerable populations because of their lack of agency (let alone rights). Teens are notorious for self-exposure, but they want to do so in a controlled fashion. Self-exposure is critical for the coming of age process – it’s how we get a sense of who we are, how others perceive us, and how we fit into the world. We expose ourselves during that time period in order to understand where the edges are. But we don’t expose ourselves to be put at true risk. Forced exposure puts this population at a much greater risk, if only because their content is always taken out of context. Failure to expose them is not a matter of security through obscurity… it’s about only being visible in context.

As social beings, we are constantly exposing ourselves to the public eye. We go to restaurants, get on public transport, wander around shopping centers, etc. One of the costs of fame is that celebrities can no longer participate in this way. The odd thing about forced exposure is that it creates a scenario where everyone is a potential celebrity, forced into approaching every public interaction with the imagined costs of all future interpretations of that ephemeral situation. This is not just a matter of illegal acts, but even minor embarrassing ones. Both have psychological costs. Celebrities become hermits to cope (and when they break… well, we’ve all seen Britney). Do we really want the entire society to become hermits to cope with exposure? Hell, we’re doing that with our anti-terrorist rhetoric and I think it’s fucking up an entire generation.

Of course, teens are only one of the populations that such exposure will affect. Think about whistleblowers, women or queer folk in repressive societies, journalists, etc. The privileged often argue that society will be changed if all of those oppressed are suddenly visible. Personally, I don’t think that risking people’s lives is a good way to test this philosophy. There’s a lot to be said for being “below the radar” when you’re a marginalized person wanting to make change. Activists in repressive regimes always network below the radar before trying to go public en masse. I’m not looking forward to a world where their networking activities are exposed before they reach critical mass. Social technologies are super good for activists, but not if activists are going to constantly be exposed and have to figure out how to route around the innovators as well as the governments they are seeking to challenge.

Ad-hoc exposure is not the same as a vaccine. Sure, a vaccine is a type of exposure, but a very systematically controlled one. No one in their right mind would decide to expose all of society to a virus just to see who would survive. Why do we think that’s OK when it comes to untested social vaccines?

Just because people can profile, stereotype, and label people doesn’t mean that they should. Just because people can surveil those around them doesn’t mean that they should. Just because parents can stalk their children doesn’t mean that they should. So why on earth do we believe that just because technology can expose people means that it should?

On a side note, I can’t help but think about the laws around racial discrimination and hiring. The law basically says that just because you can profile people (since race is mostly written on the body) doesn’t mean you should. I can’t help but wonder if we need a legal intervention in other areas now that technology is taking us down a dangerous ‘can’ direction.

34 thoughts on “just because we can, doesn’t mean we should”

  1. Marshall Kirkpatrick

    Good stuff here, much needed perspective on this conversation. Will try to point to it whenever possible. Would love to see this post become as frequently referenced as a couple of your others so often linked to.

  2. Joshua Porter

    I don’t think Tim was talking about “forcing” anybody to do anything. I took his argument to be that people are intelligent enough to learn how to manage over time. Is this not similar to teens you’ve reported on before, who learn to manage their MySpace accounts and not friend anyone and everyone? That is how I read his message.

    Nor do I think Brad or Mark are talking about forcing anybody to expose personal information if they don’t want to. Those guys are also working on OpenID, which might help solve some of those problems.

    Now, if Six Apart said “we’re publishing XFN and FOAF whether you want us to or not”…then that would be a problem.

    This, like all things, is a balance. There are benefits and drawbacks for the end user. For those people who want to expose their relationships, they’ll gain the benefits. For those people who don’t, they’ll keep their privacy. This is a design issue…designers need to give users control over this, just as with all parts of identity.

    We can’t pretend that people don’t need to learn about their changing culture. We live in a tech culture, for better or worse. You can bet I’ll be teaching my kid about how to expose her identity online…

    But you’re right, Danah, we shouldn’t just because we can. But this is a spectrum of control and privacy, not a yes/no.

  3. Nick Dynice

    It seems that XFN and FOAF were just waiting for something like Google’s Social Graph API to come along to help facilitate these connections in new ways. When I publicly declare someone a friend or muse in my WordPress blogroll, it means I don’t mind that people know.
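    To make that concrete: an XFN declaration is nothing more than a rel attribute on an ordinary link. A minimal sketch, with hypothetical URLs:

        <!-- XFN: the rel value names the relationship; "friend" and "muse"
             are standard XFN terms that crawlers such as Google's can read -->
        <a href="http://example.com/alice" rel="friend">Alice</a>
        <a href="http://example.com/bob" rel="muse">Bob</a>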

    So, I am wondering, Danah: are you saying that this API is somehow exposing a list of MySpace friends that have their profiles set to private? Or are you saying that just because one can publicly declare one’s friends, muses, and siblings doesn’t mean that one should? Are you saying that people who use sites with XFN and FOAF did not know that this might be a consequence?

    Maybe even technologists are not thinking of the consequences. Here are two I just thought of:
    -a spammer using the social graph to pose as one of your friends just to get you to open an e-mail.
    -a bot that sees you on one social network, creates a fake profile as a friend on another network, friends you, and then tries some sort of phishing attack.

    Given these possible attack methods, I am assuming that Google will not give an API key to just anyone. But the fact that this API is not even necessary to carry out these kinds of attacks is an even scarier proposition.

    As Josh mentioned, Brad and Mark are involved in OpenID, OAuth, and Data Portability, so maybe soon it will become standard to put XFN declarations behind an OAuth layer.

  4. ana

    While I agree with the main sentiment of this post, I am afraid the reasoning is based on a mistaken premise.

    “So why on earth do we believe that just because technology can expose people means that it should?”

    But it is not technology that is exposing people in this case. People expose themselves (ourselves) through their blogs, public profiles on various sites, etc. You yourself write about it. “Technology” has no agency. 🙂

    Providing a means to make it easier for all to see what kind of information we (and others) have exposed about our connections might hurt some in the short term, yes; but delaying the moment such a tool is uncovered would just hurt more people, and potentially in worse ways.

    Already folks are pointing out potential ways of misusing XFN and FOAF to pose as connections, to phish, etc.; this will only accelerate our understanding of these technologies, and will lead to sounder, safer and more controllable protocols and tools.

  5. Curt

    I agree with everything you wrote. And a number of the commenters seem to be happy splitting hairs. Whether anyone is ‘forcing’ anyone to divulge information or not is the kind of precarious point you’re comfortable standing on when you’re not going to lose anything. There are plenty of people in iffy situations who are just tech enough to agree to something that will expose them without having thought it through enough regarding the outcome of that exposure until they’re in an Egyptian prison being raped with a broom handle. And to them, we can say, “No one forced you to do anything, so stop complaining” and all go have a nice omelette at Buck’s.

  6. Joseph Fung

    Danah – Good post.

    As a technologist, I think it’s easy to get caught up in the technical implications of a new initiative and possibly lose sight of the social and political implications.

    However, I don’t think it’s an issue of exposing information that should be better hidden – I think that the point of many of these initiatives is to make it clear and obvious what can be done with your information.

    We bring our kids up telling them things like “don’t talk to strangers” and “walk with a buddy” – but we haven’t been able to educate them as well as we’d like about posting personal information online – people try, but it’s a tough slog. The problem is that this information is already available – by making it easier to draw these connections we’re not assisting in anything that wasn’t done before; rather, we’re making the issue a conscious one rather than one easily ignored.

    For example, the Data Portability group is working to help standardize the social and political implications of these new technologies. You can check them out at dataportability.org; they even have a sub-group for Steering and one for Policy, which might help address your concerns.

    Thanks for the good read!

  7. Jacqueline

    For all the talk about college admissions and employers not admitting/hiring someone based on a “drunken photo”, I am much, much, more concerned about colleges or employers not hiring someone based on their sexuality, religion, political endorsements, etc. To me, it’s a lot scarier that my future employer could look up my MySpace profile, and even though it’s set to private, they are made privy to my sexuality. And even those people who make their profiles “professional” (safe for anyone to see) still list their political affiliations or join causes such as Pro-Choice, that could lead an employer, if s/he were so inclined, to discriminate based on this information. There would of course, be no way to prove that someone didn’t get hired because of their sexuality, or their pro-Obama support, or their “other” religious views, but it’s a scary thought. Discrimination is already rampant, and I think we need to stop worrying so much about “drunken nights at a frat party” and worry about much more troubling forms of discrimination that the Internet might be enabling. Teens are becoming more aware of the consequences of their profiles and the public nature of their profiles, but are we, adults and teens, considering the seemingly innocuous information employers and others can find out about us?

  8. Stefan Hayden

    It seems teenagers are always good at getting around rules online. I think that social graph visibility works well when your email is the same on Facebook and Twitter. But it’s easy to create separate accounts and have it so they do not interact with each other: different user names, different emails.

    I think you need to talk more about the disadvantages when you don’t have a lot of privilege. As far as I can tell, the social web exposes what a Google search could have done and teaches people how to better stay undetected on different services when you need to present a different face to different people.

  9. pageman

    I don’t know if the irony is lost on everyone: the title of this blog post is “just because we can, doesn’t mean we should”, about social graphs, on a blog named “apophenia :: making connections where none previously existed” … 😛 So which is which? Do you want to make connections where none previously existed, or do you not “connect” because just because you can doesn’t mean you should?

  10. jim

    I don’t know whether the fact that Google’s proposal is half-assed makes the situation better or worse.

    Google claims it’s going to provide an API so app developers can mine the googleplex for social graphs. We talked a couple of threads ago about what a social graph might be: it’s a graph whose nodes are identities and edges are relationships. So what’s Google using for identity? I looked through the announcement and still have no idea. Whoever wrote the FAQ seems to think it’s self-evident. What’s the relationship? Anyone I’ve ever linked to? (Gmail seems to think that I have a relationship with anyone who’s ever sent me email or to whom I’ve ever sent email.) Anyone I’ve ever linked to using these specific tags? How do they detect a severed relationship? It’s not just that Google hasn’t thought through the social aspects, they haven’t even thought through the technical aspects.

    This may make things worse. If in practice few use this API, or quickly stop using it, because it doesn’t do what it says on the tin, no bad consequences will occur and the world will believe that this sort of thing is harmless.

  11. Lachlan Hardy

    Thanks for the reminder, Danah. I definitely think folks address this issue early on when thinking about data portability and the social web, determine that *they* are okay with it and then stop thinking about it.

    I guess I qualify as relatively privileged but this still distinctly bothers me. Projects such as GraphSync and others are a real concern to me. They require solid governance and guidelines that they simply don’t have.

    Implementors need to think about all the issues and get feedback from various sources before moving ahead with technology concerning privacy. The more people talk about it, the more the conversation will evolve and our understanding of the issues mature. So thanks for keeping that conversation flowing.

  12. Livia Labate

    Many relevant perspectives have been brought up in the comments, but I think a key message from Danah is being missed.

    The concern is not that people won’t find methods to work around those situations or that they are unable to do so, but that enabling technologies — such as the graph API — have the power to take things that are already exposed and public IN context and consolidate and expose them OUT of context (which modifies their meaning and relevance – it’s the possible misuse of this information that creates high risk for certain groups).

    While there may be ways to overcome, compensate for, or work around that, there is a learning curve for anyone to figure out what that approach may be, while the automatic nature of these services has immediate effects which can cause damaging and unrecoverable results for the subjects involved.

    I can appreciate Danah’s side note about legal intervention to technology of this nature though I’m generally against regulatory measures towards technology. The idea shouldn’t be to limit technology or innovation through new technology, but to inform use and social responsibility — as in, what are the social consequences of these implementations.

    The danger of this conversation in the US is that the country has gone to extremes in terms of legislation to ensure social consequences are communicated, but more in a CYA fashion than a communal responsibility fashion (see every plastic bag with its warning label about toddlers suffocating).

  13. anonymous

    A simple analogy, for the many of you who still don’t get it. Private and public are not absolute; they are relative and very much dependent on context. For example, imagine that you and I are walking in the park, and having a conversation. We are in public, and anyone who passes close enough can hear what we’re talking about. We don’t mind this, because, after all, who really cares what we are talking about?

    Now imagine that someone invented miniature flying cameras and recorded our entire conversation, and broadcast it on the evening news. They certainly could: it was a public place where we allegedly had no expectation of privacy. But the situation now becomes totally different: instead of a few random people overhearing, now everyone in the world can hear us. Both cases involve some degree of exposure, but a very different degree, with very different effects.

  14. Udi

    Perhaps the right notion here is not “just because you can, doesn’t mean you should” but instead, “it’s going to happen eventually, so we better get it out in the open now before someone gets hurt”.

    The longer we allow people to put information about themselves online, the more time we give them to embarrass themselves. One day when that previously obscure teen becomes a working professional, something they wrote a long time ago might come back to haunt them. The internet remembers everything. Everyone’s a scrutinized presidential candidate. If they behave like one early on, it might be more boring, but they’ll be safer in the end.

  15. Xanthe

    You know what, I think marketers and firms are coming at it all wrong.

    Instead of secretly mining data, they’d be better off opening discussion forums on their company sites.

    The consumers will willingly do the market research for them (see Benkler).

    If they want snazzy new sneakers, they will generate the concept sketches. If they want an eco-friendly product, they will provide the firm with best practices. And no one needs to be spied on.

    What do you think?

  16. Iggi

    This story looks a lot like MP3 and P2P appearing in the world of music distribution – at some moment music became liquid and started leaking.
    Building DRM walls did not help – it leaked under, around and through.
    This world is made of glass. Everyone is visible; you just need to know where and how to look. People really need to get used to exposure, because a false feeling of safety is more dangerous than the feeling of danger itself, and evolution is never fair.
    P.S. Try finding out who is “urbansheep” 🙂

  17. ts.info

    Great post. At the risk of extending the very loose healthcare analogies, this is not related to either vaccine or virus. In this case, a group of volunteers has exposed themselves to one etiologic agent (e.g. publishing of certain information on certain social network sites that support XFN and FOAF). The experiment about to unfold is the involuntary exposure of the group to a second etiologic agent (Google’s Social Graph API). The result of the experiment will be “exposure”, although there’s no agreement on whether the resulting exposure is good, bad, or indifferent.

    The proposal is that this experiment is particularly bad because teens, who make up a large portion of the “volunteer” group, have few rights and lack agency to protect themselves against the harm that could come. In the healthcare arena, that’s absolutely true. However, in the social network arena, the reverse is true. Teens have more “agency” than the adult “volunteers,” who are supposed to act… well, like adults. Celebrities are a bad example, because their loss of privacy occurs against their will. A teen has adequate control: all they have to do is use a different social network service, or perhaps stop posting embarrassing photos and comments.

    This situation is more analogous to the marketing databases that are the outgrowth of our plastic-driven economy. If you don’t want credit card companies to track your spending habits, don’t use a credit card. If you don’t want Acxiom to track your grocery purchases, don’t use a shopping “club” card for discounts. If you want to take advantage of these conveniences, be aware that you’re giving up a piece of your privacy. It’s an exchange, not a one-way “right.”

    By the same token, if someone doesn’t want *anyone* to track their network, they should choose a social network that limits the exposure. Claiming that you posted something revealing on the internet but didn’t think anyone would look is a mistake an adult is more likely to make than a teen.

    As my Kindergarten teacher used to say… “There IS a permanent record”… And today it’s called the internet.

  18. Kevin Marks

    danah, thanks for the usual thoughtful commentary. I have learned a lot from you about separate publics and the problems of privilege and homophily you refer to here.

    What the Social Graph API reveals is information you can already find with a Google search, as it is based on the same corpus of webpages; it doesn’t index private pages, and it obeys the same robots.txt exclusions. So they are already exposed in this sense; people can already search for personal pages and the links between them.
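    For site owners who want a page left out of that corpus, the same opt-out that applies to regular search should apply here. A minimal robots.txt sketch, with a hypothetical path:

        # robots.txt at the site root: compliant crawlers skip the listed
        # paths, keeping those pages out of the index the API draws on
        User-agent: *
        Disallow: /private/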

    Our hope with this is to encourage people to express the connections they want seen, and not the ones they don’t, and by making this API available, help them to understand the difference.

  19. Bertil

    What I understand from danah is this: if the majority of people, who can’t think of anything in their social relations that they could be blamed for, are fine with having those relations public, this leaves the rest as outliers. More importantly, the line is very hard to draw: how do you justify not showing yours publicly?

    I recently re-read classic prose about digital identity and how to implement it, and I was shocked to see how everything leads to cookies: simple, ad hoc, distinct, parsimonious bits on the user’s side. Why so many anti-cookie systems? The detachment from the physical interface appears to challenge social science beyond any previous expectation.

  20. Christopher Herot

    Danah, thank you again for such a thoughtful piece, but I beg to differ on a few points:

    -Perhaps it’s not the teens who are being inoculated but rather the culture at large. While the more conservative may decry the defining down of deviance in our culture, I view it as a positive sign that a few racy photos of a Miss America contestant or some confessions of youthful drug use from a Presidential candidate can no longer derail their careers. Perhaps we need more exposure to keep the body politic’s immune system in balance.

    -I understand your concern for teens who may be exposed to a wider circle than can put their behavior in context, but let’s not overlook the benefits. My own son had a difficult time finding friends in middle/high school who shared his interest in technology and approach to the world. Then he discovered a much more diverse community online where he could fit in and ultimately develop relationships that carried over to real life. It’s hard to think of any culture that could benefit more from a shaking up than the one endured by American teenagers.

    Anyway, in the capitalist economy we live in, if we can, someone will do it. While we are debating these issues, the rest of the population is voting with their feet (or fingers on the keyboards) and signing up for all these social networking services. I’m just glad we have you to tell us what they are doing there.

  21. Mark Murphy

    My concern with the line of argument here is that it plays nicely into a debilitating US national pastime: scapegoating, instead of solving.

    One of Jacqueline’s scenarios: a nefarious employer uses social mining tools to find out that a prospective employee is of a race/sexuality/religion/political party that the employer dislikes and therefore declines to hire the prospect.

    The prospective employee is not the bad guy. The social network the prospective employee posted the information to is not the bad guy. Google or whoever provided the tools the employer used is not the bad guy.

    The nefarious employer is the bad guy. After all, it’s the employer who did something we find distasteful.

    We should be blaming the nefarious employer. When concrete evidence of nefarious behavior is found, we should be publicizing and vilifying it. Beyond that, we need to be making the argument to all, but particularly to youth, that such nefarious behavior is bad for society, so that eventually the nefarious behavior will decline in frequency and severity. That’s how the civil rights movement took us from an era of poll taxes to the point where we have Barack Obama inches away from being the front-runner for the US Presidency.

    But that process took a half-century, and few people want to think about making progress over the long term anymore. It’s the Wall Street, what-have-you-done-for-me-lately attitude leaching into every aspect of life.

    Scapegoating — blaming Google or social networks, or even blaming the victim for posting the information in the first place — can be done instantly. It doesn’t address the real problem, but it’s convenient. Easy. No real effort involved, no risk, no pain. Of course, the real issue of discrimination continues, but, hey, isn’t that always somebody else’s problem?

    I don’t have a problem with advocating risk minimization: educating people on the dangers of “what might happen”, encouraging stronger privacy controls on technology, etc. But when I read, “I can’t help but wonder if we need a legal intervention in other areas now that technology is taking us down a dangerous ‘can’ direction”, I see scapegoating. I see “shooting the messenger”, instead of “shooting” those who do evil with the message.

    I’d rather explain to folk (particularly teens) not only what not to do, but the real reason why not to do it: because there are [bleep]s in the world who will do you harm. And emphasize that, when the time comes, and the shoe is on the other foot: don’t be a [bleep].

    Then maybe, just maybe, in another few decades, we’ll have truly addressed a problem.

  22. Monshogaku

    For a while I had a MySpace page, but took it down as I felt I was putting too much of myself out on the web. I guess cyberspace mirrors meatspace, as I avoid going to places where I might run into people from work, stay away from bars and other social places, and, if an e-mail contact is needed, use an old Yahoo addy to screen messages. This probably makes me a bit paranoid, but the idea of a co-worker finding out about my internet self bothers me.

  23. jkd

    I’d simply offer that while O’Reilly is definitely out of bounds, I think we’re all going to just have to get used to dealing with there being so much information instantly accessible about not just ourselves but everyone we meet. And as we’re curious critters, we’ll take a peek – and I think in that peeking, we see how this goes both ways. There’s a lot more out there about us, but also about them – incongruous, interesting, odd stuff. And we’ll just have to deal with people being a lot more interesting, and a lot more difficult to pigeonhole. Won’t happen overnight or without some bumps – and vulnerable people, as they always are, will be disproportionately the ones getting “bumped” – but we’re going to get used to it. I’d be surprised if this particular genie went back in the bottle.

  24. Eric Marcoullier

    Thought provoking post, Danah.

    One thing that people seem to be overlooking, though, is that social circles are only combined if you explicitly link the two (e.g. link to your Flickr acct from your Twitter profile).

    If users did not have this control, I might also feel some concern. But if I link to my Flickr and Delicious accounts from my MySpace account, can I really presume that these accounts will (or even should) remain distinct?
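    That explicit link is typically an XFN rel="me" annotation on the profile page. A minimal sketch, with hypothetical URLs:

        <!-- rel="me" asserts the linked page belongs to the same person;
             crawlers only consolidate identities where such links exist -->
        <a href="http://flickr.com/photos/example" rel="me">my Flickr</a>
        <a href="http://del.icio.us/example" rel="me">my del.icio.us</a>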

  25. Alison Mac

    Great post.

    I’ve been looking at blog aggregators and suchlike and one thing that I realised very quickly is that they assume that you will want all your identities rolled together. I really don’t: I want to keep most of my social life, for example, private and separate. I live my life, as we all do, enacting a whole range of personas that encompass me as employee, or mother, or colleague, or friend. This is not about being covert, or secretive: it’s about being private. The tech world, with its superkeen development of applications that let you do things like map where your contacts are, seems to assume that you welcome a life of full transparency, and I totally agree that this is coming from a white (young, male, straight, pro-capitalist) geek position of extreme privilege.

    Also, if I thought for one minute that this was about Google developing an application that made my life easier, I might go for it; but all of this is just about harvesting my networks as a consumer. And there’s no need for companies to know my networks, when they already know about me at the point where I’m in contact with them directly.

  27. Fred

    The discourse around privacy and the social web is dangerously reductionist: the notion of privacy essentialism, that there’s a private/not-private dichotomy – all of these things you’ve covered countless times and wish companies would learn.

    There will always be a difference between intentful and ephemeral disclosure. Intentful disclosure may be thought of as our direct statements, products, etc. Ephemeral is our attention data, our mined data, the things we leave behind without agency or realization. When we’re forced to deal with intentful disclosure in and out of privacy context it’s creepy enough; it’s hard to imagine that we’re ever going to be comfortable with machine or human confronting us with our ephemeral disclosure.

    Unfortunately, the mining/harvesting of ephemeral disclosure seems to be considered the next step. And why not? It’s computationally possible, it exists in the ether, why not go get it and confront us with the data? This kind of thinking is fueled by privilege and tight clusters; the Silicon Valley designers of social software exist in a world where hyperconnectivity and persistent disclosure aren’t only norms, they’re a way to advance a career. Having powerful-person-X in your social sharing network is a powerful signal, incentivizing participation at the cost of “privacy”, which is certainly a secondary economic cost as compared to job success.

    I don’t see any way to stop this, but I have faith that the difference between intentful and ephemeral disclosure will always ring true, until we’re normalized to a life equation that places weirdness around privacy secondary to things like getting a job or meeting a mate (Facebook). We’re making these decisions in some places, but not all places. The greater question, as you point out, isn’t necessarily our reaction to the tools, but how technocrats and the power elite can use them for repressive or hegemonic means. I’d argue a lot of the tools for these ends are already there, but now they’re being mass-marketed.

  28. Curious Ray

    Danah:

    Since reading your post yesterday, I’ve spent a little time playing with Google’s Social Graph system. I tried it on a number of people, including you. The results were interesting in some ways, but hardly frightening. For example, I found some interesting bloggers by following the links to you.

    It seems to me that this tool is a two-edged sword, as is the case with most technological achievements. While it is possible for those in power (employers, parents, etc.) to use it to gain intelligence on the rest of us, it is also possible for us to use it in the reverse direction; employees can check up on their employers, teens on their parents, citizens on their elected officials, etc. Because Google is making this tool freely available to all comers, I personally do not see any specific danger here to society as a whole.

    While I have used the analogy of a sword above, I suspect that the social graph will mostly be a butter knife. The benefits of an open society will always seem to be tainted by a minority of abusers, but it would be tragic if we allowed them to keep us from buttering our bread… er… that is, allowed them to make us hostages to fear.

    Thanks for a very thoughtful and provocative post.

    Curious Ray

  29. Colleen

    This echoes exactly what I’ve been thinking about all this open-(insert suffix here) and personality-on-parade stuff…. There is a certain amount of agency implied in airing your dirty laundry willingly on the internet, and it also implies that your dirty laundry really isn’t that dirty. There are those out there who have a lot more to lose, and haven’t got that much in the first place, whose privacy and normalcy (a word I hesitate to use, but I hope you get my meaning) are potentially at stake.

  30. rhbee

    I just finished reading William Gibson’s Pattern Recognition. He mentions you in there and I think he may have modeled part of Cayce on what you are doing. Thanks for the intersect, and I, too, love the lyrics of ani difranco.

  31. Steve

    Jacqueline,

    Part of your post has touched on a pet peeve of mine. To wit. If I have understood you correctly, you have exposed your sexuality on MySpace, even though you are uncomfortable with a future employer, etc. accessing it. For crying out loud, why???

    What were you thinking to post sensitive personal information in a venue accessible to 2 billion or so internet users?

    To me it seems obvious beyond question that information posted in publicly accessible locations will be accessed by people you did not expect. Why does this seem to be so hard for people to get?

    Of course, maybe it’s just that I am the curious sort that often reads strangers’ profiles. (Okay, so I need a life :). If you never read strangers’ profiles, maybe you never imagine that a stranger would read yours. But to me it is just obvious.

    And please don’t take this rant as being aimed at you personally. Posting “private” info in public venues and being shocked when it comes to light is a major trend. I just don’t get it.

    -Steve

  32. Dragos ILINCA

    What if I AM THE EMPLOYER? And what if it helps to see that some person I might want to hire has joined some racist cause, or writes discriminatory posts on her blog?

    Information can be used both ways. If you make it public, it should mean that you accept it and that’s who you are. Or at least, that’s what you want people to know that you are. If you don’t, it means you’ve been using your online persona as a different entity from who you really are. Sooner or later, you’ll be found out. And it’s not because of technology. George Eliot was found out, with no Internet or social graph around.

    As for the flying cameras overhearing conversations, I think that argument is flawed. You would not accept 10 strangers following you and overhearing what you are saying either. So it must be that you consider your conversation PRIVATE. It does not matter that it’s a public space. Facebook is a public space, yet you can still have private conversations.

    I don’t think anyone will steal your privacy if you don’t want that. But you should acknowledge that once you’ve made some information public, then it’s really public.

  33. Bob

    What if I AM THE EMPLOYEE? And what if I don’t write discriminatory posts on my blog, don’t join racist causes, and am still probed by some guy on the other side of the planet who’s just itching to get his hands on my private info? Man, I get a good deal!

    That said, if I am dumb enough to post information a crim could actually use, it’s my fault. As such, I do what I think everyone else should: use fake info! Bob is not my real name. If I had to give my location, I would give a false one, etc. (Ok, I’m paranoid – you were going to find out at some point.) It is truly that simple.

    I also don’t post information that could link me to my other activities on the internet if I don’t want you to see them. I’m reasonably sure that this post, however hard you API’d it, would not turn up a great deal of info. Even if you got the email address, it’s not my main one; I have a couple of email accounts I always use when I need to provide one to someone I don’t know (I only give my main email address to people I already know). I use ePrompter to check all the accounts automatically rather than checking each one separately.

    In short, provided you know what you’re doing, don’t publish private info, and don’t do something stupid like commit a nice big crime and get the cops onto you, chances are you’re pretty safe.

Comments are closed.