Since the earliest days of Usenet and email, people have complained about how much easier it is to be mean online than offline. If you spend enough time on public forums, it’s hard not to run into mean-spirited rhetoric: defamation, hate speech, flaming, etc. The latest story of helicopter parenting turning deadly highlights how easy it is to use deception to be cruel. Discussions of bullying through mediating technologies often rely on arguments about how technology aids and abets malicious acts by reducing the consequences of breaking social norms. Governments often seek to ban technologies because of the mean-spirited interactions that take place through them.
Of course, what’s at stake is fundamentally a philosophical question, the precise one that got me kicked out of my 9th grade English classroom: is “man” basically good or evil? (I argued that man was basically evil, but apparently this was the incorrect answer and I wouldn’t back down.)
There are all sorts of forces that limit social behavior in everyday life: fear of legal consequences, fear of social consequences, fear of damage to our bodies, lack of functional capability, whether potential gains outweigh costs, etc. Our legal system takes these forces into consideration, and this is where punishments like jail (or the death penalty) operate as disincentives. Likewise, we often try to regulate structures so that it is functionally impossible to commit an act that is perceived to be collectively “wrong” (legally or socially). Yet, in truth, we rely primarily on the things that are essential to humanness: the desire not to face physical harm and the desire to fit in socially.
Mediated environments throw these forces for a loop. I can say anything I want here and you can’t punch me. At least not while you’re sitting at your computer reading this. And I have a reasonable expectation that your potential anger will dissipate before you see me again. Furthermore, this fear of bodily harm is very ephemeral – we are much worse at evaluating whether an act will result in _future_ bodily harm than at determining if it will result in immediate harm. The lack of immediate harm is key here.
The bigger issue has to do with social consequences. I have no way of determining if you’re nodding along or scrunching your face in disgust and violent disagreement. I have to imagine your reaction as I write this (and I’m imagining the nods). I have no way of adjusting the next paragraph according to your implicit responses while reading this paragraph, both because I can’t see you and because you’re reading this in a time-shifted manner. Furthermore, unless you explicitly provide feedback (like comments), I have no real understanding that you’re out there let alone what you thought of my post. The lack of social feedback sucks, but the lack of immediate social consequences can be far more dangerous.
Impression management is a core process of human participation in social situations. I try to present myself in the way that I want to be received and, based on your feedback, I adjust my presentation. This is not easily learned; teenagers often struggle with it (an “identity crisis” is what happens when one’s imagined self doesn’t mesh with how one is perceived), and adults are by no means perfect at it either. We all learn through experience, which is why social interaction is crucial.
Yet, in mediated environments, impression management is stilted. There’s no implicit feedback and explicit feedback is minimal at best (“nice picture” isn’t really informative). The immediate social consequences also aren’t there because there’s no way of knowing if someone just walked away. As a result, social norms aren’t really enforced online, and without this reinforcement, it’s easy to break them without even knowing it.
This gets even trickier when you remember that networked publics bring together people from all sorts of environments with fundamentally different sets of social norms and expectations. Many imagine a melting pot where a new set of collective norms evolves, but because it’s hard to provide social feedback, that doesn’t happen. It’s more like a rotting salad bowl.
Now, add in the fact that people regularly seek attention (even negative attention) in public situations and that public forums notoriously draw in those who are lonely, bored, desperate, angry, depressed, and otherwise not in their best form. Mix this with the lack of social feedback and you’ve got a recipe for disaster. There are few consequences for negative behaviors, but they generate a whole lot of attention.
The question remains: is this the fault of the environment? In some sense, yes, because the architectural underpinnings of these environments don’t allow for social feedback or meaningful social (or bodily) consequences. This is where legal folks get into a tizzy because they think that legal consequences will solve everything. For this reason, they often argue against anonymity, viewing it as a barrier to regulating social behavior online. Unfortunately, this argument is flawed. While legal consequences certainly deter some people from some acts, they do not deter everything. If they did, we wouldn’t need jails and murder would be a thing of the past. More problematically, most of what needs regulating in social environments online is not a rupture of law but a rupture of social decorum. “He’s being mean” is not something that the law really wants to involve itself with.
So then how do we fix it? Is it a matter of design? Do we need to bake social feedback loops and consequences into the core of our technologies? If so, how?
Alternatively, is there a way to socialize people into an environment where they do “what’s right” simply because it’s right? Of course, this question extends beyond the internet. I fear that as a society, we are relying more on legal regulation and less on social regulation, and I can’t work out why. But perhaps the problem is not the internet but a general lack of collectively understood everyday norms. Older people certainly spend enough time bitching about “kids these days,” but there are all sorts of reasons why building and maintaining collective social norms is hard: age segregation, class segregation, and homophily more broadly. We can blame overworked adults, cars, the lack of public spaces, single-family social units, and other such bits for contributing to homophily and the lack of collective social norms.
But here’s where I think there’s an interesting sociological puzzle. What network structures result in strong collective norms? What forces are needed to create those kinds of social networks? (This is a classic question of tolerance… we know fairly well that diverse networks have higher levels of tolerance, not surprisingly.) Given that universal unitedness isn’t really going to happen, what are the structural changes that increase norm maintenance?
As for the internet, mass media hype aside, I bet that the internet is statistically nicer than it was when I was growing up. While many public forums and community sites like Slashdot are still bogged down with crud, most people are going online to interact with people that they know. There’s only so much you can get away with when you’re going to see the person the next day. The time delay might not be ideal for social feedback, but the prospect of facing someone again certainly helps.