why privacy issues matter… to me
Bloody Gmail (and the scarier A9) has me back to thinking about my love/hate relationship with privacy issues and my deep need to unpack the term and insert the issues of vulnerability into the discussion. Privacy is a loaded term. I’ve heard way too many people talk past one another thinking that they’re both talking about privacy issues. It’s a slippery discussion and i leave it to Dourish to fully flesh out why. But i do think that there are important issues that must be teased out in order to have a conversation about privacy, vulnerability or any of our data woes.
Key privacy-related questions
Given XYZ situation, i ask myself two key privacy-related questions:
1) Does XYZ make any person or group of persons feel icky? Who? Why?
2) Are there any rational scenarios of how XYZ can be abused by the creators, potential hackers, or ill-advised governments/coups?
[Note: these are my questions for myself and thus i define rational, a notably arbitrary definition that falls under the "i know it when i see it" category. The key anecdote that i keep in my head is that at the turn of the century, Holland (and other countries) collected religion as part of their census data. In 1939, that data was horribly horribly abused. This may not have appeared to be a rational situation in the 1920s, but it is in my scope of the possible now.]
Reasons for the ickiness factor
First, i address the ickiness factor. I immediately disregard any groups that involve the paranoid from my list of ickiness contenders that must be addressed. I do not exclude the marginalized. Often, the ‘why’ answer for this group has to do with heightened walls around what is normative and what is not. Given that i’m politically all-in-favor of challenging normative values, i recognize their plight and pay special attention to it, albeit reflexively so.
Of the groups who fall into the ickiness reaction zone, i’ve identified a few reasons why there’s usually a reaction to XYZ:
- XYZ makes someone feel at risk of theft, notably identity theft. This reaction usually comes from people who have experienced identity theft, a growing group.
- XYZ asserts values or normative boundaries that feel uncomfortable. Example: you tend to be hyper-aware of demographic requests when your race/religion/sexuality/gender are not listed; thus, you feel invaded in ways that you wouldn’t feel if you fit the mold perfectly.
- XYZ opens the possibility of having material available to an undesired audience. This is a control issue. Most frequently, the undesired audience consists of known individuals with whom the individual has a relationship but that relationship does not include the sharing of material required by XYZ.
- XYZ makes information available to authorities with power over the individual. This is not simply a fear of the paranoid. This is a rational concern of many people who reside in countries whose governments have abused their power, and of individuals who work in companies whose bosses have regulated employees’ behavior.
Vulnerability embedded in ickiness
This ickiness feeling in relation to ‘privacy’ is what i call vulnerability. Something that XYZ has done has made people feel vulnerable to potentially abusive strangers, cultures and cultural norms, known others, and institutions with power. I am particularly interested in rational constructions of vulnerability, particularly amongst those who have felt the fire. We already live in a culture of fear – i’m not interested in magnifying it.
Outside of those who live in a fear-for-fear’s-sake mentality, there’s a pretty consistent set of patterns regarding vulnerability:
- New situation raises people’s vulnerability concerns; walls go up
- Situation appears to cause no harm; walls start lowering
- Incentives are used to encourage participation; walls lower faster
- Vulnerability comes to forefront with resultant situation; walls spike
Point two is where the concerns slumber and why civil rights activists are essential. People’s innate vulnerability concerns definitely subside over time. Incentives definitely work, particularly when the consequences are not high.
While you may not give out demographic information just because, you will probably give it for the chance of winning a Porsche. For most people, this isn’t an issue of high vulnerability and the consequences are low, so they don’t need a strong incentive. Take it to the next level. What will it cost to have a bot track your web surfing? Many people will do it… but the necessary incentive is usually more than dreadful odds at winning a Porsche. Take it to the next level. What will it take for you to be willing to turn your personal web surfing data over to your boss, lover or parents? Surfed any porn lately? The incentive (or, more likely, extreme guilt/requirement) must be high because the consequences of having to face your actions are much higher, particularly if you weren’t prepared to turn over your data to those with power over you. Note that for many people, the fear of turning this information over to a known undesired audience is far more threatening than turning it over to institutions; this is not the case in certain countries, where vulnerability to dreadful governments runs much deeper than vulnerability to known individuals. A lot has to do with power and the ability to enforce consequences for undesired behavior.
Why we need civil rights activists, legal changes and architects
Let me dig out of this hole and return to the civil rights activists. As people’s concerns lower, they’re willing to tolerate much more invasive access to data because they only see the incentives and they don’t see the consequences. This is rational. We tend to operate on local, not meta levels in everyday life. The role of the civil rights activist is to go meta and deal with my second question above – can any rational abuse of data be expected? Their role is to look at the larger picture and protect people from engaging in localized decisions that might harm the larger picture.
There are usually two approaches that said activists take:
1) Try to educate the masses.
2) Try to stop XYZ from happening through any means possible.
Education is nice and it works locally through social networks, but i genuinely do not believe that privacy education (which usually works by inserting fears) will overcome the incentives. Furthermore, the incentives will only increase, and living in a culture of fear sucks; even Americans have started to ignore the bloody terrorist warning color markers. Of course, a moment of super-fear, followed by its slow decline into disregard, always leaves people on greater guard than before. But i wouldn’t want the education camp to educate by creating situations that instigated super-fear. Leave that to governments.
I should clarify… i’m not entirely opposed to education; i just don’t believe that it’s the solution. Let’s keep it in mind as the social norms part of Lessig’s 4 point regulation scheme – valuable as a contributor, but not effective as the sole approach.
Then there are the systemic changes. Going with Lessig, there are three types of systemic changes that can be made – the market, the law and the architecture. Personally, i think that the market is the reason that things are being moved in this direction and thus nearly impossible to swing, so i believe that more effective approaches can be made on the law and the architecture side. Architecture is a bit more obvious, except that it is inherently tied into the market (or government). That kinda leaves law. And law continues to become more fubared. One excuse is that it is in bed with the market. Another excuse is that it’s fending off the paranoids.
The reality, i believe, ties into how law negotiates social norms. I wish i remembered the details, but i remember learning once that social practices are often enough to affirm laws. In other words, if a law and social practices are primarily in cahoots, it is unlikely that the law will change. It is only when there are significant differences between them that change is likely to occur. In other words, if people are tolerant of invasive practices, why regulate against them?
This is where i start to believe in the education branch of the civil rights movement. The key shouldn’t be to make people see the world differently, but stall them enough that they don’t assimilate to problematic breaches of privacy so that laws can be changed. Of course, i don’t know how to do this and thus, i suspect that it will take extreme conditions of masses feeling vulnerable to upset the law structure. (It is for this reason that Europe is much slower about opening up privacy… they remember WWII.)
The opportunity for designers and why i’m involved
Bring this back to me. From my perspective, a lot of the architectural decisions that induce vulnerability emerge from naivety, not poor intention. I genuinely believe that many creators really meant to do the right thing. The problem is that their construction of how to do the right thing is about privacy, not vulnerability. They only imagine how to address the data, not how to address people’s relationship with the data. The approaches are fundamentally about creating control or transparency. I’ve never found anyone who really thought through the implications of having all of the data in the first place. And most designers don’t realize the cultural norms that they insert into a system. Also, control is really really hard when people are trying to manage an external representation of their information. These systems insert new architectures: persistence, searchability, lurkers, etc. Control doesn’t work when people don’t know how to operate the controls. As for transparency, i am horrified by most people’s reading of Brin. Universal transparency will only heighten vulnerability, particularly at the local level. It is not a solution for most of the situations that i’m concerned with.
So, as i see it, i have two roles as an activist on this issue:
- Educate people to conceptualize vulnerability and go through the exercise of thinking about who a design might affect, how, and why. Encourage them to minimize vulnerability in their design, not simply protect privacy.
- Work directly in domains that are all about vulnerability management and dive deep into the design issues with a conscientious perspective trying to maximize the protections afforded to users.
Dear me that was a rant…