In "Implicit or creepy?", David is dead-on, and i would like to expound on this.
First, if you aren’t familiar with the lessons of Cobot, you should be. Cobot was a nice friendly little bot that sat in LambdaMOO, collecting data for its masters. Members of the MOO were bothered by this and felt that Cobot should give back to the community it was observing, like any good social scientist. So it did. You could ask Cobot anything about the social patterns going on and the data it was collecting. People started asking ego-centric questions: “Who do i talk to the most?” and such. And then, people started asking who other people talked to the most. Trouble emerged from there. All of a sudden, human jealousy reared its head. People were irate that those who they spoke to the most did not speak to them the most. What did this say about reciprocal value? Gah!
Cobot’s willingness to provide social data created a social rupture because it was reporting data, not meaning. The people accessing that data, though, were deriving meaning: they were using coarse data about social relationships to infer something much deeper. Sound familiar?
I talk to Phil from the corner deli more frequently than to my best friend or my mother, simply because of proximity. Yet they play a much more central emotional role in my life than Phil does. Quantity and quality are often not correlated. Still, if some system were to rank my relations and Phil came out above my mom, damn straight she’d be pissed.
The way that systems, and the users of those systems, interpret our data often affects how we interact with them. When Viegas and i were visualizing email data, we often joked that our systems motivated you to write more messages to friends with whom you had a strong emotional connection but apparently infrequent email contact, simply so that they would play a more visible role.
In the case of David’s metadata, this is particularly true. How many of us can truly list our favorite books? We know that the list will be publicly displayed. What we list is a performance: we try to select titles that convey something meaningful about us to the viewer. We count on that audience, on that interpretation, in selecting our titles. We are performing for that human audience to interpret, not for the system. Yet if the system starts interpreting our data, we may shift our sense of audience. But then what is it that either the system or the humans are interpreting? Are they capturing essence? What happens when the system re-projects its interpretations back to a human audience? How do we then deal with this doubly-mediated projection of self to a human audience?
It is not simply creepy, it’s outright destabilizing.