Icarus Diving has the most hysterical post called Google the Magnificent which addresses the peculiarity of a “how do you use” search on Google resulting in the following suggestions:
As he puts it, “Wow! That’s amazing! I had no idea I wanted to know any of those things! And wasn’t that a great example of what Web 2.0 has to offer? Well, keep at it guys. Any month now you’ll be making the same impression on people that paper clip thing on Windows did.” I can’t duplicate the humor of his post, so read it in full.
I reference this because I think it raises a really important issue. We often talk about the power of collective knowledge/questioning and the transparency of such information without thinking about the moral issues. On one hand, it’s a fascinating insight into what people are looking for. On the other, it’s kinda disturbing. What if the queries were “How do you use a machine gun?” or “How do you use a hanger for an abortion?” ::shudder:: Regardless of where you stand on these issues, such queries would make you want to reach out to the person asking them, to see if you can help. But you can’t. Does the machine have a moral responsibility to prevent people’s dangerous acts? Most people would probably say no. But what if the machine makes its knowledge transparent to people? What happens when those people feel responsible, but only the machine has the ability to communicate back to the person in trouble?
Furthermore, how would you feel about your own query (or about the system) if a suggested query like that came up? The things that disturb our moral senses stick with us; they are hard to get out of our heads. Sometimes there are costs to making a machine’s knowledge visible to people who have nothing to do with the interaction between the person and the machine. It’s eavesdropping, and it’s not always wonderful to overhear things.