Frameworks for Understanding the Future of Work

Technology is changing work. It’s changing labor. Some imagine radical transformations, both positive and negative. Words like robots and drones conjure up all sorts of science fiction imaginings. But many of the transformations that are underway are far more mundane and, yet, phenomenally disruptive, especially for those who are struggling to figure out their place in this new ecosystem. Disruption, a term of endearment in the tech industry, sends shudders down the spine of many, from those whose privilege exists because of the status quo to those who are struggling to put bread on the table.

A group of us at Data & Society decided to examine a variety of emergent disruptions that affect the future of work. Thanks to tremendous support from the Open Society Foundations, we’ve produced five working papers that help frame the issues at play. We’re happy to share them with you today.

  • Understanding Intelligent Systems unpacks the science fiction stories of robots to look at the various ways in which intelligent systems are being integrated into the workforce in both protective and problematic ways. Much of what’s at stake in this domain stems from people’s conflicting values regarding robots, drones, and other intelligent systems.
  • Technologically Mediated Artisanal Production considers the disruptions introduced by 3D printing and “maker culture,” as the very act of physical production begins to shift from large-scale manufacturing to localized creation. The implications for the workforce are profound, but there are other huge potential shifts here, ranging from positive possibilities like democratizing design to more disconcerting concerns like increased environmental costs.
  • Networked Employment Discrimination examines the automation of hiring and the implications this has on those seeking jobs. The issues addressed here range from the ways in which algorithms automatically exclude applicants based on keywords to the ways in which people are dismissed for not having the right networks.
  • Workplace Surveillance traces the history of efforts to use tracking technologies to increase efficiency and measure productivity while decreasing risks for employers. As new technologies come into the workplace to enable new forms of surveillance, a whole host of ethical and economic questions emerge.
  • Understanding Fair Labor Practices in a Networked Age dives into the question of what collective bargaining and labor protections look like when work is no longer cleanly delineated, bounded, or structured within an organization, as is the case for those engaged in peer economy work. This is far from an easy issue, and we seek to show the complexity of trying to get at fair labor in today’s economy.

Each of these documents provides a framework for understanding the issues at play while also highlighting the variety of questions that go unanswered. We hope that these will provide a foundation for those trying to better understand these issues, and we see this as just the beginning of much-needed work in these areas. As we were working on these papers, we were delighted to see a wide variety of investigative journalism into these issues, and we hope that much more work is done to better understand the social and cultural dimensions of these technological shifts. We look forward to doing more work in this area and would love to hear feedback from others, including references to other work and efforts to address these issues. Feel free to contact us at feedback@datasociety.net.

(All five papers were authored by a combination of Alex Rosenblat, Tamara Kneese, and danah boyd; author order varies by document. This work was supported by the Open Society Foundations and is part of ongoing efforts at Data & Society to better understand the Future of Labor.)

eyes on the street or creepy surveillance?

This summer, with NSA scandal after NSA scandal, the public has (thankfully) started to wake up to issues of privacy, surveillance, and monitoring. We are living in a data world, and there are serious questions to ask and contend with. But part of what makes this data world messy is that it’s not as simple as saying that all monitoring is always bad. Over the last week, I’ve been asked by a bunch of folks to comment on the report that a California school district hired an online monitoring firm to watch its students. This is a great example of just how complicated these situations can be.

The media coverage focuses on how the posts that they are monitoring are public, suggesting that this excuses their actions because “no privacy is violated.” We should all know by now that this is a terrible justification. Just because teens’ content is publicly accessible does not mean that it is intended for universal audiences nor does it mean that the onlooker understands what they see. (Alice Marwick and I discuss youth privacy dynamics in detail in “Social Privacy in Networked Publics”.) But I want to caution against jumping to the opposite conclusion because these cases aren’t as simple as they might seem.

Consider Tess’ story. In 2007, she and her friend killed her mother. The media reported it as “girl with MySpace kills mother,” so I decided to investigate the case. For a year and a half, she had documented on a public MySpace page her struggles with her mother’s alcoholism and abuse, her attempts to run away, and her efforts to seek help. When I reached out to her friends after she was arrested, I learned that they had reported their concerns to the school but no one did anything. Later, I learned that the school didn’t investigate because MySpace was blocked on campus so they couldn’t see what she had posted. And although the school had notified social services out of concern, there wasn’t enough evidence to move forward. What became clear in this incident – and many others that I tracked – is that there are plenty of youth crying out for help online on a daily basis. Youth who could really benefit from the fact that their material is visible and someone is paying attention.

Many youth cry out for help through social media. Publicly, often very publicly. Sometimes for an intended audience. Sometimes as a call to the wind for anyone who might be paying attention. I’ve read far too many suicide notes and abuse stories to believe that privacy is the only viable frame here. One of the most heartbreaking was from a girl who was commercially sexually exploited by her middle-class father. She had gone to her school, which had helped her go to the police; the police refused to help. She published every detail on Twitter about exactly what he had done to her and all of the people who had failed to help her. The next day she died by suicide. In my research, I’ve run across too many troubled youth to count. I’ve spent many a long night trying to help teens I encounter connect with services that can help them.

So here’s the question that underlies any discussion of monitoring: how do we leverage the visibility of online content to see and hear youth in a healthy way? How do we use the technologies at our disposal to protect them rather than to punish them? We shouldn’t ignore youth who are using social media to voice their pain in the hopes that someone who cares might stumble across their pleas.

Urban theorist Jane Jacobs used to argue that the safest societies are those where there are “eyes on the street.” What she meant by this was that healthy communities looked out for each other, were attentive to when others were hurting, and were generally present when things went haywire. How do we create eyes on the digital street? How do we do so in a way that’s not creepy?  When is proactive monitoring valuable for making a difference in teens’ lives?  How do we make sure that these same tools aren’t abused for more malicious purposes?

What matters is who is doing the looking and for what purposes. When the looking is done by police, the frame is punitive. But when the looking is done by caring, concerned, compassionate people – even authority figures like social workers – the outcome can be quite different. However well-intended law enforcement may be, its role is to uphold the law, and people perceive its presence as oppressive even when it’s trying to help. And, sadly, when law enforcement is involved, it’s all too likely that someone will find something wrong. And then we end up with the kinds of surveillance that punish.

If there’s infrastructure put into place for people to look out for youth who are in deep trouble, I’m all for it. But the intention behind the looking matters the most. When you’re looking for kids who are in trouble in order to help them, you look for cries for help that are public. If you’re looking to punish, you’ll misinterpret content, take what’s intended to be private and publicly punish, and otherwise abuse youth in a new way.

Unfortunately, what worries me is that systems put into place to help often get used to punish. It’s a slippery slope: the designers and implementers never intended for these systems to be used that way. But once they’re there….

So here’s my question to you. How can we leverage technology to provide an additional safety net for youth who are struggling without causing undue harm? We need to create a society where people are willing to check in on each other without abusing the power of visibility. We need more eyes on the street in the Jacobsian sense, not in the surveillance state sense. Finding this balance won’t be easy, but I think that it behooves us not to jump to extremes. So what’s the path forward?

(I discuss this issue in more detail in my upcoming book “It’s Complicated: The Social Lives of Networked Teens.”  You can pre-order the book now!)

where “nothing to hide” fails as logic

Every April, I try to wade through mounds of paperwork to file my taxes. Like most Americans, I’m trying to follow the law and pay all of the taxes that I owe without getting screwed in the process. I try to make sure that every donation I made is backed by proof, and that every deduction is backed by logic and documentation that I’ll be able to make sense of three to seven years later. Because, like many Americans, I completely and utterly dread the idea of being audited. Not because I’ve done anything wrong, but the exact opposite. I know that I’m filing my taxes to the best of my ability, and yet I also know that if I became a target of interest to the IRS, they’d inevitably find some checkbox I forgot to check or some subtle miscalculation that I didn’t see. And so what makes an audit intimidating and scary is not that I have something to hide but that proving oneself innocent takes time, money, effort, and emotional grit.

Sadly, I’m getting to experience this right now, as Massachusetts refuses to believe that I moved to New York mid-last-year. It’s mind-blowing how hard it is to summon up the paperwork that “proves” to them that I’m telling the truth. When it was discovered that Verizon (and presumably other carriers) was giving metadata to government officials, my first thought was: wouldn’t it be nice if the government would use that metadata to actually confirm that I was in NYC, not Massachusetts? But that’s the funny thing about how data is used by our current government. It’s used to create suspicion, not to confirm innocence.

The frameworks of “innocent until proven guilty” and “guilty beyond a reasonable doubt” are really really important to civil liberties, even if they mean that some criminals get away. These frameworks put the burden on the powerful entity to prove that someone has done something wrong. Because it’s actually pretty easy to generate suspicion, even when someone is wholly innocent. And still, even with this protection, innocent people are sentenced to jail and even given the death penalty. Because if someone has a vested interest in you being guilty, it’s often viable to paint that portrait, especially if you have enough data. Just watch as the media pulls up random quotes from social media sites whenever someone hits the news to frame them in a particular light.

It’s disturbing to me how often I watch as someone’s likeness is constructed in ways that contort the image of who they are. This doesn’t require a high-stakes political issue. This is playground stuff. In the world of bullying, I’m astonished at how often schools misinterpret situations and activities to construct narratives of perpetrators and victims. Teens get really frustrated when they’re positioned as perpetrators, especially when they feel as though they’ve done nothing wrong. Once the stakes get higher, all hell breaks loose. In “Sticks and Stones,” Emily Bazelon details how media and legal involvement in bullying cases means that they often spin out of control, as they did in South Hadley. I’m still bothered by the conviction of Dharun Ravi in the highly publicized death of Tyler Clementi. What happens when people are tarred and feathered as symbols for being imperfect?

Of course, it’s not just one’s own actions that can be used against one’s likeness. Guilt-by-association is a popular American pastime. Remember how the media used Billy Carter to embarrass Jimmy Carter? Of course, it doesn’t take the media or require an election cycle for these connections to be made. Throughout school, my little brother had to bear the brunt of teachers who despised me because I was a rather rebellious student. So when the Boston marathon bombing occurred, it didn’t surprise me that the media went hog-wild looking for any connection to the suspects. Over and over again, I watched as the media took friendships and song lyrics out of context to try to cast the suspects as devils. By all accounts, it looks as though the brothers are guilty of what they are accused of, but that doesn’t make their friends and other siblings evil or justify the media’s decision to portray the whole lot in such a negative light.

So where does this get us? People often feel immune from state surveillance because they’ve done nothing wrong. This rhetoric is perpetuated on American TV. And yet the same media that tells them they have nothing to fear will turn on them if they happen to be in close contact with someone who is of interest to the state – or if they themselves become the subject of state interest. And it’s not just about now; it’s about always.

And here’s where the implications are particularly devastating when we think about how inequality, racism, and religious intolerance play out. As a society, we generate suspicion of others who aren’t like us, particularly when we believe that we’re always under threat from some outside force. And so the more that we live in doubt of other people’s innocence, the more that we will self-segregate. And if we’re likely to believe that people who aren’t like us are inherently suspect, we won’t try to bridge those gaps. This creates societal ruptures and undermines any ability to create a meaningful republic. And it reinforces any desire to spy on the “other” in the hopes of finding something that justifies such an approach. But, like I said, it doesn’t take much to make someone appear suspect.

In many ways, the NSA situation that’s unfolding in front of our eyes is raising a question that is critical to the construction of our society. These issues cannot be washed away by declaring personal innocence. A surveillance state will produce more suspect individuals. What’s at stake has to do with how power is employed, by whom, and in what circumstances. It’s about questioning whether or not we still believe in checks and balances on power. And it’s about questioning whether or not we’re OK with continuing to move toward a system that treats entire classes and networks of people as suspect. Regardless of whether or not you’re in one of those classes or networks, are you OK with that being standard fare? Because what is implied in that question is a much uglier one: Is your perception of your safety worth the marginalization of other people who don’t have your privilege?

meandering thoughts on the NSA scandal

As an activist, a geek, and a privacy scholar, I’ve been watching the NSA scandal unfold with a mixture of curiosity, outrage, and skepticism. I don’t feel as though I have enough information yet to make an informed opinion about exactly what the State is doing or how tech companies are involved, let alone the implications of these procedures. But one thing I do know is that most Americans are going to shrug their shoulders and move on while most of my friends are going to rally for increased transparency, governmental oversight, corporate commitments to resist governmental abuse, and efforts to better inform the public. And although I share all of their values and desires, I also feel the need to reflect on why I think that our activism as it is currently constructed is not going to rally the mainstream.

Whenever I asked my British grandfather any ethical question about his military service, I received one consistent reply: “for God and country.” He was a bomber pilot. And as a young activist, I couldn’t understand how he could table any ethical question that way. So many innocent people died as a byproduct of his efforts to kill off Nazis. I never doubted the value of his service, but didn’t he ever wonder about the random people who were killed in the process? No. “For God and country.”

I’m consistently amazed by how many Americans, who distrust the State’s “socialist” agenda, are fully supportive of any effort by the State to protect citizens from “terrorists” and other perceived miscreants. All too often, this is cloaked in prejudicial language, focused on a narrative of “them” that is marked as other because of race, ethnicity, or religion. Ironically, even though it’s discussed as being about citizens vs. the other, naturalized citizens and children of naturalized citizens often get categorized as the other when their race, ethnicity, or religion is part of the broader feared other.

Embedded in this desire to be protected from the other is people’s belief that the State will never use sweeping power to surveil them or their friends, only the other. Some people recognize that they may end up in the large databases, but they assume they’ll be thrown away because they’re irrelevant. And besides, they’ve done nothing wrong. They have nothing to hide. Christianity often plays a role here, as people feel as though they’re already being watched and judged for their actions. And this is how we get back to “for God and country.”

When people view the State – or its military – as being a source of good to protect the populace from evil, they’re often willing to accept that actions will be taken to enhance security that may result in surveillance. They don’t necessarily see this as a trade-off between civil liberties and security because they don’t think that they’ll feel any restriction on *their* civil liberties. Rather, only people who’ve done something wrong will. And thus anyone who does feel a restriction on civil liberties must be doing something wrong.

On the flipside, I’m always astonished by how normative surveillance is in poverty-stricken communities. Surveillance is commonplace, and many poor people are used to having to fork over tremendous amounts of personal information to get social services. And, in communities defined by practices like “stop and frisk,” the idea of not being watched and targeted is completely alien. So when these groups find out that the State is monitoring mediated interactions, why should they be surprised? Why should they react? From their perspective, it’s just another tool for the State to do what it’s always been doing, only perhaps without the direct costs to dignity that many of these people face on a regular basis.

So who will be outraged? Who will be shocked? Who will be surprised? Mostly, I expect, my friends. All told, my friends are a highly educated, highly connected, highly privileged lot who are passionate about changing the world through making, education, research, and activism. By and large, my friends’ only negative interactions with law enforcement are through protesting or other efforts to stand up to The Man. They expect civil liberties to protect them as they push for causes that they believe are just. They know (at least in theory) that the legal process is broken for less privileged people, but they still expect that it’ll work for them. Or they at least believe that they can call on their networks to bail them out, publicize their case, and generally support them to right any wrong. They have a widespread faith in fairness and justice, even when they’re fighting to combat inequality and injustice.

No activist wants to hear about secret abuses of power because it tilts the playing field, rendering challenges to the status quo even more difficult. Even when those very same activists have a healthy paranoia and believe that their foes are secretly abusing power. But “proof” is different. “Proof” is a rallying call, a justification for long-standing and difficult efforts to speak truth to power. “Proof” reinforces one’s beliefs, while also serving as fuel for being angry that more people don’t get angry. But it also blinds people from seeing why others don’t necessarily jump on their bandwagon because of their own values, beliefs, and assumptions.

I’m glad that my friends are energized and determined to fight harder to make a more just world. And I understand why they’re scared and angered by what’s potentially being revealed. We’re all easy targets to watch because we’re outspoken and we extensively use technology to coordinate our change-making efforts. And our networks are full of people who are politically suspect – particularly activists, hackers, and foreign nationals from problematic nations. In many ways, we’re more the targets of the panopticon than so-called terrorists. Because destabilizing our privilege and our belief in justice means that we can be controlled by fear. And so while I suspect that my friends will continue to speak of civil liberties and marginalized peoples, I can’t help but wonder if these kinds of revelations have more implications for activists than for anyone else. And if that’s the case, then what?

Update 9 June 2013 @ 5:53PM: Today, Edward Snowden revealed that he is a patriotic American and the NSA whistleblower. This is most likely going to change every aspect of what unfolds, how the American public reacts, and what the long term implications of this story are.  But, at this point, it’s hard to tell exactly where the chips will fall. I am hopeful that this means more people will engage. At the same time, I’m even more afraid for my activist friends. But I don’t yet have the foggiest clue of what the implications of all of this will mean.