Tag Archives: privacy

What If Social Media Becomes 16-Plus? New battles concerning age of consent emerge in Europe

At what age should children be allowed to access the internet without parental oversight? This is a hairy question that raises all sorts of issues about rights, freedoms, morality, skills, and cognitive capability. Cultural values also come into play full force on this one.

Consider, for example, that in the 1800s, the age of sexual (and marital) consent in the United States was between 10 and 12 (except Delaware, where it was seven). The age of consent in England was 12, and it’s still 14 in Germany. This is discomforting for many Western parents who can’t even fathom their 10- or 12-year-old being sexually mature. And so, over time, many countries have raised the age of sexual consent.

But the internet has raised new questions about consent. Is the internet more or less risky than sexual intercourse?
How can youth be protected from risks they cannot fully understand, such as the reputational risks associated with things going terribly awry? And what role should the state and parents have in protecting youth?

This ain’t a new battle. These issues have raged since the early days of the internet. In 1998, the United States passed a law known as the Children’s Online Privacy Protection Act (COPPA), which restricts the kinds of data companies can collect from children under 13 without parental permission. Most proponents of the law argue that this intervention has stopped countless sleazy companies from doing inappropriate things with children’s data.
I have a more cynical view.

Watching teens and parents navigate this issue — and then surveying parents about it — I came to the conclusion that the law prompted companies to restrict access to under-13s, which then prompted children (with parental knowledge) to lie about their age. Worse, I watched as companies stopped innovating for children or providing services that could really help them.

Proponents often push back, highlighting that companies could get parental permission rather than just restrict children. Liability issues aside, why would they? Most major companies aren’t interested in 12-year-olds, so it’s a lot easier to comply with the law by creating a wall than going through a hellacious process of parental consent.

So here we are, with a U.S. law that prompts companies to limit access to 13-plus, a law that has become the norm around the globe. Along comes the EU, proposing a new law to regulate the flow of personal data, including a provision that would allow individual countries to restrict children’s access to the internet at any age (with a cap at age 16).

Implicitly, this means the European standard is to become 16-plus, because how else are companies going to build a process that gives Spanish kids access at 14, German kids at 16, and Italian kids at 12?
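To make that compliance burden concrete, here is a minimal sketch (in Python) of the kind of per-country age gate such a law would demand. The thresholds are the hypothetical ones from the question above, not actual law, and every name in it is invented for illustration:

```python
# Hypothetical per-country minimums, using the ages floated above.
# Under the proposal, each member state could pick any age up to 16.
MINIMUM_AGE_BY_COUNTRY = {
    "ES": 14,  # Spain (hypothetical)
    "DE": 16,  # Germany (hypothetical)
    "IT": 12,  # Italy (hypothetical)
}

# The cap in the proposal, and the only one-size-fits-all answer.
DEFAULT_MINIMUM_AGE = 16

def can_join_without_parental_consent(age: int, country: str) -> bool:
    """Check a user against their country's minimum age, if known."""
    return age >= MINIMUM_AGE_BY_COUNTRY.get(country, DEFAULT_MINIMUM_AGE)

def can_join_lazy(age: int) -> bool:
    """The wall: ignore the table and gate everyone at 16."""
    return age >= DEFAULT_MINIMUM_AGE
```

Maintaining that table (and reliably knowing which country a teen is in) is real work; the two-line wall at the bottom is what I'd expect most companies to actually ship.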
Many in the EU are angry at how American companies treat people’s data and respond to values of privacy. We saw this loud and clear when the European Court of Justice invalidated the “safe harbor” and in earlier issues, such as “the right to be forgotten.” Honestly? The Europeans have a right to be angry. They’re so much more thoughtful on issues of privacy, and many U.S. companies pretty much roll their eyes and ignore them. But the problem is that this new law isn’t going to screw American companies, even if it makes them irritable. Instead, it’s going to screw kids. And that infuriates me.

Implicit in this new law — and COPPA more generally — is an assumption that parents can and should consent on behalf of their children. I take issue with both. While some educated parents have thought long and hard about the flows of data, the identity work that goes into reputation, and the legal mechanisms that do or don’t protect children, they are few and far between.

Most parents don’t have the foggiest clue what happens to their kids’ data, and giving them the power to consent sure doesn’t help them become more informed. Hell, most parents don’t have enough information to make responsible decisions for themselves, so why are we trusting them to know enough to protect their children?
We’re doing so because we believe they should have control, that they have the right to control and protect their children, and that no company or government should take this away.

The irony is that this runs completely counter to the UN Convention on the Rights of the Child, a treaty that most responsible countries have signed. Every European country committed to making sure that children have the right to privacy — including a right to privacy from their parents. Psychotically individualistic and anti-government, the United States decided not to sign onto this empowering treaty because it was horrifying to U.S. sensibilities that the government would be able to give children rights in opposition to parents. But European countries understood that kids deserved rights. So why is the EU now suggesting that kids can’t consent to using the internet?

This legislation is shaped by a romanticization of parent-child relationships and an assumption of parental knowledge that is laughable.

But what really bothers me are the consequences to the least-empowered youth. While the EU at least made a carve-out for kids who are accessing counseling services, there’s no consideration of how many LGBTQ kids are accessing sites that might put them in danger if their parents knew. There’s no consideration for kids who are regularly abused and using technology and peer relations to get support. There’s no consideration for kids who are trying to get health information, privately. And so on. The UN Rights of the Child puts vulnerable youth front and center in protections. But somehow they’ve been forgotten by EU policymakers.

Child advocates are responding critically. I’m also hearing from countless scholars who are befuddled by this proposal and unsure why it is happening. And it doesn’t seem as though the EU process even engaged the public or experts on these issues before moving forward. So my hope is that some magical outcry will stymie this proposal sooner rather than later. But I’m often clueless when it comes to how lawmakers work.

What baffles me the most is the logic of this proposal given the likely outcomes. We know from the dynamics around COPPA that, if given the chance, kids will lie about their age. And parents will help them. But even if we start getting parental permission, this means we’ll be collecting lots more information about youth, going against the efforts to minimize information. Still, most intriguing is what I expect this will do to the corporate ecosystem.

Big multinationals like Facebook and Twitter, which operate in the EU, will be required to follow this law. All companies based in the EU will be required to comply with this law. But what about small non-EU companies that do not store data in the EU or work with EU vendors and advertisers? It’s unclear if they’ll have to comply because they aren’t within the EU’s reach. Will this mean that EU youth will jump from non-EU service to non-EU service to gain access? Will this actually end up benefiting non-EU startups who are trying to challenge the big multinationals? But doesn’t this completely undermine the EU’s efforts to build EU companies and services?

I don’t know, but that’s my gut feeling when reading the new law.
While I’m not a lawyer, one thing I’ve learned in studying young people and technology is that where there’s a will, there’s a way. And good luck trying to stop a 15-year-old from sharing photos with her best friend when her popularity is on the line.

I don’t know what will come from this law, but it seems completely misguided. It won’t protect kids’ data. It won’t empower parents. It won’t enhance privacy. It won’t make people more knowledgeable about data abuses. It will irritate but not fundamentally harm U.S. companies. It will help vendors that offer age verification become rich. It will hinder EU companies’ ability to compete. But above all else, it will make teenagers’ lives more difficult, make vulnerable youth more vulnerable, and invite kids to be more deceptive. Is that really what we want?

(This was originally posted on Bright on Medium.)

Which Students Get to Have Privacy?

There’s a fresh push to protect student data. But the people who need the most protection are the ones being left behind.

It seems that student privacy is trendy right now. At least among elected officials. Congressional aides are scrambling to write bills that one-up each other in showcasing how tough they are on protecting youth. We’ve got Congressmen Polis and Messer (with Senator Blumenthal expected to propose a similar bill in the Senate). Kline and Scott have a discussion draft of their bill out, while Markey and Hatch have reintroduced the bill they introduced a year ago. And then there’s Senator Vitter’s proposed bill. And let’s not even talk about the myriad state-level bills.

Most of these bills are responding in some way or another to a 1974 piece of legislation called the Family Educational Rights and Privacy Act (FERPA), which restricted what schools could and could not do with student data.

Needless to say, lawmakers in 1974 weren’t imagining the world of technology that we live with today. On top of that, legislative and bureaucratic dynamics have made it difficult for the Department of Education to address failures at the school level without going nuclear and just defunding a school outright. Meanwhile, schools lack security measures (because they lack technical sophistication), and they’re entering into all sorts of contracts with vendors that give advocates heartburn.

So there’s no doubt that reform is needed, but the question — as always — is what reform? For whom? And with what kind of support?

The bills are pretty spectacularly different, pushing for a range of mechanisms to limit abuses of student data. Some are fine-driven; others take a more criminal approach. There are also differences in who can access what data under what circumstances. The bills give different priorities to parents, teachers, and schools. Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate. (Not surprisingly, none of the bills provide for funding to help schools come up to speed.)

As a youth advocate and privacy activist, I’m generally in favor of student privacy. But my panties also get in a bunch when I listen to how people imagine the work of student privacy. As is common in Congress as election cycles unfold, student privacy has a “save the children” narrative. And this forces me to want to know more about the threat models we’re talking about. What are we saving the children *from*?

Threat Models

There are four external threats that I think are interesting to consider. These are the dangers that students face if their data leaves the education context.

#1: The Stranger Danger Threat Model. No matter how much data we have to challenge prominent fears, the possibility of creepy child predators lurking around schoolchildren still overwhelms any conversation about students, including their data.

#2: The Marketing Threat Model. From COPPA to the Markey/Hatch bill, there’s a lot of concern about how student data will be used by companies to advertise products to students or otherwise fuel commercial data collection that drives advertising ecosystems.

#3: The Consumer Finance Threat Model. In a post-housing bubble market, the new subprime lending schemes are all about enabling student debt, especially since students can’t declare bankruptcy when they default on their obscene loans. There is concern about how student data will be used to fuel the student debt ecosystem.

#4: The Criminal Justice Threat Model. Law enforcement has long been interested in student performance, but this data is increasingly desirable in a world of policing that is trying to assess risk. There are reasons to believe that student data will fuel the new policing architectures.

The first threat model is artificial (see: “It’s Complicated”), but it propels people to act and create laws that will not do a darn thing to address abuse of children. The other three threat models are real, but these threats are spread differently over the population. In the world of student privacy, #2 gets far more attention than #3 and #4. In fact, almost every bill creates carve-outs for “safety” or otherwise allows access to data if there’s concern about a risk to the child, other children, or the school. In other words, if police need it. And, of course, all of these laws allow parents and guardians to get access to student data with no consideration of the consequences for students who are under state supervision. So, really, #4 isn’t even in the cultural imagination because, as with nearly everything involving our criminal justice system, we don’t believe that “those people” deserve privacy.

The reason that I get grouchy is that I hate how the risks that we’re concerned about are shaped by the fears of privileged parents, not the risks of those who are already under constant surveillance, those who are economically disadvantaged, and those who are in the school-prison pipeline. #2-#4 are all real threat models with genuine risks, but we consistently take #2 far more seriously than #3 or #4, and privileged folks are more concerned with #1.

What would it take to actually consider the privacy rights of the most marginalized students?

The threats that poor youth face? That youth of color face? And the trade-offs they make in a hypersurveilled world? What would it take to get people to care about how we keep building out infrastructure and backdoors to track low-status youth in new ways? It saddens me that the conversation is constructed as being about student privacy, but it’s really about who has the right to monitor which youth. And, as always, we allow certain actors to continue asserting power over youth.

This post was originally published to The Message at Medium on May 22, 2015. Image credit: Francisco Osorio

What is Privacy?

Earlier this week, Anil Dash wrote a smart piece unpacking the concept of “public.” He opens with some provocative questions about how we imagine the public, highlighting how new technologies make heightened visibility possible. For example,

Someone could make off with all your garbage that’s put out on the street, and carefully record how many used condoms or pregnancy tests or discarded pill bottles are in the trash, and then post that information up on the web along with your name and your address. There’s probably no law against it in your area. Trash on the curb is public.

The acts that he describes are at odds with — or at least complicate — our collective sense of what’s appropriate. What’s at stake is not about the law, but about our idea of the society we live in. This leads him to argue that the notion of public is not easy to define. “Public is not just what can be viewed by others, but a fragile set of social conventions about what behaviors are acceptable and appropriate.” He then goes on to talk about the vested interests in undermining people’s conception of public and expanding the collective standards of what is public.

To get there, he pushes back at the dichotomy between “public” and “private,” suggesting that we should think of these as a spectrum. I’d like to push back even further to suggest that our notion of privacy, when conceptualized in relationship to “public,” does a disservice to both concepts. The notion of private is also a social convention, but privacy isn’t a state of a particular set of data. It’s a practice and a process, an idealized state of being, to be actively negotiated in an effort to have agency. Once we realize this, we can reimagine how to negotiate privacy in a networked world. So let me unpack this for a moment.

Imagine that you’re sitting in a park with your best friend talking about your relationship troubles. You may be in a public space (in both senses of that term), but you see your conversation as private because of the social context, not the physical setting. Most likely, what you’ve thought through is whether or not your friend will violate your trust, and thus your privacy. If you’re a typical person, you don’t even begin to imagine drones that your significant other might have deployed or mechanisms by which your phone might be tapped. (Let’s leave aside the NSA, hacker-geek aspect of this.)

You imagine privacy because you have an understanding of the context and are working hard to control the social situation. You may even explicitly ask your best friend not to say anything (prompting hir to say “of course not” as a social ritual).

As Alice Marwick and I traversed the United States talking with youth, trying to make sense of privacy, we quickly realized that the tech-centric narrative of privacy just doesn’t fit with people’s understandings and experience of it. They don’t see privacy as simply being the control of information. They don’t see the “solution” to privacy being access-control lists or other technical mechanisms of limiting who has access to information. Instead, they try to achieve privacy by controlling the social situation. To do so, they struggle with their own power in that situation. For teens, it’s all about mom looking over their shoulder. No amount of privacy settings can solve for that one.

While learning to read social contexts is hard, it’s especially hard online, where the contexts seem to be constantly destabilized by new technological interventions. As such, context becomes visible and significant in the effort to achieve privacy. Achieving privacy requires a whole slew of skills, not just in the technological sense, but in the social sense. Knowing how to read people, how to navigate interpersonal conflict, how to make trust stick. This is far more complex than people realize, and yet we do this every day in our efforts to control the social situations around us.

The very practice of privacy is all about control in a world in which we fully know that we never have control. Our friends might betray us, our spaces might be surveilled, our expectations might be shattered. But this is why achieving privacy is desirable. People want to be *in* public, but that doesn’t necessarily mean that they want to *be* public. There’s a huge difference between the two. As a result of the destabilization of social spaces, what’s shocking is how frequently teens have shifted from trying to restrict access to content to trying to restrict access to meaning. They get, at a gut level, that they can’t have control over who sees what’s said, but they hope to instead have control over how that information is interpreted. And thus, we see our collective imagination of what’s private colliding smack into the notion of public. They are less of a continuum and more of an entwined hairball, reshaping and influencing each other in significant ways.

Anil is right when he highlights the ways in which tech companies rely on conceptions of “public” to justify data collection practices. He points to the lack of consent, which signals what’s really at stake. When powerful actors, be they companies or governmental agencies, use the excuse of something being “public” to defend their right to look, they systematically assert control over people in a way that fundamentally disenfranchises them. This is the very essence of power and the core of why concepts like “surveillance” matter. Surveillance isn’t simply the all-being, all-looking eye. It’s a mechanism by which systems of power assert their power. And it is why people grow angry and distrustful. Why they throw fits over being experimented on. Why they cry privacy foul even when the content being discussed is, for all intents and purposes, public.

As Anil points out, our lives are shaped by all sorts of unspoken social agreements. Allowing organizations or powerful actors to undermine them for personal gain may not be illegal, but it does tear at the social fabric. The costs of this are, at one level, minuscule, but when added up, they can cause a serious earthquake. Is that really what we’re seeking to achieve?

(The work that Alice and I did with teens, and the implications that this has for our conception of privacy writ large, is written up as “Networked Privacy” in New Media & Society. If you don’t have library access, email me and I’ll send you a copy.)

(This entry was first posted on August 1, 2014 at Medium under the title “What is Privacy” as part of The Message.)

New White House Report on Big Data

I’m delighted to see that the White House has just released its report on “big data” — “Big Data: Seizing Opportunities, Preserving Values” along with an amazing collection of supporting documents. This report is the culmination of a 90-day review by the Administration, spearheaded by Counselor John Podesta. I’ve had the fortune to be a part of this process and have worked hard to share what I know with Podesta and his team.

In January, shortly after the President announced his intention to reflect on the role of big data and privacy in society, I received a phone call from Nicole Wong at the Office of Science and Technology Policy, asking if I’d help run one of the three public conferences that the Administration hoped to co-host as part of this review. Although I was about to embark on a book tour, I enthusiastically agreed, both because the goal of the project aligned brilliantly with what I was hoping to achieve with my new Data & Society Research Institute and also because one does not say no when asked to help Nicole (or the President). We hadn’t intended to publicly launch Data & Society until June, nor did we have all of the infrastructure necessary to run a large-scale event, but we had passion and gumption, so we teamed up with the great folks at New York University’s Information Law Institute (directed by the amazing Helen Nissenbaum) and called on all sorts of friends and collaborators to help us out. It was a bit crazy at times, but we did it.

In under six weeks, our amazing team produced six guiding documents and crafted a phenomenal event called The Social, Cultural & Ethical Dimensions of “Big Data.” On our conference page, you can find an event summary, videos of the sessions, copies of the workshop primers and discussion notes, a zip file of important references, and documents that list the participants, the schedule, and the production team. This amazing event was made possible through the generous gifts and institutional support of: Alfred P. Sloan Foundation, Ford Foundation, John D. and Catherine T. MacArthur Foundation, the John S. and James L. Knight Foundation, Microsoft Research, and Robert Wood Johnson Foundation. (These funds were not solicited or collected on behalf of the Office of Science & Technology Policy (OSTP) or the White House. Acknowledgment of a contributor by the Data & Society Research Institute does not constitute an endorsement by OSTP or the White House.) Outcomes from this event will help inform the National Science Foundation-supported Council on Social, Legal, and Ethical aspects of Big Data (spearheaded by the conference’s steering committee: danah boyd, Geoffrey C. Bowker, Kate Crawford, and Helen Nissenbaum). And, of course, the event we hosted helped shape the report that was released today.

Words cannot express how grateful I am to see the Administration seriously reflect on the issues of discrimination and power asymmetries as they grapple with both the potential benefits and consequences of data-centric technological development. Discrimination is a tricky issue, both because of its implications for individuals and because of what it means for society as a whole. In teasing out the issues of discrimination and big data, my colleague Solon Barocas pointed me to this fantastic quote by Alistair Croll:

Perhaps the biggest threat that a data-driven world presents is an ethical one. Our social safety net is woven on uncertainty. We have welfare, insurance, and other institutions precisely because we can’t tell what’s going to happen — so we amortize that risk across shared resources. The better we are at predicting the future, the less we’ll be willing to share our fates with others.

Navigating the messiness of “big data” requires going beyond common frames of public vs. private, collection vs. usage. Much to my frustration, the conversation around the “big data” phenomenon tends to get quickly polarized – it’s good or it’s bad, plain and simple. But it’s never that simple. The same tools that streamline certain practices and benefit certain individuals can have serious repercussions for other people and for our society as a whole. As the quote above hints at, what’s at stake is the very essence of our societal fabric. Building a healthy society in a data-centric world requires keeping one eye on the opportunities and one eye on the potential risks. While it’s not perfect, the report from the White House did a darn good job of striking this balance.

Not only did the White House team tease out many core issues for both the public and private sectors, but they helped scaffold a framework for policy makers. The recommendations they offer aren’t silver bullets, but they are reasonable first steps. Many will inevitably argue that they don’t go far enough (or, in some cases, go too far) – and I can definitely get nitpicky here – but that’s par for the course. This doesn’t dampen my appreciation. I’m still uber grateful to see the Administration take the time to tease out the complexity of the issues and offer a path forward that is not simply polarizing.

Please take a moment to read this important report. I’d love to hear your thoughts. Data & Society would love to hear your thoughts. And if you’re curious to know more about what I’ll be doing next with this Research Institute, please join our newsletter.

Psst: Academics – check out the last line of the report on page 68. Science and Technology Studies for teh win!

(Flickr credit: Stuart Richards)

Keeping Teens ‘Private’ on Facebook Won’t Protect Them

We’re afraid of and afraid for teenagers. And nothing brings out this dualism more than discussions of how and when teens should be allowed to participate in public life.

Last week, Facebook made changes to teens’ content-sharing options. They introduced the opportunity for those ages 13 to 17 to share their updates and images with everyone, not just with their friends. Until this change, teens could not post their content publicly even though adults could. When minors choose to make their content public, they are given a notice and a reminder in order to make it very clear to them that this material will be shared publicly. “Public” is never the default for teens; they must choose to make their content public, and they must affirm that this is what they intended at the point at which they choose to publish.

Representatives of parenting organizations have responded to this change negatively, arguing that this puts children more at risk. And even though the Pew Internet & American Life Project has found that teens are quite attentive to their privacy, and many other popular sites allow teens to post publicly (e.g. Twitter, YouTube, Tumblr), privacy advocates are arguing that Facebook’s decision to give teens choices suggests that the company is undermining teens’ privacy.

But why should youth not be allowed to participate in public life? Do paternalistic, age-specific technology barriers really protect or benefit teens?

One of the most crucial aspects of coming of age is learning how to navigate public life. The teenage years are precisely when people transition from being a child to being an adult. There is no magic serum that teens can drink on their 18th birthday to immediately mature and understand the world around them. Instead, adolescents must be exposed to — and allowed to participate in — public life while surrounded by adults who can help them navigate complex situations with grace. They must learn to be a part of society, and to do so, they must be allowed to participate.

Most teens no longer see Facebook as a private place. They befriend anyone they’ve ever met, from summer-camp pals to coaches at universities they wish to attend. Yet because Facebook doesn’t allow youth to contribute to public discourse through the site, there’s an assumption that the site is more private than it is. Facebook’s decision to allow teens to participate in public isn’t about suddenly exposing youth; it’s about giving them an option to treat the site as being as public as it often is in practice.

Rather than trying to protect teens from all fears and risks that we can imagine, let’s instead imagine ways of integrating them constructively into public life. The key to doing so is not to create technologies that reinforce limitations but to provide teens and parents with the mechanisms and information needed to make healthy decisions. Some young people may be ready to start navigating broad audiences at 13; others are not ready until they are much older. But it should not be up to technology companies to determine when teens are old enough to have their voices heard publicly. Parents should be allowed to work with their children to help them navigate public spaces as they see fit. And all of us should be working hard to inform our younger citizens about the responsibilities and challenges of being a part of public life. I commend Facebook for giving teens the option and working hard to inform them of the significance of their choices.

(Originally written for TIME Magazine)

eyes on the street or creepy surveillance?

This summer, with NSA scandal after NSA scandal, the public has (thankfully) started to wake up to issues of privacy, surveillance, and monitoring. We are living in a data world, and there are serious questions to ask and contend with. But part of what makes this data world messy is that it’s not as simple as saying that all monitoring is always bad. Over the last week, I’ve been asked by a bunch of folks to comment on reports that a California school district hired an online monitoring firm to watch its students. This is a great example of a situation that is complicated.

The media coverage focuses on how the posts that they are monitoring are public, suggesting that this excuses their actions because “no privacy is violated.” We should all know by now that this is a terrible justification. Just because teens’ content is publicly accessible does not mean that it is intended for universal audiences nor does it mean that the onlooker understands what they see. (Alice Marwick and I discuss youth privacy dynamics in detail in “Social Privacy in Networked Publics”.) But I want to caution against jumping to the opposite conclusion because these cases aren’t as simple as they might seem.

Consider Tess’ story. In 2007, she and her friend killed her mother. The media reported it as “girl with MySpace kills mother,” so I decided to investigate the case. For a year and a half, she had used a public MySpace page to document her struggles with her mother’s alcoholism and abuse, her attempts to run away, and her efforts to seek help. When I reached out to her friends after she was arrested, I learned that they had reported their concerns to the school, but no one did anything. Later, I learned that the school didn’t investigate because MySpace was blocked on campus, so administrators couldn’t see what she had posted. And although the school had notified social services out of concern, social services didn’t have enough evidence to move forward. What became clear in this incident – and many others that I tracked – is that there are plenty of youth crying out for help online on a daily basis. Youth who could really benefit from the fact that their material is visible and someone is paying attention.

Many youth cry out for help through social media. Publicly, often very publicly. Sometimes for an intended audience. Sometimes as a call to the wind for anyone who might be paying attention. I’ve read far too many suicide notes and abuse stories to believe that privacy is the only viable frame here. One of the most heartbreaking was from a girl who was commercially sexually exploited by her middle-class father. She had gone to her school, which had helped her go to the police; the police refused to help. She published every detail on Twitter about exactly what he had done to her and all of the people who had failed to help her. The next day she died by suicide. In my research, I’ve run across too many troubled youth to count. I’ve spent many a long night trying to help teens I encounter connect with services that can help them.

So here’s the question that underlies any discussion of monitoring: how do we leverage the visibility of online content to see and hear youth in a healthy way? How do we use the technologies that we have to protect them rather than focusing on punishing them?  We shouldn’t ignore youth who are using social media to voice their pain in the hopes that someone who cares might stumble across their pleas.

Urban theorist Jane Jacobs used to argue that the safest societies are those where there are “eyes on the street.” What she meant by this was that healthy communities looked out for each other, were attentive to when others were hurting, and were generally present when things went haywire. How do we create eyes on the digital street? How do we do so in a way that’s not creepy?  When is proactive monitoring valuable for making a difference in teens’ lives?  How do we make sure that these same tools aren’t abused for more malicious purposes?

What matters is who is doing the looking and for what purposes. When the looking is done by police, the frame is punitive. But when the looking is done by caring, concerned, compassionate people – even authority figures like social workers – the outcome can be quite different. However well-intended law enforcement may be, its role is to uphold the law, and people perceive its presence as oppressive even when officers are trying to help. And, sadly, when law enforcement is involved, it’s all too likely that someone will find something wrong. And then we end up with the kinds of surveillance that punish.

If there’s infrastructure put into place for people to look out for youth who are in deep trouble, I’m all for it. But the intention behind the looking matters the most. When you’re looking for kids who are in trouble in order to help them, you look for cries for help that are public. If you’re looking to punish, you’ll misinterpret content, take what’s intended to be private and publicly punish, and otherwise abuse youth in a new way.

Unfortunately, what worries me is that systems put into place to help often get used to punish. It’s a slippery slope: the designers and implementers never intended for these tools to be used that way. But once the infrastructure is there…

So here’s my question to you. How can we leverage technology to provide an additional safety net for youth who are struggling without causing undue harm? We need to create a society where people are willing to check in on each other without abusing the power of visibility. We need more eyes on the street in the Jacobs-ian sense, not in the surveillance-state sense. Finding this balance won’t be easy, but I think it behooves us not to jump to extremes. So what’s the path forward?

(I discuss this issue in more detail in my upcoming book “It’s Complicated: The Social Lives of Networked Teens.”  You can pre-order the book now!)

where “nothing to hide” fails as logic

Every April, I try to wade through mounds of paperwork to file my taxes. Like most Americans, I’m trying to follow the law and pay all of the taxes that I owe without getting screwed in the process. I try to make sure that every donation I make is backed by proof, and that every deduction is backed by logic and documentation that I’ll be able to make sense of three to seven years later. Because, like many Americans, I completely and utterly dread the idea of being audited. Not because I’ve done anything wrong, but the exact opposite. I know that I’m filing my taxes to the best of my ability, and yet I also know that if I became a target of interest from the IRS, they’d inevitably find some checkbox I forgot to check or some subtle miscalculation that I didn’t see. And so what makes an audit intimidating and scary is not that I have something to hide but that proving oneself to be innocent takes time, money, effort, and emotional grit.

Sadly, I’m getting to experience this right now, as Massachusetts refuses to believe that I moved to New York midway through last year. It’s mind-blowing how hard it is to summon up the paperwork that “proves” to them that I’m telling the truth. When it was discovered that Verizon (and presumably other carriers) was giving metadata to government officials, my first thought was: wouldn’t it be nice if the government would use that metadata to actually confirm that I was in NYC, not Massachusetts? But that’s the funny thing about how data is used by our current government. It’s used to create suspicion, not to confirm innocence.

The frameworks of “innocent until proven guilty” and “guilty beyond a reasonable doubt” are really really important to civil liberties, even if they mean that some criminals get away. These frameworks put the burden on the powerful entity to prove that someone has done something wrong. Because it’s actually pretty easy to generate suspicion, even when someone is wholly innocent. And still, even with this protection, innocent people are sentenced to jail and even given the death penalty. Because if someone has a vested interest in you being guilty, it’s often viable to paint that portrait, especially if you have enough data. Just watch as the media pulls up random quotes from social media sites whenever someone hits the news to frame them in a particular light.

It’s disturbing to me how often I watch as someone’s likeness is constructed in ways that contort the image of who they are. This doesn’t require a high-stakes political issue. This is playground stuff. In the world of bullying, I’m astonished at how often schools misinterpret situations and activities to construct narratives of perpetrators and victims. Teens get really frustrated when they’re positioned as perpetrators, especially when they feel as though they’ve done nothing wrong. Once the stakes get higher, all hell breaks loose. In “Sticks and Stones,” Emily Bazelon details how media and legal involvement in bullying cases means that they often spin out of control, as they did in South Hadley. I’m still bothered by the conviction of Dharun Ravi in the highly publicized death of Tyler Clementi. What happens when people are tarred and feathered as symbols for being imperfect?

Of course, it’s not just one’s own actions that can be used against one’s likeness. Guilt-through-association is a popular American pastime. Remember how the media used Billy Carter to embarrass Jimmy Carter? Of course, it doesn’t take the media or require an election cycle for these connections to be made. Throughout school, my little brother had to bear the brunt of teachers who despised me because I was a rather rebellious student. So when the Boston marathon bombing occurred, it didn’t surprise me that the media went hog wild looking for any connection to the suspects. Over and over again, I watched as the media took friendships and song lyrics out of context to try to cast the suspects as devils. By all accounts, it looks as though the brothers are guilty of what they are accused of, but that doesn’t make their friends and other siblings evil or justify the media’s decision to portray the whole lot in such a negative light.

So where does this get us? People often feel immune from state surveillance because they’ve done nothing wrong. This rhetoric is perpetuated on American TV. And yet the same media who tells them they have nothing to fear will turn on them if they happen to be in close contact with someone who is of interest to – or if they themselves are the subject of – state interest. And it’s not just about now, but it’s about always.

And here’s where the implications are particularly devastating when we think about how inequality, racism, and religious intolerance play out. As a society, we generate suspicion of others who aren’t like us, particularly when we believe that we’re always under threat from some outside force. And so the more that we live in doubt of other people’s innocence, the more that we will self-segregate. And if we’re likely to believe that people who aren’t like us are inherently suspect, we won’t try to bridge those gaps. This creates societal ruptures and undermines any ability to create a meaningful republic. And it reinforces any desire to spy on the “other” in the hopes of finding something that justifies such an approach. But, like I said, it doesn’t take much to make someone appear suspect.

In many ways, the NSA situation that’s unfolding in front of our eyes is raising a question that is critical to the construction of our society. These issues cannot be washed away by declaring personal innocence. A surveillance state will produce more suspect individuals. What’s at stake has to do with how power is employed, by whom, and in what circumstances. It’s about questioning whether or not we still believe in checks and balances to power. And it’s about questioning whether or not we’re OK with continuing to move toward a system that presumes entire classes and networks of people to be suspect. Regardless of whether or not you’re in one of those classes or networks, are you OK with that being standard fare? Because what is implied in that question is a much uglier one: Is your perception of your safety worth the marginalization of other people who don’t have your privilege?

thoughts on Pew’s latest report: notable findings on race and privacy

Yesterday, Pew Internet and American Life Project (in collaboration with Berkman) unveiled a brilliant report about “Teens, Social Media, and Privacy.” As a researcher who’s been in the trenches on these topics for a long time now, none of their findings surprised me, but it still gives me absolute delight when our data is so beautifully in synch. I want to quickly discuss two important issues that this report raises.

Race is a factor in explaining differences in teen social media use.

Pew provides important measures on shifts in social media, including the continued saturation of Facebook, the decline of MySpace, and the rise of other social media sites (e.g., Twitter, Instagram). When they drill down on race, they find notable differences in adoption. For example, they highlight data that is the source of “black Twitter” narratives: 39% of African-American teens use Twitter compared to 23% of white teens.

Most of the report is dedicated to the increase in teen sharing, but once again, we start to see some race differences. For example, 95% of white social media-using teens share their “real name” on at least one service while 77% of African-American teens do. And while 39% of African-American teens on social media say that they post fake information, only 21% of white teens say they do this.

Teens’ practices on social media also differ by race. For example, on Facebook, 48% of African-American teens befriend celebrities, athletes, or musicians, while only 25% of white teen users do.

While media and policy discussions of teens tend to narrate them as a homogeneous group, there are serious and significant differences in practices and attitudes among teens. Race is not the only factor, but it is a factor. And Pew’s data on the differences across race highlight this.

Of course, race isn’t actually what’s driving what we see as race differences. The world in which teens live is segregated and shaped by race. Teens are more likely to interact with people of the same race and their norms, practices, and values are shaped by the people around them. So what we’re actually seeing is a manifestation of network effects. And the differences in the Pew report point to black youth’s increased interest in being a part of public life, their heightened distrust of those who hold power over them, and their notable appreciation for pop culture. These differences are by no means new, but what we’re seeing is that social media is reflecting back at us cultural differences shaped by race that are pervasive across America.

Teens are sharing a lot of content, but they’re also quite savvy.

Pew’s report shows an increase in teens’ willingness to share all sorts of demographic, contact, and location data. This is precisely the data that makes privacy advocates anxious. At the same time, their data show that teens are well aware of privacy settings and have changed the defaults, even if they don’t choose to manage the accessibility of each piece of content they share. They’re also deleting friends (74%), deleting previous posts (59%), blocking people (58%), deleting comments (53%), detagging themselves (45%), and providing fake info (26%).

My favorite finding of Pew’s is that 58% of teens cloak their messages either through inside jokes or other obscure references, with more older teens (62%) engaging in this practice than younger teens (46%). This is the practice that I’ve seen significantly rise since I first started doing work on teens’ engagement with social media. It’s the source of what Alice Marwick and I describe as “social steganography” in our paper on teen privacy practices.

While adults are often anxious about shared data that might be used by government agencies, advertisers, or evil older men, teens are much more attentive to those who hold immediate power over them – parents, teachers, college admissions officers, army recruiters, etc. To adults, services like Facebook may seem “private” because you can use privacy tools, but they don’t feel that way to youth, who feel like their privacy is invaded on a daily basis. (This, btw, is part of why teens feel like Twitter is more intimate than Facebook. And why you see data like Pew’s showing that teens have, on average, 300 friends on Facebook but only 79 on Twitter.) Most teens aren’t worried about strangers; they’re worried about getting in trouble.

Over the last few years, I’ve watched as teens have given up on controlling access to content. It’s too hard, too frustrating, and technology simply can’t fix the power issues. Instead, what they’ve been doing is focusing on controlling access to meaning. A comment might look like it means one thing, when in fact it means something quite different. By cloaking their accessible content, teens reclaim power over those who they know are surveilling them. This practice is still only really emerging en masse, so I was delighted that Pew could put numbers to it. I should note that, as Instagram grows, I’m seeing more and more of this. A picture of a donut may not be about a donut. While adults worry about how teens’ demographic data might be used, teens are becoming much more savvy at finding ways to encode their content and achieve privacy in public.

Anyhow, I have much more to say about Pew’s awesome report, but I wanted to provide a few thoughts and invite y’all to read it. If there is data that you’re curious about or would love me to analyze more explicitly, leave a comment or drop me a note. I’m happy to dive in more deeply on their findings.

Reflecting on Dharun Ravi’s conviction

On Friday, Dharun Ravi – the Rutgers student whose roommate Tyler Clementi killed himself – was found guilty of privacy invasion, tampering with evidence, and bias intimidation (a hate crime). When John Palfrey and I wrote about this case three weeks ago, I was really hopeful that the court proceedings would give clarity and relieve my uncertainty. Instead, I am left more conflicted and deeply saddened. I believe that the jury did their job, but I am not convinced that justice was served. More disturbingly, I think that the symbolic component of this case is deeply troubling.

In New Jersey, someone can be convicted of bias intimidation for committing an act…

  1. with the express purpose of intimidating an individual or group…
  2. knowing that the offense would cause an individual or group to feel intimidated…
  3. under circumstances in which the individual or group on the receiving end believes that they were targeted…

… because of their race, color, religion, gender, handicap, sexual orientation, or ethnicity.

In Ravi’s trial, the jury concluded that Ravi neither intended to intimidate Clementi nor believed that his acts would make Clementi feel intimidated because of his sexuality. Yet, the jury did conclude that, based on computer evidence, Clementi probably felt intimidated because of his sexuality.

As someone who wants to rid the world of homophobia, I am devastated by this conviction. I recognize the symbolic move that this is supposed to make: it’s supposed to signal that homophobia will not be tolerated. But Ravi wasn’t convicted of being homophobic; rather, he was convicted of creating the “circumstances” in which Clementi would probably feel intimidated. In other words, Ravi is being punished for living in a culture of homophobia even though there’s little evidence to suggest that he perpetuated it intentionally. As Mary Gray has argued, we are all to blame for the culture of homophobia that has resulted in this tragedy.

I can’t help but think of Clementi’s parents in light of this. By all accounts, their reaction to their son’s confession that he was gay did more to intimidate Clementi based on his sexuality than Ravi’s stupid act. Yet, I can’t even begin to imagine that the court would charge, let alone convict, Clementi’s distraught parents of a hate crime. ::shudder::

I can’t justify Ravi’s decision to invade his roommate’s privacy, especially not at a moment in which he would be extremely vulnerable. I also cannot justify Ravi’s decision to mess with evidence, even though I suspect he did so out of fear. But I also don’t think that either of these actions deserve 10 years of jail time or deportation (two of the options given to the judge). I don’t think that’s justice.

This case is being hailed for its symbolism, but what is the message that it conveys? It says that a brown kid who never intended to hurt anyone because of their sexuality will do jail time, while politicians and pundits who espouse hatred on TV and radio and in stump speeches continue to be celebrated. It says that a teen who invades the privacy of his peer will be condemned, even while companies and media moguls continue to profit off of more invasive violations of privacy.

I’m also sick and tired of people saying that this will teach kids an important lesson. Simply put, it won’t. No teen that I know identifies their punking and pranking of their friends and classmates as bullying, let alone bias intimidation. Sending Ravi to jail will do nothing to end bullying. Yet, it lets people feel like it will and that makes me really sad. There’s a lot to be done in this realm and this does nothing to help those who are suffering every day.

The jury did its job. The law was followed. I have little doubt that Ravi did the things that he was convicted of doing. But I am not celebrating because I don’t think that this case made the world a better place. I think that it simply destroyed another life.

(Translated to Ukrainian)

How Parents Normalized Teen Password Sharing

In 2005, I started asking teenagers about their password habits. My original set of questions focused on teens’ attitudes about giving their password to their parents, but I quickly became enamored with teens’ stories of sharing passwords with friends and significant others. So I was ecstatic when Pew Internet & American Life Project decided to survey teens about their password sharing habits. Pew found that one third of online 12-17 year olds share their password with a friend or significant other and that almost half of those 14-17 do. I love when data gets reinforced.

Last week, Matt Richtel at the New York Times did a fantastic job of covering one aspect of why teens share passwords: as a show of affection. Indeed, I have lots of fun data that supports Richtel’s narrative — and complicates it. Consider Meixing’s explanation for why she shares her password with her boyfriend:

Meixing, 17, TN: It made me feel safer just because someone was there to help me out and stuff. It made me feel more connected and less lonely. Because I feel like Facebook sometimes it kind of like a lonely sport, I feel, because you’re kind of sitting there and you’re looking at people by yourself. But if someone else knows your password and stuff it just feels better.

For Meixing, sharing her password with her boyfriend is a way of being connected. But it’s precisely these kinds of narratives that have prompted all sorts of horror from adults over the last week since that NYTimes article came out. I can’t count the number of people who have gasped “How could they!?!” at me. For this reason, I feel the need to pick up on an issue that the NYTimes left out.

The idea of teens sharing passwords didn’t come out of thin air. In fact, it was normalized by adults. And not just any adult. This practice is the product of parental online safety norms. In most households, it’s quite common for young children to give their parents their passwords. With elementary and middle school youth, this is often a practical matter: children lose their passwords pretty quickly. Furthermore, most parents reasonably believe that young children should be supervised online. As tweens turn into teens, the narrative shifts. Some parents continue to require passwords be forked over, using explanations like “because I’m your mother.” But many parents use the language of “trust” to explain why teens should share their passwords with them.

There are different ways that parents address the password issue, but they almost always build on the narrative of trust. (Tangent: My favorite strategy is when parents ask children to put passwords into a piggy bank that must be broken for the paper with the password to be retrieved. Such parents often explain that they don’t want to access their teens’ accounts, but they want to have the ability to do so “in case of emergency.” A piggy bank allows a social contract to take a physical form.)

When teens share their passwords with friends or significant others, they regularly employ the language of trust, as Richtel noted in his story. Teens are drawing on experiences they’ve had in the home and shifting them into their peer groups in order to understand how their relationships make sense in a broader context. This shouldn’t be surprising to anyone because this is all-too-common for teen practices. Household norms shape peer norms.

There’s another thread here that’s important. Think back to the days in which you had a locker. If you were anything like me and my friends, you gave out your locker combination to your friends and significant others. There were varied reasons for doing so. You wanted your friends to pick up a book for you when you left early because you were sick. You were involved in a club or team where locker decorating was common. You were hoping that your significant other would leave something special for you. Or – to be completely and inappropriately honest – you left alcohol in your locker and your friends stopped by for a swig. (One of my close friends was expelled for that one.) We shared our locker combinations because they served all sorts of social purposes, from the practical to the risqué.

How are Facebook passwords significantly different than locker combos? Truth be told, for most teenagers, they’re not. Teens share their passwords so that their friends can check their messages for them when they can’t get access to a computer. They share their passwords so their friends can post the cute photos. And they share their passwords because it’s a way of signaling an intimate relationship. Just like with locker combos.

Can password sharing be abused? Of course. I’ve heard countless stories of friends “punking” one another by leveraging password access. And I’ve witnessed all sorts of teen relationship violence where mandatory password sharing is a form of surveillance and abuse. But, for most teens, password sharing is as risky as locker combo sharing. This is why, even though 1/3 of all teens share their passwords, we only hear of scattered horror stories.

I know that this practice strikes adults as seriously peculiar, but it irks me when adults get all judgmental on this teen practice, as though it’s “proof” that teens can’t properly judge how trustworthy a relationship is. First, it’s through these kinds of situations that teens learn. Second, adults are dreadful at judging their own relationships (see: divorce rate), so I don’t have a lot of patience for the high-and-mighty approach. Third, I’m much happier with teens sharing passwords as a form of intimacy than sharing many other things.

There’s no reason to be aghast at teen password sharing. Richtel’s story is dead-on. It’s pretty darn pervasive. But it also makes complete sense given how notions of trust have been constructed for many teens.

(Image Credit: Darwin Bell)