
Which Students Get to Have Privacy?

There’s a fresh push to protect student data. But the people who need the most protection are the ones being left behind.

It seems that student privacy is trendy right now. At least among elected officials. Congressional aides are scrambling to write bills that one-up each other in showcasing how tough they are on protecting youth. We’ve got Congressmen Polis and Messer (with Senator Blumenthal expected to propose a similar bill in the Senate). Kline and Scott have a discussion draft of their bill out, while Markey and Hatch have reintroduced the bill they proposed a year ago. And then there’s Senator Vitter’s proposed bill. And let’s not even talk about the myriad state-level bills.

Most of these bills are responding in some way or another to a 1974 piece of legislation called the Family Educational Rights and Privacy Act (FERPA), which restricted what schools could and could not do with student data.

Needless to say, lawmakers in 1974 weren’t imagining the world of technology that we live with today. On top of that, legislative and bureaucratic dynamics have made it difficult for the Department of Education to address failures at the school level without going nuclear and defunding a school outright. Meanwhile, schools lack security measures (because they lack technical sophistication), and they’re entering into all sorts of contracts with vendors that give advocates heartburn.

So there’s no doubt that reform is needed, but the question — as always — is what reform? For whom? And with what kind of support?

The bills are pretty spectacularly different, pushing for a range of mechanisms to limit abuses of student data. Some rely on fines; others impose criminal penalties. There are also differences in who can access what data under what circumstances. The bills give different priorities to parents, teachers, and schools. Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf, who is supposed to protect them from the evils of the world, and what kind of punishment for breaches is most appropriate. (Not surprisingly, none of the bills provide funding to help schools come up to speed.)

As a youth advocate and privacy activist, I’m generally in favor of student privacy. But my panties also get in a bunch when I listen to how people imagine the work of student privacy. As is common in Congress as election cycles unfold, student privacy has a “save the children” narrative. And this forces me to want to know more about the threat models we’re talking about. What are we saving the children *from*?

Threat Models

There are four external threats that I think are interesting to consider. These are the dangers that students face if their data leaves the education context.

#1: The Stranger Danger Threat Model. No matter how much data we have to challenge prominent fears, the possibility of creepy child predators lurking around school children still overwhelms any conversation about students, including their data.

#2: The Marketing Threat Model. From COPPA to the Markey/Hatch bill, there’s a lot of concern about how student data will be used by companies to advertise products to students or otherwise fuel commercial data collection that drives advertising ecosystems.

#3: The Consumer Finance Threat Model. In a post-housing-bubble market, the new subprime lending schemes are all about enabling student debt, especially since students can’t discharge their obscene loans in bankruptcy when they default. There is concern about how student data will be used to fuel the student debt ecosystem.

#4: The Criminal Justice Threat Model. Law enforcement has long been interested in student performance, but this data is increasingly desirable in a world of policing that is trying to assess risk. There are reasons to believe that student data will fuel the new policing architectures.

The first threat model is artificial (see: “It’s Complicated”), but it propels people to act and create laws that will not do a darn thing to address abuse of children. The other three threat models are real, but these threats are spread differently over the population. In the world of student privacy, #2 gets far more attention than #3 and #4. In fact, almost every bill creates carve-outs for “safety” or otherwise allows access to data if there’s concern about a risk to the child, other children, or the school. In other words, if police need it. And, of course, all of these laws allow parents and guardians to get access to student data with no consideration of the consequences for students who are under state supervision. So, really, #4 isn’t even in the cultural imagination because, as with nearly everything involving our criminal justice system, we don’t believe that “those people” deserve privacy.

The reason that I get grouchy is that I hate how the risks that we’re concerned about are shaped by the fears of privileged parents, not the risks faced by those who are already under constant surveillance, those who are economically disadvantaged, and those who are in the school-to-prison pipeline. #2-#4 are all real threat models with genuine risks, but we consistently take #2 far more seriously than #3 or #4, and privileged folks are more concerned with #1.

What would it take to actually consider the privacy rights of the most marginalized students?

The threats that poor youth face? That youth of color face? And the trade-offs they make in a hypersurveilled world? What would it take to get people to care about how we keep building out infrastructure and backdoors to track low-status youth in new ways? It saddens me that the conversation is constructed as being about student privacy, but it’s really about who has the right to monitor which youth. And, as always, we allow certain actors to continue asserting power over youth.

This post was originally published to The Message at Medium on May 22, 2015. Image credit: Francisco Osorio

Stop the Cycle of Bullying

[John Palfrey and I originally wrote this as an op-ed for the Huffington Post. See HuffPo for more comments.]

On 22 September 2010, the wallet of Tyler Clementi – a gay freshman at Rutgers University – was found on the George Washington Bridge; his body was found in the Hudson River the following week. His roommate, Dharun Ravi, was charged with 15 criminal counts, including invasion of privacy, bias intimidation, witness tampering, and evidence tampering. Ravi pleaded not guilty.

Ravi’s trial officially begins this week, but in the court of public opinion, he has already been convicted. This is a terrible irony, since the case itself is about bullying.

Wading through the news reports, it’s hard to tell exactly what happened in the hours leading up to Clementi’s suicide. Some facts are unknown. What seems apparent is that Clementi asked Ravi for the dorm room to himself on two occasions – September 19 and 21 – so that he could have alone time with an older gay man. On the first occasion, Ravi appears to have set up his computer so that he could watch the encounter remotely, and he announced on Twitter that he had done so. When Clementi asked for the room a second night, Ravi tweeted an invitation for others to watch. It appears as though Clementi read this and unplugged Ravi’s computer, thereby preventing Ravi from watching. What happened after the September 21 incident is unclear. A day later, Clementi took his own life.

The media-driven narrative quickly blamed Ravi and his friend Molly Wei, from whose room Ravi watched Clementi. Amidst a series of other highly publicized LGBT suicides, Clementi’s death was labeled a tragic product of homophobic bullying. Ravi has been portrayed as a malicious young man, hellbent on making his roommate miserable. Technology was blamed for providing a new mechanism by which Ravi could spy on and torment his roommate. The overwhelming presumption: Ravi is guilty of causing Clementi’s death. He may well be guilty of these crimes, but we have trials for a reason.

As information has emerged from the legal discovery process, the story has become more complicated. It appears as though Clementi turned to online forums and friends to get advice; his messages conveyed a desire for support, but they didn’t suggest a pending suicide attempt. In one document submitted to the court, Clementi appears to have written to a friend that he was not particularly upset by Ravi’s invasion. Older digital traces left by Clementi – specifically those produced after he came out to and was rejected by those close to him – exhibited terrible emotional pain. At Rutgers, Clementi appears to have been handling his frustrations with his roommate reasonably well. After the events of September 19 and 21, Clementi appears to have notified both his resident assistant and university officials and asked for a new room; the school appears to have responded properly, and Clementi appeared pleased.

The process of discovery in a lawsuit is an essential fact-finding exercise. The presumption of innocence is an essential American legal principle. Unfortunately, in highly publicized cases, this doesn’t stop people from jumping to conclusions based on snippets of information. Media speculation and hype surrounding Clementi’s suicide have been damning for Ravi, but the incident has also prompted all sorts of other outcomes. Public policy wheels have turned, prompting calls for new state and federal cyberbullying prevention laws. Well-meaning advocates have called for bullying to be declared a hate crime.

As researchers, we know that bullying is a serious, urgent issue. We favor aggressive and meaningful intervention programs to address it and to prevent young people from taking their lives. These programs should especially support LGBT youth, themselves more likely to be the targets of bullying. Yet, it’s also critical that we pay attention to the messages that researchers have been trying to communicate for years. “Bullies” are often themselves victims of other forms of cruelty and pressure. Zero-tolerance approaches to bullying don’t work; they often increase bullying. Focusing on punishment alone does little to address the underlying issues. Addressing bullying requires a serious social, economic, and time-based commitment to educating both young people and adults. Research shows that curricula and outreach programs can work. We are badly underfunding youth empowerment programs that could help enormously. Legislative moves that focus on punishment instead of education only make the situation worse.

Not only are most young people ill-equipped to recognize how their meanness, cruelty, and pranking might cause pain, but most adults are themselves ill-equipped to help young people in a productive way. Worse, many adults are perpetuating the idea that being cruel is socially acceptable. Not only have cruelty and deception become the status quo on TV talk shows; they play a central role in televised entertainment and political debates. In contemporary culture, it has become acceptable to be outright cruel to any public figure, whether they’re a celebrity, a reality TV contestant, or a teenager awaiting trial.

Tyler Clementi’s suicide is a tragedy. We should all be horrified that a teenager felt the need to take his life in our society. But in our frustration, we must not prosecute Dharun Ravi before he has had his day in court. We must not be bullies ourselves. Ravi’s life has already been destroyed by what he may or may not have done. The way we, the public, have treated him, even before his trial, has only made things worse.

To combat bullying, we need to stop the cycle of violence. We need to take the high road; we must refrain from acting like a mob, in Clementi’s name or otherwise. Every day, there are young people who are being tormented by their peers and by adults in their lives. If we want to make this stop, we need to get to the root of the problem. We should start by looking to ourselves.

danah boyd is a senior researcher at Microsoft Research and a research assistant professor at New York University. John Palfrey is a professor of law at Harvard Law School.

Why Parents Help Children Violate Facebook’s 13+ Rule

Announcing new journal article: “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act’” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, First Monday.

“At what age should I let my child join Facebook?” This is a question that countless parents have asked my collaborators and me. Often, it’s followed by another: “I know that 13 is the minimum age to join Facebook, but is it really so bad that my 12-year-old is on the site?”

While parents are struggling to determine which social media sites are appropriate for their children, the government tries to help by regulating what data internet companies can collect about children without parental permission. Yet, as has been the case for the last decade, this often backfires. Many general-purpose communication platforms and social media sites restrict access to those 13 and older in response to a law meant to empower parents: the Children’s Online Privacy Protection Act (COPPA). This forces parents to make a difficult choice: help uphold the minimum age requirements and limit their children’s access to services that let kids connect with family and friends, OR help their children lie about their age to circumvent the age-based restrictions and eschew the protections that COPPA is meant to provide.

In order to understand how parents were approaching this dilemma, my collaborators — Eszter Hargittai (Northwestern University), Jason Schultz (University of California, Berkeley), and John Palfrey (Harvard University) — and I decided to survey parents. In many ways, we were responding to a flurry of studies (e.g., Pew’s) revealing that millions of U.S. children had violated Facebook’s Terms of Service and joined the site underage. These findings prompted outrage back in May as politicians blamed Facebook for failing to curb underage usage. Embedded in this furor was an assumption that by not strictly guarding its doors and keeping children out, Facebook was undermining parental authority and thumbing its nose at the law. Facebook responded by defending its practices — and highlighting how it regularly ejects children from its site. More controversially, Facebook’s founder Mark Zuckerberg openly questioned the value of COPPA in the first place.

While Facebook has often sparked anger over its cavalier attitude towards user privacy, Zuckerberg’s challenge with regard to COPPA has merit. It’s imperative that we question the assumptions embedded in this policy. All too often, the public takes COPPA at face value and politicians angle to build new laws based on it without examining its efficacy.

Eszter, Jason, John, and I decided to focus on one core question: Does COPPA actually empower parents? To answer it, we surveyed parents about their household practices with respect to social media and their attitudes towards age restrictions online. We are proud to release our findings today in a new paper published at First Monday, “Why parents help their children lie to Facebook about age: Unintended consequences of the ‘Children’s Online Privacy Protection Act’.” From a national survey, conducted July 5-14, 2011, of 1,007 U.S. parents who have children between the ages of 10 and 14 living with them, we found:

  • Although Facebook’s minimum age is 13, parents of 13- and 14-year-olds report that, on average, their child joined Facebook at age 12.
  • Half (55%) of parents of 12-year-olds report their child has a Facebook account, and most (82%) of these parents knew when their child signed up. Most (76%) also assisted their 12-year-old in creating the account.
  • A third (36%) of all parents surveyed reported that their child joined Facebook before the age of 13, and two-thirds of them (68%) helped their child create the account.
  • Half (53%) of parents surveyed think Facebook has a minimum age and a third (35%) of these parents think that this is a recommendation and not a requirement.
  • Most (78%) parents think it is acceptable for their child to violate minimum age restrictions on online services.

The status quo is not working if large numbers of parents are helping their children lie to get access to online services. Parents do appear to be having conversations with their children, as COPPA intended. Yet, what does it mean if they’re doing so in order to violate the restrictions that COPPA engendered?

One reaction to our data might be that companies should not be allowed to restrict children’s access to their sites. Unfortunately, getting the parental permission required by COPPA is technologically difficult, financially costly, and ethically problematic. Sites that target children take on this challenge, but often at the cost of excluding children whose parents lack the resources to pay for the service, who lack credit cards, or who refuse to provide extra data about their children in order to grant permission. The situation is even more complicated for children who are in abusive households, have absentee parents, or regularly experience shifts in guardianship. General-purpose sites, including communication platforms like Gmail and Skype and social media services like Facebook and Twitter, generally prefer to avoid the social, technical, economic, and free speech complications involved.

While there is merit to thinking about how to strengthen parent permission structures, focusing on this obscures the issues that COPPA is intended to address: data privacy and online safety. COPPA predates the rise of social media. Its architects never imagined a world where people would share massive quantities of data as a central part of participation. It no longer makes sense to focus on how data are collected; we must instead question how those data are used. Furthermore, while children may be an especially vulnerable population, they are not the only vulnerable population. Most adults have little sense of how their data are being stored, shared, and sold.

COPPA is a well-intentioned piece of legislation with unintended consequences for parents, educators, and the public writ large. It has stifled innovation for sites focused on children, and its implementation has made parenting more challenging. Our data clearly show that parents are concerned about privacy and online safety. Many want the government to help, but they don’t want solutions that unintentionally restrict their children’s access. Instead, they want guidance and recommendations to help them make informed decisions. Parents often want their children to learn how to be responsible digital citizens. Allowing them access is often the first step.

Educators face a different set of issues. Those who want to help youth navigate commercial tools often encounter the complexities of age restrictions. Consider the 7th grade teacher whose students are heavy Facebook users. Should she admonish her students for being on Facebook underage? Or should she make sure that they understand how privacy settings work? Where does digital literacy fit in when what children are doing is in violation of websites’ Terms of Service?

At first blush, the issues surrounding COPPA may seem to only apply to technology companies and the government, but their implications extend much further. COPPA affects parenting, education, and issues surrounding youth rights. It affects those who care about free speech and those who are concerned about how violence shapes home life. It’s important that all who care about youth pay attention to these issues. They’re complex and messy, full of good intention and unintended consequences. But rather than reinforcing or extending a legal regime that produces age-based restrictions which parents actively circumvent, we need to step back and rethink the underlying goals behind COPPA and develop new ways of achieving them. This begins with a public conversation.

We are excited to release our new study in the hopes that it will contribute to that conversation. To read our complete findings and learn more about their implications for policy makers, see “Why Parents Help Their Children Lie to Facebook About Age: Unintended Consequences of the ‘Children’s Online Privacy Protection Act’” by danah boyd, Eszter Hargittai, Jason Schultz, and John Palfrey, published in First Monday.

To learn more about the Children’s Online Privacy Protection Act (COPPA), make sure to check out the Federal Trade Commission’s website.

(Versions of this post were originally written for the Huffington Post and for the Digital Media and Learning Blog.)

Image Credit: Tim Roe

Regulating the Use of Social Media Data

If you were to walk into my office, I’d have a pretty decent sense of your gender, your age, your race, and other identity markers. My knowledge wouldn’t be perfect, but it would give me plenty of information that I could use to discriminate against you if I felt like it. The law doesn’t prohibit me from “collecting” this information in a job interview, nor does it say that discrimination is acceptable if you “shared” this information with me. That’s good news given that faking what’s written on your body is bloody hard. What the law does is regulate how this information can be used by me, the theoretical employer. This doesn’t put an end to all discrimination – plenty of people are discriminated against based on what’s written on their bodies – but it does provide you with legal rights if you think you were discriminated against, and it forces the employer to think twice about hiring practices.

The Internet has made it possible for you to create digital bodies that reflect a whole lot more than your demographics. Your online profiles convey a lot about you, but that content is produced in a context. And, more often than not, that context has nothing to do with employment. This creates an interesting conundrum. Should employers have the right to discriminate against you because of your Facebook profile? One might argue that they should because such a profile reflects your “character” or your priorities or your public presence. Personally, I think that’s just code for discriminating against you because you’re not like me, the theoretical employer.

Of course, it’s a tough call. Hiring is hard. We’re always looking for better ways to judge someone, and goddess knows that an interview plus resume is rarely the best way to assess whether or not there’s a “good fit.” It’s far too tempting to jump on the Internet and try to figure out who someone is based on what we can dredge up online. This might be reasonable if only we were reasonable judges of people’s signaling or remotely good at assessing them in context. Cuz it’s a whole lot harder to assess someone’s professional sensibilities by their social activities if they come from a world different than our own.

Given this, I was fascinated to learn that the German government is proposing legislation that would put restrictions on what Internet content employers could use when recruiting.

A decade ago, all of our legal approaches to the Internet focused on what data online companies could collect. This makes sense if you think of the Internet as a broadcast medium. But then along came the mainstreamification of social media and user-generated content. People are sharing content left, right, and center as part of their daily sociable practices. They’re sharing as if the Internet is a social place, not a professional place. More accurately, they’re sharing in a setting where there’s no clear delineation of social and professional spheres. Since social media became popular, folks have continuously talked about how we need to teach people not to share what might cause them professional consternation. Those warnings haven’t worked. And for good reason. What’s professionally questionable to one may be perfectly appropriate to another. Or the social gain one sees might outweigh the professional risks. Or, more simply, people may just be naive.

I’m sick of hearing about how the onus should be entirely on the person doing the sharing. There are darn good reasons why people share information, and just because you can dig it up doesn’t mean that it’s ethical to use it. So I’m delighted by the German move, if for no other reason than to highlight that we need to rethink our regulatory approaches. I strongly believe that we need to spend more time talking about how information is being used and less time talking about how stupid people are for sharing it in the first place.

Deception + fear + humiliation != education

I hate fear-based approaches to education. I grew up on the “this is your brain on drugs” messages and watched classmates go from being afraid of drugs to trying marijuana to deciding that all of the messages about drugs were idiotic. (Crystal meth and marijuana shouldn’t be in the same category.) Much to my frustration, adults keep turning to fear to “educate” the kids with complete disregard for the unintended consequences of this approach. Sometimes, it’s even worse. I recently received an email from a friend of mine (Chloe Cockburn) discussing an issue brought before the ACLU. She gave me permission to share this with you:

A campus police officer has been offering programs about the dangers inherent in using the internet to middle and high school assemblies. As part of her presentation she displays pictures that students have posted on their Facebook pages. The idea is to demonstrate that anyone can have access to this information, so be careful. She gains access to the students’ Facebook pages by creating false profiles claiming to be a student at the school and asking to be “friended”, evidently in violation of Facebook policy.

An ACLU affiliate received a complaint from a student at a small rural high school. The entire assembly was shown a photo of her holding a beer. The picture was not on the complainant’s Facebook page, but on one belonging to a friend of hers, who allowed access to the bogus profile created by the police officer. The complainant was not “punished” as the plaintiff above was, but she was humiliated, and she is afraid that she will not get some local scholarship aid as a result.

So here we have a police officer intentionally violating Facebook’s policy and creating a deceptive profile to entrap teenagers and humiliate them to “teach them a lesson”??? Unethical acts + deception + fear + humiliation != education. This. Makes. Me. Want. To. Scream.