Which Students Get to Have Privacy?

There’s a fresh push to protect student data. But the people who need the most protection are the ones being left behind.

It seems that student privacy is trendy right now. At least among elected officials. Congressional aides are scrambling to write bills that one-up each other in showcasing how tough they are on protecting youth. We’ve got Congressmen Polis and Messer (with Senator Blumenthal expected to propose a similar bill in the Senate). Kline and Scott have a discussion draft of their bill out, while Markey and Hatch have reintroduced the bill they proposed a year ago. And then there’s Senator Vitter’s proposed bill. And let’s not even talk about the myriad state-level bills.

Most of these bills are responding in some way or another to a 1974 piece of legislation called the Family Educational Rights and Privacy Act (FERPA), which restricted what schools could and could not do with student data.

Needless to say, lawmakers in 1974 weren’t imagining the world of technology that we live with today. On top of that, legislative and bureaucratic dynamics have made it difficult for the Department of Education to address failures at the school level without going nuclear and just defunding a school outright. And schools lack security measures (because they lack technical sophistication) and they’re entering into all sorts of contracts with vendors that give advocates heartburn.

So there’s no doubt that reform is needed, but the question — as always — is what reform? For whom? And with what kind of support?

The bills are pretty spectacularly different, pushing for a range of mechanisms to limit abuses of student data. Some are fine-driven; others take a more criminal approach. There are also differences in who can access what data under what circumstances. The bills give different priorities to parents, teachers, and schools. Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate. (Not surprisingly, none of the bills provide for funding to help schools come up to speed.)

As a youth advocate and privacy activist, I’m generally in favor of student privacy. But my panties also get in a bunch when I listen to how people imagine the work of student privacy. As is common in Congress as election cycles unfold, student privacy has a “save the children” narrative. And this forces me to want to know more about the threat models we’re talking about. What are we saving the children *from*?

Threat Models

There are four external threats that I think are interesting to consider. These are the dangers that students face if their data leaves the education context.

#1: The Stranger Danger Threat Model. No matter how much data we have to challenge prominent fears, the possibility of creepy child predators lurking around school children still overwhelms any conversation about students, including their data.

#2: The Marketing Threat Model. From COPPA to the Markey/Hatch bill, there’s a lot of concern about how student data will be used by companies to advertise products to students or otherwise fuel commercial data collection that drives advertising ecosystems.

#3: The Consumer Finance Threat Model. In a post-housing bubble market, the new subprime lending schemes are all about enabling student debt, especially since students can’t declare bankruptcy when they default on their obscene loans. There is concern about how student data will be used to fuel the student debt ecosystem.

#4: The Criminal Justice Threat Model. Law enforcement has long been interested in student performance, but this data is increasingly desirable in a world of policing that is trying to assess risk. There are reasons to believe that student data will fuel the new policing architectures.

The first threat model is artificial (see: “It’s Complicated”), but it propels people to act and create laws that will not do a darn thing to address abuse of children. The other three threat models are real, but these threats are spread differently over the population. In the world of student privacy, #2 gets far more attention than #3 and #4. In fact, almost every bill creates carve-outs for “safety” or otherwise allows access to data if there’s concern about a risk to the child, other children, or the school. In other words, if police need it. And, of course, all of these laws allow parents and guardians to get access to student data with no consideration of the consequences for students who are under state supervision. So, really, #4 isn’t even in the cultural imagination because, as with nearly everything involving our criminal justice system, we don’t believe that “those people” deserve privacy.

The reason that I get grouchy is that I hate how the risks that we’re concerned about are shaped by the fears of privileged parents, not the risks of those who are already under constant surveillance, those who are economically disadvantaged, and those who are in the school-prison pipeline. #2-#4 are all real threat models with genuine risks, but we consistently take #2 far more seriously than #3 or #4, and privileged folks are more concerned with #1.

What would it take to actually consider the privacy rights of the most marginalized students?

The threats that poor youth face? That youth of color face? And the trade-offs they make in a hypersurveilled world? What would it take to get people to care about how we keep building out infrastructure and backdoors to track low-status youth in new ways? It saddens me that the conversation is constructed as being about student privacy, but it’s really about who has the right to monitor which youth. And, as always, we allow certain actors to continue asserting power over youth.

This post was originally published to The Message at Medium on May 22, 2015. Image credit: Francisco Osorio

Regulating the Use of Social Media Data

If you were to walk into my office, I’d have a pretty decent sense of your gender, your age, your race, and other identity markers. My knowledge wouldn’t be perfect, but it would give me plenty of information that I could use to discriminate against you if I felt like it. The law doesn’t prohibit me from “collecting” this information in a job interview, nor does it say that discrimination is acceptable if you “shared” this information with me. That’s good news given that faking what’s written on your body is bloody hard. What the law does is regulate how this information can be used by me, the theoretical employer. This doesn’t put an end to all discrimination – plenty of people are discriminated against based on what’s written on their bodies – but it does provide you with legal rights if you think you were discriminated against and it forces the employer to think twice about hiring practices.

The Internet has made it possible for you to create digital bodies that reflect a whole lot more than your demographics. Your online profiles convey a lot about you, but that content is produced in a context. And, more often than not, that context has nothing to do with employment. This creates an interesting conundrum. Should employers have the right to discriminate against you because of your Facebook profile? One might argue that they should because such a profile reflects your “character” or your priorities or your public presence. Personally, I think that’s just code for discriminating against you because you’re not like me, the theoretical employer.

Of course, it’s a tough call. Hiring is hard. We’re always looking for better ways to judge someone, and goddess knows that an interview plus resume is rarely the best way to assess whether or not there’s a “good fit.” It’s far too tempting to jump on the Internet and try to figure out who someone is based on what we can dredge up online. This might be reasonable if only we were reasonable judges of people’s signaling or remotely good at assessing them in context. Cuz it’s a whole lot harder to assess someone’s professional sensibilities by their social activities if they come from a world different than our own.

Given this, I was fascinated to learn that the German government is proposing legislation that would put restrictions on what Internet content employers could use when recruiting.

A decade ago, all of our legal approaches to the Internet focused on what data online companies could collect. This makes sense if you think of the Internet as a broadcast medium. But then along came the mainstreamification of social media and user-generated content. People are sharing content left, right, and center as part of their daily sociable practices. They’re sharing as if the Internet is a social place, not a professional place. More accurately, they’re sharing in a setting where there’s no clear delineation of social and professional spheres. Since social media became popular, folks have continuously talked about how we need to teach people not to share what might cause them professional consternation. Those warnings haven’t worked. And for good reason. What’s professionally questionable to one may be perfectly appropriate to another. Or the social gain one sees might outweigh the professional risks. Or, more simply, people may just be naive.

I’m sick of hearing about how the onus should be entirely on the person doing the sharing. There are darn good reasons why people share information, and just because you can dig it up doesn’t mean that it’s ethical to use it. So I’m delighted by the German move, if for no other reason than to highlight that we need to rethink our regulatory approaches. I strongly believe that we need to spend more time talking about how information is being used and less time talking about how stupid people are for sharing it in the first place.