Category Archives: Uncategorized

Facebook Must Be Accountable to the Public

A pair of Gizmodo stories have prompted journalists to ask questions about Facebook’s power to manipulate political opinion in an already heated election year. If the claims are accurate, Facebook contractors have suppressed some conservative news, and their curatorial hand affects the Facebook Trending list more than the public realizes. Mark Zuckerberg took to his Facebook page yesterday to argue that Facebook does everything possible to be neutral and that there are significant procedures in place to minimize biased coverage. He also promised to look into the accusations.

Watercolor by John Orlando Parry, “A London Street Scene” 1835, in the Alfred Dunhill Collection.

As this conversation swirls around intentions and explicit manipulation, there are some significant issues missing. First, all systems are biased. There is no such thing as neutrality when it comes to media. That has long been a fiction, one that traditional news media needs and insists on, even as scholars highlight that journalists reveal their biases through everything from small facial twitches to choice of frames and topics of interest. It’s also dangerous to assume that the “solution” is to make sure that “both” sides of an argument are heard equally. This is the source of tremendous conflict around how heated topics like climate change and evolution are covered. It is even more dangerous, however, to think that removing humans and relying more on algorithms and automation will remove this bias.

Recognizing bias and enabling processes to grapple with it must be part of any curatorial process, algorithmic or otherwise. As we move into the development of algorithmic models to shape editorial decisions and curation, we need to find a sophisticated way of grappling with the biases that shape development, training sets, quality assurance, and error correction, not to mention an explicit act of “human” judgment.

There never was neutrality, and there never will be.

This issue goes far beyond the Trending box in the corner of your Facebook profile, and this latest wave of concerns is only the tip of the iceberg around how powerful actors can affect or shape political discourse. What is of concern right now is not that human beings are playing a role in shaping the news — they always have — it is the veneer of objectivity provided by Facebook’s interface, the claims of neutrality enabled by the integration of algorithmic processes, and the assumption that what is prioritized reflects only the interests and actions of the users (the “public sphere”) and not those of Facebook, advertisers, or other powerful entities.

The key challenge that emerges out of this debate concerns accountability. In theory, news media is accountable to the public. Like neutrality, accountability is more of a desired goal than something that’s consistently realized, but there are a host of processes in place to address the possibility of manipulation: ombudspeople, whistleblowers, public editors, and myriad alternative media organizations. Facebook and other technology companies have not, historically, been included in that conversation.

I have tremendous respect for Mark Zuckerberg, but I think his stance that Facebook will be neutral as long as he’s in charge is a dangerous statement. This is what it means to be a benevolent dictator, and there are plenty of people around the world who disagree with his values, commitments, and logics. As a progressive American, I have a lot more in common with Mark than not, but I am painfully aware of the neoliberal American value systems that are baked into the very architecture of Facebook and our society as a whole.

Who Controls the Public Sphere in an Era of Algorithms?

In light of this public conversation, I’m delighted to announce that Data & Society has been developing a project that asks who controls the public sphere in an era of algorithms. As part of this process, we convened a workshop and have produced a series of documents that we think are valuable to the conversation:

These documents provide historical context, highlight how media has always been engaged in power struggles, showcase the challenges that new media face, and offer case studies that reveal the complexities going forward.

This conversation is by no means over. It is only just beginning. My hope is that we quickly leave the state of fear and start imagining mechanisms of accountability that we, as a society, can live with. Institutions like Facebook have tremendous power and they can wield that power for good or evil. But for society to function responsibly, there must be checks and balances regardless of the intentions of any one institution or its leader.

This work is a part of Data & Society’s developing Algorithms and Publics project, including a set of documents occasioned by the Who Controls the Public Sphere in an Era of Algorithms? workshop. More posts from workshop participants:

Where Do We Find Ethics?

I was in elementary school, watching the TV live, when the Challenger exploded. My classmates and I were stunned and confused by what we saw. With the logic of a 9-year-old, I wrote a report on O-rings, trying desperately to make sense of a science I did not know and a public outcry that I couldn’t truly understand. I wanted to be an astronaut (and I wouldn’t give up that dream until high school!).

Years later, with a lot more training under my belt, I became fascinated not simply by the scientific aspects of the failure, but by the organizational aspects of it. Last week, Bob Ebeling died. He was an engineer at a contracting firm, and he understood just how badly the O-rings handled cold weather. He tried desperately to convince NASA that the launch was going to end in disaster. Unlike many people inside organizations, he was willing to challenge his superiors, to tell them what they didn’t want to hear. Yet, he didn’t have organizational power to stop the disaster. And at the end of the day, NASA and his superiors decided that the political risk of not launching was much greater than the engineering risk.

Organizations are messy, and the process of developing and launching a space shuttle or any scientific product is complex and filled with trade-offs. This creates an interesting question about the site of ethics in decision-making. Over the last two years, Data & Society has been convening a Council on Big Data, Ethics, and Society where we’ve had intense discussions about how to situate ethics in the practice of data science. We talked about the importance of education and the need for ethical thinking as a cornerstone of computational thinking. We talked about the practices of ethical oversight in research, deeply examining the role of IRBs and the different oversight mechanisms that can and do operate in industrial research. Our mandate was to think about research, but, as I listened to our debates and discussions, I couldn’t help but think about the messiness of ethical thinking in complex organizations and technical systems more generally.

I’m still in love with NASA. One of my dear friends — Janet Vertesi — has been embedded inside different spacecraft teams, understanding how rovers get built. On one hand, I’m extraordinarily jealous of her field site (NASA!!!), but I’m also intrigued by how challenging it is to get a group of engineers and scientists to work together for what sounds like an ultimate shared goal. I will never forget her description of what can go wrong: Imagine if a group of people were given a school bus to drive, only they were each given a steering wheel of their own and had to coordinate among themselves which way to go. Introduce power dynamics, and it’s amazing what all can go wrong.

Like many college students, encountering Stanley Milgram’s famous electric shock experiment floored me. Although I understood why ethics reviews came out of the work that Milgram did, I’ve never forgotten the moment when I fully understood that humans could do inhuman things because they’ve been asked to do so. Hannah Arendt’s work on the banality of evil taught me to appreciate, if not fear, how messy organizations can get when bureaucracies set in motion dynamics in which decision-making is distributed. While we think we understand the ethics of warfare and psychology experiments, I don’t think we have the foggiest clue how to truly manage ethics in organizations. As I continue to reflect on these issues, I keep returning to a college debate that has constantly weighed on me. Audre Lorde said, “the master’s tools will never dismantle the master’s house.” And, in some senses, I agree. But I also can’t see a way of throwing rocks at a complex system that would enable ethics.

My team at Data & Society has been grappling with different aspects of ethics since we began the Institute, often in unexpected ways. When the Intelligence and Autonomy group started looking at autonomous vehicles, they quickly realized that humans were often left in the loop to serve as “liability sponges,” producing “moral crumple zones.” We’ve seen this in organizations for a long time. When a complex system breaks down, who is to be blamed? As the Intelligence & Autonomy team has shown, this only gets more messy when one of the key actors is a computational system.

And that leaves me with a question that plagues me as we work on our Council on Big Data, Ethics, and Society whitepaper: How do we enable ethics in the complex big data systems that are situated within organizations, influenced by diverse intentions and motivations, shaped by politics and organizational logics, complicated by issues of power and control?

No matter how thoughtful individuals are, no matter how much foresight people have, launches can end explosively.

(This was originally posted on Points.)

What is the Value of a Bot?

Bots are tools, designed by people and organizations to automate processes and enable them to do something technically, socially, politically, or economically.

Most of the bots that I have built have been in the pursuit of laziness. I have built bots that sit on my server, check whether processes have died, and relaunch them, mostly to avoid figuring out why the process died in the first place. I have also built bots under the guise of “art.” For example, I built a bot to crawl online communities and quantitatively assess their interactions.
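A minimal sketch of that kind of process-checker bot, with placeholder names, commands, and timing rather than anything I actually ran, looks something like this:

    # A bare-bones process-checker: poll for a process and relaunch it if it
    # has died. The process name and launch command are placeholders.
    import subprocess
    import time

    PROCESS_NAME = "my_worker"              # hypothetical process to watch
    LAUNCH_CMD = ["python", "worker.py"]    # hypothetical command to restart it

    def is_running(name):
        # pgrep exits with 0 when it finds a matching process
        return subprocess.run(["pgrep", "-f", name], capture_output=True).returncode == 0

    while True:
        if not is_running(PROCESS_NAME):
            subprocess.Popen(LAUNCH_CMD)    # relaunch rather than diagnose
        time.sleep(30)                      # check again in 30 seconds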

I’ve also written some shoddy code, and my bots haven’t always worked as intended. While I never designed them to be malicious, a few poorly thought through keystrokes had unintended consequences. One rev of my process-checker bot missed the mark and kept launching new processes every 30 seconds until it brought the server down. And in some cases, it wasn’t the bot that was the problem, but my own stupid interpretation of the information I got back from the bot. For example, I got the great idea to link my social bot designed to assess the “temperature” of online communities up to a piece of hardware designed to produce heat. I didn’t think to cap my assessment of the communities and so when my bot stumbled upon a super vibrant space and offered back a quantitative measure intended to signal that the community was “hot,” another piece of my code interpreted this to mean: jack the temperature up the whole way. I was holding that hardware and burnt myself. Dumb. And totally, 100% my fault.
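In hindsight, the fix was embarrassingly small: clamp the measurement before anything physical acts on it. A rough sketch, with an invented scale and invented names rather than my actual code:

    # Clamp a community "temperature" score before it drives hardware.
    MAX_SAFE_TEMP = 40.0    # hypothetical ceiling for the heating element

    def clamp(value, low, high):
        # keep a measurement within a safe range
        return max(low, min(high, value))

    def heater_setting(activity_score):
        # without the clamp, a super-vibrant community maxes out the hardware
        return clamp(activity_score, 0.0, MAX_SAFE_TEMP)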

Most of the bots that I’ve written were slipshod, irrelevant, and little more than a nuisance. But, increasingly, huge systems rely on bots. Bots make search engines possible and, when connected to sensors, are often key to smart cities and other IoT instantiations. Bots shape the financial markets and play a role in helping people get information. Of course, not all bots are designed to be helpful to large institutions. Bots that spread worms, viruses, and spam are often capitalizing on the naivety of users. There are large networks of bots (“botnets”) that can be used to bring down systems (e.g., DDoS attacks). There are also pesky bots that mess with the ecosystem by increasing people’s Twitter follower counts, automating “likes” on Instagram, and creating the appearance of natural interest even when there is none.

Identifying the value of these different kinds of bots requires a theory of power. We may want to think that search engines are good, while fake-like bots are bad, but both enable the designer of the bots to profit economically and socially.

Who gets to decide the value of a bot? The technically savvy builder of the bot? The people and organizations that encounter or are affected by the bot? Bots are being designed for all sorts of purposes, and most of them are mundane. But even mundane bots can have consequences.

In the early days of search engines, many website owners were outraged by search engine bots, or web crawlers. They had to pay for traffic, and web crawlers were not seen as legitimate or desired traffic. Plus, they visited every page and could easily bring down a web server through their intensive crawling. As a result, early developers came together and developed a proposal for web crawler politeness, including a mechanism known as the “robots exclusion standard” (or robots.txt), which allowed a website owner to dictate which web crawler could look at which page.
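That standard still holds up. A polite crawler can check a site’s robots.txt before fetching anything; a rough sketch using Python’s standard library, with placeholder URLs and a placeholder crawler name:

    # Honor the robots exclusion standard (robots.txt) before crawling.
    # A site's robots.txt might contain, for example:
    #   User-agent: *
    #   Disallow: /private/
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")    # placeholder site
    rp.read()                                       # fetch and parse robots.txt

    if rp.can_fetch("MyCrawlerBot", "https://example.com/private/page.html"):
        print("allowed to crawl this page")
    else:
        print("the site owner has asked crawlers to keep out")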

As systems get more complex, it’s hard for developers to come together and develop politeness policies for all bots out there. And it’s often hard for a system to discern between bots that are being helpful and bots that are a burden and not beneficial. After all, before Google was Google, people didn’t think that search engines could have much value.

Standards bodies are no longer groups of geeky friends hashing out protocols over pizza. They’re now structured processes involving all sorts of highly charged interests — they often feel more formal than the meeting of the United Nations. Given high-profile disagreements, it’s hard to imagine such bodies convening to regulate the mundane bots that are creating fake Twitter profiles and liking Instagram photos. As a result, most bots are simply seen as a nuisance. But how many gnats come together to make a wasp?

Bots are first and foremost technical systems, but they are derived from social values and exert power into social systems. How can we create the right social norms to regulate them? What do the norms look like in a highly networked ecosystem where many pieces of the pie are often glued together by digital duct tape?

(This was originally written for Points as part of a series on how to think about bots.)

It’s not Cyberspace anymore

It’s been 20 years — 20 years!? — since John Perry Barlow wrote “A Declaration of the Independence of Cyberspace” — a rant in response to the government and corporate leaders who descend on a certain snowy resort town each year as part of the World Economic Forum (WEF). Picture that pamphleteering with me for a moment…

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone.

I first read Barlow’s declaration when I was 18 years old. I was in high school and in love with the Internet. His manifesto spoke to me. It was a proclamation of freedom, a critique of the status quo, a love letter to the Internet that we all wanted to exist. I didn’t know why he was in Davos, Switzerland, nor did I understand the political conversation he was engaging in. All I knew was that he was on my side.

Twenty years after Barlow declared cyberspace independent, I myself was in Davos for the WEF annual meeting. The Fourth Industrial Revolution was the theme this year, and a big part of me was giddy to go, curious about how such powerful people would grapple with questions introduced by technology.

What I heard left me conflicted and confused. In fact, I have never been made to feel more nervous and uncomfortable by the tech sector than I did at Davos this year.

Walking down the promenade through the center of Davos, it was hard not to notice the role of Silicon Valley in shaping the conversation of the powerful and elite. Not only was everyone attached to their iPhones and Androids, but companies like Salesforce and Palantir and Facebook took over storefronts and invited attendees in for coffee and discussions about Syrian migrants, while camouflaged snipers protected the scene from the roofs of nearby hotels. As new tech held fabulous parties in the newest venues, financial institutions, long the stalwarts of Davos, took over the same staid venues that they always have.

A Big Dose of AI-induced Hype and Fear

Yet, what I struggled with the most wasn’t the sheer excess of Silicon Valley in showcasing its value but the narrative that underpinned it all. I’m quite used to entrepreneurs talking hype in tech venues, but what happened at Davos was beyond the typical hype, in part because most of the non-tech people couldn’t do a reality check. They could only respond with fear. As a result, unrealistic conversations about artificial intelligence led many non-technical attendees to believe that the biggest threat to national security is humanoid killer robots, or that AI that can do everything humans can is just around the corner, threatening all but the most elite technical jobs. In other words, as I talked to attendees, I kept bumping into a 1970s science fiction narrative.

At first I thought I had just encountered the normal hype/fear dichotomy that I’m faced with on a daily basis. But as I listened to attendees talk, a nervous creeping feeling started to churn my stomach. Watching startups raise down rounds and watching valuation conversations move from bubbalicious to nervous, I started to sense that what the tech sector was doing at Davos was putting on the happy smiling blinky story that they’ve been telling for so long, exuding a narrative of progress: everything that is happening, everything that is coming, is good for society, at least in the long run.

Shifting from “big data,” because it’s become code for “big brother,” tech deployed the language of “artificial intelligence” to mean all things tech, knowing full well that decades of Hollywood hype would prompt critics to ask about killer robots. So, weirdly enough, it was usually the tech actors who brought up killer robots, if only to encourage attendees not to think about them. Don’t think of an elephant. Even as the demo robots at the venue revealed the limitations of humanoid robots, the conversation became frothy with concern, enabling many in tech to avoid talking about the complex and messy social dynamics that are underway, except to say that “ethics is important.” What about equality and fairness?

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

Barlow’s dreams echoed in my head as I listened to the tech elite try to convince the other elites that they were their solution. We all imagined that the Internet would be the great equalizer, but it hasn’t panned out that way. Only days before the Annual Meeting began, news media reported that the World Bank found that the Internet has had a role in rising inequality.

Welcome to Babel

Conversations around tech were strangely juxtaposed with the broader social and fiscal concerns that rattled through the halls. Faced with a humanitarian crisis and widespread anxieties about inequality, much of civil society responded to tech enthusiasm by asking whether technology would destabilize labor and economic well-being. A fair question. The only problem is that no one knows, and the models of potential impact are so variable as to be useless. Not surprisingly, these conversations then devolved into sharply split battles, as people lost track of whether all jobs would be automated or whether automation would trigger a lot more jobs.

Not only did any nuance get lost in this conversation, but so did the messy reality of doing tech. It’s hard to explain to political actors why, just because tech can (poorly) target advertising, this doesn’t mean that it can find someone who is trying to recruit for ISIS. Just because advances in AI-driven computer vision are enabling new image detection capabilities, this doesn’t mean that precision medicine is around the corner. And no one seemed to realize that artificial intelligence in this context is just another word for “big data.” Ah, the hype cycle.

It’s going to be a complicated year geopolitically and economically. Somewhere deep down, everyone seemed to realize that. But somehow, it was easier to engage around the magnificent dreams of science fiction. And I was disappointed to watch as tech folks fueled that fire with narratives of tech that drive enthusiasm for it but are so disconnected from reality as to be a distraction on a global stage.

The Internet Is Us. Which Us?

When Barlow penned his declaration, he was speaking on behalf of cyberspace, as though we were all part of one homogeneous community. And, in some sense, we were. We were geeks and freaks and queers. But over the last twenty years, tech has become the underpinning of so many sectors, of so much interaction. Those of us who wanted cyberspace to be universal couldn’t imagine a world in which our dreams got devoured by Silicon Valley.

Tech is truly mainstream — and politically powerful — and yet many in tech still want to see themselves as outsiders. Some of Barlow’s proclamations feel a lot weirder in this contemporary light:

You claim there are problems among us that you need to solve. You use this claim as an excuse to invade our precincts. Many of these problems don’t exist. Where there are real conflicts, where there are wrongs, we will identify them and address them by our means. We are forming our own Social Contract.

There is a power shift underway and much of the tech sector is ill-equipped to understand its own actions and practices as part of the elite, the powerful. Worse, a collection of unicorns who see themselves as underdogs in a world where instability and inequality are rampant fail to realize that they have a moral responsibility. They fight as though they are insurgents while they operate as though they are kings.

What makes me the most uncomfortable is the realization that most of tech seems to have forgotten the final statement that Barlow made:

May it be more humane and fair than the world your governments have made before.

We built the Internet hoping that the world would come. The world did, but the dream that drove so many of us in the early days isn’t the dream of those who are shaping the Internet today. Now what?

What If Social Media Becomes 16-Plus? New battles concerning age of consent emerge in Europe

At what age should children be allowed to access the internet without parental oversight? This is a hairy question that raises all sorts of issues about rights, freedoms, morality, skills, and cognitive capability. Cultural values also come into play full force on this one.

Consider, for example, that in the 1800s, the age of sexual (and marital) consent in the United States was between 10 and 12 (except Delaware, where it was seven). The age of consent in England was 12, and it’s still 14 in Germany. This is discomforting for many Western parents who can’t even fathom their 10- or 12-year-old being sexually mature. And so, over time, many countries have raised the age of sexual consent.

But the internet has raised new questions about consent. Is the internet more or less risky than sexual intercourse? How can youth be protected from risks they cannot fully understand, such as the reputational risks associated with things going terribly awry? And what role should the state and parents have in protecting youth?

This ain’t a new battle. These issues have raged since the early days of the internet. In 1998, the United States passed a law known as the Children’s Online Privacy Protection Act (COPPA), which restricts the kinds of data companies can collect from children under 13 without parental permission. Most proponents of the law argue that this intervention has stopped countless sleazy companies from doing inappropriate things with children’s data. I have a more cynical view.

Watching teens and parents navigate this issue — and then surveying parents about it — I came to the conclusion that the law prompted companies to restrict access to under-13s, which then prompted children (with parental knowledge) to lie about their age. Worse, I watched as companies stopped innovating for children or providing services that could really help them.

Proponents often push back, highlighting that companies could get parental permission rather than just restrict children. Liability issues aside, why would they? Most major companies aren’t interested in 12-year-olds, so it’s a lot easier to comply with the law by creating a wall than going through a hellacious process of parental consent.

So here we are, with a U.S. law that prompts companies to limit access to 13-plus, a law that has become the norm around the globe. Along comes the EU, proposing a new law to regulate the flow of personal data, including a provision that would allow individual countries to restrict children’s access to the internet at any age (with a cap at age 16).

Implicitly, this means the European standard is to become 16-plus, because how else are companies going to build a process that gives Spanish kids access at 14, German kids at 16, and Italian kids at 12?

Many in the EU are angry at how American companies treat people’s data and respond to values of privacy. We saw this loud and clear when the European Court of Justice invalidated the “safe harbor” and in earlier issues, such as “the right to be forgotten.” Honestly? The Europeans have a right to be angry. They’re so much more thoughtful on issues of privacy, and many U.S. companies pretty much roll their eyes and ignore them. But the problem is that this new law isn’t going to screw American companies, even if it makes them irritable. Instead, it’s going to screw kids. And that infuriates me.

Implicit in this new law — and COPPA more generally — is an assumption that parents can and should consent on behalf of their children. I take issue with both. While some educated parents have thought long and hard about the flows of data, the identity work that goes into reputation, and the legal mechanisms that do or don’t protect children, they are few and far between.

Most parents don’t have the foggiest clue what happens to their kids’ data, and giving them the power to consent sure doesn’t help them become more informed. Hell, most parents don’t have enough information to make responsible decisions for themselves, so why are we trusting them to know enough to protect their children? We’re doing so because we believe they should have control, that they have the right to control and protect their children, and that no company or government should take this away.

The irony is that this runs completely counter to the UN Convention on the Rights of the Child, a treaty that most responsible countries have signed. Every European country committed to making sure that children have the right to privacy — including a right to privacy from their parents. Psychotically individualistic and anti-government, the United States decided not to sign onto this empowering treaty because it was horrifying to U.S. sensibilities that the government would be able to give children rights in opposition to parents. But European countries understood that kids deserved rights. So why is the EU now suggesting that kids can’t consent to using the internet?

This legislation is shaped by a romanticization of parent-child relationships and an assumption of parental knowledge that is laughable.

But what really bothers me are the consequences for the least-empowered youth. While the EU at least made a carve-out for kids who are accessing counseling services, there’s no consideration of how many LGBTQ kids are accessing sites that might put them in danger if their parents knew. There’s no consideration for kids who are regularly abused and using technology and peer relations to get support. There’s no consideration for kids who are trying to get health information, privately. And so on. The UN Convention on the Rights of the Child puts vulnerable youth front and center in its protections. But somehow they’ve been forgotten by EU policymakers.

Child advocates are responding critically. I’m also hearing from countless scholars who are befuddled by and unsure of why this is happening. And it doesn’t seem as though the EU process even engaged the public or experts on these issues before moving forward. So my hope is that some magical outcry will stymie this proposal sooner rather than later. But I’m often clueless when it comes to how lawmakers work.

What baffles me the most is the logic of this proposal given the likely outcomes. We know from the dynamics around COPPA that, if given the chance, kids will lie about their age. And parents will help them. But even if we start getting parental permission, this means we’ll be collecting lots more information about youth, going against the efforts to minimize information. Still, most intriguing is what I expect this will do to the corporate ecosystem.

Big multinationals like Facebook and Twitter, which operate in the EU, will be required to follow this law. All companies based in the EU will be required to comply with this law. But what about small non-EU companies that do not store data in the EU or work with EU vendors and advertisers? It’s unclear if they’ll have to comply because they aren’t within the EU’s reach. Will this mean that EU youth will jump from non-EU service to non-EU service to gain access? Will this actually end up benefiting non-EU startups who are trying to challenge the big multinationals? But doesn’t this completely undermine the EU’s efforts to build EU companies and services?

I don’t know, but that’s my gut feeling when reading the new law. While I’m not a lawyer, one thing I’ve learned in studying young people and technology is that when there’s a will, there’s a way. And good luck trying to stop a 15-year-old from sharing photos with her best friend when her popularity is on the line.

I don’t know what will come from this law, but it seems completely misguided. It won’t protect kids’ data. It won’t empower parents. It won’t enhance privacy. It won’t make people more knowledgeable about data abuses. It will irritate but not fundamentally harm U.S. companies. It will help vendors that offer age verification become rich. It will hinder EU companies’ ability to compete. But above all else, it will make teenagers’ lives more difficult, make vulnerable youth more vulnerable, and invite kids to be more deceptive. Is that really what we want?

(This was originally posted on Bright on Medium.)

New book: Participatory Culture in a Networked Era by Henry Jenkins, Mimi Ito, and me!

In 2012, Henry Jenkins approached Mimi Ito and me with a crazy idea that he’d gotten from talking to the folks at Polity. Would we like to sit down and talk through our research and use that as the basis of a book? I couldn’t think of anything more awesome than spending time with two of my mentors and teasing out the various strands of our interconnected research. I knew that there were places where we were aligned and places where we disagreed or, at least, where our emphases provided different perspectives. We’d all been running so fast in our own lives that we hadn’t had time to get to that level of nuance, and this crazy project would be the perfect opportunity to do precisely that.

We started by asking our various communities what questions they would want us to address. And then we sat down together, face-to-face, for two days at a time over a few months. And we talked. And talked. And talked. In the process, we started identifying themes and how our various areas of focus were woven together.

Truth be told, I never wanted it to end. Throughout our conversations, I kept flashing back to my years at MIT when Henry opened my eyes to fan culture and a way of understanding media that seeped deep inside my soul. I kept remembering my trips to LA where I’d crash in Mimi’s guest room, talking research late into the night and being woken in the early hours by a bouncy child who never understood why I didn’t want to wake up at 6AM. But above everything else, the sheer delight of brainjamming with two people whose ideas and souls I knew so well was ecstasy.

And then the hard part started. We didn’t want this project to be the output of self-indulgence and inside baseball. We wanted it to be something that helped others see how research happens, how ideas form, and how collaborations and disagreements strengthen seemingly independent work. And so we started editing. And editing. And editing. Getting help editing. And then editing some more.

The result is Participatory Culture in a Networked Era and it is unlike any project I’ve ever embarked on or read. The book is written as a conversation and it was the product of a conversation. Except we removed all of the umms and uhhs and other annoying utterances and edited it in an attempt to make the conversation make sense for someone who is trying to understand the social and cultural contexts of participation through and by media. And we tried to weed out the circular nature of conversation as we whittled down dozens of hours of recorded conversation into a tangible artifact that wouldn’t kill too many trees.

What makes this book neat is that it sheds light on all of the threads of conversation that helped the work around participatory culture, connected learning, and networked youth practices emerge. We wanted to make the practice of research as visible as our research and reveal the contexts in which we are operating alongside our struggles to negotiate different challenges in our work. If you’re looking for classic academic output, you’re going to hate this book. But if you want to see ideas in context, it sure is fun. And in the conversational product, you’ll learn new perspectives on youth practices, participatory culture, learning, civic engagement, and the commercial elements of new media.

OMG did I fall in love with Henry and Mimi all over again doing this project. Seeing how they think just tickles my brain in the best ways possible. And I suspect you’ll love what they have to say too.

The book doesn’t officially release for a few more weeks, but word on the street is that copies of this book are starting to ship. Check it out!

What World Are We Building?

This morning, I had the honor and pleasure of giving the Everett C. Parker Lecture in celebration of the amazing work he did to fight for media justice. The talk that I gave wove together some of my work with youth (on racial framing of technology) and my more recent thoughts on the challenges presented by data analytics. I also pulled on the work of Latanya Sweeney and Eric Horvitz and argued that those of us who were shaping social media systems “didn’t architect for prejudice, but we didn’t design systems to combat it either.” More than anything, I used this lecture to argue that “we need those who are thinking about social justice to understand technology and those who understand technology to commit to social justice.”

My full remarks are available here: “What World Are We Building?” Please let me know what you think!

Join me at the Parker Lecture on Oct. 20 in Washington DC

Every year, the media reform community convenes to celebrate one of the founders of the movement, to reflect on the ethical questions of our day, and to honor outstanding champions of media reform. This annual event, called the Parker Lecture, is in honor of Dr. Everett C. Parker, who is often called the founder of the media reform movement, and who died last month at the age of 102. Dr. Parker made incredible contributions from his post as the Executive Director of the United Church of Christ’s Office of Communication, Inc. This organization is part of the progressive movement’s efforts to hold media accountable and to consider how best to ensure that all people, no matter their income or background, benefit from new technology.

I am delighted to be part of this year’s events as one of the honorees. My other amazing partners in this adventure are:

  • Joseph Torres, senior external affairs director of Free Press and co-author of News for All the People: The Epic Story of Race and the American Media, will receive the Parker Award which recognizes an individual whose work embodies the principles and values of the public interest in telecommunications.
  • Wally Bowen, co-founder and executive director of the Mountain Area Information Network (MAIN), will receive the Donald H. McGannon Award in recognition of his dedication to bringing modern telecommunications to low-income people in rural areas.

The 33rd Annual Parker Lecture will be held Tuesday, October 20, 2015 at 8 a.m. at the First Congregational United Church of Christ, 945 G St NW, Washington, DC 20001. I will be giving a talk as part of this celebration and will be joined by Clayton Old Elk of the Crow Tribe, who will offer a praise song.

Want to join us? Tickets are available here.

Which Students Get to Have Privacy?

There’s a fresh push to protect student data. But the people who need the most protection are the ones being left behind.

It seems that student privacy is trendy right now. At least among elected officials. Congressional aides are scrambling to write bills that one-up each other in showcasing how tough they are on protecting youth. We’ve got Congressmen Polis and Messer (with Senator Blumenthal expected to propose a similar bill in the Senate). Kline and Scott have a discussion draft of their bill out, while Markey and Hatch have reintroduced the bill they introduced a year ago. And then there’s Senator Vitter’s proposed bill. And let’s not even talk about the myriad state-level bills.

Most of these bills are responding in some way or another to a 1974 piece of legislation called the Family Educational Rights and Privacy Act (FERPA), which restricted what schools could and could not do with student data.

Needless to say, lawmakers in 1974 weren’t imagining the world of technology that we live with today. On top of that, legislative and bureaucratic dynamics have made it difficult for the Department of Education to address failures at the school level without going nuclear and just defunding a school outright. And schools lack security measures (because they lack technical sophistication) and they’re entering into all sorts of contracts with vendors that give advocates heartburn.

So there’s no doubt that reform is needed, but the question — as always — is what reform? For whom? And with what kind of support?

The bills are pretty spectacularly different, pushing for a range of mechanisms to limit abuses of student data. Some are fine-driven; others take a more criminal approach. There are also differences in who can access what data under what circumstances. The bills give different priorities to parents, teachers, and schools. Of course, even though this is all about *students*, they don’t actually have a lot of power in any of these bills. It’s all a question of who can speak on their behalf and who is supposed to protect them from the evils of the world. And what kind of punishment for breaches is most appropriate. (Not surprisingly, none of the bills provide for funding to help schools come up to speed.)

As a youth advocate and privacy activist, I’m generally in favor of student privacy. But my panties also get in a bunch when I listen to how people imagine the work of student privacy. As is common in Congress as election cycles unfold, student privacy has a “save the children” narrative. And this forces me to want to know more about the threat models we’re talking about. What are we saving the children *from*?

Threat Models

There are four external threats that I think are interesting to consider. These are the dangers that students face if their data leaves the education context.

#1: The Stranger Danger Threat Model. No matter how much data we have to challenge prominent fears, the possibility of creepy child predators lurking around school children still overwhelms any conversation about students, including their data.

#2: The Marketing Threat Model. From COPPA to the Markey/Hatch bill, there’s a lot of concern about how student data will be used by companies to advertise products to students or otherwise fuel commercial data collection that drives advertising ecosystems.

#3: The Consumer Finance Threat Model. In a post-housing bubble market, the new subprime lending schemes are all about enabling student debt, especially since students can’t declare bankruptcy when they default on their obscene loans. There is concern about how student data will be used to fuel the student debt ecosystem.

#4: The Criminal Justice Threat Model. Law enforcement has long been interested in student performance, but this data is increasingly desirable in a world of policing that is trying to assess risk. There are reasons to believe that student data will fuel the new policing architectures.

The first threat model is artificial (see: “It’s Complicated”), but it propels people to act and create laws that will not do a darn thing to address abuse of children. The other three threat models are real, but these threats are spread differently over the population. In the world of student privacy, #2 gets far more attention than #3 and #4. In fact, almost every bill creates carve-outs for “safety” or otherwise allows access to data if there’s concern about a risk to the child, other children, or the school. In other words, if police need it. And, of course, all of these laws allow parents and guardians to get access to student data with no consideration of the consequences for students who are under state supervision. So, really, #4 isn’t even in the cultural imagination because, as with nearly everything involving our criminal justice system, we don’t believe that “those people” deserve privacy.

The reason that I get grouchy is that I hate how the risks that we’re concerned about are shaped by the fears of privileged parents, not the risks of those who are already under constant surveillance, those who are economically disadvantaged, and those who are in the school-prison pipeline. #2-#4 are all real threat models with genuine risks, but we consistently take #2 far more seriously than #3 or #4, and privileged folks are more concerned with #1.

What would it take to actually consider the privacy rights of the most marginalized students?

The threats that poor youth face? That youth of color face? And the trade-offs they make in a hypersurveilled world? What would it take to get people to care about how we keep building out infrastructure and backdoors to track low-status youth in new ways? It saddens me that the conversation is constructed as being about student privacy, but it’s really about who has the right to monitor which youth. And, as always, we allow certain actors to continue asserting power over youth.

This post was originally published to The Message at Medium on May 22, 2015. Image credit: Francisco Osorio

I miss not being scared.

From the perspective of an adult in this society, I’ve taken a lot of stupid risks in my life. Physical risks like outrunning cops and professional risks like knowingly ignoring academic protocol. I have some scars, but I’ve come out pretty OK in the scheme of things. And many of those risks have paid off for me even as similar risks have devastated others.

Throughout the ten years that I was doing research on youth and social media, countless people told me that my perspective on teenagers’ practices would change once I had kids. Wary of this frame, I started studying the culture of fear, watching as parents exhibited fear of their children doing the same things that they once did, convinced that everything today is so much worse than it was when they were young or that the consequences would be so much greater. I followed the research on fear and the statistics on teen risks and knew that it wasn’t about rationality. There was something about how our society socialized parents into parenting that produced the culture of fear.

Now I’m a parent. And I’m in my late 30s. And I get to experience the irrational cloud of fear. The fear of mortality. The fear of my children’s well-being. Those quiet little moments when crossing the street where my brain flips to an image of a car plowing through the stroller. The heart-wrenching panic when my partner is late and I imagine all of the things that might have happened. The reading of stories of others’ pain and shuddering with fear that my turn is next. The moments of loss and misfortune in my own life when I close my eyes and hope my children don’t have to feel that pain. I can feel the haunting desire to avoid risks and to cocoon my children.

I know the stats. I know the ridiculousness of my fears. And all I can think of is the premise of Justine Larbalestier’s Magic or Madness, where the protagonist must use her magic or go crazy. I feel like I am at constant war with my own brain over the dynamics of fear. I refuse to succumb to the fear because I know how irrational it is, but in refusing, I send myself down crazy rabbit holes on a regular basis. For my kids’ sake, I want to not let fear shape my decision-making, but then I’m fearing fear. And, well, welcome to the rabbit hole.

I miss not being scared. I miss taking absurd risks and not giving them a second thought. I miss doing the things that scare the shit out of most parents. I miss the ridiculousness of not realizing that I should be afraid in the first place.

In our society, we infantilize youth for their willingness to take risks that we deem dangerous and inappropriate. We get obsessed with protecting them and regulating them. We use brain science and biology to justify restrictions because we view their decision-making as flawed. We look at new technologies or media and blame them for corrupting the morality of youth, for inviting them to do things they shouldn’t. Then we do an about-face and capitalize on their risk-taking when it’s to our advantage, such as when they go off to war on our behalf.

Is our society really worse off because youth take risks and adults don’t? Why are they wrong and us old people are right? Is it simply because we have more power? As more and more adults live long, fearful lives in Western societies, I keep thinking that we should start regulating our decision-making. Our inability to be brash is costing our society in all sorts of ways. And it will only get worse as some societies get younger while others get older. Us old people aren’t imagining new ways of addressing societal ills. Meanwhile, our conservative scaredy cat ways don’t allow youth to explore and challenge the status quo or invent new futures. I keep thinking that we need to protect ourselves and our children from our own irrationality produced from our fears.

I have to say that fear sucks. I respect its power, just like I respect the power of a hurricane, but it doesn’t make me like fear any more. So I keep dreaming of ways to eradicate fear. And what I know for certain is that statistical information won’t cut it. And so I dream of a sci-fi world in which I can manipulate my synapses to prevent those ideas from triggering. In the meanwhile, I clench my jaw and try desperately to not let the crazy visions of terrible things that could happen work their way into my cognitive perspective. And I wonder what it will take for others to recognize the impact that our culture of fear is having on all of us.

This post was originally published to The Message at Medium on May 4, 2015