Tag Archives: safety

Facebook’s Panic Button: Who’s panicking? And who’s listening?

First, I commend Facebook for taking child safety seriously. When I was working with them as part of the Internet Safety Technical Task Force, I was always impressed by their concern. I think that there’s a mistaken belief that Facebook doesn’t care about child safety. This was the message that many propagated when Facebook balked at implementing the “Panic Button” in the UK. As many news articles recently reported, Facebook finally conceded last week, agreeing to implement it after enormous pressure from safety advocates. Their slowness in agreeing was attributed to a lack of caring, but this is simply not true. There are actually very good reasons to be wary of the “Panic Button.” My fear is that the lack of critical conversation about it will result in people treating it as a panacea, rather than acknowledging its limitations and failings. Furthermore, touting it as a solution obscures the actual dangers that youth face.

The “Panic Button” is actually an app called “ClickCEOP.” Users must add the app, which gives them a tab with a button they can press whenever they need to contact the police’s Child Exploitation and Online Protection Centre (CEOP). They’re encouraged to share the badge as a way of protecting their friends.

Pressure to create the “Panic Button” came after the horrific murder of 17-year-old Ashleigh Hall by a 33-year-old serial sex offender named Peter Chapman who approached the teen on Facebook. Reports suggest that he told her he was a teenage boy, although she also knew that none of her friends knew him. She lied to her parents to leave the house to meet up with him at which point he abducted, raped, and murdered her. Why she started conversing with him is unknown. Why – after being convicted of other sex crimes against minors – he was out on the streets and not being monitored is also unclear. All that is known is that this is the kind of tragedy that constitutes every parent’s worst nightmare.

Safety advocates used Hall’s terrible death to rally for a Panic Button. But what would this have done? She was clearly willing to converse with him and had no reservations about meeting up with him. None of her friends knew she was conversing with him. Nor did her parents. The heartbreaking reality of most rape and murder cases of this type is that the teen knowingly meets up with these men. When it involves teens, it’s usually because they believe that they’re in love, value the attention, and are not even thinking about the risks. Online Panic Buttons do absolutely nothing to combat the fundamental challenge of helping youth understand why such encounters are risky.

CEOP invites people to implement the ClickCEOP tab with the following questions:

Do you sometimes struggle to find answers to things that worry you online?
Had bad wall posts from people you don’t know?
Had a chat conversation that went sour?
Seen something written about you that isn’t true, or worse?
Has your account ever been hacked, even just as a joke?

These are serious questions and serious issues, the heart of bullying. They aren’t really about predation, but that doesn’t make them any less important. That said, how can the police help every teen who is struggling with the wide range of bullying implied, from teasing to harassment? Even if every teen in the UK were to add this app and take it seriously, the UK police have nowhere near the resources to help teens manage challenging social dynamics. As a result, what false promises are getting made?

Many of the teens that I encounter truly need help. They need supportive people in their lives to turn to. I desperately want to see social services engage with these youth. But what I find over and over again is that social services do not have the resources to help even a fraction of the youth who come to them. So when we create a system where we tell youth that they have an outlet, and then they use it and we don’t live up to our side of the bargain, then what? Many of the teens that I interviewed told me of their efforts to report problems to teachers, guidance counselors, parents, and others, to no avail. That left them feeling more helpless and alone. What’s the risk of CEOP doing this to youth?

Finally, what’s the likelihood that kids (or adults) will click on this as a joke or just to get attention? How is CEOP going to handle the joke clicks vs. the real ones? How will they discern? One thing you learn from dealing with helplines is that kids often call in to talk about their friends when they’re really looking for help for themselves. It’s easier to externalize first to test the waters. CEOP may get prank messages that are real cries for help. What happens when those go unanswered?

The press are all reporting this as being a solution to predation, but the teens who are at-risk for dangerous sexual encounters with adults are not going to click a Panic Button because they think that they know what they’re doing. CEOP is advertising this as a tool for bullying, but it’s not clear to me that they have the resources (or, for that matter, skillset) to handle the mental health issues they’re bound to receive on that end. And users may use this for a whole host of things for which it was never designed. The result, if anyone implements it at all, could be a complete disaster.

So why do I care that another well-intentioned technology is out there and will likely result in no change? I care because we need change. We need to help at-risk youth. And a tremendous amount of effort and energy is being expended to implement something that is unlikely to work but makes everyone feel as though the problem is solved. And I fear that there will be calls in the US to implement this without anyone ever evaluating the effectiveness of such an effort in the UK. So I respect Facebook’s resistance because I do think that they fully understand that this won’t help the most needy members of their community. And I think that the hype around it lets people forget about those most at-risk.

To my friends across the pond… Please help evaluate this “solution.” Please tell us what is and is not working, what kinds of cases the CEOP receives and how they handle them. Any data you can get your hands on would be greatly appreciated. I realize that it’s too late to stop this effort, but I really hope that people are willing to treat it as an experiment that must be evaluated. Cuz goddess knows the kids need us.

Risky Behaviors and Online Safety: A 2010 Literature Review

I’m pleased to announce a rough draft of Risky Behaviors and Online Safety: A 2010 Literature Review for public feedback. This Literature Review was produced for Harvard Berkman Center’s Youth and Media Policy Working Group Initiative, co-directed by John Palfrey, Urs Gasser, and myself and funded by the MacArthur Foundation. This Literature Review builds on the 2008 LitReview that Andrew Schrock and I crafted for the Internet Safety Technical Task Force. This document is not finalized, but we want to make our draft available broadly so that scholars working in this area can inform us of anything that we might be missing.

Risky Behaviors and Online Safety: A 2010 Literature Review

It’s been almost two years since the Internet Safety Technical Task Force completed its work. As a co-director of that project, I coordinated the Research Advisory Board to make certain that we included all of the different research that addressed online safety. When we shared our report, we were heavily criticized as being naive and clueless (or worse). Much of the criticism was directed at me and the researchers. We were regularly told that social network sites would radically change the picture of online safety and that we simply didn’t have new enough data to understand how different things would be in a few years. Those critiques continue. As researchers who were actively in the field collecting data, many of us are frustrated because what we see doesn’t match what the politicians believe. It’s been two years since we put out that first Lit Review and I’m glad to be able to share an updated one with all sorts of new data. Not surprisingly (to us at least), not much has changed.

What you’ll find is that researchers have gone deeper, getting a better picture of some of the dynamics and implications. You’ll also find that the overarching picture has not changed much. Many of the core messages that we shared in the ISTTF report continue to hold. In this updated Lit Review, we interrogate the core issues raised in the ISTTF report and introduce new literature that complements, conflicts, or clarifies what was previously said. We bring in international data to provide a powerful comparison, most notably from the reports that came out in the EU and Australia. And we highlight areas where new research is currently underway and where more research is necessary.

This Literature Review does not include information on sexting, which can be found in Sexting: Youth Practices and Legal Implications. It also does not include some of the material on self-harm because we are working on a separate review of that material (to be released soon).

As I said, this is a draft version that we’re putting out for public commentary and critique. We will continue to modify this in the upcoming months. If you think we’re missing anything, please let us know!!

Sexting: Youth Practices and Legal Implications

Dena Sacco and her team have put together a fantastic document that maps out the legal and socio-legal issues surrounding sexting: Sexting: Youth Practices and Legal Implications. This is for the Berkman Center Youth and Media Policy Working Group that I’m coordinating with John Palfrey and Urs Gasser (funded by the MacArthur Foundation).

Sexting: Youth Practices and Legal Implications

This document addresses legal and practical issues related to the practice colloquially known as sexting. It was created by Harvard Law School’s Cyberlaw Clinic, based at the Berkman Center for Internet & Society, for the Berkman Center’s Youth and Media Policy Working Group Initiative. The Initiative is exploring policy issues that fall within three substantive clusters emerging from youth’s information and communications technology practices: Risky Behaviors and Online Safety; Privacy, Publicity and Reputation; and Youth Created Content and Information Quality. The Initiative is funded by the John D. and Catherine T. MacArthur Foundation and is co-directed by danah boyd, Urs Gasser, and John Palfrey. This document was created for the Risky Behaviors and Online Safety cluster, which is focused on four core issues: (1) sexual solicitation and problematic sexual encounters; (2) Internet-related bullying and harassment; (3) access to problematic content, including pornography and self-harm content; and (4) youth-generated problematic content, including sexting. The Initiative’s goal is to bring the best research on youth and media into the policy-making debate and to propose practical interventions based upon that research.

This document is intended to provide background for discussion of interventions related to sexting. It begins with a definition of sexting, and continues with overviews of research and media stories related to sexting. It then discusses the statutory and constitutional framework for child pornography and obscenity. It concludes with a description of current and pending legislation meant to address sexting.

Four Essays Addressing Risky Behaviors and Online Safety

At Harvard’s Berkman Center, John Palfrey, Urs Gasser, and I have been co-directing the Youth and Media Policy Working Group Initiative to investigate the role that policy can play in addressing core issues involving youth and media. John has been leading the Privacy, Publicity, and Reputation track; Urs has been managing the Youth Created Content and Information Quality track; and I have been coordinating the Risky Behaviors and Online Safety track. We’ll have a lot of different pieces coming out over the next few months that stem from this work. Today, I’m pleased to share four important essays that emerged from the work we’ve been doing in the Risky Behaviors and Online Safety track:

“Moving Beyond One Size Fits All With Digital Citizenship” by Matt Levinson and Deb Socia

This essay addresses some of the challenges that educators face when trying to address online safety and digital citizenship in the classroom.

“Evaluating Online Safety Programs” by Tobit Emmens and Andy Phippen

This essay discusses the importance of evaluating interventions once they are implemented, so as not to produce dangerous unintended consequences, using work in suicide prevention as a backdrop.

“The Future of Internet Safety Education: Critical Lessons from Four Decades of Youth Drug Abuse Prevention” by Lisa M. Jones

This essay contextualizes contemporary internet safety programs in light of work done in drug abuse prevention to highlight best practices for implementing interventions.

“Online Safety: Why Research is Important” by David Finkelhor, Janis Wolak, and Kimberly J. Mitchell

This essay examines the role that research can and should play in shaping policy.

These four essays provide crucial background information for understanding the challenges of implementing education and public health interventions in the area of online safety. I hope you will read them because they are truly mind-expanding pieces.

How COPPA Fails Parents, Educators, Youth

Ever wonder why youth have to be over 13 to create an account on Facebook or Gmail or Skype? It has nothing to do with safety.

In 1998, the U.S. Congress enacted the Children’s Online Privacy Protection Act (COPPA) with the best of intentions. They wanted to make certain that corporations could not collect or sell data about children under the age of 13 without parental permission, so they created a requirement to check age and get parental permission for those under 13. Most companies took one look at COPPA and decided that the process of getting parental consent was far too onerous, so they simply required all participants to be at least 13 years of age. The notifications that say “You must be 13 years or older to use this service” and the pull-down menus that don’t allow you to indicate that you’re under 13 have nothing to do with whether or not a website is appropriate for a child; they have to do with whether or not the company thinks that it’s worth the effort to seek parental permission.

COPPA is currently being discussed by the Federal Trade Commission and the US Senate. Most of the conversation focuses on whether or not companies are abiding by the ruling and whether or not the age should be upped to 18. What is not being discussed is the effectiveness of this legislation or what it means to American families (let alone families in other countries who are affected by it). In trying to understand COPPA’s impact, my research led me to conclude four things:

  1. Parents and youth believe that age requirements are designed to protect their safety, rather than their privacy.
  2. Parents want their children to have access to social media services to communicate with extended family members.
  3. Parents teach children to lie about their age to circumvent age limitations.
  4. Parents believe that age restrictions take away their parental choice.

How the Public Interprets COPPA-Prompted Age Restrictions

Most parents and youth believe that the age requirements that they encounter when signing up to various websites are equivalent to a safety warning. They interpret this limitation as: “This site is not suitable for children under the age of 13.” While this might be true, that’s not actually what the age restriction is about. Not only does COPPA fail to inform parents about the appropriateness of a particular site, but parental misinterpretations of the age restrictions mean that few are aware that this stems from an attempt to protect privacy.

While many parents do not believe that social network sites like Facebook and MySpace are suitable for young children, they often want their children to have access to other services that have age restrictions (email, instant messaging, video services, etc.). Often, parents say that these tools enable children to connect with extended family; Skype is especially important to immigrant parents who have extended family outside of the US. Grandparents were most frequently cited as the reason why parents created accounts for their young children. Many parents will create accounts for children even before they are literate because the value of connecting children to family outweighs the age restriction. When parents encourage their children to use these services, they send a conflicting message that their kids eventually learn: ignore some age limitations but not others.

By middle school, communication tools and social network sites are quite popular among tweens, who pressure their parents for permission to get accounts on these services because they want to communicate with their classmates, church friends, and friends who have moved away. Although parents in the wealthiest and most educated segments of society often forbid their children from signing up to social network sites until they turn 13, most parents support their children’s desires to acquire email and IM, precisely because of familial use. To join, tweens consistently lie about their age when asked to provide it. When I interviewed teens about who taught them to lie, the overwhelming answer was parents. I interviewed parents who consistently admitted to helping their children circumvent the age restriction by teaching them that they needed to choose a birth year that would make them over 13. Even in households where an older sibling or friend was the educator, parents knew their children had email, IM, and social network site accounts. Interestingly, in households where parents forbid Facebook but allow email, kids have started noting the hypocritical stance of their parents. That’s not a good outcome of this misinterpretation.

When I asked parents how they felt about the age restrictions presented by social websites, parents had one of two responses. When referencing social network sites, parents stated that they felt the restrictions were justified because younger children were too immature to handle the challenges of social network sites. Yet, when discussing sites and services that they did not believe were risky environments or that they felt were important for family communication, parents often felt that the limitations were unnecessarily restrictive. Those who interpreted the restriction as a maturity rating did not understand why the sites required age confirmation. Other parents felt as though the websites were trying to tell them how to parent. Some were particularly outraged by what they felt was a paternalistic attitude on the part of websites, making statements like: “Who are they to tell me how to be a good parent?”

Across the board, parents and youth misinterpret the age requirements that emerged from the implementation of COPPA. Except for the most educated and technologically savvy, they are completely unaware that these restrictions have anything to do with privacy. More problematically, the conflicting ways in which parents enforce some age restrictions and not others send a dangerous message.

Policy Literacy and the Future of COPPA

There’s another issue here that’s not regularly addressed. COPPA affects educators and social services in counterintuitive ways. While non-commercial services are not required to abide by COPPA, there are plenty of commercial education and health services out there that are seeking to help youth. Parental permission might be viable for an organization working to help kids learn arithmetic through online tutoring, but it is completely untenable when we’re thinking about suicide hotlines, LGBT programs, and mental health programs. (Keep in mind that many hospitals are for-profit even if their free websites are out there for general help.)

COPPA is well-intended but its implementation and cultural uptake have been a failure. The key to making COPPA work is not to make it stricter or to force the technology companies to be better at confirming that the kids on their sites are not underage. Not only is this technologically infeasible without violating privacy at an even greater level, doing so would fail to recognize what’s actually happening on the ground. Parents want to be able to parent, to be able to decide what services are appropriate for their children. At the same time, we shouldn’t forget that not all parents are present and we don’t want to shut teens out of crucial media spaces because their parents are absent, as would often be the case if we upped the age to 18. The key to improving COPPA is to go back to the table and think about how children’s data is being used, whether it’s collected implicitly or explicitly.

In order for the underlying intentions of COPPA to work, we need both information literacy and policy literacy. We need to find ways to help digital citizens understand how their information is being used, what rights they have, and how the policies that exist affect their lives. If parents and educators don’t understand that the age-13 limitation is about privacy, COPPA will continue to fail. It’s time for parents and educators to learn more about COPPA and start sharing their own perspectives, asking Congress to do a better job of addressing the privacy issues without taking away their rights to parent and educate. And without marginalizing those who aren’t fortunate enough to have engaged parents by their side.

John Palfrey, Urs Gasser, and I submitted a statement to the FTC and Senate called “How COPPA, as Implemented, is Misinterpreted by the Public: A Research Perspective.” To learn more about COPPA or submit your own letter to the FTC and Senate, go to the FTC website.

This post was originally posted at the DML Central blog.

Image Credit: WarzauWynn

Deception + fear + humiliation != education

I hate fear-based approaches to education. I grew up on the “this is your brain on drugs” messages and watched classmates go from being afraid of drugs to trying marijuana to deciding that all of the messages about drugs were idiotic. (Crystal meth and marijuana shouldn’t be in the same category.) Much to my frustration, adults keep turning to fear to “educate” the kids with complete disregard for the unintended consequences of this approach. Sometimes, it’s even worse. I recently received an email from a friend of mine (Chloe Cockburn) discussing an issue brought before the ACLU. She gave me permission to share this with you:

A campus police officer has been offering programs about the dangers inherent in using the internet to middle and high school assemblies. As part of her presentation she displays pictures that students have posted on their Facebook pages. The idea is to demonstrate that anyone can have access to this information, so be careful. She gains access to the students’ Facebook pages by creating false profiles claiming to be a student at the school and asking to be “friended”, evidently in violation of Facebook policy.

An ACLU affiliate received a complaint from a student at a small rural high school. The entire assembly was shown a photo of her holding a beer. The picture was not on the complainant’s Facebook page, but on one belonging to a friend of hers, who allowed access to the bogus profile created by the police officer. The complainant was not “punished” as the plaintiff above was, but she was humiliated, and she is afraid that she will not get some local scholarship aid as a result.

So here we have a police officer intentionally violating Facebook’s policy and creating a deceptive profile to entrap teenagers and humiliate them to “teach them a lesson”??? Unethical acts + deception + fear + humiliation != education. This. Makes. Me. Want. To. Scream.

Call for descriptions: online safety programs

The Risky Behaviors and Online Safety track of the Youth and Media Policy Working Group Initiative at the Berkman Center for Internet & Society at Harvard University is creating a Compendium of youth-based Internet safety programs and interventions. We are requesting organizations, institutions, and individuals working in online youth safety to share descriptions of their effective programs and interventions that address risky behavior by youth online. We are particularly interested in endeavors that involve educators, social services, mentors and coaches, youth workers, religious leaders, law enforcement, mental health professionals, and those working in the field of public or adolescent health.

Program descriptions will be made publicly available. Exemplary programs will be spotlighted to policy makers, educators, and the public so that they too can learn about different approaches being tried and tested. Submissions also will be used to inform recommendations for future research and program opportunities.

Submissions should document solutions, projects, or initiatives that address at least one of the following four areas:

  • Sexual solicitation of and sex crimes involving minors
  • Bullying or harassment of minors
  • Access to problematic or illegal content (including pornographic and violent content)
  • Youth-generated problematic or illegal content (including sexting and self-harm sites)

We are especially keen to highlight projects that focus on underlying problems, risky youth behavior, and settings where parents cannot be relied upon to help youth. The ideal solution, project, or initiative will be grounded in research-driven knowledge about the risks youth face rather than generalized beliefs about online risks. Successful endeavors will most likely recognize that youth cannot simply be protected, but must be engaged as active agents in any effort that seeks to help them.

Please forward this call along to any organizations and individuals you think would be able to share information about their successful experiences and programs.

Should you have any questions, please contact us: ymps-submissions@cyber.law.harvard.edu.

Seeking: Research Assistant/Intern for Online Safety Literature Review

The Youth and Media Policy Working Group at Harvard’s Berkman Center for Internet & Society is looking for a research assistant/intern to help update the Literature Review produced by the Internet Safety Technical Task Force. This project builds on the Berkman Center’s work studying how youth interact with digital media and specifically seeks to draft policy prescriptions in three areas: privacy, safety, and content creation.

The ideal candidate would be a graduate student (or individual working towards entering a graduate program) who is fluent in quantitative methodologies and can interpret and evaluate statistical findings. The RA/intern would be working to extend the Lit Review from the ISTTF report to include international studies, new studies in the last year, and studies that cover a wider set of topics with respect to online safety. The products of this internship will be an updated Literature Review and a shorter white paper of the high points. Other smaller tasks may be required. This project should take 10-15 hours per week and will last at least the fall semester.

The RA/intern will work directly with Dr. danah boyd and will be a part of a broader team trying to build resources for understanding issues relating to online safety. The candidate should have solid research skills and feel confident reading scholarly research in a wide array of fields. The candidate must have library access through their own university. Before applying, the candidate should read the Literature Review and be confident that this is work that s/he could do.

Preference will be given to candidates in the Boston area, but other U.S. candidates may be considered if their skills and knowledge make them particularly ideal for this job. Unfortunately, we are unable to hire non-U.S. individuals for this job.

To apply, please send a copy of your resume and a cover letter to Catherine Bracy and danah boyd.

(See also: hiring Technical Research Assistant for Adhoc Tasks at MSR)

help me find innovative practitioners who address online safety issues

I need your help. One of our central conclusions in the Internet Safety Technical Task Force Report was that many of the online safety issues require the collective engagement of a whole variety of different groups, including educators, social workers, psychologists, mental health experts, law enforcement, etc. Through my work on online safety, I’ve met a lot of consultants, activists, and online safety experts. Through my work as a researcher, I’ve met a lot of practitioners who are trying to engage youth about these issues through outright fear that isn’t grounded in anything other than myth.

Unfortunately, I haven’t met a lot of people who are on the ground with youth dealing with the messiness of addressing online safety issues from a realistic point of view. I don’t know a lot of practitioners who are developing innovative ways of educating and supporting at-risk youth because they have to in their practices. I need your help to identify these people.

  • I want to know teachers. Who are the teachers who are trying to integrate online safety issues into their classroom by using a realistic model of youth risk?
  • I want to know school administrators. Who are the school administrators who are trying to build school policy that addresses online safety issues from a non-fear-driven approach?
  • I want to know law enforcement officers. Who are the law enforcement officers who are directly dealing with the crimes that occur?
  • I want to know people from social services. Who are the people in social services (like social workers) who are directly working with at-risk youth who engage in risky behavior online?
  • I want to know mental health practitioners. Who are the psychologists and mental health practitioners who are trying to help youth who engage in risky practices online? Or who help youth involved in self-harm deal with their engagement with self-harm websites?
  • I want to know youth ministers. Who are the youth pastors and ministers who are trying to help at-risk youth navigate risky situations?
  • I want to know other youth-focused practitioners. Who else is out there working with youth who is incorporating online safety issues into their practice?

I know that there are a lot of people out there who are speaking about what these practitioners should do, who are advising these practitioners, or who are trying to build curricula/tools to support these practitioners, but I want to learn more about the innovative practitioners themselves.

Please… who’s incorporating sensible online safety approaches into their daily practice with youth in the classrooms, in therapy, in social work, in religious advising, etc.? Who’s out there trying to wade through the myths, get a realistic portrait, and approach youth from a grounded point of view in order to directly help them, not as a safety expert but as someone who works with youth because of their professional role? Who do I need to know?

(Feel free to leave a comment or email me at zephoria [at] zephoria [dot] org.)