Risks vs. Harms: Youth & Social Media

Since the “social media is bad for teens” myth will not die, I keep having intense conversations with colleagues, journalists, and friends over what the research says and what it doesn’t. (Alice Marwick et al. put together a great little primer in light of the legislative moves.) Along the way, I’ve also started to recognize how slipperiness between two terms creates confusion — and political openings — and so I wanted to call them out in case this is helpful for others thinking about these issues.

In short, “Does social media harm teenagers?” is not the same question as “Can social media be risky for teenagers?”

The language of “harm” in this question is causal in nature. It is also legalistic. Lawyers look for “harms” in order to place blame on actants or otherwise regulate them. By and large, in legal contexts, we talk about PersonA harming PersonB. As such, PersonA is to be held accountable. But when we get into product safety discussions, we also talk about how faulty design creates the conditions for people to be harmed, even absent intentional, malfeasant actions by the product designer. Making a product liability claim is much harder because it requires proving the link between the design and the harm.

Risk is a different matter. Getting out of bed introduces risks into your life. Risk is something to identify and manage. Some environments introduce more potential risks and some actions reduce the risks. Risk management is a skill to develop. And while regulation can be used to reduce certain risks, it cannot eliminate them. And it can also backfire and create more risks. (This is the problem that María Angel and I have with techno-legal solutionism.)

Let’s unpack this a bit by shifting contexts and thinking about how we approach risks more generally.

Skiing is Risky.

Skiing is understood to be a risky sport. As we approach skiing season out here in the Rockies, I’m bracing myself for the uptick in crutches, knee scooters, and people under 40 using the wheelchair services at the Denver airport. There is also a great deal of effort being put into trying to reduce the risk that someone will leave the slopes in this state. I’m fascinated by the care ski instructors take in trying to ensure that people who come to the mountains learn how to take care. There’s a whole program here for youngins designed to teach them a safety-first approach to skiing.

And there’s a whole host of messaging that will go out each day letting potential skiers know about the conditions. We will also get fear-mongering messages out here, with local news reporting on skiers doing stupid things and warnings of avalanches that too many folks will ignore. And there will be posters at the resorts telling people to not speed on the mountains because they might kill a kid. (I think these posters are more effective at scaring kids than convincing skiers to slow down.)

No matter what messaging goes out, people will still get hurt this season like they do every season. And so there are patrollers whose job it is to look for people in high-risk situations and medics who will be on hand to help people who have been injured. And there’s a whole apparatus structured to get them off the mountain and into long-term care.

Unless you’re off your rocker, you don’t just watch a few YouTube videos and throw yourself down a mountain on skis. People take care to learn how to manage the risks of skiing. Or they’re like me and take one look at that insanity and dream of a warm place by a fire or sitting in a hot tub instead of spending stupid amounts of money to introduce that kind of risk into their lives.

Crossing the Street is Risky.

The stark reality is that every social environment has risks. And one of the key parts of being socialized through childhood into adulthood is learning to assess and respond to risks.

Consider walking down the street in a busy city. As any NYC parent knows, there are countless near-heart attacks that occur when trying to teach a 2-year-old to stop at the corner of the sidewalk. But eventually they learn to stop. And eventually they learn to not bowl people over while riding their scooter down that sidewalk. And then the next stage begins — helping young people learn to look both ways before crossing the street, regardless of what is happening with the light, and convincing them to maintain constant awareness about their environment. And eventually that becomes so normal that you start to teach your child how to jaywalk without getting a ticket. And eventually, the child turns into a teenager who wanders the city alone, jaywalking with ease while blocking out all audio signals with their headphones. But then take that child — or an American adult — to a city like Hanoi and they’ll have to relearn how to cross a street because nothing one learns in NYC about crossing streets applies to Hanoi.

Is crossing the street risky? Of course. But there’s a lot we can do to make it less risky. Good urban design and functioning traffic lights can really help, but they don’t make the risk disappear. And people can actually cross a street in Hanoi, even though I doubt anyone would praise the urban design of its streets and there are no traffic lights. While design can help, what really matters for navigating risk is rooted in socialization, education, and agency. Mixed into this is, of course, experience. The more we experience crossing the street, the easier it gets, regardless of what we know about the rules. And still, the risk does not entirely disappear. People are still hit by cars while crossing the street every year.

The Risk of Social Media Can Be Reduced.

Can social media be risky for youth? Of course. So can school. So can friendship. So can the kitchen. So can navigating parents. Can social media be designed better? Absolutely. So can school. So can the kitchen. (So can parents?) Do we always know the best design interventions? No. Might those design interventions backfire? Yes.

Does that mean that we should give up trying to improve social media or other digital environments? Absolutely not. But we must also recognize that trying to cement design into law might backfire. And that, more generally, technologies’ risks cannot be managed by design alone.

Fixating on better urban design is pointless if we’re not doing the work to socialize and educate people into crossing digital streets responsibly. And when we age-gate and think that people can magically wake up on their 13th or 18th birthday and be suddenly able to navigate digital streets just because of how many cycles they took around the sun, we’re fools. Socialization and education are still essential, regardless of how old you are. (Psst to the old people: the September that never ended…)

In the United States, we have a bad habit of thinking that risks can be designed out of every system. I will never forget when I lived in Amsterdam in the 90s, and I remarked to a local about how odd I found it that there were no guardrails to prevent cars from falling into the canals when they were parking. His response was “you’re so American,” which of course prompted me to say, “what does THAT mean?” He explained that, in the Netherlands, locals just learned not to drive their cars into the canals, but Americans expected there to be guardrails for everything so that they didn’t have to learn not to be stupid. He then noted that every time he hears about a car ending up in the canal, it is always an American who put it there. Stupid Americans. (I took umbrage at this until, a few weeks later, I read a news story about a drunk American driving a rental into the canal.)

Better design is warranted, but it is not enough if the goal is risk reduction. Risk reduction requires socialization, education, and enough agency to build experience. Moreover, if we think that people will still get hurt, we should be creating digital patrols who are there to pick people up when they are hurt. (This is why I’ve always argued that “digital street outreach” would be very valuable.)

But What About Harms?

People certainly face risks when encountering any social environment, including social media. This then triggers the next question: Do some people experience harms through social media? Absolutely. But it’s important to acknowledge that most of these harms involve people using social media to harm others. It’s reasonable that they should be held accountable. It’s not reasonable to presume that you can design a system that allows people to interact in a manner where harms will never happen. As every school principal knows, you can’t solve bullying through the design of the physical building.

Returning to our earlier note on product liability, it is reasonable to ask if specific design choices of social media create the conditions for certain kinds of harms to be more likely — and for certain risks to be increased. Researchers have consistently found that bullying is more frequent and more egregious at school than on social media, even if it is more visible on the latter. This makes me wary of a product liability claim regarding social media and bullying. Moreover, it’s important to notice what schools have done in response to this problem. They’ve invested in social-emotional learning programs to strengthen resilience, improve bystander approaches, and build empathy. These interventions are making a huge difference, far more than building design. (If someone wants to tax social media companies to scale these interventions, have a field day.)

Of course, there are harms that I do think are product liability issues vis-a-vis social media. For example, I think that many privacy harms can be mitigated with a design approach that is privacy-by-default. I also think that regulations that mandate universal privacy protections would go a long way in helping people out. But the funny thing is that I don’t think that these harms are unique to children. These are harms that are experienced broadly. And I would argue that older folks tend to experience harms associated with privacy much more acutely.

But even if you think that children are especially vulnerable, I’d like to point out that while children might need a booster seat for the seatbelt to work, everyone would be better off if we put privacy seatbelts in place rather than just saying that kids can’t be in cars.

I have more complex feelings about the situations where we blame technology for societal harms. As I’ve argued for over a decade, the internet mirrors and magnifies the good, bad, and ugly. This includes bullying and harassment, but it also includes racism, xenophobia, sexism, homophobia, and anti-trans attitudes. I wish that these societal harms could be “fixed” by technology; that would be nice. But that is naive.

I get why parents don’t want to expose children to the uglier parts of the world. But if we want to raise children to be functioning adults, we also have to ensure that they are resilient. Besides, protecting children from the ills of society is a luxury that only a small segment of the population is able to enjoy. For example, in the US, Black parents rarely have the option of preventing their children from being exposed to racism. This is why white kids need to be educated to see and resist racism. Letting white kids live in “colorblind” la-la-land doesn’t enable racial justice. It lets racism fester and increases inequality.

As adults, we need to face the ugliness of society head on, with eyes wide open. And we need to intentionally help our children see that ugliness so that they can be agents of change. Social media does make this ugly side more visible, but avoiding social media doesn’t make it go away. Actively engaging young people as they are exposed to the world through dialogue allows them to be prepared to act. Turning on the spigot at a specific age does not.

I will admit that one thing that intrigues me is that many of those who propagate hate are especially interested in blocking children from technology for fear that allowing their children to be exposed to difference might make them more tolerant. (No, gender is not contagious, but developing a recognition that gender is socially and politically constructed — and fighting for a more just world — sure is.) There’s a long history of religious communities trying to isolate youth from kids of other faiths to maintain control.

There’s no doubt that media — including social media — exposes children to a much broader and more diverse world. Anyone who sees themselves as empowering their children to create a more just and equitable world should want to conscientiously help their children see and understand the complexity of the world we live in.

In the early days of social media, I was naive in thinking that simply exposing people around the world to each other would fundamentally increase our collective tolerance. I had too much faith in people’s openness. I know now that this deterministic thinking was foolish. But I have also come to appreciate the importance of combining exposure with education and empathy.

Isolating people from difference doesn’t increase tolerance or appreciation. And it won’t help us solve the hardest problems in our world — starting with both inequity and ensuring our planet is livable for future generations. Instead, we need to help our children build the skills to live and work together.

Put another way, to raise children who can function in our complex world, we need to teach them how to cross the digital street safely. Skiing is optional.

Struggling with a Moral Panic Once Again

I have to admit that it’s breaking my heart to watch a new generation of anxious parents think that they can address the struggles their kids are facing by eliminating technology from kids’ lives. I’ve been banging my head against this wall for almost 20 years, not because I love technology but because I care so deeply about vulnerable youth. And about their mental health. And boy oh boy do I loathe moral panics. I realize they’re politically productive, but they cause so much harm and distraction.

I wish there was a panacea to the mental health epidemic we are seeing. I wish I could believe that eliminating tech would make everything hunky dory. (I wish I could believe many things that are empirically not true. Like that there is no climate crisis.) Sadly, I know that what young people are facing is ecological. As a researcher, I know that young people’s relationship with tech is so much more complicated than pundits wish to suggest. I also know that the hardest part of being a parent is helping a child develop a range of social, emotional, and cognitive capacities so that they can be independent. And I know that excluding them from public life or telling them that they should be blocked from what adults value because their brains aren’t formed yet is a type of coddling that is outright destructive. And it backfires every time.

I’m also sick to my stomach listening to people talk about a “gender contagion” as if every aspect of how we present ourselves in this world isn’t socially constructed. (Never forget that pink was once the ultimate sign of masculinity.) Young people are trying to understand their place in this world. Of course they’re exploring. And I want my children to live in a world where exploration is celebrated rather than admonished. The mental health toll of forcing everyone to assimilate to binaries is brutal. I paid that price; I don’t want my kids to as well.

I have no way to combat the current wave of fear-mongering that’s working its way into schools under false pretenses of science. I don’t know how to stop a tidal wave of anxious parents seeking a quick fix. But I did decide to spend some time talking with some thoughtful reporters about “kids these days” in an effort to center youth instead of technology. 

Taylor Lorenz’s “Power User”

  • Episode: Is Social Media Destroying Kids’ Lives? (+ Elon’s Secret X Account)
  • You can watch in two ways: (video version) (podcast version)

I continue to be impressed with Taylor’s ability to stand up to the trolls and offer thoughtful and nuanced takes on our sociotechnical world. So I was super honored when she reached out to see if I would be willing to talk about the latest moral panic with her. Hopefully this conversation can be a source of calm for the generation of anxious parents out there.

Detroit Public Radio’s “Created Equal”

Stephen Henderson is genuinely curious to unpack why the focus on legislation isn’t the right approach to mental health. So we dove in together to talk this through. Hopefully his thoughtful questions and my responses will provide insights for those who are hoping that regulation can make a dent in this whole thing.

There is a Path Forward…

In both of these conversations, I offer some thoughts for different audiences out there, including parents, regulators, teachers, and even kids. I’ve said many of these before, but I want to highlight a few that are top of mind just in case you’re reading this but don’t have time to listen to our conversations. I’m going to keep them brief here, but I hope I can continue to unpack them more and more over time.

1. Parents: Ensure your kids have trusted adults in their lives. It really does take a village. Kids need to be able to turn to other adults, not just you, especially when they’re struggling. You can really help your kids by ensuring they have a trusted network of aunties and coaches and mentors and other such adults. Build those relationships early and allow your children to develop strong independent relationships with adults you trust.

2. Adults writ large: “Adopt” other youth into your life. Be a mentor, a supporter, a cheerleader, a trusted person that they can turn to. You can do this through formal mentoring programs or just being an auntie to friends’ kids. You can really make a difference.

3. Regulators: Fund universal mental health access, ffs. It should not be so hard to get access to quality care when you’re in a crisis. And it should not require parental permission to seek help. Make mental health care access easy! And not just crisis care – actual sustained mental health care. Kids’ lives depend on this.

4. Parents: Check your own tech use. You are norm-setting for kids out there. Create a household tech contract with your kids. Listen to their frustrations over YOUR tech use before you judge them. This starts with the tiny ones btw.

5. Philanthropy: Invest in a “digital street outreach” program. Remember when we used to reach out to young people who were on the streets and offer them clean needles, information, and resources? When young people are crying out online, who is paying attention to them? Who is holding them? Who is ensuring that they’re going to be AOK? The answer is ugly. We need responsible people to be poised to reach out to young people when they’re crying out in pain.

Please please please center young people rather than tech. They need our help. Technology mirrors and magnifies the good, bad, and ugly. It’s what makes the struggles young people are facing visible. But it is not the causal media-effects force that people are pretending it is.

Degradation, Legitimacy Threats, and Isolation

New research on census, youth, mental health; a recent talk and an upcoming one

tl;dr: 

1. New paper with Janet Vertesi: “The Resource Bind: System Failure and Legitimacy Threats in Sociotechnical Organizations”

2. “Techno-legal Solutionism: Regulating Children’s Online Safety in the United States” (my paper with María Angel) was officially published as part of the ACM CS+Law symposium. 

3. Crisis Text Line’s report on what youth need to be more resilient is haunting but important

4. Watch Tressie McMillan Cottom, Janet Vertesi, and me riff on tech & society issues

5. Come hear me speak in DC on April 10!

We’ve come a long way to get back to trodden terrain…

Ten years ago – on March 17, 2014 to be precise – Data & Society hosted its first event: The Social, Cultural & Ethical Dimensions of “Big Tech.” At that event, we brought together people from academia, civil society, government, industry, and others to start grappling with emergent challenges in a data-soaked world. It’s utterly surreal to realize that was 10 years ago. I went back and read the primers we created for that event and just smiled. The debates we elevated then are still with us today. I also can’t thank enough all of those who helped make that event possible – in effect, they helped write Data & Society into being.

(Side note: Data & Society is going to have many 10-year celebrations this year. Make sure to stay tuned to everything folks there are planning. And if you have the means to donate, that would be mighty nice. I continue to be in awe of all that D&S is doing!)

I’ve been thinking a lot about how far we’ve come in those ten years – and how many steps backwards we’ve also taken. On one hand, folks are much more willing to see the complexities and nuances of technology’s interactions with society. On the other, the techlash tends to be just as deterministic as the tech sector itself. And then there’s the tendency for policymakers to engage in techno-legal-solutionism which just makes me want to facepalm. (Congratulations to my co-author María Angel for an awesome presentation of our paper at the ACM CS+Law Conference last week!)

More and more, what’s going through my mind these days has to do with degradation. What happens when sociotechnical systems – and the organizational arrangements that rely on them – start to crumble? Not simply break or meltdown in a fatal sense. But, rather, just become shittier and shittier. Cory Doctorow has a term to describe this phenomenon in the context of technology platforms: enshittification (which, you have to admit, is just a damn good term). But the degradation and shittiness goes so far beyond platforms. For example, so many workers’ lives are becoming so much crappier. And this isn’t simply a story of AI. It’s a story of greed and oppression. Technology and law are just the tools to help aid and abet this configuration. 

What’s worse is that degradation is sometimes the goal. Janet Vertesi and I just published a comparative ethnography paper this week in a fabulous Sociologica special issue on failure. Throughout the organizational sociology literature, there are case studies of how technical failures lead to legitimacy crises. And that’s for sure true. But in examining how resources (e.g., time and money) are constrained in public-sector organizations like NASA and the Census Bureau, we noticed something else going on. We started to see how a resource bind can be manufactured to help trigger a legitimacy crisis, which can push sociotechnical projects to the brink of survival. To get at this, we examined how money was contorted inside NASA alongside the political dramas of manipulating time during the 2020 census. So check out our paper: “The Resource Bind: System Failure and Legitimacy Threats in Sociotechnical Organizations.”

(Also, if you’re reading this and you don’t know who Janet Vertesi is, you should. In addition to being an amazing ethnographer of NASA, she’s constantly engaging in opt-out experiments, which are kinda like breaching experiments to protect privacy in a surveillance society. Hell, you should see the lengths she went to in an effort to evade Disney’s data collection regime. And yes, I was the friend who was convinced she’d hate Disney. Challenge accepted, right?)

Of course, it’s not just sociotechnical systems that are degrading. So too is our collective social fabric. And, with it, the mental health of young people. Last month, Crisis Text Line published some of its latest data about depression and suicide alongside what CTL is hearing from young people about what they need to thrive. (Hint: banning technology is not their top priority.) Young people are literally dying due to a lack of opportunities for social connection. This should break your heart. Teens are feeling isolated and alone. (My research consistently showed that this is why they turn to technology in the first place.) It’s also scary to see the lack of access to community resources. Communities are degrading. And there’s no quick technical fix. 

These issues were all on my mind when Tressie McMillan Cottom, Janet Vertesi, and I sat down for a “fireside chat” at the Knight Foundation’s Informed conference.  We kinda evaded the instructions we were given and, instead, decided to draw on the collective knowledge of our disciplines to offer theoretical insights that can help people think more holistically about tech and society. Along the way, we talked about how systems are degrading, how the technical fixes are harmful, and how we owe it to the future to address social issues in a more ecological fashion. 

If you happen to be in DC on Wednesday, April 10th, I will be offering up a new lecture that connects some of these issues to the public conversations we’re having about AI. This will be part of Georgetown’s Tech and Society week. (I’m giving the distinguished lecture; details forthcoming on the schedule.) I hope you can join me there!

KOSA isn’t designed to help kids.

Congress is using kids to hold Big Tech accountable. Kids will get hurt in the process.

In a few minutes, the Senate Judiciary will start a hearing focused on “Big Tech and the Online Child Exploitation Crisis.” Like most such hearings, this will almost certainly go off the rails in a wide variety of directions that I can’t even predict. But almost certainly, given the committee, it will include references to the various efforts by Congress to purportedly protect children from the ill-intended motivations of social media companies.

Photo 78194516 © Petar Vician | Dreamstime.com

To be honest, I am pulling my hair out over “online safety” bills that pretend to be focused on helping young people when they’re really anti-tech bills that are using children for political agendas in ways that will fundamentally hurt the most vulnerable young people out there.

The Kids Online Safety Act (KOSA) continues to march through the halls of Congress as though it’s the best thing since sliced bread, even though one of the co-creators of this bill clearly stated that her intention is to protect children from “the transgender” and to prevent “indoctrination” from the LGBT community. I’m flabbergasted by how many Democrats shrug their shoulders and say that it’s still worth it to align with hateful politicians because it’ll help more kids. The thing is: it won’t.

Let me try to lay out a few pieces of my frustration with this kind of bill (although I’m trying to keep this brief…). In short,

1. These “safety” bills are based on a faulty understanding of children’s mental health.

2. Bills like KOSA are predicated on the same technological solutionism that makes the logics of the tech industry so infuriating.

3. Children are dying. They’re in crisis. And we’re not providing them with the support they most need.

4. Many aspects of the tech industry are toxic. It’s politically prudent to use children. But it doesn’t help children and it doesn’t address the core issues in tech.

Let’s unpack these dynamics.

Wrong Definition of the Problem.

Are young people in a crisis? ABSOLUTELY. Suicide ideation and completion rates are increasing. Depression and anxiety are escalating. Youth are crying out for help in countless ways, including turning to the internet in the hopes that they’ll find support.

Depression, anxiety, and suicidality can never be explained by singular forces. They reflect not only an ecological problem but our steadfast refusal to see it as such. For reasons that have baffled me since I was a kid and told that “this is your brain on drugs,” I’ve been dumbfounded by the tendency to identify one problem and blame it for children’s woes. As a student, I went down a rabbit hole studying “moral panics.” I got a crash course on this when the public blamed Columbine on video games. Twenty-five years later, I continue to be stunned by how powerful “media effects” rhetoric is. Why are so many people comfortable blaming some genre of media for social ills? Why is that so satisfying?

Photo 51807252 © Cammeraydave | Dreamstime.com

People keep telling me that it’s clearly technology because the rise in depression, anxiety, and suicidality tracks temporally alongside the development of social media and cell phones. It also tracks alongside the rise in awareness about climate change. And the emergence of an opioid epidemic. And the increase in school shootings. And the rising levels of student debt. And so many pressures that young people have increasingly faced for the last 25 years. None of these tell the whole story. All of these play a role in what young people are going through. And yet, studies are commissioned to focus on one factor alone: technology. (And people get outraged when reports like the one from the National Academies show inconclusive causality.)

I wrote an entire book called “It’s Complicated” to try to unpack the myths we have about young people and technology. One message I’ve been trying (and failing) to get across for almost 20 years is that: The internet mirrors and magnifies the good, bad, and ugly. We know that media exposure can be a trigger. If a teenager is already experiencing suicidal thoughts, watching a show like “13 Reasons Why” can lead them to justify taking their own lives. When I was at Crisis Text Line, we saw the cost of that show as a trigger firsthand. We know that when celebrities die by suicide, the copycat phenomenon is heart-wrenching. We also know that when young people experience a climate disaster, mental health falls apart.

Social media and technology connect young people to information and people. They can absolutely be exposed to content that is triggering. But some of the worst content out there comes from the news. Should we be blocking young people’s access to information about wars, climate disasters, and death by police?

The problem is not: “Technology causes harm.” The problem is: “We live in an unhealthy society where our most vulnerable populations are suffering because we don’t invest in resilience or build social safety nets.”

Solutionism is Counter-Productive.

In technology studies, it is common to eyeroll at techno-optimism and other fantasies of technological saviorism. There are many labels for the endemic problem in the tech industry: the “technological fix,” technological determinism, and technological solutionism. Each means slightly different things but the basic story is: people who are obsessed with tech think that it will solve all.the.things™ and they are fools.

For the last few years, I’ve been stunned to watch how the techlash has evolved from attempting to call into question these foolish logics to outright blaming tech companies for intentionally causing harms. Somehow, we’ve shifted from “tech will save democracy” to “tech will destroy democracy.” (Hint: democracy was in deep shit before tech.) The weird thing about this framing is that it’s as technologically deterministic as the tech industry’s orientation.

So imagine my surprise when I came back from a three-month offline sabbatical to discover that politicians wanted to legally mandate technological solutionism “for good.” Bills like KOSA don’t just presume that tech caused the problems youth are facing; they presume that if tech companies were just forced to design better, they could fix the problems. María Angel pegged it right: this is techno-legal-solutionism. And it’s a fatally flawed approach to addressing systemic issues. Even if we did believe that tech causes bullying, the idea that they could design to stop it is delusional. Schools have every incentive in the world to prevent bullying; have they figured it out? And then there’s the insane idea that tech could be designed to not cause emotional duress. Sociality can cause emotional duress. The news causes emotional duress. Is the message here to go live in a bubble?

The solution is not “make tech fix society.” The intervention we need to an ecological problem is an ecological one. The real question is what we are centering.

If You Care About Children, CENTER THEM.

In all of these discussions, we keep centering technology. Technology is the problem, technology should be the solution. What if, instead, we focused on what challenges young people are facing? What if we actually invested in addressing the issues at the core of their anxiety, depression, and suicidality? What if we invested in helping those who are most vulnerable?

Let’s start at the top of the stack. Most people under the age of 26 in the United States do not have access to mental health services without involving their parents. And even if you can find a therapist (good luck these days!), the likelihood of having sustained affordable access to mental health services is minimal. Around the world, seeking mental health support is sometimes more available but it’s often more stigmatized. Young people cannot address mental health struggles alone. They need help. We need to ensure that young people have access to affordable, high quality mental health services. This is a critical safety net.

When young people don’t have access to professional services, they look to the people around them for help. We know that when young people have access to a wide network of non-custodial adults (think: aunties, coaches, pastors, etc.), said adults are more likely to sense when things are bad. Young people are also more likely to turn to those folks. Guess what? Our social fabric in the United States has been fraying for a long time for a myriad of reasons. But this all got much more acute during Covid. Just as workers’ weak ties disintegrated during Covid, I suspect young people’s connections to non-custodial adults fell apart. And many of the adults who should be there for young people are themselves struggling. How many teachers out there are unable to support kids in crisis cuz they’ve got too much going on? It scares me how many young people can’t name a single adult that they can turn to in a crisis. Everyone who is on the front line of this crisis is feeling it. Ask any professor what they’re facing with this current crop of incoming college students. Ask those who are providing afterschool care. So many adults are falling apart trying to provide mental health services that they’re not equipped to offer because there’s no alternative and they care so much that they’re continuing to burn out.

Now let’s look at some of the sources of anxiety. Reducing climate anxiety through sound approaches to combating climate change would certainly be constructive. So would ensuring that young women had reproductive rights. So would protecting students from being shot at school or while walking down the street. So would empowering motivated youth to get an education without becoming trapped in indentured servitude. So would providing food security for families. So would making sure that a parent could afford to be around to help them out. So would guaranteeing that young people are accepted in a society no matter their gender, sexuality, ability, race, religion, caste, etc. Y’know… the fundamentals.

But I get it… the fundamentals aren’t politically tractable. And everyone can agree that going after Big Tech is a good idea. It’s way easier than doing the collective work or taking the collective responsibility to address the problems our children have. Too bad increasing tools for parental surveillance, blocking young people from tech, or empowering attorneys general to blame tech for content they don’t like won’t actually help young people.

Spend some time hanging out on TikTok or scanning Instagram or perusing YouTube and you can find numerous young people who aren’t doing well. They’re seeking attention, validation, belonging. And that ranges from normal teen dramas to full throttle mental breakdowns. Who is reaching out to those young people? Who is making sure that they are ok? We need a digital street outreach program, not a law that tries to render them invisible. When I was a teenager trying to grapple with my identity, strangers in chatrooms gave me hope and encouragement. Today, it is toxic people with an ideological agenda who are reaching out to those crying out for help in online communities. This doesn’t get fixed by pushing youth to the darkest corners of the internet or outing them to their parents through surveillance tools. To the contrary, that makes it worse. We need more people who are willing to be there for the next generation, not shun them.

If You Wanna Go After Tech, Go For It. Just Don’t (Ab)Use Children In The Process.

I get why the public and politicians are annoyed with the tech companies. If this is news to you, check out Cory Doctorow’s The Internet Con. He offers an impassioned account of how infuriating big tech can be. And he has choice words for Facebook in particular because he sees it as “uniquely bad.”

I have no interest in defending tech companies. I’ve spent years lambasting their abuses of privacy, their vulnerabilities towards algorithmic manipulation, their toxic dependence on advertising, and their arrogance. What irks me is not the idea that tech should be regulated, but the tendency by politicians to (ab)use children in their pursuit of regulating tech.

Part of why we are where we are is because politicians continue to fail to pass general data privacy laws. Anti-trust efforts have not had the teeth that anti-monopolists desire. And so many other efforts to curb the power and toxicity of tech companies have failed. Somehow, since time immemorial, the answer to gridlock on important issues is to reposition them as “for the children.” After all, children can’t vote. And increasing parental controls is politically fruitful in the only UN member state that has not ratified the UN Convention on the Rights of the Child. (Dear foreigners: the United States treats children as property of parents in so many different ways, starting with how we allocate political power.)

By all means, go after big tech. Regulate advertising. Create data privacy laws. Hold tech accountable for its failure to be interoperable. But for the love of the next generation, don’t pretend that it’s going to help vulnerable youth. And when the problem is sociotechnical in nature, don’t expect corporations to be able to solve it.

I Am Frustrated.

While politicians politic, young people struggle. The services that can meaningfully help young people are underfunded and drowning. Teachers and parents are burnt out. Access to mental health care is limited. And kids are turning to the internet in hopes of finding connection, community, and help. For some, going down online rabbit holes makes things worse for sure. But the fact is that many have nowhere else to go. That should scare all of us. Young people need social infrastructure to hold them. They don’t benefit from new tools for surveillance. And trying to block young people’s access to community and the online tools they use in pursuit of mental health support will not magically make the problems go away. Their pain will just become less visible.

Over the last year, I’ve struggled with whether or not to get involved with this fight. I promised myself that when I became a parent, I’d stop studying youth so that my children did not become research subjects. For the last decade, I’ve kept tabs on the research focused on young people and social media but I’ve focused my energies elsewhere. In addition to my work on privacy and the politics of data, I also devoted the last 10 years to addressing the mental health crisis through volunteering to support Crisis Text Line based on all that I learned studying youth. There, I’ve had a front row seat to the pain that many young people are facing.

A year ago, friends started asking me to engage on these political fights given my experience with an earlier round. But I also struggled to find my voice. Every time I tried to speak up, I was told that my expertise has no value for the simple reason that I currently work for the research arm of a technology company. It doesn’t matter that my research on young people pre-dated my employment or that my volunteer mental health work isn’t connected to the company. I was told time and time again that I am nothing more than an apologist for tech whenever I raise concerns about how we are approaching the relationship between young people and tech. I’ve been called a sellout for objecting to bills like KOSA.

At this point, I’m boiling over with deep frustration. I am a researcher. I don’t speak on behalf of my employer or any organization I’ve dedicated my time towards. I’m also a parent. But I don’t speak on behalf of my kids either. Nor do I think that my kids are representative of the kids that I met doing fieldwork or the conversations that I witnessed doing mental health work. I speak as someone who wants everyone to stop centering tech and start centering youth.

I’m tired of having my expertise regularly ignored. I’m also sick and tired of watching peers in the research community be harassed whenever they raise concerns about KOSA or question the dominant narrative that the “real” problem is tech. Even those who have nothing to do with tech are being publicly shamed or harassed at meetings. People don’t get how shittily researchers who challenge a political message that’s supposedly “for the children” get treated. This is especially painful when we are doing it precisely to support the most vulnerable young people in society.

I learned this lesson hardcore fifteen years ago when I naively provided a literature review on the risks young people faced to the then-attorney general of Connecticut. He didn’t like what the summation of hundreds of studies showed; he barked at me to find different data. A few months later, I learned that a Frontline reporter was tasked with “proving” that I was falsifying data. After investigating me, she warned me that I had pissed off a lot of powerful people. Le sigh.

I am frustrated. Bills like KOSA will not help young people. They are rooted in a political agenda of appearing to hold big tech accountable. But they pretend like they will make a difference and it’s not politically prudent to challenge the failed logic. Still, human rights and LGBT organizations see through this agenda. They are worried because these bills will be weaponized to harm those who are already at risk. And still, politicians are moving forward editing this bill as though something good will come of it. Why on earth do we allow politicians to use children in their agendas?

I’m scared. I’m scared for the vulnerable youth out there who don’t have parents that they can trust. I’m scared for the kids who are struggling and don’t have a safety net. I’m scared for the LGBT kids who are being targeted by politicians. I’m scared for the pregnant teenagers who don’t have the right to control their bodies. I’m scared for those who see no future with a planet that’s heating up. I’m scared for those who are struggling with wars. I’m scared for the children who are being abused. None of these young people will be served by wagging a finger at Meta and telling them to design better. More likely, more and more young people will be shunted from services that are their lifeline while their cries for help go unheeded.

I’m sick and tired of politicians using young people for spectacle. I get why well-meaning people are hoping that this imperfect bill will at least move the needle in the right direction. I get that parents are anxious about their kids’ tech use. But the stark reality is that bills like this will do more harm to vulnerable youth at the very moment when so many young people need help. They need investment, attention, support. What will it take for people to realize that focusing on tech isn’t the path forward to helping youth? Sadly, I know the answer. More dead kids.

Brain Candy, STS Opportunities, and Girl Scouts

I know I’ve been doing a crap job of sharing updates or juicy blog posts. Sorry! Here are some varied updates. And hopefully I’ll pen a proper commentary shortly.

  1. New paper alert: María Angel and I just posted a pre-print of our upcoming ACM CS+Law paper “Techno-legal Solutionism: Regulating Children’s Online Safety in the United States.”
  2. I am joining the board of trustees of the Computer History Museum.
  3. If Girl Scout Cookies are your thing, my kiddo would love if you ordered from her.
  4. STS Graduate students: Submit your paper to the Hacker-Mullins Student Paper Award by 3/15!
  5. Scholars: Janet Vertesi and I are coordinating an EASST/STS panel on “The Implications of Institutional Breakdown for Science and Technology.” Apply by Feb 12.
  6. Interested in data science? DJ Patil and I riff about “Data Impact” for LinkedIn Learning.

I’m (hopefully) going to come back later this week with more thoughts on the Kids Online Safety Act (KOSA) and what it takes to support young people facing a mental health crisis, but in the meantime, I wanted folks to see the paper that María and I just wrapped up. We focused on a very specific aspect of KOSA and bills like it. I’ve complained about deterministic thinking before. But here we are again, only one step further. Now the law is embracing the worst of tech companies’ deterministic logics and demanding that they be solutionistic “for good.” ::shaking head:: More on this shortly.

I’m ecstatic to be joining the board of trustees of the Computer History Museum. Not only am I a big fan of the work CHM has been doing, but I also find it good to anchor myself in history whenever I’m struggling with the present. The tech industry didn’t come from nowhere. Its story is messy and complicated – and it’s important that we collectively learn from that. And, besides, you can visit the museum and see esoteric things like the Utah teapot (which is meaningful for graphics geeks) or the interpreter source tape for Altair BASIC (which is meaningful for programmers) or a replica of the 1890 Hollerith machine (which is meaningful for census geeks). Sometimes, it’s super valuable to bask in the joys of computing.

I’ve been thinking a lot about multi-level marketing schemes lately. Much to my chagrin. And now I feel like I’m part of one. Don’t get me wrong – all across the internet, I’m being told that Girl Scouts is emphatically not a real MLM and I get that. Still, I feel like I’ve been enrolled by my kiddo to help her sell cookies so she can get prizes (and donate to amazing organizations and raise money for her troop) under the guise of female empowerment. And there’s an entrepreneurship badge for doing it. She’s not humored by my rants about capitalism or unhealthy incentives. Then again, she’s 6. But if you feel like helping me out, you can order cookies online and they’ll be sent to you. (Or if you live in the Front Range in Colorado, we can deliver.) (Or you can even just outright donate to her troop.) All through her online website.

Anyhow, I hope everyone out there is holding up ok. {{hug}}

The Screens are the Symptom.

I decided to re-read Fahrenheit 451 with my eldest this last week. I don’t think that I have read this classic Bradbury text since high school. What I had remembered about the book was that it described a world in which books were banned and the job of firefighters was to track down books and burn them. Written in 1953, I also remembered it was a response to the moral panics of the McCarthy era and the book burning activities in Nazi Germany. I remember being horrified to learn that book burning was a thing. Thirty years later, I still treat books like precious objects.

What I had forgotten over these last decades is that the book is also a story about screens, described in the book as “parlor walls.” In Bradbury’s dystopic world, screens are not the attractor but the substitute for other things that are intentionally restricted. Books, poetry, plays, and arts are suppressed in this world because they invite people to feel, think, and question – and this is seen as problematic. Screens are nearly mandated as an opiate for the masses, meant to pacify people. Kids are expected to be staring at screens and not asking questions. In other words, the badness of screens is not about screens themselves but the social configuration that made screens the dominant social, entertainment, and interactive outlet.

It’s also notable how social fabrics are narrated in this text. The main character is a firefighter named Montag, but his wife Mildred spends her days engaging with her “family” on these parlor walls in a constant stream of meaningless chatter about nothing in particular. To talk about anything of substance and merit is verboten: the goal is to never upset anyone in this society. Only niceties will do. This “family” includes various neighbors who are presumably friends, but also celebrities available to everyone. Notably, Montag pays extra so that these celebrities’ speech acts directly address Mildred by name in a personalized fashion that makes her feel more connected to the celebrity. Oh, parasociality and algorithms as imagined in the 1950s. This society has not devolved into trolling. Instead, it is a screen world of such boringness that the government can use the high-speed robot chase at the end of the tale to direct the energy of everyone.

I had also completely forgotten how this book sees children. In short, children are treated with disdain as a problem that society must manage. It reflects an attitude that was commonplace in the 1950s where children were seen as a danger that must be managed rather than a vulnerable population that needed support. This book is a stark reminder of how far we’ve shifted from being afraid OF children to being afraid FOR them even as the same sources of fear remain. And so in Bradbury’s world, children are plugged into screens all day not for their benefit, but for the benefit of adults. (Side note: don’t forget that compulsory high school was created only a few decades before as a jailing infrastructure to benefit adults and protect the morality of adolescents.)

The role of medication is also intriguing in this world. Mildred is addicted to sleeping pills, which she needs to separate herself from her parlor walls at night. And medicine is easily available to deal with the side effects by eliminating memories and increasing the checked out state of everyone. Of course, the opening scene of the book centers on Mildred overdosing and not even realizing the gravity of that. Indeed, the medics in this world accept that they must regularly revive people from overdosing on sleeping pills. 

All of this is to say that the plot of Fahrenheit 451 centers both on Montag’s attempt to reckon with censorship as well as how he is unable to extract Mildred from her mundane and unhealthy relationship to her way of life, even when she’s on the brink of death. It is about seeing screens as the product of disturbing political choices, not the thing that drives them. I couldn’t help but be fascinated by how inverted this is to today’s conversation.

Over the last two years, I’ve been intentionally purchasing and reading books that are banned. I wanted to re-read Fahrenheit 451 because of the contemporary resurgence of book banning. But in actually rereading this book, I couldn’t help but marinate on the entanglement between fears about screens, repression of knowledge, disgust towards children, and conflicted visions of happiness. I also kept thinking about how different the theory of change is in this book compared with how these conversations go in the present. In short, Montag (and the various foils he works with) aren’t really focused on destroying the screens – they are wholly focused on embracing, saving, and sharing knowledge from books. Here, I’m reminded of an era in which education was seen as a path forward, not simply a site to be controlled.

The people in Bradbury’s world aren’t happy. They are zombies. But Bradbury recognizes that they are structurally configured, a byproduct of a world that was designed to prevent them from thinking, connecting, questioning, and being physically engaged. Instead, he offers us Clarisse – his sole child character – who teaches Montag how to see the world differently. How to ask questions, how to engage with the physical world, how to not take for granted the social configuration. She invites him to open his eyes. She’s also the one and only character who is actively willing to challenge the status quo.

The counter to Clarisse is Montag’s boss, a character who clearly knows how the society has been configured. He fully recognizes that the banning of books is a ruse for political control. He has no qualms with reinforcing the status quo. So his job as a firefighter is to repress resistance. Books and screens aren’t the real enemy to an authoritarian state – knowledge is. 

Fahrenheit 451 is unquestionably a tale about the caustic consequences of banning books and repressing knowledge. But it’s also an invitation to see the structural conditions that enable and support such repression. It’s easy to want to fight the symptoms, but Bradbury invites us to track the entanglements. Little did I realize just how much I would value rereading this book at this moment in time and with my kiddo. Thank you Ray Bradbury.

Relatedly… 

For better or worse, I’m spending a bit too much time thinking about the rise in efforts to oppress, sanction, and harm youth under the deeply disturbing trends towards parental control, parental surveillance, and state paternalism. I’ll come back to these topics this fall. 

In the meantime, apropos of Fahrenheit 451, I hope folks are tracking how conservative states are now rejecting support from the American Library Association, accusing librarians of exposing children to books that include content they don’t like. ::shaking head::  Next week is “Banned Books Week.” Support the ALA.

Researchers are also increasingly under attack by those who disagree with their findings or for otherwise producing knowledge that is uncomfortable or inconvenient. While this is happening in multiple domains right now (ranging from scholars focused on climate change to youth mental health), scholars working on topics related to disinformation are facing this acutely at the moment. Around the country, researchers are being sued and their institutions are being pressured to turn over communications to Congressional committees. This is starting to feel a lot like the McCarthy era for scholars, especially with universities being ill-equipped (or actively unmotivated) to support researchers. 

See why Bradbury’s book felt really poignant right now?

Still Trying to Ignore the Metaverse

Perhaps surprisingly, I don’t particularly like technology. And certainly not technology for technology’s sake. My brother was always the one who picked up every new gadget to see what it did. I tended to shrug and go back to reading a book. I still do. 

That said, like most people, I enjoy technologies that improve my world in some way. I’m fond of technologies that become invisible infrastructure in my life. Technologies that just work without me noticing – like the toilet. When it comes to digital tech, I’m grateful for systems that make me smile. Not the ones that make me vomit. Literally.

Wagner James Au has been trying to get me to engage with virtual reality since we first met in the mid-aughts. You name the iteration, he’s been excited about it. Each time, he tries to convince me that this particular instantiation is cooler, more accessible, more appealing. Each time, I politely explain that I have zero interest in any aspect of this. Still, I like James. And I appreciate his enthusiasm. There’s part of me that wishes I would sparkle that way at the sight of a new piece of tech.

The funny thing is that James knows why I have zero interest in engaging on things related to virtual reality. In fact, it’s precisely because my first research project was an attempt to unpack my hatred of virtual reality that he keeps pushing me to jump into the fray. But I haven’t done work in this area in 25 years. So it cracked me up to no end that James decided to feature my antagonism towards the metaverse in his new book, “Making a Metaverse That Matters.” He thinks that I owe it to all who are excited about this tech to talk more about this early work. So let me share the way that he told my story and offer some additional context and flair to it just for fun.

In “Making a Metaverse That Matters,” James opens one section referencing an essay I wrote a decade ago when Facebook acquired Oculus. The essay was provocatively titled “Is the Oculus Rift Sexist?” The title was intentionally provocative, but I really meant the question. I wanted to know: was Oculus fundamentally designed in a way that was prejudiced based on sex and, therefore, was it going to be an inherently discriminatory piece of tech? My question wasn’t coming out of nowhere. It was something that had plagued me since my first encounter with a VR system as an undergraduate student. As James quotes from my essay:

Ecstatic at seeing a real-life instantiation of the Metaverse, the virtual world imagined in Neal Stephenson’s Snow Crash, I donned a set of goggles and jumped inside.

And then I promptly vomited.

(Side note: I hadn’t remembered that I complained about the Metaverse-ness of Oculus back before Meta was Meta so I laughed re-reading this. Also, as an additional side note: it never ceases to amaze me that tech bros want to build worlds created in dystopian novels and expect a different outcome. As a reminder, this is actually the definition of insanity.)

I first encountered VR because my beloved undergraduate advisor – Andy van Dam – had invested in building a newfangled immersive virtual reality system called a CAVE. I was excited for him so I checked it out. My reaction to my first experience with this piece of tech was not joy, but nausea. I told Andy his system was stupid. (If memory serves, I was far more crass in my language.) I also told him the system discriminated against women. He told me to prove it.

At that point in my career, I still wanted to understand tech that I loathed. And I wanted to prove my accusation to Andy. So I started to ask why this piece of tech that made so many men I knew so happy made me so miserable. I tracked down military reports about gender bias in simulator sickness, much of which dated back to the 1960s. I ended up spending time at a gender clinic where people who were on hormone replacement therapy regimens participated in scientific studies about things like spatial rotation. This led me to run a series of psych experiments where my data suggested that people’s ability to navigate 3D VR seems to be correlated with the dominance of certain sex hormones in their system. Folks with high levels of estrogen and low levels of testosterone – many of whom would identify as women – were more likely to get nauseous navigating VR than those who have high levels of testosterone streaming through their body. What was even stranger was that changes to hormonal levels appeared to shape how people respond to these environments.

(Side note: When I was conducting this work 25 years ago, the language people used to discuss gender was quite different than today. Many of my informants actively hated the term “transgender” and were adamant that I use the word “transsexual” and clearly identify them as male-to-female or female-to-male in my study. In today’s parlance, this latter language is viewed as deeply problematic while transgender is widespread.  Because my older work uses the emic language of the day, I regularly get nastygrams accusing me of transphobia.)

I did this work as an undergraduate but never published it because much work was needed for it to be publishable. But I always hoped that someone would pick up the baton and keep on going. In fact, that’s what motivated me to write the Oculus essay in the first place. And I will always be grateful to Thomas Stoffregen and his team who confirmed that I was not crazy with my early findings – and continued on to do fantastic work. That said, as James notes, it’s depressing how little work has been done in this area ever since. Truth is, I haven’t been tracking it, but I’m not surprised to hear that. I walked away from this world because I had no desire to embrace a technology that wants me to come in a different hormonal arrangement. 

But James is more outraged on my behalf, in no small part I suspect because he does see the joy in this technology and I think he wants me to find it as well. I had to smile when he highlighted the sexist realities of a business culture of tech-for-tech’s sake.

[Meta] paid $2 billion for a piece of consumer-facing technology that reputable research suggests tends to make half the population literally vomit.

Then spent tens of billions more to bring it to market anyway.

Then Silicon Valley followed suit, investing tens of billions still further, an entire industry sprung up around it, nearly all of it ignoring evidence that the whole enterprise was built on sand. Usually it seems impossible to calculate the opportunity cost of unconscious gender bias, but in this specific case, the price tag approaches $100 billion.

I know I should be indignant. It is indeed seriously depressing to think about all of the technology that is created out there with little regard for people and practices. It’s exhausting to go through hype cycles of how yet another new technology built based on a dystopian novel is rolling out regardless of the harms or bias that it might trigger. It’s painful to think about how much capital is spent chasing pyramid schemes and illusions rather than solving actual problems. It’s also really depressing to realize that findings that I uncovered 25 years ago were validated by better scholars but were never addressed by industry. But this is not my problem. 

I have no interest in the Metaverse. I am not sitting around dreaming of wearing gobsmackingly expensive ski glasses. Although I’m not a fan of the various aches and pains in my body, I don’t think my life will be better in avatar form. I really really really don’t get why this is a piece of tech that excites people. But I know that there are a lot of people out there like James who want an inclusive and joyful Metaverse to exist. If you’re one of those people, do check out his book. His message is a good one: let’s collectively work towards a version of virtual reality that gives more people joy than pain. 

As for me, even the act of putting one toe into the water to support an old friend was enough for me to remind myself that I don’t have to like or study every new technology there is. And so, with all gratitude to James, I’m going to happily return to my attempt to ignore the Metaverse. Perhaps someone out there who is excited by the technology will want to build on earlier work and address the systemic bias issues. That would be great. But this tech isn’t for me, at least not in its current form. And so it goes, so it goes.

Dear Alt-Twitter Designers: It’s about the network!

Last week, tech commentators were flush with stories about the speed of new users on Threads. Unprecedented downloads! A sign that Meta is stronger than ever! Networks born in one service can transfer to another! This week? There’s a lot of speculation that Threads is crashing. Folks keep asking me what my take is on Threads (and Mastodon and Bluesky and …) and I keep responding with the same story: we’ll see. And every time I do, I’m reminded of talking to historians who, when you ask them about the last hundred years, say “we’ll see.”

As I watch these various alt-Twitters emerge, I can’t help but think about some crucial lessons I learned almost twenty years ago, when a bazillion social media sites popped up, and that I struggled to help others see at the time. The tl;dr? It all comes down to nurturing the network dynamics, not the technical features.

In the early days of social media, founders invited their friends. Who invited their friends. And on it went. The networks grew slowly, organically, and with a level of density that is under-appreciated. When someone joined, there was a high probability that that person knew a bunch of people on the site. After all, these things rolled out across pre-existing social graphs. The density mattered.
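For the quantitatively inclined, the density point can be made concrete. In graph terms, density is the share of possible ties that actually exist. Here’s a toy sketch in Python (my own illustration; the numbers are invented, not data from any actual site):

```python
# Toy illustration: density of an undirected friendship graph.
# Density = actual ties / possible ties = 2*E / (N*(N-1)).

def density(n_people, friendships):
    """Fraction of possible friendship ties that actually exist."""
    if n_people < 2:
        return 0.0
    return 2 * len(friendships) / (n_people * (n_people - 1))

# Invite-grown site: each of 10 users arrives already knowing ~3 others.
invite_edges = {(i, j) for i in range(10) for j in range(i + 1, min(i + 4, 10))}

# Blitzscaled site: 10 users who mostly arrived alone.
blitz_edges = {(0, 1), (2, 3), (4, 5)}

print(density(10, invite_edges))  # newcomers land among friends
print(density(10, blitz_edges))   # blank eggs waiting for norms
```

In this made-up example, the invite-grown graph comes out eight times denser than the blitzscaled one: the difference between joining a party full of friends and joining a room full of strangers.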

Some existing network graphs were better suited to these dynamics than others. There’s a reason that almost all early social media consisted of geeks, freaks, and queers. Using technology to strengthen bonds was already a part of these communities. And this is why, a few years later, focusing on students was powerful. Some networks are better positioned to leverage technical mediation.

But the graph of connections was not the only relevant graph. The other critical graph was the graph of norms. Founders were, unsurprisingly, hyper enthusiastic about the thing they created. They posted a lot of content — and they encouraged the people they invited to do the same. So there was an enthusiasm from the get-go. And as new people came on, they got creative, they pushed at the norms, they expanded their networks. Divergent norms sat alongside one another. Geek Friendster was different than queer Friendster. The kernel of all of this was vibrancy. These dense norm-infused networks felt vibrant to those who were a part of them.

As social media became A.Thing^TM, people joined because they felt they had to join. FOMO. The graph filled out faster. But this complicated vibrancy. Fast adoption wasn’t inherently a good thing because the norm setting around what to post, how to interact didn’t play out at the same speed. People joined and they were the equivalent of a blank egg. They didn’t know why they were there. Getting them engaged required a different kind of nurturing of networks. Not all new services were up for that — they built tools, not communities. Most social media in this world came and went. There’s a graveyard of dead social media sites out there, lingering on Archive.org for future historians to view 100 years from now.

Twitter came of age in the fast-network growth phase. What made Twitter so interesting was the directed graph dynamics of it all, which altered how vibrancy unfolded. Ironically, the fake accounts helped a lot. Folks knew their follower graph was fake, but there was enough interaction, enough signal that people were being listened to, that real people felt it was vibrant enough. The illusion of vibrance prompted people to be more vibrant, keeping the thing alive at scale across many networks. But it was also the first site where I saw a ton of subgraphs never hatch, never find their footing.

And then there was Google+, may it RIP. This was birthed out of the arrogance of a major company that believed it could leverage its scale to dominate social media. The launch was an example of blitzscaling, where the sudden fast scaling (thanks to the behemoth power of Google) triggered a blitz. But not the kind where a military feels emboldened; the kind where those on the ground feel destroyed by aerial bombardment. No matter how not-evil it was, Google simply couldn’t bomb its audience into sociality.

Cuz that’s the thing about social media. For people to devote their time and energy to helping enable vibrancy, they have to gain something from it. Something that makes them feel enriched and whole, something that gives them pleasure (even if at someone else’s pain). Social media doesn’t come to life through military tactics. It comes to life because people devote their energies to making it vibrant for those who are around them. And this ripples through networks.

One thing that complicates people’s willingness to devote their energy to vibrancy is context collapse, a term that Alice Marwick and I coined long ago. When a social media site grows slowly and steadily, it starts out for each user as a coherent context. Things get dicey over time as people struggle to figure out how to navigate divergent networks. But people find strategies, renegotiate the context, carve off specific worlds and narrow the context for themselves for libidinal joy. However, when you blitzscale a new social media site into being, the audience arrives with context collapse already in play. They don’t know if this is a site to joke around with friends, to be professional, to bitch about politics, or what. And without having already set out to build vibrancy and negotiate norms, the vast majority of people who arrive to a blitzscaled context-collapsed site sit around waiting for someone to norm-set for them.

I can’t help but think of this in terms of Twitter’s early design language cuz those designers really understood this at a certain level. Consider the unhatched egg. This is a bird that isn’t a bird yet, not even a fledgling. It might become a bird. Or it might become an egg that someone eats for lunch. Worse, it might rot in place. Alt-Twitter sites are creating blank eggs everywhere. But they aren’t being nurtured to hatch. They’re just sitting there, waiting. And most of them are going to rot. Cuz you can’t take an egg that’s been sitting around for a long time, suddenly add a heat lamp, and hope that a baby chick will form. Twitter was left with a lot of dead eggs. (And to push this a bit too far… adversarial actors realized that and decided to gobble up a bunch of them and turn them into Zombie chicks. Which was a major problem.)

Early norm-setting, vibrancy, and slow but dense network formation help breathe life into a social media site. And the early period is critical because it’s when habituation forms. People need to make visiting a site a part of their daily practice early on if it’s going to sustain. Many sites have been tried out and then faded into oblivion because people never habituated to them. Prompting people to name-squat on a site so that you get a media blast of The.Most.Downloads.Ever sounds like a tech-driven marketing strategy rather than one that understands the essence of social media.

I should note that blitzscaling is not the only approach we’re seeing right now. The other (and I would argue wiser) approach to managing dense network formation is through invitation-based mechanisms. Heighten the desire, the FOMO, make participating feel special. Actively nurture the network. When done well, this can get people to go deeper in their participation, to form community. This is a way to manage growth through networks in ways that were easy when these things weren’t cool but became harder when they became cool. However, nurturing early adoption thoughtfully matters immensely in this approach. Unlike 20 years ago, the people poised to be early adopters today are those who are most toxic, those who get pleasure from making others miserable. This means that the rollout has to be carefully nurtured so that the Zombie chicks don’t eat up the other eggs before they can even hatch.

Managing the growth of a social media site now looks soooooo different than it did 20 years ago. The growth curve and context collapse issues are real. But there’s also the feature roll-out issue. It was completely reasonable to function in a perpetual beta “features will come soon!” mode then. But that doesn’t work as well in this context. And that means that trying to co-construct features with your audience today is much much more complicated.

Of course, the “death” of social media sites also looks different today than it did in the past. Today, many social media platforms devolve into spaces dominated by big personalities, celebrities, and can’t-look-away listicle junk content. Twenty years ago, social media was more local, more dense in networks. Entertainment media was its own lopsided world. Today, these two worlds are much more blurred. There was a time when Justin Bieber’s outsized audience on Twitter was shocking and weird and fascinating. Today, most major social media platforms have influencers who dominate and overwhelm the norms of the average people trying to connect. So many platforms devolve into being sites for a narrow subset of the population to build audience. But that’s a topic for a different rant.

Social media will survive. Something will come out of this moment. But a LOT of money is going to be wasted relearning the lessons from the last 20 years. When Alice and I were playing around with the concept of “context collapse,” I never realized just how relevant it would continue to be. And when I was riffing about network formation back in the days of Orkut, I never thought that we would need to relearn this over and over again. Rather than being bitter as I shake my head like an old person, I’m going to enjoy my popcorn.

Too Big to Challenge?

Photo 148941930 / Robot Economy © Andrey Popov | Dreamstime.com

I find it deeply disturbing that the tech industry represents 9% of the U.S. GDP and that just five Big Tech companies account for 25% of the S&P 500. Prior to Covid, most of the growth in the stock market came from Big Tech (not the Trump Administration…). Now, as the U.S. economy is all sorts of wacky, Big Tech is what is keeping the stock market’s chin above water. In the process, Big Tech is accounting for more and more of the stock market. ::gulp::

If capitalism and stock markets aren’t your thing, it’s easy to shrug your shoulders at this. But the stock market is infrastructural in profound (and disturbing) ways to American life. Professors: university endowments depend on the stock market staying strong. So do the few remaining pension plans (hiiii government workers!). The S&P 500 is also important for nearly all retirement plans and, much to our collective chagrin, the stability of the banking world itself. Economists tend to scare the heck out of me whenever they talk about how many things are connected to the “overall health of the economy” which is increasingly dependent on a small number of Big Tech companies. And oh boy do they feel the heat to keep the economy chugging along.

Inside the tech industry, there’s another strange calculation. High-status employee compensation in tech is also tethered to the stock market. Because of talent wars, tech companies panic when their stocks fall: their employees have little incentive to stay since equity makes up so much of their compensation. The talent wars have all sorts of other perverse incentives. For example, companies have little incentive to invest in training people for fear that they will go elsewhere. And it shouldn’t be surprising that tech companies conspired to wage-fix in an effort to cap the salaries of certain classes of workers so as to not be in a perennial talent war with each other.

Given how intense the talent wars have been in recent years, I can’t help but be fascinated by the mass layoffs happening now in tech. Over the last few months, companies have been coming forward with their tails between their legs, saying that they over-hired and that’s why they needed to lay people off. But did they all really do the exact same thing? Or is there more going on here?

In a classic text in sociology, Paul DiMaggio and Woody Powell mapped out an idea called “institutional isomorphism,” in which they highlighted how corporations and other large institutional arrangements move in alignment with one another. They describe coercive, mimetic, and normative pressures. In other words, there are structural reasons why companies in entire sectors tend to do the same darn thing.

This might explain the collective over-hiring, but I can’t help but wonder if we’re also watching an inversion of the wage-fixing dynamic. By collectively moving at once, the tech companies are also putting a big pause on the talent wars (outside of the tiny number of very specific roles). Right now, there is widespread fear of job loss across the tech industry. In response, tech workers are staying put. They’re not going anywhere. Unless they’ve been forced out. (Random aside: will we see a massive influx in startups in a few years due to layoffs?)

I wonder if tech leaders think that this hovering threat of more and more layoffs will prompt workers into working harder, faster, more in line with the company’s goals. Fear is a motivator. Do tech leaders believe that’s effective? Moreover, is it? Are remaining workers building the value of these companies at faster rates in ways that benefit the economy? Or is fear creating all sorts of externalities within these companies? I honestly don’t know. I’m waiting for the b-school research!

Amidst the chaos inside the tech industry, we have AI. AI is often described as the cause of the chaos, but I can’t help but wonder if it’s just the hook. AI offers all sorts of imaginaries. And imaginaries are necessary to keeping stock markets going up up up. People want to imagine that this new technology will transform society. They want to imagine that this new technology will strengthen the economy as a whole (even if a few companies have to die).

Many social scientists and historians are critics of AI for reasons that make total sense to me. Technologies have historically reified existing structural inequities, for example. However, the fear-mongering that intrigues me is the kind coming from within the technical AI community itself. The existential threat conversation is a topic for a different rant, but one aspect of it is relevant here.

Many in the AI tech community believe that self-coding AIs will code humans out of existence and make humans subordinate to AIs. This is fascinating on soooo many levels. The ahistorical failure to recognize how humans have repeatedly made other humans subordinate is obviously my first groan. More specific to this situation, though, is the failure of extraordinarily high-status, high-net-worth individuals to reckon with how the tech industry has already made people subordinate in a capitalistic context.

Poke around a bit and these folks will talk about how programmers are doomed. And I can’t help but be fascinated by their angst. At the center of this existential threat is a threat to their own status, power, and domination. They’re afraid that they will become subordinate to the machine (or to other political arrangements?). But they’re projecting this onto all of humanity without appreciating the ways in which so many people already feel subordinate to a machine, namely a particular arrangement of capital and power that is extraordinarily oppressive.

So I keep coming back to this question: How much of the computer science panic of an AI robot takeover is actually coming from an anxiety that their status, power, and wealth is under threat? (And, as such, their agency… But that’s a topic for another rant.)

I keep trying to turn over rocks and make sense of the hype-fear continuum of AI that’s unfolding, and what is really standing out to me are the layers and layers of anxiety. Anxiety from tech workers about job precarity and existential risk. Anxiety from tech leaders about the competitiveness of their organizations. Anxieties from national security experts about geopolitical arrangements. Anxieties from climate scientists about the energy costs of the GPU arms race surpassing those of crypto mining. Anxieties from economists and politicians about the fundamentals of the economy.

So I keep wondering… what are going to be the outcomes of an anxiety-driven social order as the cornerstone of the economy, the social fabric, and the (geo)political arrangements? History is not comforting here. So help me out… How else should I be thinking of this arrangement? And what else is tangled up in this mess? Cuz more and more, I’m thinking that obsessing over AI is a strategic distraction more than an effective way of grappling with our sociotechnical reality.

Deskilling on the Job

Photo 216171598 / Robots Jobs © Victor Moussa | Dreamstime.com

When it comes to AI’s potential future impact on jobs, Camp Automation tends to jump to the conclusion that most jobs will be automated away into oblivion. The progressive arm of Camp Automation then argues for the need for versions of universal basic income and other social services to ensure survival in a job-less world. Of course, this being the US… most in Camp Automation tend to panic and refuse to engage with how their views might intersect with late-stage capitalism, structural inequality, xenophobia, and political polarization.

The counterweight to Camp Automation is Camp Augmentation, with which I am far more analytically aligned. Some come to Camp Augmentation because they think that Camp Automation is absolutely nutsoid. But there are also plenty of folks who have studied enough history to have watched how fantasies of automation repeatedly turn into an augmented reality sans ugly headwear.

Mixed into Camp Automation and Camp Augmentation is a cultural panic about what it means to be human anyways. I find this existential angst-ing exhausting for its failure to understand how this question is at the core of philosophy. It’s also a bit worrying given how most attempts throughout history to resolve it have involved inventing new religions. Oh, the ripples of history.

While getting into what it means to be human is likely to be a topic of a later blog post, I want to take a moment to think about the future of work. Camp Automation sees the sky as falling. Camp Augmentation is more focused on how things will just change. If we take Camp Augmentation’s stance, the next question is: what changes should we interrogate more deeply? The first instinct is to focus on how changes can lead to an increase in inequality. This is indeed the most important kind of analysis to be done. But I want to noodle around for a moment with a different issue: deskilling.

Moral Crumple Zones

Years ago, Madeleine Elish decided to make sense of the history of automation in flying. In the 1970s, technical experts had built a tool that made flying safer, a tool that we now know as autopilot. The question on the table for the Federal Aviation Administration and Congress was: should we allow self-flying planes? In short, folks decided that a navigator didn’t need to be in the cockpit, but that all planes should be flown by a pilot and copilot who should be equipped to step in and take over from the machine if all went wrong. Humans in the loop.

Think about that for a second. It sounds reasonable. We trust humans to be more thoughtful. But what human is capable of taking over and helping a machine in a failure mode during a high-stakes situation? In practice, most humans who took over couldn’t help the plane recover. The planes crashed and the humans got blamed for not picking up the pieces left behind by the machine. This is what Madeleine calls the “moral crumple zone.” Humans were placed into the loop in the worst possible ways.

This position for the pilots and copilots gets even dicier when we think about their skilling. Pilots train extensively to fly a plane. And then they get those jobs, where their “real” job is to babysit a machine. What does that mean in practice? It means that they’re deskilled on the job. It means that the pilots at the front of every commercial plane become less skilled, less capable of taking over from the machine, as the years go by. We depend structurally on autopilot more and more. Boeing took this to the next level with the 737 MAX, whose software overrode the pilots, to deadly effect.

To appreciate this in full force, consider what happened when Charles “Sully” Sullenberger III landed a plane in the Hudson River in 2009. Sully wasn’t just any pilot. In his off-time, he retrained commercial pilots to fly when their equipment failed. Sully was perhaps the best-positioned pilot out there to take over from a failing system. But he didn’t just have to override his equipment — he had to override the air traffic controllers. They wanted him to go to Teterboro. Their models suggested he could make it. He concluded he couldn’t. He chose to land the plane in the Hudson instead.

Had Sully died, he would’ve been blamed for insubordination and “pilot error.” But he lived. And so he became an American hero. He also became a case study because his decision to override air traffic control turned out to be justified: he wouldn’t have made it. Moreover, computer systems that he couldn’t override prevented him from achieving a softer impact.

Sully is an anomaly. He’s a pilot who hasn’t been deskilled on the job. Not even a little bit. But that’s not the case for most pilots.

And so here’s my question for our AI futures: How are we going to prepare for deskilling on the job?

How are Skills Developed?

My grandfather was a pilot for the Royal Air Force. When he signed up for the job, he didn’t know how to fly. Of course not. He was taught on the job. And throughout his career, he was taught a whole slew of things on the job. Training was an integral part of professional development in his career trajectory. He was shipped off for extended periods of management training.

Today, you are expected to come to most jobs with skills because employers don’t see the point of training you on the job. This helps explain a lot of places where we have serious gaps in talent and opportunity. No one can imagine a nurse trained on the job. But sadly, we don’t even build many structures to create software engineers on the job.

However, there are plenty of places where you are socialized into a profession through menial labor. Consider the legal profession. The work that young lawyers do is junk labor. It is dreadfully boring and doesn’t require a law degree. Moreover, a lot of it is automate-able in ways that would reduce the need for young lawyers. But what does it do to the legal field to not have that training? What do new training pipelines look like? We may be fine with deskilling junior lawyers now, but how do we generate future legal professionals who do the work that machines can’t do?

This is also a challenge in education. Congratulations, students: you now have tools at your disposal that can help you cut corners in new ways (or outright cheat). But what if we deskill young people through technology? How do we help them make the leap into professions that require more advanced skills?

There’s also a delicate balance regarding skills here. I remember a surgeon telling me that you want to get scheduled surgery on a Tuesday. Why? Because on Monday, a surgeon is refreshed but a tad bit rusty. By Tuesday, they’re back in the groove but not yet exhausted. Moreover, there is a fine line between practice and exhaustion: the more that surgeons are expected to do each week, the more procedures they’ll do badly. (Whether that holds up to evidence-based scrutiny, I don’t know, but it seems like a sensible myth of the profession.)

Seeing Beyond Efficiency

Efficiency isn’t simply about maximizing throughput. It’s about finding the optimum balance between quality and quantity. I’m super intrigued by professions that use junk work as a buffer here. Filling out documentation is junk work. Doctors might not have to do that in a future scenario. But is the answer to schedule more surgeries? Or is the answer to let doctors have more downtime? Much to my chagrin, we tend to optimize towards more intense work schedules whenever we introduce new technologies while downgrading the status of the highly skilled person. Why? And at what cost?

The flipside of it is also true. When highly trained professionals now babysit machines, they lose their skills. Retaining skills requires practice. How do we ensure that those skills are not lost? If we expect humans to be able to take over from machines during crucial moments, those humans must retain strong skills. Loss of knowledge has serious consequences locally and systemically. (See: loss of manufacturing knowledge in the US right now…)

There are many questions to be asking about the future of work with new technologies on the horizon, many of which are floating around right now. Asking questions about structural inequity is undoubtedly top priority, but I also want us to ask questions about what it means to skill — and deskill — on the job going forward.

Whether you are in Camp Augmentation or Camp Automation, it’s really important to look holistically at how skills and jobs fit into society. Even if you dream of automating away all of the jobs, consider what happens on the other side. How do you ensure a future with highly skilled people? This is a lesson that too many war-torn countries have learned the hard way. I’m not worried about the coming dawn of the Terminator, but I am worried that we will use AI to wage war on our own labor forces in pursuit of efficiency. As with all wars, it’s the unintended consequences that will matter most. Who is thinking about the ripple effects of those choices?