
The Screens are the Symptom.

I decided to re-read Fahrenheit 451 with my eldest this last week. I don’t think that I have read this classic Bradbury text since high school. What I had remembered about the book was that it described a world in which books were banned and the job of firefighters was to track down books and burn them. I also remembered that it was written in 1953 as a response to the moral panics of the McCarthy era and the book burnings in Nazi Germany. I remember being horrified to learn that book burning was a thing. Thirty years later, I still treat books like precious objects.

What I had forgotten over these last decades is that the book is also a story about screens, described in the book as “parlor walls.” In Bradbury’s dystopic world, screens are not the attractor but the substitute for other things that are intentionally restricted. Books, poetry, plays, and arts are suppressed in this world because they invite people to feel, think, and question – and this is seen as problematic. Screens are nearly mandated as an opiate for the masses, meant to pacify people. Kids are expected to be staring at screens and not asking questions. In other words, the badness of screens is not about screens themselves but about the social configuration that made screens the dominant social, entertainment, and interactive outlet.

It’s also notable how social fabrics are narrated in this text. The main character is a firefighter named Montag, but his wife Mildred spends her days engaging with her “family” on these parlor walls in a constant stream of meaningless chatter about nothing in particular. To talk about anything of substance and merit is verboten: the goal is to never upset anyone in this society. Only niceties will do. This “family” includes various neighbors who are presumably friends, but also celebrities available to everyone. Notably, Montag pays extra so that these celebrities’ speech acts directly address Mildred by name in a personalized fashion that makes her feel more connected to the celebrity. Oh, parasociality and algorithms as imagined in the 1950s. This society has not devolved into trolling. Instead, it is a screen world of such boringness that the government can use the high-speed robot chase at the end of the tale to direct the energy of everyone.

I had also completely forgotten how this book sees children. In short, children are treated with disdain as a problem that society must manage. It reflects an attitude that was commonplace in the 1950s, when children were seen as a danger that must be managed rather than a vulnerable population that needed support. This book is a stark reminder of how far we’ve shifted from being afraid OF children to being afraid FOR them even as the same sources of fear remain. And so in Bradbury’s world, children are plugged into screens all day not for their benefit, but for the benefit of adults. (Side note: don’t forget that compulsory high school was created only a few decades before as a jailing infrastructure to benefit adults and protect the morality of adolescents.)

The role of medication is also intriguing in this world. Mildred is addicted to sleeping pills, which she needs to separate herself from her parlor walls at night. And medicine is easily available to deal with the side effects by eliminating memories and increasing everyone’s checked-out state. Of course, the opening scene of the book centers on Mildred overdosing and not even realizing the gravity of that. Indeed, the medics in this world accept that they must regularly revive people from overdosing on sleeping pills.

All of this is to say that the plot of Fahrenheit 451 centers both on Montag’s attempt to reckon with censorship and on his inability to extract Mildred from her mundane and unhealthy relationship to her way of life, even when she’s on the brink of death. It is about seeing screens as the product of disturbing political choices, not the thing that drives them. I couldn’t help but be fascinated by how inverted this is to today’s conversation.

Over the last two years, I’ve been intentionally purchasing and reading books that are banned. I wanted to re-read Fahrenheit 451 because of the contemporary resurgence of book banning. But in actually rereading this book, I couldn’t help but marinate on the entanglement between fears about screens, repression of knowledge, disgust towards children, and conflicted visions of happiness. I also kept thinking about how different the theory of change is in this book compared with how these conversations go in the present. In short, Montag (and the various foils he works with) aren’t really focused on destroying the screens – they are wholly focused on embracing, saving, and sharing knowledge from books. Here, I’m reminded of an era in which education was seen as a path forward, not simply a site to be controlled.

The people in Bradbury’s world aren’t happy. They are zombies. But Bradbury recognizes that they are structurally configured, a byproduct of a world that was designed to prevent them from thinking, connecting, questioning, and being physically engaged. Instead, he offers us Clarisse – his sole child character – who teaches Montag how to see the world differently. How to ask questions, how to engage with the physical world, how to not take for granted the social configuration. She invites him to open his eyes. She’s also the one and only character who is actively willing to challenge the status quo.

The counter to Clarisse is Montag’s boss, a character who clearly knows how the society has been configured. He fully recognizes that the banning of books is a ruse for political control. He has no qualms about reinforcing the status quo. So his job as a firefighter is to repress resistance. Books and screens aren’t the real enemy to an authoritarian state – knowledge is.

Fahrenheit 451 is unquestionably a tale about the caustic consequences of banning books and repressing knowledge. But it’s also an invitation to see the structural conditions that enable and support such repression. It’s easy to want to fight the symptoms, but Bradbury invites us to track the entanglements. Little did I realize just how much I would value rereading this book at this moment in time and with my kiddo. Thank you Ray Bradbury.

Relatedly… 

For better or worse, I’m spending a bit too much time thinking about the rise in efforts to oppress, sanction, and harm youth under the deeply disturbing trends towards parental control, parental surveillance, and state paternalism. I’ll come back to these topics this fall. 

In the meantime, apropos of Fahrenheit 451, I hope folks are tracking how conservative states are now rejecting support from the American Library Association, accusing librarians of exposing children to books that include content they don’t like. ::shaking head::  Next week is “Banned Books Week.” Support the ALA.

Researchers are also increasingly under attack by those who disagree with their findings or for otherwise producing knowledge that is uncomfortable or inconvenient. While this is happening in multiple domains right now (ranging from scholars focused on climate change to youth mental health), scholars working on topics related to disinformation are facing this acutely at the moment. Around the country, researchers are being sued and their institutions are being pressured to turn over communications to Congressional committees. This is starting to feel a lot like the McCarthy era for scholars, especially with universities being ill-equipped (or actively unmotivated) to support researchers. 

See why Bradbury’s book felt really poignant right now?

Still Trying to Ignore the Metaverse

Perhaps surprisingly, I don’t particularly like technology. And certainly not technology for technology’s sake. My brother was always the one who picked up every new gadget to see what it did. I tended to shrug and go back to reading a book. I still do. 

That said… Like most people, I enjoy technologies that improve my world in some way. I’m fond of technologies that become invisible infrastructure in my life. Technologies that just work without me noticing – like the toilet. When it comes to digital tech, I’m grateful for systems that make me smile. Not the ones that make me vomit. Literally.

Wagner James Au has been trying to get me to engage with virtual reality since we first met in the mid-aughts. You name the iteration, he’s been excited about it. Each time, he tries to convince me that this particular instantiation is cooler, more accessible, more appealing. Each time, I politely explain that I have zero interest in any aspect of this. Still, I like James. And I appreciate his enthusiasm. There’s part of me that wishes I would sparkle that way at the sight of a new piece of tech.

The funny thing is that James knows why I have zero interest in engaging on things related to virtual reality. In fact, it’s precisely because my first research project was an attempt to unpack my hatred of virtual reality that he keeps pushing me to jump into the fray. But I haven’t done work in this area in 25 years. So it cracked me up to no end that James decided to feature my antagonism towards the metaverse in his new book, “Making a Metaverse That Matters.” He thinks that I owe it to all who are excited about this tech to talk more about this early work. So let me share the way that he told my story and offer some additional context and flair to it just for fun.

In “Making a Metaverse That Matters,” James opens one section referencing an essay I wrote a decade ago when Facebook acquired Oculus. The essay was provocatively titled “Is the Oculus Rift Sexist?” The title was intentionally provocative, but I genuinely meant the question. I wanted to know: was Oculus fundamentally designed in a way that was prejudiced based on sex and, therefore, was it going to be an inherently discriminatory piece of tech? My question wasn’t coming out of nowhere. It was something that had plagued me since my first encounter with a VR system as an undergraduate student. As James quotes from my essay:

Ecstatic at seeing a real-life instantiation of the Metaverse, the virtual world imagined in Neal Stephenson’s Snow Crash, I donned a set of goggles and jumped inside.

And then I promptly vomited.

(Side note: I hadn’t remembered that I complained about the Metaverse-ness of Oculus back before Meta was Meta so I laughed re-reading this. Also, as an additional side note: it never ceases to amaze me that tech bros want to build worlds created in dystopian novels and expect a different outcome. As a reminder, this is actually the definition of insanity.)

I first encountered VR because my beloved undergraduate advisor – Andy van Dam – had invested in building a newfangled immersive virtual reality system called a CAVE. I was excited for him so I checked it out. My reaction to my first experience with this piece of tech was not joy, but nausea. I told Andy his system was stupid. (If memory serves, I was far more crass in my language.) I also told him the system discriminated against women. He told me to prove it.

At that point in my career, I still wanted to understand tech that I loathed. And I wanted to prove my accusation to Andy. So I started to ask why this piece of tech that made so many men I knew so happy made me so miserable. I tracked down military reports about gender bias in simulator sickness, much of which dated back to the 1960s. I ended up spending time at a gender clinic where people who were on hormone replacement therapy regimens participated in scientific studies about things like spatial rotation. This led me to run a series of psych experiments where my data suggested that people’s ability to navigate 3D VR seemed to be correlated with the dominance of certain sex hormones in their system. Folks with high levels of estrogen and low levels of testosterone – many of whom would identify as women – were more likely to get nauseous navigating VR than those who had high levels of testosterone streaming through their body. What was even stranger was that changes to hormonal levels appeared to shape how people responded to these environments.

(Side note: When I was conducting this work 25 years ago, the language people used to discuss gender was quite different than today. Many of my informants actively hated the term “transgender” and were adamant that I use the word “transsexual” and clearly identify them as male-to-female or female-to-male in my study. In today’s parlance, this latter language is viewed as deeply problematic while transgender is widespread.  Because my older work uses the emic language of the day, I regularly get nastygrams accusing me of transphobia.)

I did this work as an undergraduate but never published it because much more work was needed for it to be publishable. But I always hoped that someone would pick up the baton and keep on going. In fact, that’s what motivated me to write the Oculus essay in the first place. And I will always be grateful to Thomas Stoffregen and his team who confirmed that I was not crazy with my early findings – and continued on to do fantastic work. That said, as James notes, it’s depressing how little work has been done in this area ever since. Truth is, I haven’t been tracking it, but I’m not surprised to hear that. I walked away from this world because I had no desire to embrace a technology that wants me to come in a different hormonal arrangement.

But James is more outraged on my behalf, in no small part I suspect because he does see the joy in this technology and I think he wants me to find it as well. I had to smile when he highlighted the sexist realities of a business culture of tech for tech’s sake.

[Meta] paid $2 billion for a piece of consumer-facing technology that reputable research suggests tends to make half the population literally vomit.

Then spent tens of billions more to bring it to market anyway.

Then Silicon Valley followed suit, investing tens of billions still further, an entire industry sprung up around it, nearly all of it ignoring evidence that the whole enterprise was built on sand. Usually it seems impossible to calculate the opportunity cost of unconscious gender bias, but in this specific case, the price tag approaches $100 billion.

I know I should be indignant. It is indeed seriously depressing to think about all of the technology that is created out there with little regard for people and practices. It’s exhausting to go through hype cycles of how yet another new technology built based on a dystopian novel is rolling out regardless of the harms or bias that it might trigger. It’s painful to think about how much capital is spent chasing pyramid schemes and illusions rather than solving actual problems. It’s also really depressing to realize that findings that I uncovered 25 years ago were validated by better scholars but were never addressed by industry. But this is not my problem. 

I have no interest in the Metaverse. I am not sitting around dreaming of wearing gobsmackingly expensive ski glasses. Although I’m not a fan of the various aches and pains in my body, I don’t think my life will be better in avatar form. I really really really don’t get why this is a piece of tech that excites people. But I know that there are a lot of people out there like James who want an inclusive and joyful Metaverse to exist. If you’re one of those people, do check out his book. His message is a good one: let’s collectively work towards a version of virtual reality that gives more people joy than pain. 

As for me, even the act of putting one toe into the water to support an old friend was enough to remind myself that I don’t have to like or study every new technology there is. And so, with all gratitude to James, I’m going to happily return to my attempt to ignore the Metaverse. Perhaps someone out there who is excited by the technology will want to build on earlier work and address the systemic bias issues. That would be great. But this tech isn’t for me, at least not in its current form. And so it goes, so it goes.

Dear Alt-Twitter Designers: It’s about the network!

Last week, tech commentators were flush with stories about the speed of new users on Threads. Unprecedented downloads! A sign that Meta is stronger than ever! Networks born in one service can transfer to another! This week? There’s a lot of speculation that Threads is crashing. Folks keep asking me what my take is on Threads (and Mastodon and Bluesky and …) and I keep responding with the same story: we’ll see. And every time I do, I’m reminded of talking to historians who, when you ask them about the last hundred years, say “we’ll see.”

As I watch these various alt-Twitters emerge, I can’t help but think about some crucial lessons I learned almost twenty years ago, when a bazillion social media sites popped up, and that I have struggled to help others see. The tl;dr? It all comes down to nurturing the network dynamics, not the technical features.

In the early days of social media, founders invited their friends. Who invited their friends. And on it went. The networks grew slowly, organically, and with a level of density that is under-appreciated. When someone joined, there was a high probability that that person knew a bunch of people on the site. After all, these things rolled out across pre-existing social graphs. The density mattered.
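If it helps to make “density” concrete, here is a minimal sketch (assuming Python and the networkx library; the toy graphs and numbers are purely hypothetical, not data from any real site) contrasting a seed network where most early users already know one another with one where newcomers arrive as strangers:

```python
import networkx as nx

# Hypothetical toy graphs: ten early users who mostly already know one another
# versus ten newcomers who arrive knowing almost no one.
dense_seed = nx.erdos_renyi_graph(n=10, p=0.6, seed=42)
sparse_arrivals = nx.erdos_renyi_graph(n=10, p=0.05, seed=42)

# Density = the share of possible ties that actually exist (0.0 to 1.0).
print(f"dense seed network: {nx.density(dense_seed):.2f}")      # roughly 0.6
print(f"sparse arrivals:    {nx.density(sparse_arrivals):.2f}")  # close to 0.0
```

The same head count can produce either graph; the difference is how many of the possible ties actually exist when people show up.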

Some existing network graphs were better suited to these dynamics. There’s a reason that almost all early social media consisted of geeks, freaks, and queers. Using technology to strengthen bonds was already a part of these communities. And this is why, a few years later, focusing on students was powerful. Some networks are better positioned to leverage technical mediation.

But the graph of connections was not the only relevant graph. The other critical graph was the graph of norms. Founders were, unsurprisingly, hyper enthusiastic about the thing they created. They posted a lot of content — and they encouraged the people they invited to do the same. So there was an enthusiasm from the get-go. And as new people came on, they got creative, they pushed at the norms, they expanded their networks. Divergent norms sat alongside one another. Geek Friendster was different than queer Friendster. The kernel of all of this was vibrancy. These dense norm-infused networks felt vibrant to those who were a part of them.

As social media became A.Thing^TM, people joined because they felt they had to join. FOMO. The graph filled out faster. But this complicated vibrancy. Fast adoption wasn’t inherently a good thing because the norm setting around what to post and how to interact didn’t play out at the same speed. People joined and they were the equivalent of a blank egg. They didn’t know why they were there. Getting them engaged required a different kind of nurturing of networks. Not all new services were up for that — they built tools, not communities. Most social media in this world came and went. There’s a graveyard of dead social media sites out there, lingering on Archive.org for future historians to view 100 years from now.

Twitter came of age in the fast-network growth phase. What made Twitter so interesting was the directed graph dynamics of it all, which altered how vibrancy unfolded. Ironically, the fake accounts helped a lot. Folks knew their follower graph was fake, but there was enough interaction, enough signal that people were being listened to, that real people felt it was vibrant enough. The illusion of vibrancy prompted people to be more vibrant, keeping the thing alive at scale across many networks. But it was also the first site where I saw a ton of subgraphs never hatch, never find their footing.

And then there was Google+, may it RIP. This was birthed out of the arrogance of a major company that believed it could leverage its scale to dominate social media. The launch was an example of blitzscaling, where the sudden fast scaling (thanks to the behemoth power of Google) triggered a blitz. But not the kind where a military feels emboldened; the kind where those on the ground feel destroyed by aerial bombardment. No matter how not-evil it was, Google simply couldn’t bomb its audience into sociality.

Cuz that’s the thing about social media. For people to devote their time and energy to helping enable vibrancy, they have to gain something from it. Something that makes them feel enriched and whole, something that gives them pleasure (even if at someone else’s pain). Social media doesn’t come to life through military tactics. It comes to life because people devote their energies to making it vibrant for those who are around them. And this ripples through networks.

One thing that complicates people’s willingness to devote their energy to vibrancy is context collapse, a term that Alice Marwick and I coined long ago. When a social media site grows slow and steady, it starts out for each user as a coherent context. Things get dicey over time as people struggle to figure out how to navigate divergent networks. But people find strategies, renegotiate the context, carve off specific worlds and narrow the context for themselves for libidinal joy. However, when you blitzscale a new social media site into being, the audience arrives with context collapse already in play. They don’t know if this is a site to joke around with friends, to be professional, to bitch about politics, or what. And without having already set out to build vibrancy and negotiate norms, the vast majority of people who arrive at a blitzscaled, context-collapsed site sit around waiting for someone to norm-set for them.

I can’t help but think of this in terms of Twitter’s early design language cuz those designers really understood this at a certain level. Consider the unhatched egg. This is a bird that isn’t a bird yet, not even a fledgling. It might become a bird. Or it might become an egg that someone eats for lunch. Worse, it might rot in place. Alt-Twitter sites are creating blank eggs everywhere. But they aren’t being nurtured to hatch. They’re just sitting there, waiting. And most of them are going to rot. Cuz you can’t take an egg that’s been sitting around for a long time and suddenly add a heat lamp and hope that a baby chick will form. Twitter was left with a lot of dead eggs. (And to push this a bit too far… adversarial actors realized that and decided to gobble up a bunch of them and turn them into Zombie chicks. Which was a major problem.)

Early norm-setting, vibrancy, and slow but dense network formation help breathe life into a social media site. And the early period is critical because it’s when habituation forms. People need to make visiting a site a part of their daily practice early on if it’s going to last. Many sites have been tried out and then faded into smithereens because people never habituated to them. Prompting people to name-squat on a site so that you get a media blast of The.Most.Downloads.Ever sounds like a tech-driven marketing strategy rather than one that understands the essence of social media.

I should note that blitzscaling is not the only approach we’re seeing right now. The other (and I would argue wiser) approach to managing dense network formation is through invitation-based mechanisms. Heighten the desire, the FOMO, make participating feel special. Actively nurture the network. When done well, this can get people to go deeper in their participation, to form community. Managing growth through networks this way was easy when these things weren’t cool but became harder once they became cool. However, nurturing early adoption thoughtfully matters immensely in this approach. Unlike 20 years ago, the people poised to be early adopters today are those who are most toxic, those who get pleasure from making others miserable. This means that the rollout has to be carefully nurtured so that the Zombie chicks don’t eat up the other eggs before they can even hatch.

Managing the growth of a social media site now looks soooooo different than it did 20 years ago. The growth curve and context collapse issues are real. But there’s also the feature roll-out issue. It was completely reasonable to function in a perpetual beta “features will come soon!” mode then. But that doesn’t work as well in this context. And that means that trying to co-construct features with your audience today is much much more complicated.

Of course, the “death” of social media sites also looks different today than it did in the past. Today, many social media sites devolve into platforms dominated by big personalities, celebrities, and can’t-look-away listicle junk content. Twenty years ago, social media was more local, more dense in networks. Entertainment media was all lopsided. Today, these two worlds are much more blurred. There was a time when Justin Bieber’s outsized audience on Twitter was shocking and weird and fascinating. Today, most major social media platforms have influencers who dominate and overwhelm the norms of the average people trying to connect. So many platforms devolve to being sites for a narrow subset of the population to build audience. But that’s a topic for a different rant.

Social media will survive. Something will come out of this moment. But a LOT of money is going to be wasted relearning the lessons from the last 20 years. When Alice and I were playing around with the concept of “context collapse,” I never realized just how relevant it would continue to be. And when I was riffing about network formation back in the days of Orkut, I never thought that we would need to relearn this over and over again. Rather than being bitter as I shake my head like an old person, I’m going to enjoy my popcorn.

Too Big to Challenge?

Photo 148941930 / Robot Economy © Andrey Popov | Dreamstime.com

I find it deeply disturbing that the tech industry represents 9% of the U.S. GDP and that just five Big Tech companies account for 25% of the S&P 500. Prior to Covid, most of the growth in the stock market came from Big Tech (not the Trump Administration…). Now, as the U.S. economy is all sorts of wacky, Big Tech is what is keeping the stock market’s chin above water. In the process, Big Tech is accounting for more and more of the stock market. ::gulp::

If capitalism and stock markets aren’t your thing, it’s easy to shrug your shoulders at this. But the stock market is infrastructural in profound (and disturbing) ways to American life. Professors: university endowments depend on the stock market staying strong. So do the few remaining pension plans (hiiii government workers!). The S&P 500 is also important for nearly all retirement plans and, much to our collective chagrin, the stability of the banking world itself. Economists tend to scare the heck out of me whenever they talk about how many things are connected to the “overall health of the economy” which is increasingly dependent on a small number of Big Tech companies. And oh boy do they feel the heat to keep the economy chugging along.

Inside the tech industry, there’s another strange calculation. High-status employee compensation in tech is also tethered to the stock market. Because of talent wars, tech companies panic when their stocks fall: their employees have little incentive to stay since stock makes up so much of their compensation. The talent wars have all sorts of other perverse incentives. For example, companies have little incentive to invest in training people for fear that they will go elsewhere. And it shouldn’t be surprising that tech companies conspired to wage-fix in an effort to cap the salaries of certain classes of workers so as to not be in a perennial talent war with each other.

Given how intense the talent wars have been in recent years, I can’t help but be fascinated by the mass layoffs happening now in tech. Over the last few months, companies have been coming forward with their tails between their legs, saying that they over-hired, which is why they needed to lay people off. But did they all really do the exact same thing? Or is there more going on here?

In a classic text in sociology, Paul DiMaggio and Woody Powell mapped out an idea called “institutional isomorphism,” highlighting how corporations and other large institutional arrangements move in alignment with one another. They described coercive, mimetic, and normative pressures. In other words, there are structural reasons why companies in entire sectors tend to do the same darn thing.

This might explain the collective over-hiring, but I can’t help but wonder if we’re also watching an inversion of the wage-fixing dynamic. By collectively moving at once, the tech companies are also putting a big pause on the talent wars (outside of a tiny number of very specific roles). Right now, there is widespread fear of job loss across the tech industry. In response, tech workers are staying put. They’re not going anywhere. Unless they’ve been forced out. (Random aside: will we see a massive influx of startups in a few years due to layoffs?)

I wonder if tech leaders think that this hovering threat of more and more layoffs will prompt workers into working harder, faster, more in line with the company’s goals. Fear is a motivator. Do tech leaders believe that’s effective? Moreover, is it? Are remaining workers helping build the value of these companies at faster rates in ways that benefit the economy? Or is fear creating all sorts of externalities within these companies? I honestly don’t know. I’m waiting for the b-school research!

Amidst the chaos inside the tech industry, we have AI. AI is often described as the cause of the chaos, but I can’t help but wonder if it’s just the hook. AI offers all sorts of imaginaries. And imaginaries are necessary to keeping stock markets going up up up. People want to imagine that this new technology will transform society. They want to imagine that this new technology will strengthen the economy as a whole (even if a few companies have to die).

Many social scientists and historians are critics of AI for reasons that make total sense to me. Technologies have historically reified existing structural inequities, for example. However, the fear-mongering that intrigues me is that coming from within the technical AI community itself. The existential threat conversation is a topic of a different rant, but one aspect of it is relevant here.

Many in the AI tech community believe that self-coding AIs will code humans out of existence and make humans subordinate to AIs. This is fascinating on soooo many levels. The ahistoric failure to recognize how humans have repeatedly made other humans subordinate is obviously my first groan. Yet more specific to this situation is the failure of extraordinarily high-status, high-net-worth individuals to reckon with how the tech industry has already made people subordinate in a capitalistic context.

Poke around a bit and these folks will talk about how programmers are doomed. And I can’t help but be fascinated by their angst. At the center of this existential threat is a threat to their own status, power, and domination. They’re afraid that they will become subordinate to the machine (or to other political arrangements?). But they’re projecting this onto all of humanity without appreciating the ways in which so many people already feel subordinate to a machine, namely a particular arrangement of capital and power that is extraordinarily oppressive.

So I keep coming back to this question: How much of the computer science panic about an AI robot takeover is actually coming from an anxiety that their status, power, and wealth are under threat? (And, as such, their agency… But that’s a topic for another rant.)

I keep trying to turn over rocks and make sense of the hype-fear continuum of AI that’s unfolding and what is really standing out to me are the layers and layers of anxiety. Anxiety from tech workers about job precarity and existential risk. Anxiety from tech leaders about the competitiveness of their organizations. Anxieties from national security experts about geopolitical arrangements. Anxieties from climate scientists about the cost of the GPU fights surpassing that of crypto mining. Anxieties from economists and politicians about the fundamentals of the economy.

So I keep wondering… what are going to be the outcomes of an anxiety-driven social order at the cornerstone of the economy, the social fabric, and the (geo)political arrangements? History is not comforting here. So help me out… How else should I be thinking of this arrangement? And what else is tangled up in this mess? Cuz more and more, I’m thinking that obsessing over AI is a strategic distraction more than an effective way of grappling with our sociotechnical reality.

Deskilling on the Job

Photo 216171598 / Robots Jobs © Victor Moussa | Dreamstime.com

When it comes to AI’s potential future impact on jobs, Camp Automation tends to jump to the conclusion that most jobs will be automated away into oblivion. The progressive arm of Camp Automation then argues for the need for versions of universal basic income and other social services to ensure survival in a job-less world. Of course, this being the US… most in Camp Automation tend to panic and refuse to engage with how their views might intersect with late-stage capitalism, structural inequality, xenophobia, and political polarization.

The counterweight to Camp Automation is Camp Augmentation, with which I am far more analytically aligned. Some come to Camp Augmentation because they think that Camp Automation is absolutely nutsoid. But there are also plenty of folks who have studied enough history to have watched how fantasies of automation repeatedly turn into an augmented reality sans ugly headwear.

Mixed into Camp Automation and Camp Augmentation is a cultural panic about what it means to be human anyways. I find this existential angst-ing exhausting for its failure to understand how this question is at the core of philosophy. It’s also a bit worrying given how most attempts throughout history to resolve this have involved inventing new religions. Oh, the ripples of history.

While getting into what it means to be human is likely to be a topic of a later blog post, I want to take a moment to think about the future of work. Camp Automation sees the sky as falling. Camp Augmentation is more focused on how things will just change. If we take Camp Augmentation’s stance, the next question is: what changes should we interrogate more deeply? The first instinct is to focus on how changes can lead to an increase in inequality. This is indeed the most important kind of analysis to be done. But I want to noodle around for a moment with a different issue: deskilling.

Moral Crumple Zones

Years ago, Madeleine Elish decided to make sense of the history of automation in flying. In the 1970s, technical experts had built a tool that made flying safer, a tool that we now know as autopilot. The question on the table for the Federal Aviation Administration and Congress was: should we allow self-flying planes? In short, folks decided that a navigator didn’t need to be in the cockpit, but that all planes should be flown by a pilot and copilot who should be equipped to step in and take over from the machine if all went wrong. Humans in the loop.

Think about that for a second. It sounds reasonable. We trust humans to be more thoughtful. But what human is capable of taking over and helping a machine in a fail mode during a high-stakes situation? In practice, most humans took over and couldn’t help the plane recover. The planes crashed and the humans got blamed for not picking up the pieces left behind by the machine. This is what Madeleine calls the “moral crumple zone.” Humans were placed into the loop in the worst possible ways.

This position for the pilots and copilots gets even dicier when we think about their skilling. Pilots train extensively to fly a plane. And then they get those jobs, where their “real” job is to babysit a machine. What does that mean in practice? It means that they’re deskilled on the job. It means that those pilots who are at the front of every commercial plane are less skilled, less capable of taking over from the machine as the years go by. We depend structurally on autopilot more and more. Boeing took this to the next level by overriding the pilots with its 737 MAX, to their detriment.

To appreciate this in full force, consider what happened when Charles “Sully” Sullenberger III landed a plane in the Hudson River in 2009. Sully wasn’t just any pilot. In his off-time, he retrained commercial pilots in how to fly when their equipment failed. Sully was perhaps the best positioned pilot out there to take over from a failing system. But he didn’t just have to override his equipment — he had to override the air traffic controllers. They wanted him to go to Teterboro. Their models suggested he could make it. He concluded he couldn’t. He chose to land the plane in the Hudson instead.

Had Sully died, he would’ve been blamed for insubordination and “pilot error.” But he lived. And so he became an American hero. He also became a case study because his decision to override air traffic control turned out to be justified. He wouldn’t have made it to Teterboro. Moreover, computer systems that he couldn’t override prevented him from making a softer landing.

Sully is an anomaly. He’s a pilot who hasn’t been deskilled on the job. Not even a little bit. But that’s not the case for most pilots.

And so here’s my question for our AI futures: How are we going to prepare for deskilling on the job?

How are Skills Developed?

My grandfather was a pilot for the Royal Air Force. When he signed up for the job, he didn’t know how to fly. Of course not. He was taught on the job. And throughout his career, he was taught a whole slew of things on the job. Training was an integral part of professional development in his career trajectory. He was shipped off for extended periods for management training.

Today, you are expected to come to most jobs with skills because employers don’t see the point of training you on the job. This helps explain a lot of places where we have serious gaps in talent and opportunity. No one can imagine a nurse trained on the job. But sadly, we don’t even build many structures to create software engineers on the job.

However, there are plenty of places where you are socialized into a profession through menial labor. Consider the legal profession. The work that young lawyers do is junk labor. It is dreadfully boring and doesn’t require a law degree. Moreover, a lot of it is automate-able in ways that would reduce the need for young lawyers. But what does it do to the legal field to not have that training? What do new training pipelines look like? We may be fine with deskilling junior lawyers now, but how do we generate future legal professionals who do the work that machines can’t do?

This is also a challenge in education. Congratulations, students: you now have tools at your disposal that can help you cut corners in new ways (or outright cheat). But what if we deskill young people through technology? How do we help them make the leap into professions that require more advanced skills?

There’s also a delicate balance regarding skills here. I remember a surgeon telling me that you wanted to get scheduled surgery on a Tuesday. Why? Because on Monday, a surgeon is refreshed but a tad bit rusty. By Tuesday, they’re back in the groove but not exhausted. Moreover, there was a fine line between practice and exhaustion — the more surgeries that surgeons are expected to do each week, the more of them they’ll do badly. (Whether that holds up to evidence-based scrutiny, I don’t know, but it seems like a sensible myth of the profession.)

Seeing Beyond Efficiency

Efficiency isn’t simply about maximizing throughput. It’s about finding the optimum balance between quality and quantity. I’m super intrigued by professions that use junk work as a buffer here. Filling out documentation is junk work. Doctors might not have to do that in a future scenario. But is the answer to schedule more surgeries? Or is the answer to let doctors have more downtime? Much to my chagrin, we tend to optimize towards more intense work schedules whenever we introduce new technologies while downgrading the status of the highly skilled person. Why? And at what cost?

The flipside of it is also true. When highly trained professionals now babysit machines, they lose their skills. Retaining skills requires practice. How do we ensure that those skills are not lost? If we expect humans to be able to take over from machines during crucial moments, those humans must retain strong skills. Loss of knowledge has serious consequences locally and systemically. (See: loss of manufacturing knowledge in the US right now…)

There are many questions to ask about the future of work with new technologies on the horizon, many of which are floating around right now. Asking questions about structural inequity is undoubtedly top priority, but I also want us to ask questions about what it means to skill — and deskill — on the job going forward.

Whether you are in Camp Augmentation or Camp Automation, it’s really important to look holistically at how skills and jobs fit into society. Even if you dream of automating away all of the jobs, consider what happens on the other side. How do you ensure a future with highly skilled people? This is a lesson that too many war-torn countries have learned the hard way. I’m not worried about the coming dawn of the Terminator, but I am worried that we will use AI to wage war on our own labor forces in pursuit of efficiency. As with all wars, it’s the unintended consequences that will matter most. Who is thinking about the ripple effects of those choices?

Protect Elders! Ban Television!!

(Some thoughts on the efforts to regulate children’s use of social media)

Picture of an older white man staring at an old TV with rabbit ears, holding a remote, and looking like a zombie. (Getty Images)

Have you noticed how many people ages 65+ watch television every day? According to the Bureau of Labor Statistics, almost 90% of those in this age bracket watch TV Every.Single.Day!!!! ::gasp:: And that data was collected before the pandemic! By <hand-waving logics of a moral panic>, it must be so much worse now!

And they don’t just watch a little bit. According to Nielsen, before the pandemic, these elders were watching over 7 hours a day of television! Our elders are glued to their boob tubes.

Television is a serious problem! Their brains are wasting away. Their health is suffering. Their ability to maintain friends is declining. They’re unable to recognize disinformation. Elders’ brains are not fully baked anymore; they can’t handle television. This foolish medium is making fools out of our elders, making them unable to participate responsibly in a democratic society. We must put a stop to this. We must stop TV! And if we can’t stop TV, we must prevent them from watching it!

It is time that we protect our elders by unplugging them. Clearly they won’t do it themselves. And clearly we can’t figure out how to regulate television. So we must regulate our elders’ use of television. For their own good!

Going forward: Only those under 65 should be allowed to watch television.

Now, I know that our elders won’t see how important it is that we do this to them for their own good so we need to develop newfangled surveillance technology to ensure that no one over 65 can turn on their television set. Sure, that technology might be a little creepy, but how bad can it be? It’s not like age verification face scanning technologies could be racist, right!?! And sure, some of those sneaky elders might think that they can trick the system by getting plastic surgery or wearing makeup, but we can put a stop to that too, right? We just need to collect more data from them. I mean, what could go wrong if we collected their name, date of birth, and social security number? That way we’ll know that they’re really who they say they are. Those sneaky elders.

Le sigh.

What is New is Old

Cover from danah boyd’s book “It’s Complicated: The Social Lives of Networked Teens.” Includes the title and some random letters coming from a girl’s computer.

For over a decade, I studied how teenagers use social media. I had a front row seat to multiple moral panics. I even wrote an entire book dedicated to unpacking said moral panics: It’s Complicated: The Social Lives of Networked Teens. I was also a co-lead on a task force on internet safety where we were asked to identify the dangers that youth were facing and the interventions that would help. With the help of an amazing advisory board, Andrew Schrock and I scoured the research space trying to map out the different risks teens faced vis-a-vis solicitation, harassment, and problematic content. Little did I understand at the time that my real job was to “prove” that age verification technologies were the “solution” to all online safety problems. I learned that lesson the hard way when our research led us to a different set of recommended interventions. This lesson was pounded into me when a state Attorney General yelled at me to “go find different data” and when a Frontline reporter told me that she had been encouraged to investigate me in order to show that I was falsifying data to defend tech companies. (She concluded I was not falsifying data and the story never happened.)

But here we are again. A new moral panic is unfolding around teenagers’ use of social media. And once again, the “solution” seems to be to restrict access and use age verification technologies to enforce this approach. A few weeks ago, Utah took the first stab with a law that prohibits minors from accessing social media “without parental permission.” At first blush, this looks like an extension of the federal Children’s Online Privacy Protection Act (COPPA) that requires for-profit websites that collect data about children under 13 to get permission from parents.

COPPA seemed like a good idea at the time it was passed back in the 1990s. In practice, COPPA is the reason why all sites require you to be 13+ to join. Of course, every social media company knows children under the age of 13 are lying about their age to get access to their sites. A decade ago, Eszter Hargittai, Jason Schultz, John Palfrey, and I decided to figure out what parents thought about this. We quickly discovered that parents teach their kids to lie to get access to social media. So much for that being effective. (Full disclosure: I created dozens of accounts for my kids for different sites during the pandemic. Over and over, I’ve been stymied by the processes of parental approval and just given up and given them a false birthdate.)

Utah’s law goes beyond COPPA because it’s not just worried about data privacy and advertisement. It’s centered on a purported mental health crisis that kids are facing, supposedly because of social media.

All of this seems to connect back to a dubious interpretation of a Centers for Disease Control report on “Youth Risk Behavior.” The report is super interesting. It turns out that teenagers are having less sex (although those who are might be engaged in more risky sex). It also turns out that bullying at school declined during the pandemic (duh) but that bullying online didn’t go up or down even then.

But the thing that caught the eye of regulators was that mental health struggles seem to have skyrocketed in recent years. I looked at this data and shook my head. My head swirled thinking about the pandemic, the rise in financial instability and food scarcity in some communities, the rising costs of college, the rise in visible hate speech, anti-trans and anti-abortion legislation, the fear kids have of a mass shooter at school, and a slew of other trends that I hear young people angst about. But apparently regulators preferred a different interpretation. They looked at this and went: “blame social media!!!”

Jessica Grose took many of these interpretations to task in her op-ed “Stop Treating Adolescent Girls as Emotionally Abnormal.” I want to call particular attention to her colleague’s remark: “The most predictable thing in the world is for people to respond to this article with their own reasons for why this is taking place based entirely on their own specific hills on which they have decided to die.”

Still, most news coverage of these stories was full of sheer panic. WashPo responded to this study with a story titled “The crisis of student mental health is much vaster than we realize.” Their editorial board followed up with a note that “America’s teens are in crisis. States are racing to respond.” My immediate thought was: are they? They’re looking to ban social media as though that’s the cause of this crisis. As though if social media goes away, the problem will go away. A Financial Times reporter took it to the next level, conflating correlation and causation with the headline “Smartphones and social media are destroying children’s mental health” (note: the story itself is full of hedge language like “may”). And then a writer at The New Yorker penned a piece entitled “The case for banning children from social media” that hinges on his own experiences as a parent.

I cringed. One basic rule of research is never to take one’s personal experiences and extrapolate based on them. And one thing I learned as a researcher of young people is that parents will always look for something to blame in a way that minimizes their own agency. And I get it. Parenting is haaaard. And emotionally exhausting. And guilt-inducing. It’s soooo much safer to justify the situation that’s frustrating you by blaming structural conditions that you can’t do anything about. But it’s not honest. And it doesn’t hold up empirically.

The CDC survey offers sound empirical evidence that young people are currently reporting higher levels of distress. There’s also a lot of other empirical signal that mental health struggles are on the rise. Those who follow these trends over decades aren’t surprised. Adults are also more anxious and more depressed right now. It turns out that tends to impact kids. Financial instability, political polarization, food scarcity, geopolitical conflict, and many other factors tend to correlate with anxiety and depression, even if causality is messier. Lots of trend lines are all over the place right now on lots of different measures.

Two “new” factors are harder to evaluate. One is the pandemic. Researchers generally expect this to have negative repercussions within communities, but it’ll take a lot of work to tease out what is the pandemic directly and what are ripple effects (e.g., financial instability). The other new one, which has become the modern-day boogeyman, is whatever the new technology is. Social media (and, more recently, mobile phones) have been favorites for the last decade.

My research consistently found that teens turn to these technologies to connect with others, especially when they are struggling. Surprise surprise, when kids were stuck at home during the pandemic, they wanted to talk with their friends via phone, social media, and video games. When young people feel isolated, they look for others like them in various online fora. And so, yes, there will be a correlation between certain kinds of online behaviors and mental health states.

Where things get dicier concerns causality. Chicken and egg. Does social media cause mental health problems? Or is it where mental health problems become visible? I can guarantee you that there are examples of both. But here’s the thing…. Going to school or church is often a “cause” of mental health distress. Parents and siblings are often a source of mental health distress. No one in their right mind would argue that we need to prevent all youth from attending school or church or living with their parents or siblings. We take a more tempered approach because there are also very real situations in which we need to remove some children from some environments (namely abusive ones).

So why do we want to remove ALL children from social media?

This is a story of control, not a story of protecting the well-being of children.

A century ago, we forced teenagers into compulsory high school to prevent them from fraternizing with older adults because we were afraid that 16yos would compete with adults for jobs as the Great Depression was unfolding. Fifty years ago, moral panics around comic books normalized a world in which we restricted children’s access to content. Can we admit that much of this content was political in nature and that those who restricted it opposed those politics? Now we’re back to book banning and “don’t say gay” frames. This is not about children’s mental health. This is about preventing children from being active members of our contemporary political polis. This is about using rhetoric around children’s “innocence” to ensure that they don’t encounter views that politicians don’t want them to have. This isn’t new. This is as old a strategy as it gets.

I care deeply about children’s mental health. And there’s a lot that can and should be done. Let’s start with giving every child access to mental healthcare. Let’s make talking to a counselor free. Let’s ensure that children can talk to a trained therapist without being surveilled by their parents (or even needing parental permission).

I am deeeeeeply worried about social and structural conditions that increase mental health crises. Let’s eradicate food scarcity. Let’s make it possible for parents to stay home with newborns and sick children without being docked pay or losing their jobs. Let’s build a social safety net.

I also fully know how frustrating it is to see your own child struggling and escaping into a zombie state in front of a screen. But parents, please take a deep breath and look at the situation more holistically. Why is this giving them pleasure? What are they escaping? What social itch are they scratching? And are you able to create other paths for pleasure, escape, and socialization?

Revisiting Our Collective Habits

I began this post satirically by focusing on elders and television. But let’s also be real. Many elders do have a seriously unhealthy relationship with television at this point. We know that the answer is not to ban elders from accessing TV (even if some of us might really really want that). But what we can see in this unhealthy dynamic is an important lesson about habits, a lesson that applies to all of us.

Many elders got into the habit of watching TV years ago. It may have started out with the nightly news or prime time TV, an opportunity to escape after an exhausting day of work. And it expanded from there. For many, the pandemic made it much worse. And as they watched more TV, it got harder to do other things. Other things were exhausting physically. Or mentally.

This is not the only bad habit we’ve seen adults develop over time. We have a better framework for talking about what happens when a glass of wine after work turns into a bottle of wine a day habit.

What we do know is that breaking habits is HARD. And it’s hard for everyone. This is why, as parents, we don’t want to see our kids develop bad habits. And, especially after the acute phase of the pandemic, many of us recognize that we — and our kids — have gotten into bad habits around technology. We used technology as a babysitter while we were trying to work from home. And we haven’t broken that habit at all. But blocking our kids from accessing social media through regulation will not produce a healthy response to technology overnight. If we want to change our habits around technology because we don’t like them, we need to be thoughtful about how we change them.

When I was spending lots of time with teenagers, one of the things that they always told me was that parents were the real addicts. They couldn’t let go of their phone (or Twitter or … ). I looked around and realized how true this is. Go to a kids’ sports game or playground and you’ll see a bunch of parents staring into their devices. So, parents, here’s a thing you can do. Every time you pick up your device in front of your kids, verbalize what you’re doing. “I’m looking up directions” will be easy to say out loud. “You’re annoying me so I’m going to look at TikTok” will be far more uncomfortable. Set a new habit. Be visible about why you are using technology and ask your kids to do the same. Talk with them about your bad habits and ask them to hold you accountable. Then you can build trust and ask the same of them.

These bills aren’t tools to empower parents or address a very real mental health crisis. They’re a mechanism to control youth, enrich age verification vendors, and turn our kids into political pawns.

These laws sound good because we are worried about our kids and because there is deep and reasonable animosity towards Big Tech. (The geopolitical fight over TikTok is adding to the chaos.) Let’s pass data privacy laws that protect all of us (including our elders who are an identity theft nightmare!). Let’s build mental health infrastructure. Let’s increase our social safety net. But please please, let us not ban children from social media “for their own safety.” Cuz that’s just not what this is about.

Resisting Deterministic Thinking

AI is here and it will change everything! OMG the sky is falling! Programmers are now obsolete. No, they are needed more than ever before. Large language models will destroy journalism, democracy, society. No, they will free us from drudgery and we will all be happier. Cancer will be solved because of AI. AI’s hallucinations will usher in a new era of disinformation.

xkcd

I just returned from a three-month sabbatical spent mostly offline diving through history, and I feel like I’ve returned to an alien planet full of serious utopian and dystopian thinking swirling simultaneously. I find myself nodding along because both the best-case and worst-case scenarios could happen. But also cringing because the passion behind these declarations has no room for nuance. Everything feels extreme and full of binaries. I am truly astonished by the deeply entrenched deterministic thinking that feels pervasive in these conversations.

Deterministic thinking is a blinkering force, the very opposite of rationality even though many people who espouse deterministic thinking believe themselves to be hyper rational. Deterministic thinking tends to seed polarization and distrust as people become entrenched in their vision of the future. Returning to the modern world, I’m finding myself frantically waving my hands in a desperate attempt to get those around me to take a deep breath (or maybe 100 of them). Given that few people can see my hand movements from my home office, I’m going to pretend like it’s 2004 and blog my thoughts in the hopes that this post might calm at least one person down. Or, maybe, if I’m lucky, it’ll be fed into an AI model and add a tiny bit of nuance.

What is deterministic thinking?

Simply put, determinism is “if x, then y.” It is the notion that if we do something (x), a particular outcome (y) is inevitable. Determinisms are not inherently positive or negative. It is just as deterministic to say “if we build social media, the world will be a more connected place” as it is to say “social media will destroy democracy.”

It is extraordinarily common for people who are excited about a particular technology to revert to deterministic thinking. And they’re often egged on to do so. Venture capitalists want to hear deterministic thinking in sales pitches. Same with the National Science Foundation. In many social science fields, these futuristic speech acts get labeled in a pejorative manner as “technological determinism.” Inventors and corporations are often accused of being deterministic, which is a shorthand intended to dismiss their rhetoric as ahistoric and socially oblivious. (Academics who study technological determinism are often more nuanced in their scholarship than in their blog posts and op-eds.)

Meanwhile, however, tech critics often fall into the same trap. Many professional critics (including academics, journalists, advocates) are incentivized to do so because such rhetoric appeals to funders and makes for fantastic opinion pieces. Their rhetoric is rarely labeled deterministic because that’s not the language of futurists. Rather, they are typically dismissed as being clueless about technology and anti-progress. They’re regularly not invited to “the party” (where the technology is being debated by those involved in creating it) because they’re seen as depressing.

Determinism’s ugly step-sibling is “solutionism.” Solutionism is the belief that x will be a solution not just to achieve y but to achieve all possible y’s. Solutionists tend to be so enamored with x that they cannot engage with any criticism of x.

In a world where technologies are given power, authority, and funding, those with positive deterministic views (and solutionistic mindsets) often have more resources and power, leading to a mutually self-destructive polar vortex rich with righteousness that is anything but rational. This is often visible through shouting matches. Right now, the cacophony is overwhelming. And it’s breaking my brain.

xkcd

Embracing Probabilistic Futures

The counter to determinism is not indeterminism. It is unsatisfying to throw our hands up in the air and say “any future is possible” because, well, that’s kinda bullshit. Some futures are more likely to occur than others. Some outcomes are more likely because of a particular technical intervention than others.

The key to understanding how technologies shape futures is to grapple holistically with how a disruption rearranges the landscape. One tool is probabilistic thinking. Given the initial context, the human fabric, and the arrangement of people and institutions, a disruption shifts the probabilities of different possible futures in different ways. Some futures become easier to obtain (the greasing of wheels) while some become harder (the addition of friction). This is what makes new technologies fascinating. They help open up and close off different possible futures.

To be clear, this way of thinking isn’t unique to technology. You can treat laws and regulations the same way, for example. Policies are often implemented with a vision of a new future; the reality of a new law is that things change but it’s not always as predictable as politicians might hope.

At both the macro and micro level, a wide range of interventions into a system rearrange possible futures. Bridges rearrange the social fabric of a city while chemotherapy rearranges the health futures of a patient. Neither determine the future. They just make some futures more likely and other futures less likely.

Context matters here. A bridge to nowhere (oh, thank you pork barrels) doesn’t have as much impact on futures as a bridge connecting two metropolises for the first time. Chemotherapy might be a general all-purpose intervention for cancer patients but the trade-offs between its potential upsides and downsides vary tremendously based on the particular cancer and the particular patient, and thus the probabilistic futures made possible are not consistent or always constructive.

When we’re dealing with medical interventions, we’re always living and breathing probabilistic futures. We hope that an intervention has a desired outcome but no responsible doctor is willing to be deterministic when treating a patient. Wise doctors think holistically about the patient and enroll them into probabilistic thinking to decide how to intervene in order to move towards desired futures. They take context into account and make probabilistically driven recommendations. (Shitty doctors can also be solutionistic in mindset, which sucks.)

Strangely, however, when it comes to a lot of other technologies, probabilistic thinking goes out the window. I find that especially odd when it comes to discussing artificial intelligence given that most AI models are the sheer manifestation of probabilistic thinking. There is no one future of LLMs (and many of those closest to these developments get this). And yet, hot damn is the rhetoric nutsoid.

xkcd

The Blessings and Curses of Projectories

Even though deterministic thinking can be extraordinarily problematic, it does have value. Studying the scientists and engineers at NASA, Lisa Messeri and Janet Vertesi describe how those who embark on space missions regularly manifest what they call “projectories.” In other words, they project what they’re doing now and what they’re working on into the future in order to create for themselves a deterministic-inflected road map. Within scientific communities, Messeri and Vertesi argue that projectories serve a very important function. They help teams come together collaboratively to achieve majestic accomplishments. At the same time, this serves as a cognitive buffer against uncertainty and resource instability. Those of us on the outside might reinterpret this as the power of dreaming and hoping mixed with outright naiveté.

We can dismiss these scientists’ projectories (and the projectories of the institution) as delusional, but a lot of creativity comes from delusional thinking. This is why I often have a larger tolerance for projectories and deterministic fantasy-making than many social critics do.

Where things get dicey is when delusional thinking is left unchecked. Guardrails are important. NASA has a lot of guardrails, starting with resource constraints and political pressure. But one of the reasons the projectories of major AI companies are prompting intense backlash is that there are fewer other types of checks within these systems. (And it’s utterly fascinating to watch those deeply involved in these systems beg for regulation from a seemingly sincere place.)

Right now, the check that is most broadly accessible is a reputational check. And so, in a fascinating twist of fate, those who are trying to push back against the development of AI systems have reverted to the same act of generating projectories that are daaaaaark and dystopic. (And to be clear, many of those who are most engaged in these alternative projectories are themselves AI researchers and developers.) The scramble for thought leadership and control of the narrative is overwhelming.

Projectories have power. Power for those who are trying to invent new futures. Power for those who are trying to mobilize action to prevent certain futures. And power for those who are trying to position themselves as brokers, thought leaders, controllers of future narratives in this moment of destabilization. But the downside to these projectories is that they can also veer way off the railroad tracks into the absurd. And when the political, social, and economic stakes are high, they can produce a frenzy that has externalities that go well beyond the technology itself. That is precisely what we’re seeing right now.

A Different Path Forward

Rather than doubling down on deterministic thinking by creating projectories as guiding lights (or demons), I find it far more personally satisfying to see projected futures as something to interrogate. That shouldn’t be surprising since I’m a researcher and there’s nothing more enticing to a social scientist than asking questions about how a particular intervention might rearrange the social order.

But I also get the urge to grab the moment by the bull’s horns and try desperately to shape it. This is the fascinating thing about a disruption like what we’re seeing with AI technologies. It rearranges power, networks, and institutions. And we’re watching a scramble by individuals and organizations to obtain power in this insanity. In these moments, there is little space for deeply reflexive thinking, for nuanced analysis, which is especially unfortunate because that’s precisely what we need right now. Not to predict futures (or to prevent them) but to build the frameworks of resilience.

Consider the best case scenario of a doctor and patient navigating a medical intervention. In such a situation, there are mechanisms for data collection (ranging from bio-specimens to verbal reflections) and an iterative process for choosing how to proceed. Each medical treatment is viewed as an intervention that needs to be considered with future steps taken based on information gleaned in the process.

How do we do this same activity at scale? How do we create significant structures to understand and evaluate the transformations as they unfold and feed those learnings back into the development cycle? How do we build assessment protocols for evaluating new AI models?

Consider the rhetoric that surrounds how AI will disrupt XYZ industry. Some researchers, executives, and workers who know those industries are shifting their energies to ask these questions. But we don’t have mechanisms in place to really “see” let alone evaluate the disruptions. Why not? If pundits are predicting such disruptions, shouldn’t we be building the mechanisms to see if those futures are unfolding the way that determinists think they will?

Those who are building new AI systems are talking extensively about the potential for disruption (with both enthusiasm and fear) but I see very little scaffolding outside of the sciences to even reflect on how these disruptions will unfold. I know that researchers are scrambling to jump in (often with few dedicated resources) and organizational leaders are convening meetings to discuss their postures vis-a-vis these new systems, but I’m surprised by how little scaffolding there is to ensure that there are evaluations along the way.

And that brings me back to a question about all of these disruptions. Is the goal to disrupt and then let a free-for-all happen because that’s how change should occur? Or is the goal to disrupt to actually drive towards futures that people imagine? Cuz right now, even with the rhetoric of the latter, the former seems more at play.

Deterministic rhetoric lacks nuance. It also lacks an appreciation for human agency or a recognition that disruptions are situated within complex ecosystems that will shapeshift along the way. But what bothers me most about the deterministic framing that is hanging over all things AI right now is that it’s closing out opportunities for deeper situated thinking about the transformations that might unfold over the next few years.

How can we push past our current love affair with determinism so that we can have a more nuanced, thoughtful, reflexive account of technology and society?

I, for one, have no clue what’s coming down the pike. But rather than taking an optimistic or a pessimistic stance, I want to start with curiosity. I’m hoping that others will too.

xkcd

What if failure is the plan?

I’ve been thinking a lot about failure lately. Failure comes in many forms, but I’m especially interested in situations in which people *perceive* something as failing (or about to fail) and the contestations over failure that often arise in such situations. Given this, it’s hard not to be fascinated by all that’s unfolding around Twitter. At this point in the story of Musk’s takeover, there’s a spectrum of perspectives about Twitter’s pending doom (or lack thereof). But there’s more to failure than the binary question of “will Twitter fail or won’t it?” Here’s some thoughts on how I’m thinking about the failure question…

A kid covered in dirt with a face in shock
8633780 © Andrey Kiselev

1. Failure of social media sites tends to be slow then fast.

I spent a ridiculous amount of time in the aughts trying to understand the rise and fall of social network sites like Friendster and MySpace. I noticed something fascinating. If a central node in a network disappeared and went somewhere else (like from MySpace to Facebook), that person could pull some portion of their connections with them to a new site. However, if the accounts on the site that drew emotional intensity stopped doing so, people stopped engaging as much. Watching Friendster come undone, I started to think that the fading of emotionally sticky nodes was even more problematic than the disappearance of segments of the graph.

With MySpace, I was trying to identify the point where I thought the site was going to unravel. When I started seeing the disappearance of emotionally sticky nodes, I reached out to members of the MySpace team to share my concerns and they told me that their numbers looked fine. Active uniques were high, the amount of time people spent on the site was continuing to grow, and new accounts were being created at a rate faster than accounts were being closed. I shook my head; I didn’t think that was enough. A few months later, the site started to unravel.

A gravestone for MySpace
Flickr: Carla Lynn Hall

On a different project, I was talking with a cis/hetero dating site that was struggling with fraud. Many of its “fake” accounts were purportedly “women” but they were really a scam to entice people into paying for a porn site. But when the site started removing these profiles, they found that the site as a whole was unraveling. Men didn’t like these fake women, but their profiles enticed them to return. Moreover, attractive women saw these profiles and felt like it was a site full of people more attractive than them so they came. When the fake women disappeared, the real women disappeared. And so did the men.

Network effects intersect with perception to drive a sense of a site’s social relevance and interpersonal significance.

I don’t have access to the Twitter social graph these days, but I’d bet my bottom dollar that it would indicate whether or not the site is on a trajectory towards collapse. We are certainly seeing entire sub-networks flock to Mastodon, but that’s not as meaningful as people might think because of the scale and complexity of the network graph. You can lose whole segments and not lose a site. However, if those departing are punching Swiss-cheese holes in the network graph, then I would worry.
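To make the Swiss-cheese worry a bit more concrete, here’s a toy sketch of the kind of check I have in mind. It uses a made-up graph and the open-source networkx library, not Twitter’s actual graph or metrics; the point is simply that losing a handful of bridging nodes can matter far more than losing whole segments.

```python
# Toy sketch only: made-up graph, not Twitter's. The question it asks is whether
# removing a set of departing accounts merely shrinks the graph (losing a segment)
# or hollows it out (the largest connected component collapses into fragments).
import networkx as nx

def fragmentation_report(graph: nx.Graph, departing: set) -> dict:
    """Compare connectivity before and after the departing accounts leave."""
    def largest_component_share(g: nx.Graph) -> float:
        if g.number_of_nodes() == 0:
            return 0.0
        largest = max(nx.connected_components(g), key=len)
        return len(largest) / g.number_of_nodes()

    remaining = graph.copy()
    remaining.remove_nodes_from(departing)
    return {
        "largest_component_before": largest_component_share(graph),
        "largest_component_after": largest_component_share(remaining),
        "fragments_after": nx.number_connected_components(remaining),
    }

# Two dense clusters held together by a single bridge node (node 5).
g = nx.barbell_graph(5, 1)
print(fragmentation_report(g, departing={5}))
# Losing that one bridge splits the graph in two, even though 10 of 11 nodes remain.
```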

The bigger question concerns those emotionally sticky nodes. What constitutes a “can’t be missed” account or post varies. What draws someone to a service like Twitter varies. For some, it is the libidinal joy of seeing friends and community, the posts that provide light touch pleasure and joy. For others, it’s a masochistic desire for seeing content that raises one’s blood pressure. Still others can’t resist the drama of a train wreck.

The funny thing about Twitter’s feed algorithms is that they were designed to amplify the content that triggered the most reaction, those emotionally sticky posts. This is why boring but informative content never has a chance against that which prompts fury. But it also means that we’re all watching how our little universe of content is changing (or not). Are you still seeing the things that give you pleasure? Or just the stuff that makes you angry? Why can’t you resist looking away from the things that give you pain? (That question isn’t a new one… it’s the question that underlies our toxic social media ecology more generally.)

I have to give Musk and gang some credit for knowing that drama brings traffic. The drama that unfolds in the World Cup is wholesome compared to the drama of watching public acts of humiliation, cruelty, and hate. We’re in a modern day Coliseum watching a theater of suffering performed for the king under the rubric of “justice.” And just like the ancient Romans, we can’t look away.

But how long can the spectacle last? Even the Roman Empire eventually collapsed, but perhaps the theater of the absurd can persist for a while. Still, there are other factors to consider.

2. Failure can be nothing more than a normal accident that tears down the infrastructure.

Nearly everyone I talk with is surprised that the actual service of Twitter is mostly still working. What that says to me is that the engineering team was far more solid than I appreciated. Any engineering team worth its salt is going to build redundancy and resilience into the system. Exceptions that are thrown should be caught and managed. But that doesn’t mean that a system can persist indefinitely without maintenance and repair.

Think of it in terms of a house. If you walk away from your home for a while, the pipes will probably keep working fine on their own. Until a big freeze comes. And then, if no one is looking, they’ll burst, flood the house, and trigger failure after failure. The reason for doing maintenance is to minimize the likelihood of this event. And the reason to have contingencies built in is to prevent a problem from rippling across the system.

What happens when Twitter’s code needs to be tweaked to manage an iOS upgrade? Or if a library dependency goes poof? What happens when a security vulnerability isn’t patched?

One interesting concept in organizational sociology is “normal accidents theory.” Studying Three Mile Island, Charles Perrow created a 2×2 grid before b-schools everywhere made this passé.

Charles Perrow’s 2-by-2 described in text, with examples.

One axis represented the complexity of interactions in a system; the other axis reflected the “coupling” of a system. A loosely coupled system has few dependencies, but a tightly coupled system has components that are highly dependent on others. Perrow argued that “normal accidents” were nearly inevitable in a complex, tightly coupled system. To resist such an outcome, systems designers needed to have backups and redundancy, safety checks and maintenance. In the language of computers, resilience requires having available “buffer” to manage any overflow.
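Since I just reached for a computing metaphor, here’s a tiny toy sketch of what I mean by buffer (my analogy, not Perrow’s own example): a loosely coupled stage puts incoming work in a bounded queue and sheds the excess, so a spike degrades service gracefully instead of cascading downstream.

```python
# Toy sketch of the buffer idea (my computing analogy, not Perrow's own example):
# a loosely coupled stage queues incoming work up to a fixed capacity and sheds
# load when the queue is full, so a spike degrades service instead of cascading.
from collections import deque

class BufferedStage:
    def __init__(self, capacity: int):
        self.queue = deque()
        self.capacity = capacity
        self.shed = 0  # work turned away rather than crashing downstream

    def submit(self, item) -> None:
        if len(self.queue) >= self.capacity:
            self.shed += 1           # buffer exhausted: shed load gracefully
        else:
            self.queue.append(item)

    def drain(self, n: int) -> None:
        for _ in range(min(n, len(self.queue))):
            self.queue.popleft()     # downstream works through the backlog at its own pace

stage = BufferedStage(capacity=100)
for item in range(250):              # a spike far bigger than the buffer
    stage.submit(item)
stage.drain(50)
print(len(stage.queue), stage.shed)  # 50 still queued, 150 shed, nothing fell over
```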

Having dozens of engineers working around the clock to respond to crises can temporarily prevent failure. But those engineers will get tired, mistakes will happen, and maintenance will get kicked down the road. Teams need buffer as much as systems do.

I’m concerned about the state of the team at Twitter, not just because so many people were laid off. If my hunch is right, many of the engineers who are keeping Twitter going fall into four groups. There are immigrants on H1Bs who are effectively indentured servants, many of whom would leave if they could, but the industry is falling apart which makes departures unlikely. There are also apolitical engineers who need a job and there are few jobs to be found in the industry right now. Neither of these groups will want to drive themselves to the bone in the long term. Then there are Musk fanboys who want to ride this rollercoaster for whatever personal motivation. And there are goons on loan from other public companies that Musk owns. (Side note: how how how is it legal for Musk to use employees from public companies for his private project!?!? Is this something that the Delaware courts are going to influence?)

Fail Whale, an internet icon

In the early days of Twitter, moments of failure were celebrated with a Fail Whale, the iconic image that Twitter posted when something went terribly awry in the system, requiring it to be shut down and, effectively, rebooted. It’s been a long time since we saw the Fail Whale because there was a strong infrastructure team who worked to bake resilience into the system. In other words, Twitter grew up.

How long can the resilience of the system allow it to keep functioning? It could be quite a while. But I also can’t help but think of a video I saw years ago about what would happen to New York City if the humans suddenly disappeared overnight. First the pipes burst and the rats invaded. But without humans leaving behind trash, the rats eventually died. The critters that remained? The cockroaches of course.

3. Failure is entangled with perception.

If you searched for “miserable failure” (or even just “failure”) on September 29, 2006, the first result was the official George W. Bush biography. This act of “Google bombing” made the internet lol. But it also hinted at a broader dynamic related to failure. There are failures that everyone can agree are failures (e.g. the explosion of the Challenger), but most failures are a matter of perception.

Politicians, policies, companies, and products are often deemed a “failure” rhetorically by those who oppose them, regardless of any empirical measure one might use. George W. Bush was deemed a failure by those who were opposed to his “War on Terrorism.” Declaring something a failure is a way to delegitimize it. And when something is delegitimized, it can become a failure.

Glasses that turn colorful tulips into black-and-white. A commentary on perception.
Photo 182315403 © mariavonotna

I often think back to MySpace’s downfall. In 2007, I penned a controversial blog post noting a division that was forming as teenagers self-segregated based on race and class in the US, splitting themselves between Facebook and MySpace. A few years later, I noted the role of the news media in this division, highlighting how media coverage about MySpace as scary, dangerous, and full of pedophiles (regardless of empirical evidence) helped make this division possible. The news media played a role in delegitimizing MySpace (aided and abetted by a team at Facebook, which was directly benefiting from this delegitimization work).

Perception (including racism and classism) has shaped the social media landscape since the beginning.

A lot has changed about our news media ecosystem since 2007. In the United States, it’s hard to overstate how the media is entangled with contemporary partisan politics and ideology. This means that information tends not to flow across partisan divides in coherent ways that enable debate. In general, when journalists/advocates/regular people on the left declare conservative politicians/policies to be failures, this has little impact on the right because it is actively ignored by the media outlets consumed by those on the right. But interestingly, when journalists/advocates/regular people on the right declare progressive politicians/policies to be failures, both mainstream media and the left obsessively amplify falsehoods and offensive content in an attempt to critique and counteract them. (Has anyone on the left managed to avoid hearing about the latest round of celebrity anti-Semitism?)

I’m especially fascinated by how the things that are widely deemed failures are deemed failures for different reasons across the political spectrum. Consider the withdrawal from Afghanistan. The right did a fantastic job of rhetorically spinning this as a Biden failure, while the left criticized aspects of the mission. This shared perception of failure landed in the collective public consciousness; there was no need to debate why individual groups saw it as failure. Of course, this also meant that there was no shared understanding of what led to that point, no discussion of what should’ve been done other than that it should’ve been done better. Perceptions of failure don’t always lead to shared ideas of how to learn from these lessons.

The partisan and geopolitical dimensions of perception related to Twitter are gobsmacking. Twitter has long struggled to curb hate, racism, anti-Semitism, transphobia, and harassment. For a long time, those on the right have labeled these efforts censorship. Under the false flag of freedom of speech, the new Twitter has eradicated most safeguards, welcoming in a new era of amplified horrors, with the news media happily covering this spectacle. (This is what led Joan Donovan and me to talk about the importance of strategic silence.)

Musk appears to be betting that the spectacle is worth it. He’s probably correct in thinking that large swaths of the world will not deem his leadership a failure either because they are ideologically aligned with him or they simply don’t care and aren’t seeing any changes to their corner of the Twitterverse.

He also appears to believe that the advertising community will eventually relent because they always seem to do so when an audience is lingering around. And with a self-fashioned Gladiator torturing his enemies for sport in front of a live audience, there are lots of dollars on the table. Musk appears convinced that capitalistic interests will win out.

So the big question in my mind is: how effective will the perception that Twitter is failing be in the long run, given how it is not jumping across existing ideological divisions? Perception of failure can bring about failure, but it doesn’t always. That’s the story of many brands who resist public attacks. Perception of failure can also just fade into the background, reifying existing divisions.

Of course, a company needs money and the only revenue stream Twitter has stems from advertising. This is one of the reasons that activism around the advertisers matters. If advocates can convince advertisers to hold out, that will starve a precarious system. That is a tangible way to leverage perception of failure. Same can be said if advocates manage to convince Apple or Google to de-list. Or if perception can be leveraged into court fights, Congressional battles, or broader policy sanctions. But right now, it seems as though perception has gotten caught in the left/right cultural war that is unfolding in the United States.

4. Failure is an end state.

There are many ways in which the Twitter story could end, but it’s important to remember that most companies do eventually end (or become unrecognizable after 100+ years). The internet is littered with failed companies. And even though companies like Yahoo! still have a website, they are in a “permanently failing” status. Most companies fail when they run out of money. And the financials around Twitter are absurd. As a company, it has persisted almost entirely on a single profit stream: advertising. That business strategy requires eyeballs. As we’ve already witnessed, a subscription plan for salvation is a joke.

The debt financing around Twitter is gob-smacking. I cannot for the life of me understand what the creditors were thinking, but the game of finance is a next level sport where destroying people, companies, and products to achieve victory is widely tolerated. Historical trends suggest that the losers in this chaos will not be Musk or the banks, but the public.

For an anchor point, consider the collapse of local news journalism. The myth that this was caused by Craigslist or Google drives me bonkers. Throughout the 80s and 90s, private equity firms and hedge funds gobbled up local news enterprises to extract their real estate. They didn’t give a shit about journalism; they just wanted prime real estate that they could develop. And news organizations had it in the form of buildings in the middle of town. So financiers squeezed the news orgs until there was no money to be squeezed and then they hung them out to dry. There was no configuration in which local news was going to survive, no magical upwards trajectory of revenue based on advertising alone. If it weren’t for Craigslist and Google, the financiers would’ve squeezed these enterprises for a few more years, but the end state was always failure. Failure was the profit strategy for the financiers. (It still boggles my mind how many people believe that the loss of news journalism is because of internet advertising. I have to give financiers credit for their tremendous skill at shifting the blame.)

Photo 55254243 © Romolo Tavani

I highly doubt that Twitter is going to be a 100-year company. For better or worse, I think failure is the end state for Twitter. The question is not if, but when, how, and who will be hurt in the process.

Right now, what worries me are the people getting hurt. I’m sickened to watch “journalists” aid and abet efforts to publicly shame former workers (especially junior employees) in a sadistic game of “accountability” that truly perverts the concept. I’m terrified for the activists and vulnerable people around the world whose content exists in Twitter’s databases, whose private tweets and DMs can be used against them if they land in the wrong hands (either by direct action or hacked activity). I’m disgusted to think that this data will almost certainly be auctioned off.

Frankly, there’s a part of me that keeps wondering if there’s a way to end this circus faster to prevent even greater harms. (Dear Delaware courts, any advice?)

No one who creates a product wants to envision failure as an inevitable end state. Then again, humans aren’t so good at remembering that death is an inevitable end state either. But when someone doesn’t estate plan, their dependents are left with a mess. Too many of us have watched the devastating effects of dementia and, still, few of us plan for all that can go wrong when our minds fall apart and we lash out at the ones we love. Few companies die a graceful death either. And sadly, that’s what I expect we’re about to see. A manic, demented creature hurting everyone who loved it on its way out the door.

Closing Thoughts

I’m not omniscient. I don’t know where this story ends. But after spending the last few years obsessing over what constitutes failure, I can’t help but watch this situation with a rock in my stomach.

Failure isn’t a state, but a process. It can be a generative process. After all, some plants only grow after a forest fire. (And yes, yes, tech is currently obsessed with “fail fast.” But frankly, that’s more about a status game than actually learning.)

Failure should not always be the end goal. There’s much to be said about the journey, about living a worthy life, about growing and learning and being whole. Yet, what keeps institutions, systems, companies, and products whole stems from how they are configured within a network of people, practices, and perception. Radical shifts in norms, values, and commitments can rearrange how these networks are configured. This is why transitions are hard and require a well-thought through strategy to prevent failure, especially if the goal is to be whole ethically.

Watching this situation unfold, a little voice keeps nagging in my head. How should our interpretation of this situation shift if we come to believe that failure is the desired end goal? There’s a big difference between a natural forest fire and one that stems from the toxic mixture of arson and climate change.

Dead bird

Differential Perspectives

This update is to let you know about a new essay that’s now online in in-press form: “Differential Perspectives: Epistemic Disconnects Surrounding the US Census Bureau’s Use of Differential Privacy.” Click here to read the full essay.

When the U.S. Census Bureau announced its intention to modernize its disclosure avoidance procedures for the 2020 Census, it sparked a controversy that is still underway. The move to differential privacy introduced technical and procedural uncertainties, leaving stakeholders unable to evaluate the quality of the data. More importantly, this transformation exposed the statistical illusions and limitations of census data, weakening stakeholders’ trust in the data and in the Census Bureau itself.
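For readers who have never seen differential privacy up close, here’s a deliberately minimal sketch of its simplest mechanism (Laplace noise added to a count), just to show what “noise injection” means. The Census Bureau’s actual TopDown Algorithm is far more elaborate; this is not it, only an illustration of why smaller privacy budgets mean noisier statistics.

```python
# Minimal illustration of differential privacy's core move: release a count plus
# Laplace noise scaled to sensitivity/epsilon. This is NOT the Census Bureau's
# TopDown Algorithm; it only shows why smaller privacy budgets mean noisier data.
import numpy as np

def noisy_count(true_count: int, epsilon: float, rng=None) -> float:
    """A counting query has sensitivity 1 (one person changes it by at most 1),
    so Laplace noise with scale 1/epsilon satisfies epsilon-differential privacy."""
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# The same (made-up) block population released under different privacy budgets.
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps}: {noisy_count(47, eps):.1f}")
```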

Jayshree Sarathy and I have been trying to make sense of the epistemic currents of this controversy. In other words, how do divergent ways of sense-making shape people’s understanding of census data – and what does that tell us about how people deal with census data controversies?

We wrote an essay for an upcoming special issue of Harvard Data Science Review that will focus on differential privacy and the 2020 Census. While the special issue is not yet out, we were given permission to post our in-press essay online. And so I thought I’d share it here for those of you who relish geeky writings about census, privacy, politics, and controversies. This paper draws heavily on Science and Technology Studies (STS) theories and is based on ethnographic fieldwork. In it, we analyze the current controversy over differential privacy as a battle over uncertainty, trust, and legitimacy of the Census. We argue that rebuilding trust will require more than technical repairs or improved communication; it will require reconstructing what we identify as a ‘statistical imaginary.’ Check out our full argument here.

For those who prefer the tl;dr video version, I sketched out some of these ideas at the Microsoft Research Summit in the fall.

We are still continuing to work through these ideas so by all means, feel free to share feedback or critiques; we relish them.

Crisis Text Line, from my perspective

Like everyone who cares about Crisis Text Line and the people we serve, I have spent the last few days reflecting on recent critiques about the organization’s practices. Having spent my career thinking about and grappling with tech ethics and privacy issues, I knew that – had I not been privy to the details and context that I know – I would be outraged by what folks heard this weekend. I would be doing what many of my friends and colleagues are doing, voicing anger and disgust. But as a founding board member of Crisis Text Line, who served as board chair from June 2020 until the beginning of January 2021, I also have additional information that shaped how I thought about these matters and informed my actions and votes over the last eight years. 

As a director, I am currently working with others on the board and in the organization to chart a path forward. As was just announced, we have concluded that we were wrong to share texter data with Loris.ai and have ended our data-sharing agreement, effective immediately. We had not shared data since we changed leadership; the board had chosen to prioritize other organizational changes to support our staff, but this call-to-action was heard loud and clear and shifted our priorities. But that doesn’t mean that the broader questions being raised are resolved. 

Texters come to us in their darkest moments. What it means to govern the traces they leave behind looks different than what it means to govern other types of data. We are always asking ourselves when, how, and should we leverage individual conversations borne out of crisis to better help that individual, our counselors, and others who are suffering. These are challenging ethical questions with no easy answer. 

What follows is how I personally thought through, balanced, and made decisions related to the trade-offs around data that we face every day at Crisis Text Line. This has been a journey for me and everyone else involved in this organization, precisely because we care so deeply. I owe it to the people we serve, the workers of Crisis Text Line, and the broader community who are challenging me to come forward to own my decisions and role in this conversation. This is my attempt to share both the role that I played and the framework that shaped my thinking. Since my peers are asking for this to be a case study in tech ethics, I am going into significant detail. For those not seeking such detail, I apologize for the length of this. 

Most of the current conversation is focused on the ethics of private-sector access to messages from texters in crisis. These are important issues that I will address, but I want to walk through how earlier decisions influenced that decision. I also want to share how the ethical struggles we face are not as simple as a binary around private-sector access. There are ethical questions all the way down.

What follows here is, I want to emphasize, my personal perspective, not the perspective of the organization or the board. As a director of Crisis Text Line, I have spent the last 8 years trying to put what I know about tech ethics into practice. I am grateful that those who care about tech ethics are passionate about us doing right by our texters. We have made changes based on what we have heard from folks this weekend. But those changes are not enough. We need to keep developing and honing guiding principles to govern our work. My goal has been and continues to be ensuring ethical practices while navigating the challenges of governing both an organization and data. Putting theory into practice continues to be more challenging than I ever imagined. Given what has unfolded, I would also love advice from those who care as I do about both mental health and tech ethics.

First: Why data?

Even before we launched the CTL service, I knew that data would play a significant role in the future of the organization. My experience with tech and youth culture was why I was asked to join the board. Delivering a service that involved asynchronous interactions via text would invariably result in the storage of data. Storing data would be needed to deliver the service; the entire system was necessarily designed to enable handoffs between counselors and to allow texters to pick up conversations hours (or days) later.

Storing data immediately prompted three key questions:

  1. How long would we store the data that users provided to us?
  2. Could we create a secure system?
  3. Under what conditions would we delete data?

As a board, we realized the operational necessity of stored data, which meant an investment in the creation of a secure system and deep debate over our data retention policies. We decided that anyone should have the right to remove their data at any point, a value I strongly agreed with. The implementation of this policy relied on training all crisis counselors how to share this info with texters if they asked for it; we chose to implement the procedure by introducing a codeword that users could share to trigger a deletion of their data. (This was also documented as part of the terms of service, which texters were pointed to when they first contacted us. I know that no one in crisis reads lawyer-speak to learn this, which is why I was more interested in ensuring that our counselors knew this.)

Conducting the service would require storing data, but addressing the needs of those in crises required grappling with how data would be used more generally. Some examples of how data are used in the service: 

  • When our counselors want to offer recommendations for external services, they pull on outside data to bring into the conversation; this involves using geographic information texters provide to us.
  • Our supervisors review conversations both to support counselors real-time and give feedback later with an eye towards always improving the quality of conversations.

Our initial training program was designed based on what we could learn from other services, academic literature, and guidance from those who had been trained in social work and psychology. Early on, we began to wonder how the conversations that took place on our platform could and should inform the training itself. We knew that counselors gained knowledge through experience, and that they regularly mentored new counselors on the platform. But could we construct our training so that all counselors got to learn from the knowledge developed by those who came before them? 

This would mean using texter data for a purpose that went beyond the care and support of that individual. Yes, the Terms of Service allowed this, but this is not just a legal question; it’s an ethical question. Given the trade-offs, I made a judgment call early on that not only was using texter data to strengthen training of counselors without their explicit consent ethical, but that to not do this would be unethical. Our mission is clear: help people in crisis. To do this, we need to help our counselors better serve texters. We needed to help counselors learn and grow and develop skills with which they can help others. I supported the decision to use our data in this way.

A next critical turning point concerned scale. My mantra at Crisis Text Line has always been to focus on responsible scaling, not just scaling for scaling’s sake. But we provide a service that requires a delicate balance of available counselors to meet the needs of incoming texters. This meant that we had to think about how to predict the need and how to incentivize counselors to help out at spike moments. And still, there were often spikes where the need exceeded the availability of counselors. This led us to think about our ethical responsibilities in these moments. And this led to another use of data: 

  • When there are spikes in the service without enough counselors, we triage incoming requests to ensure that those most at physical risk get served fastest; this requires analyzing the incoming texts even before a conversation starts.

This may not seem like a huge deal, but it’s an ethical decision that I’ve struggled with for years. How do you know who is in most need from just intake messages? Yes, there are patterns, but we’ve also learned over the years that these are not always predictable. More harrowingly, we know retrospectively that these signals can be biased. Needless to say, I would simply prefer for us to serve everyone, immediately. But when that’s not possible, what’s our moral and ethical responsibility? Responding to incoming requests in order might meet some people’s definition of “fair,” but is that ethical? Especially when we know that when people are in the throes of a suicide attempt, time is of the essence? I came to the conclusion that we have an ethical responsibility to use our data to work to constantly improve the triage algorithm, to do the best we can to identify those for whom immediate responses can save a life. This means using people’s data without their direct consent, to leverage one person’s data to help another. 
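To make that trade-off concrete, here’s a purely hypothetical sketch, not Crisis Text Line’s actual system, with invented risk scores: it only contrasts serving texters in arrival order with serving them via an estimated-risk priority queue, which is exactly where the ethical difficulty of imperfect, potentially biased signals lives.

```python
# Purely hypothetical sketch, not Crisis Text Line's system; the risk scores are
# invented placeholders. It only contrasts first-come-first-served with triage by
# estimated risk, which is the ethical trade-off described above.
import heapq

incoming = [
    # (arrival_order, estimated_risk)  -- estimates are illustrative only
    (1, 0.20),
    (2, 0.95),
    (3, 0.40),
]

fifo_order = [arrival for arrival, _ in incoming]

# Triage: highest estimated risk first (negated so Python's min-heap pops it first).
heap = [(-risk, arrival) for arrival, risk in incoming]
heapq.heapify(heap)
triage_order = [heapq.heappop(heap)[1] for _ in range(len(heap))]

print("FIFO:  ", fifo_order)     # [1, 2, 3]
print("Triage:", triage_order)   # [2, 3, 1] -- the highest-risk texter is seen first
```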

Responsible scaling has introduced a series of questions over the years. One that I’ve turned over in my head for years but that we’ve never acted on: Should we attempt to match need to expertise? In other words, should our counselors specialize? To date, we haven’t, but it’s something I think a lot about. But there are also questions that have been raised that we have intentionally abandoned. For example, there was once a board meeting where the question of automation came up. We already use some automation tools in training and for intake; should some conversations be automated? This was one of those board meetings where I put my foot down. Absolutely not. Data could be used to give our counselors superpowers, but centering this service on humans was essential. In this context, my mantra has always been augmentation not automation. The board and organization embraced this mantra, and I’m glad for it.

Next: Data for Research

From early on, researchers came to Crisis Text Line asking for access to data. This prompted even more reflection. We had significant data and we were seeing trends that had significant implications for far more than our service. We started reporting out key trends, highlighting patterns that we then published on our website. I supported this effort because others in the ecosystem told us it helped them to learn from the patterns that we were seeing. This then led to the more complicated issue of whether or not to allow external researchers to study our data with an eye towards scholarship. 

I’m a scholar. I know how important research is and can be. I knew how little data exists in the mental health space, how much we had tried to learn from others, how beneficial knowledge could be to others working in the mental health ecosystem. I also knew that people who came to us in crisis were not consenting to be studied. Yes, there was a terms of service that could contractually permit such use, but I knew darn straight that no one would read it, and advised everyone involved to proceed as such. 

I have also tracked the use of corporate data for research for decades, speaking up against some of Facebook’s experiments. Academic researchers often want to advance knowledge by leveraging corporate data, but they do not necessarily grapple with the consequences of using data beyond IRB requirements. There have been heated debates in my field about whether or not it is ethical to use corporate trace data without the consent of users to advance scientific knowledge. I have had a range of mixed feelings about this, but have generally come out in opposition to private trace data being used for research. 

So when faced with a similar question at Crisis Text Line, I had to do a lot of soul searching. Our mission is to help people. Our texters come to us in their darkest hours. Our data was opening up internal questions right and left about how to best support them. We don’t have the internal resources to analyze the data to answer all of our questions, to improve our knowledge base in ways that can help texters. I knew that having additional help from researchers could help us learn in ways that would improve training of counselors and help people down the line. I also knew that what we were learning internally might be useful to other service providers in the mental health space and I felt queasy that we were not sharing what we had learned to help others.

Our organization does not exist for researchers to research. Our texters do not come to us to be research subjects. But our texters do come to us for help. And we do help them by leveraging what we learn helping others, including researchers. Texters may not come to us to pay it forward for the next person in need, but in effect, that’s what their engagement with us was enabling. I see that as an ethical use of data, one predicated on helping counselors and texters through experience mediated by data. The question in my mind then was: what is the relationship of research to this equation?

I elected to be the board member overseeing the research efforts. We have explored – and continue to explore – the right way to engage researchers in our work. We know that they are seeking data for their own interests, but our interest is clear: can their learnings benefit our texters and counselors, in addition to other service providers and the public health and mental health ecosystem? To this end, we have always vetted research proposals and focused on research that could help our mission, not just satisfy researcher curiosity. 

Needless to say, privacy was a major concern from day one. Privacy was a concern even before we talked about research; we built privacy processes even for internal analyses of data. But when research is involved, privacy concerns are next-level. Lots of folks have accused us of being naive about reidentification over the last few days, which I must admit has been painful to hear given how much time I spend thinking about and dealing with reidentification in other contexts. I know that reidentification is possible and that was at the heart and soul of our protocols. Researchers have constrained access to scrubbed data under contract precisely because there’s a possibility that, even with our scrubbing procedures, reidentification might be possible. But, we limited data to minimize reidentification risks and added contractual procedures to explicitly prevent reidentification.

When designing these protocols, my goal was to create the conditions where we could learn from people in crisis to help others in crisis without ever, in any way, adding to someone’s crisis. And this means privacy-first.

More generally though, the research question opened up a broader set of issues in my mind. Our service can directly help individuals. What can and should we do to advance mental health more generally? What can and should we be providing to the field? What is our responsibility to society outside our organization?

Next: Training as a Service

Our system is based on volunteers whom we train to give counsel. As is true in any volunteer-heavy context, volunteers come and go. Training is resource intensive, but essential for the service. Repeatedly, volunteers approached us as a board to tell us about the secondary benefits of the training. Yes, the training was designed to empower a counselor to communicate with a person who was in crisis, but these same skills were beneficial at work and in personal relationships. Our counselors kept telling us that crisis management training has value in the world outside our doors. This prompted us to reflect on the potential benefit of training far more people to manage crises, even if they did not want to volunteer for our service.

The founder of Crisis Text Line saw an opportunity and came to the board. We did not have the resources to simply train anyone who was interested. But HR teams at companies had both the need for, and the resources for, larger training systems. The founder proposed building a service that could provide us with a needed revenue stream. I don’t remember every one of the options we discussed, but I do know that we talked about building a separate unit in the organization to conduct training for a fee. This raised the worry that this would be a distraction to our core focus. We did all see training as mission-aligned, but we needed to focus on the core service CTL was providing. 

We were also struggling, as all non-profits do, with how to be sustainable. Non-profit fundraising is excruciating and fraught. We were grateful for all of the philanthropic organizations who made starting the organization possible, but sustaining philanthropic funding is challenging and has significant burdens. Program officers always want grantees to find other sources of money. There are traditional sources: foundations, individual donors, corporate social responsibility donations. In some contexts, there’s government funding, though at that time, government was slashing funding not increasing it. Funding in the mental health space is always scarce. And yet, as a board, we always had a fiduciary responsibility to think about sustainability.  

Many of the options in front of us concerned me deeply. We could pursue money by billing insurance companies, but this had a lot of obvious downsides to it. Many of the people we serve do not have access to insurance. Moreover, what insurers really want is our data, which we were strongly against. They weren’t alone – many groups wanted to buy our data outright. We were strongly against those opportunities as well. No selling of data, period. 

Big tech companies and other players were increasingly relying on CTL as their first response for people in crisis, without committing commensurate (or sometimes, any) resources to help offset that burden. This was especially frustrating because they had the resources to support those in crisis but had chosen not to, preferring to outsource the work but not support it. They believed that traffic was a good enough gift.

This was why we, as a board, were reflecting on whether or not we could build a revenue stream out of training people based on what we learned from training counselors. In the end, we opted not to run such an effort from within Crisis Text Line, to reduce the likelihood of distracting from our mission. Instead, we gave the founder of Crisis Text Line permission to start a new organization, with us retaining a significant share in the company; we also retained the right to a board seat. This new entity was structured as a for-profit company designed to provide a service to businesses, leveraging what we had learned helping people. This company is called Loris.ai.

Loris.ai planned on learning from us to build training tools for people who were not going to serve as volunteers for our service. Yet, the company was a separate entity and the board rejected any plan that involved full access to our systems. Instead, we opted to create a data-sharing agreement that paralleled the agreement we had created with researchers: controlled access to scrubbed data solely to build models for training that would improve mental health more broadly. We knew that it did not make sense for them to directly import our training modules; they would be training people in a different context. Yet, both they and we believed that there were lessons to be learned from our experiences, both qualitatively and quantitatively.

I struggled with this decision at the time and ever since. I could see both benefits and risks in sharing our data with another organization, regardless of how mission-aligned we were. We debated this in the boardroom; I pushed back around certain proposals. In the end, some of the board members at the time saw this decision through the lens of a potential financial risk reduction. If the for-profit company did well, we could receive dividends or sell our stake in order to fund the crisis work we were doing. I voted in favor of creating Loris.ai for a different reason.  If another entity could train more people to develop the skills our crisis counselors were developing, perhaps the need for a crisis line would be reduced. After all, I didn’t want our service to be needed; the fact that it is stems from a system that is deeply flawed. If we could build tools that combat the cycles of pain and suffering, we could pay forward what we were learning from those we served. I wanted to help others develop and leverage empathy. 

This decision weighed heavily on me, but I did vote in favor of it. Knowing what I know now, I would not have. But hindsight is always clearer.

Existential Crisis

In June of 2020, our employees came to us with grave concerns about the state of the organization. This triggered many changes to the organization and a reckoning as a board. I stepped in as board chair. As we focused on addressing the issues raised by employees, I felt as though we needed to prioritize what they were telling us. My priority was to listen to our staff, center the needs of our workers and texters, learn from them, and focus on our team, core business, and organizational processes. We also needed to hire a permanent CEO. The concerns we received were varied and diverse, requiring us to prioritize what to focus on when. 

Data practices were not among the dominant concerns, but they were among the issues raised. The most significant data concern raised to us was whether our data practices were as strong as the board believed them to be. This prompted three separate, interlocking audits. We had already conducted a privacy and security audit, but we revisited it in greater depth. We also hired two additional independent teams to conduct audits around 1) data governance and 2) ethical use of and bias in data. I was the board member overseeing this work, pushing each of these efforts to probe more deeply, engaging a range of stakeholders along the way (including counselors, staff, partners, and domain experts).

I quickly learned that as much as scholars talk about the need to do audits of ethics and bias, there is not a good roadmap out there for doing this work, especially in the context of a fairly large-scale organization. As someone who cares deeply about this, I was glad to be pushing the edges and interrogating every process, but I also wanted us to have guidance on how to strengthen our efforts even further. There is always room to improve, and there isn’t yet a community of practice for people doing this in real time while people are depending on an organization’s work. Still, we got great feedback from the audits and set about prioritizing the changes that needed to be implemented.

Aside from the data audits, most of our changes over the last 18 months have been organizational and infrastructural, focused on strengthening our team, processes, and tools. As the board chair, I deliberately chose not to prioritize changes to our contractual relationship with Loris.ai, choosing instead to put the human concerns raised by our staff first. We focused our energies internally and on our core mission. When Loris asked the Crisis Text Line founder to leave the board, we chose not to offer up a replacement. Our most proactive stance over the last 18 months was to freeze the agreement with Loris, with an explicit commitment to reconsider the relationship in 2022 once a new CEO was in place. As a result of these decisions, we have not shared any data since the change in leadership.

Governance 

The practice of non-profit governance requires collectively grappling with trade-off after trade-off. I have been a volunteer director on the board of Crisis Text Line for 8 years, both because I believe in the mission and because I have been grateful to govern alongside amazing directors from whom I constantly learn. This doesn’t mean it’s been easy, and it definitely doesn’t mean we always agree. But we do push each other, and I learn a lot in the process. We strive to govern ethically, but that doesn’t mean others will see our decisions as such. We also make decisions that do not pan out as expected, requiring us to own our mistakes even as we change course. Sometimes we can be fully transparent about our decisions; in other situations – especially when personnel matters are involved – we simply can’t. That is the hardest part of governance, both for our people and for me personally.

I want to own my decisions as a director of Crisis Text Line. I voted in favor of our internal uses of data, our collaborations with researchers, and our decision to contribute to the founding of Loris.ai. I did so based on a calculation of ethical trade-offs informed by my research and experiences. I want to share some aspects of the rubric in my mind: 

1. Consent. My understanding of consent is more complex now than the simpler view I held before I began volunteering at CTL. I believe in the ideal of informed consent, which has shaped my research. (A ToS is not consent.) But I have also learned from our clinical team about the limits of consent and about moments when consent undermines ethical action. I have also come to believe that there are times when other ethical values must be prioritized over the ideal of consent. For example, I support Crisis Text Line’s decision to activate Public Safety Answering Points (PSAPs) when a texter presents an imminent life-or-death risk to themselves or to someone else, even when they have not consented to such an activation. Many of our staff and volunteers are mandatory reporters who have a legal as well as ethical obligation to report. At the same time, I also support our ongoing work to reduce reliance on PSAPs and our policy efforts to have PSAPs better center mental health.

2. Present and future. Our mission is to help individuals who come to us in need and to improve the state of mental health for people more generally. I would like to create a world in which we are not needed. To that end, I am always thinking about what benefits individuals and the collective. I’m also thinking about future individuals. What can we learn now that will help the next person who comes to us? And what can we do now so that fewer people need us? I believe in a moral imperative of paying it forward and I approach data ethics with this in mind. There is undeniably a tension between the obligation to the individual and the obligation to the collective, one that I regularly reflect on.

3. The field matters. We are a non-profit and part of a broader ecosystem of mental health services. We cannot serve everyone; even for those whom we do serve in crisis, we cannot be their primary mental health provider. We want there to be an entire ecosystem of support for people in crisis, in which we play just one part. We have a responsibility to the individual in the moment of crisis, and we have a responsibility to learn from and strengthen the field in order to help individuals downstream. To this end, I think we have an ethical responsibility to give back to the ecosystem, not just to the individual in the moment. But we need to balance this imperative with respect for individuals during their darkest moments.

4. Improve over time. Much of our data begins as conversations, involving data from both texters and counselors. As you might imagine, when our counselors’ attempts to help someone fall short, it weighs deeply on our entire staff. Both counselors and texters benefit when counselors learn from reviewing their own conversations, from reviewing what worked or didn’t work in others’ conversations, and from lessons learned being fed back into training. My eye is always on what will improve those conversations. (This is why quality over quantity is an obsession at the board level.)

The responsibility of CTL is a heavy one, in ways that may not be obvious to those who haven’t worked in this field or seen the sometimes-counterintuitive challenges of serving people in crisis. I use the needs and priorities of our texters and team as my first and most important filter when judging what decisions to make. I see helping counselors and staff succeed as key to serving people in need. This sometimes requires thinking about how texter data can help strengthen our counselors; this sometimes requires asking whether conducting research will help them grow; and this sometimes requires asking what is needed to strengthen the broader ecosystem.

When it comes to thinking about texters, I’m focused on the quality of the conversation and the safety of the texter. When it comes to safety, I’m often confronted with non-knowledge, which is harrowing. (Did someone who was attempting suicide survive the night? Emergency responders don’t necessarily tell us, so we rely on hearing back from the texter, but what’s the healthiest way to follow up with a texter?) I still don’t know the best way to measure quality; I have scoured the literature and sought advice from many to guide my thinking, but I am still struggling and remain in conversation with others to try to crack this nut. I’m also thankful that there’s an entire team at Crisis Text Line dedicated to thinking about, evaluating, and improving conversation quality.

I regularly hear from both texters and counselors, whose experiences shape my thinking, but I also know that these are but a few perspectives. I read the feedback from our surveys, trying to grapple with the limitations and biases of those responses. There is no universal texter or counselor experience, which means I have to constantly remind myself of the diversity of perspectives among them. I cannot govern by focusing on the average; I must strive to think holistically about that range of viewpoints. When it comes to governance, I am always making trade-offs – often with partial information – which is hard. I also know that I sometimes get it wrong, and I try to learn from those mistakes.

These are some of the factors that go through my head when I’m thinking about our data practices. And of course, I’m also thinking about our legal and fiduciary responsibilities. But the decisions I make regarding our data start with thinking through the ethics; only then do I factor in financial or legal considerations.

As I listen and learn from how people are responding to this conversation and from the decisions that I contributed to, it is clear to me that we have not done enough to share what we are doing with data and why. It is also clear to me that I made mistakes and that change is necessary. I know that, after the challenges of the last year, I have erred on the side of doing the work inside the organization rather than grappling with the questions raised by our arrangement with Loris.ai.

In order to continue serving Crisis Text Line, I need to figure out what we – and I – can do better. I am fascinated by my peers’ calls to make this a case study in tech ethics. I think that’s quite interesting, and I hope that detailing my thinking here can contribute to that effort. I hope to learn from whatever case study emerges.

To that end, to my peers and colleagues, I also have some honest questions for all of you who are frustrated, angry, disappointed, or simply unsure about us: 

  • What is the best way to balance the implicit consent of users in crisis against other potentially beneficial uses of data to which they likely did not intentionally consent but which can help them or others? 
  • Given that people come to us in their darkest moments, can/should we enable research on the traces that they produce? If so, how should this be structured? 
  • Is there any structure in which lessons learned from a non-profit service provider can be transferred to a for-profit entity? Also, how might this work with partner organizations, foundations, government agencies, sponsors, or subsidiaries, and are the answers different?
  • Given the data we have, how do we best serve our responsibility to others in the mental health ecosystem?
  • What can better community engagement and participatory decision-making in this context look like? How do we engage people to think holistically about the risks to life that we are balancing and that are shaping our decisions? (And how do we avoid offloading our governance responsibilities onto performative ethics, as we’ve seen play out in other contexts?)

There are also countless other questions that I struggle with that go beyond the data issues, but also shape them. For example, as always, I will continue to push up against the persistent and endemic question that plagues all non-profits: How can we build a financially sustainable service organization that is able to scale to meet people’s needs? I also struggle every day with broader dynamics in which tech, data, ethics, and mental health are entangled. For example, how do we collectively respond to mental health crises that are amplified by decisions made by for-profit entities? What is our collective responsibility in a society where mental health access is so painfully limited? 

These questions aren’t just important for a case study. These are questions I struggle with every day in practice and I would be grateful to learn from others’ journeys. I know I will make mistakes, but I hope that I can learn from them and, with your guidance, make fewer.

I’m grateful to everyone who cares enough about the texters we serve to engage in this conversation. I’m particularly grateful to be in a community that will call in anyone whom they feel isn’t exercising proper care with people’s data and privacy. And most of all, I am thankful for the counsel, guidance, and clarity of our workers at Crisis Text Line, who do the hard work of caring for texters every day while also providing clear feedback to help drive the future of the organization. I can only hope that my decisions help them succeed at the hard work they do.

I warmly welcome any advice from all of you who’ve been watching the conversation and who care about seeing CTL succeed in its mission.