
What if failure is the plan?

I’ve been thinking a lot about failure lately. Failure comes in many forms, but I’m especially interested in situations in which people *perceive* something as failing (or about to fail) and the contestations over failure that often arise in such situations. Given this, it’s hard not to be fascinated by all that’s unfolding around Twitter. At this point in the story of Musk’s takeover, there’s a spectrum of perspectives about Twitter’s pending doom (or lack thereof). But there’s more to failure than the binary question of “will Twitter fail or won’t it?” Here are some thoughts on how I’m thinking about the failure question…

[Image: A kid covered in dirt with a face in shock. 8633780 © Andrey Kiselev]

1. Failure of social media sites tends to be slow then fast.

I spent a ridiculous amount of time in the aughts trying to understand the rise and fall of social network sites like Friendster and MySpace. I noticed something fascinating. If a central node in a network disappeared and went somewhere else (like from MySpace to Facebook), that person could pull some portion of their connections with them to a new site. However, if the accounts on the site that drew emotional intensity stopped doing so, people stopped engaging as much. Watching Friendster come undone, I started to think that the fading of emotionally sticky nodes was even more problematic than the disappearance of segments of the graph.

With MySpace, I was trying to identify the point where I thought the site was going to unravel. When I started seeing the disappearance of emotionally sticky nodes, I reached out to members of the MySpace team to share my concerns and they told me that their numbers looked fine. Active uniques were high, the amount of time people spent on the site was continuing to grow, and new accounts were being created at a rate faster than accounts were being closed. I shook my head; I didn’t think that was enough. A few months later, the site started to unravel.

[Image: A gravestone for MySpace. Flickr: Carla Lynn Hall]

On a different project, I was talking with a cis/hetero dating site that was struggling with fraud. Many of its “fake” accounts were purportedly “women,” but they were really a scam to entice people into paying for a porn site. Yet when the site started removing these profiles, it found that the site as a whole was unraveling. Men didn’t like the fake women, but those profiles enticed them to return. Moreover, attractive women saw these profiles and felt like the site was full of people more attractive than they were, so they came. When the fake women disappeared, the real women disappeared. And so did the men.

Network effects intersect with perception to drive a sense of a site’s social relevance and interpersonal significance.

I don’t have access to the Twitter social graph these days, but I’d bet my bottom dollar that it would indicate whether or not the site is on a trajectory towards collapse. We are certainly seeing entire sub-networks flock to Mastodon, but that’s not as meaningful as people might think because of the scale and complexity of the network graph. You can lose whole segments and not lose a site. However, if those departing are punching Swiss-cheese holes in the network graph, then I would worry.
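For intuition, here is a toy sketch of the difference between losing a segment and losing hubs, using a synthetic scale-free graph rather than Twitter’s actual data, and treating degree centrality as a crude stand-in for emotional stickiness; all numbers are invented for illustration.

```python
# Toy comparison: remove 200 random nodes (a "segment") versus the 200
# highest-degree hubs from a synthetic scale-free graph. Requires networkx.
import random
import networkx as nx

random.seed(42)
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)  # scale-free stand-in

def giant_component_share(graph):
    """Fraction of remaining nodes still in the largest connected component."""
    giant = max(nx.connected_components(graph), key=len)
    return len(giant) / graph.number_of_nodes()

hubs = [n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:200]]
segment = random.sample(list(G.nodes), 200)

G_minus_segment = G.copy()
G_minus_segment.remove_nodes_from(segment)
G_minus_hubs = G.copy()
G_minus_hubs.remove_nodes_from(hubs)

print("after losing a random segment:", giant_component_share(G_minus_segment))
print("after losing the sticky hubs: ", giant_component_share(G_minus_hubs))
# Both graphs lost 200 nodes, but the hub-less one loses noticeably more
# of its connectivity.
```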

The bigger question concerns those emotionally sticky nodes. What constitutes a “can’t be missed” account or post varies. What draws someone to a service like Twitter varies. For some, it is the libidinal joy of seeing friends and community, the posts that provide light-touch pleasure and joy. For others, it’s a masochistic desire to see content that raises one’s blood pressure. Still others can’t resist the drama of a train wreck.

The funny thing about Twitter’s feed algorithms is that they were designed to amplify the content that triggered the most reaction, those emotionally sticky posts. This is why boring but informative content never has a chance against that which prompts fury. But it also means that we’re all watching how our little universe of content is changing (or not). Are you still seeing the things that give you pleasure? Or just the stuff that makes you angry? Why can’t you look away from the things that give you pain? (That question isn’t a new one… it’s the question that underlies our toxic social media ecology more generally.)
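In caricature (this is not Twitter’s actual ranking code; the posts and scores are invented), engagement-weighted ranking looks something like this:

```python
# Cartoon of engagement-weighted ranking: whatever is predicted to provoke
# the most reaction rises to the top, so fury outranks useful-but-dull.
posts = [
    {"text": "city council passes budget", "predicted_reactions": 12},
    {"text": "wholesome reunion photo", "predicted_reactions": 340},
    {"text": "you won't BELIEVE what they said", "predicted_reactions": 4800},
]
feed = sorted(posts, key=lambda p: p["predicted_reactions"], reverse=True)
for post in feed:
    print(post["text"])  # outrage first, civics last
```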

I have to give Musk and gang some credit for knowing that drama brings traffic. The drama that unfolds in the World Cup is wholesome compared to the drama of watching public acts of humiliation, cruelty, and hate. We’re in a modern-day Coliseum watching a theater of suffering performed for the king under the rubric of “justice.” And just like the ancient Romans, we can’t look away.

But how long can the spectacle last? Even the Roman Empire eventually collapsed, but perhaps the theater of the absurd can persist for a while. Still, there are other factors to consider.

2. Failure can be nothing more than a normal accident that tears down the infrastructure.

Nearly everyone I talk with is surprised that the actual service of Twitter is mostly still working. What that says to me is that the engineering team was far more solid than I appreciated. Any engineering team worth its salt is going to build redundancy and resilience into the system. Exceptions that are thrown should be caught and managed. But that doesn’t mean that a system can persist indefinitely without maintenance and repair.

Think of it in terms of a house. If you walk away from your home for a while, the pipes will probably keep working fine on their own. Until a big freeze comes. And then, if no one is looking, they’ll burst, flood the house, and trigger failure after failure. The reason for doing maintenance is to minimize the likelihood of this event. And the reason to have contingencies built in is to prevent a problem from rippling across the system.

What happens when Twitter’s code needs to be tweaked to manage an iOS upgrade? Or if a library dependency goes poof? What happens when a security vulnerability isn’t patched?

One interesting concept in organizational sociology is “normal accidents theory.” Studying Three Mile Island, Charles Perrow created a 2×2 grid before b-schools everywhere made this passé.

[Image: Charles Perrow’s 2×2 grid, described in the text below, with examples.]

One axis represented the complexity of interactions in a system; the other axis reflected the “coupling” of a system. A loosely coupled system has few dependencies, but a tightly coupled system has components that are highly dependent on others. Perrow argued that “normal accidents” were nearly inevitable in a complex, tightly coupled system. To resist such an outcome, systems designers needed to have backups and redundancy, safety checks and maintenance. In the language of computers, resilience requires having available “buffer” to manage any overflow.
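To make the “buffer” point concrete, here is a minimal sketch with made-up arrival and service rates: two queues drain at the same speed, and only the slack (capacity) differs.

```python
# Minimal sketch of slack as "buffer": the same traffic spike hits two
# queues that drain at the same rate; only the capacity (slack) differs.
from collections import deque

def dropped_requests(capacity, arrivals, service_rate=5):
    queue, dropped = deque(), 0
    for batch in arrivals:                    # requests arriving this tick
        for _ in range(batch):
            if len(queue) < capacity:
                queue.append(object())
            else:
                dropped += 1                  # overflow: no buffer left
        for _ in range(min(service_rate, len(queue))):
            queue.popleft()                   # drain what the system can
    return dropped

arrivals = [3] * 50 + [40] * 5 + [3] * 50     # steady load with a brief surge
print("tightly coupled (capacity 10): ", dropped_requests(10, arrivals))
print("loosely coupled (capacity 500):", dropped_requests(500, arrivals))
```

The tightly coupled queue sheds requests the moment the surge hits; the buffered one absorbs the spike and drains it later.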

Having dozens of engineers working around the clock to respond to crises can temporarily prevent failure. But those engineers will get tired, mistakes will happen, and maintenance will get kicked down the road. Teams need buffer as much as systems do.

I’m concerned about the state of the team at Twitter, not just because so many people were laid off. If my hunch is right, many of the engineers who are keeping Twitter going fall into four groups. There are immigrants on H-1Bs who are effectively indentured servants, many of whom would leave if they could, but the industry is falling apart, which makes departures unlikely. There are also apolitical engineers who need a job, and there are few jobs to be found in the industry right now. Neither of these groups will want to drive themselves to the bone in the long term. Then there are Musk fanboys who want to ride this rollercoaster for whatever personal motivation. And there are goons on loan from other public companies that Musk owns. (Side note: how how how is it legal for Musk to use employees from public companies for his private project!?!? Is this something that the Delaware courts are going to influence?)

[Image: Fail Whale, an internet icon.]

In the early days of Twitter, moments of failure were celebrated with a Fail Whale, the iconic image that Twitter posted when something went terribly awry in the system, requiring it to be shut down and, effectively, rebooted. It’s been a long time since we saw the Fail Whale because there was a strong infrastructure team who worked to bake resilience into the system. In other words, Twitter grew up.

How long can the resilience of the system allow it to keep functioning? It could be quite a while. But I also can’t help but think of a video I saw years ago about what would happen to New York City if the humans suddenly disappeared overnight. First the pipes burst and the rats invaded. But without humans leaving behind trash, the rats eventually died. The critters that remained? The cockroaches of course.

3. Failure is entangled with perception.

If you searched for “miserable failure” (or even just “failure”) on September 29, 2006, the first result was the official George W. Bush biography. This act of “Google bombing” made the internet lol. But it also hinted at a broader dynamic related to failure. There are failures that everyone can agree are failures (e.g. the explosion of the Challenger), but most failures are a matter of perception.

Politicians, policies, companies, and products are often deemed a “failure” rhetorically by those who oppose them, regardless of any empirical measure one might use. George W. Bush was deemed a failure by those who were opposed to his “War on Terrorism.” Declaring something a failure is a way to delegitimize it. And when something is delegitimized, it can become a failure.

[Image: Glasses that turn colorful tulips black-and-white, a commentary on perception. Photo 182315403 © mariavonotna]

I often think back to MySpace’s downfall. In 2007, I penned a controversial blog post noting a division that was forming as teenagers self-segregated based on race and class in the US, splitting themselves between Facebook and MySpace. A few years later, I noted the role of the news media in this division, highlighting how media coverage about MySpace as scary, dangerous, and full of pedophiles (regardless of empirical evidence) helped make this division possible. The news media played a role in delegitimizing MySpace (aided and abetted by a team at Facebook, which was directly benefiting from this delegitimization work).

Perception (including racism and classism) has shaped the social media landscape since the beginning.

A lot has changed about our news media ecosystem since 2007. In the United States, it’s hard to overstate how the media is entangled with contemporary partisan politics and ideology. This means that information tends not to flow across partisan divides in coherent ways that enable debate. In general, when journalists/advocates/regular people on the left declare conservative politicians/policies to be failures, this has little impact on the right because it is actively ignored by the media outlets consumed by those on the right. But interestingly, when journalists/advocates/regular people on the right declare progressive politicians/policies to be failures, both mainstream media and the left obsessively amplify falsehoods and offensive content in an attempt to critique and counteract them. (Has anyone on the left managed to avoid hearing about the latest round of celebrity anti-Semitism?)

I’m especially fascinated by how the things that are widely deemed failures are deemed failures for different reasons across the political spectrum. Consider the withdrawal from Afghanistan. The right did a fantastic job of rhetorically spinning this as a Biden failure, while the left criticized aspects of the mission. This shared perception of failure landed in the collective public consciousness; there was no need to debate why individual groups saw it as failure. Of course, this also meant that there was no shared understanding of what led to that point, no discussion of what should’ve been done other than it should’ve been done better. Perceptions of failure don’t always lead to shared ideas about what lessons to learn.

The partisan and geopolitical dimensions of perception related to Twitter are gobsmacking. Twitter has long struggled to curb hate, racism, anti-Semitism, transphobia, and harassment. For a long time, those on the right have labeled these efforts censorship. Under the false flag of freedom of speech, the new Twitter has eradicated most safeguards, welcoming in a new era of amplified horrors, with the news media happily covering this spectacle. (This is what led Joan Donovan and me to talk about the importance of strategic silence.)

Musk appears to be betting that the spectacle is worth it. He’s probably correct in thinking that large swaths of the world will not deem his leadership a failure either because they are ideologically aligned with him or they simply don’t care and aren’t seeing any changes to their corner of the Twitterverse.

He also appears to believe that the advertising community will eventually relent because they always seem to do so when an audience is lingering around. And with a self-fashioned Gladiator torturing his enemies for sport in front of a live audience, there are lots of dollars on the table. Musk appears convinced that capitalistic interests will win out.

So the big question in my mind is: how effective will the perception that Twitter is failing be in the long run, given that it is not jumping across existing ideological divisions? Perception of failure can bring about failure, but it doesn’t always. That’s the story of many brands that resist public attacks. Perception of failure can also just fade into the background, reifying existing divisions.

Of course, a company needs money, and the only revenue stream Twitter has stems from advertising. This is one of the reasons that activism around the advertisers matters. If advocates can convince advertisers to hold out, that will starve a precarious system. That is a tangible way to leverage perception of failure. The same can be said if advocates manage to convince Apple or Google to de-list the app. Or if perception can be leveraged into court fights, Congressional battles, or broader policy sanctions. But right now, it seems as though perception has gotten caught in the left/right culture war that is unfolding in the United States.

4. Failure is an end state.

There are many ways in which the Twitter story could end, but it’s important to remember that most companies do eventually end (or become unrecognizable after 100+ years). The internet is littered with failed companies. And even though companies like Yahoo! still have a website, they are in a “permanently failing” status. Most companies fail when they run out of money. And the financials around Twitter are absurd. As a company, it has persisted almost entirely on a single revenue stream: advertising. That business strategy requires eyeballs. As we’ve already witnessed, a subscription plan for salvation is a joke.

The debt financing around Twitter is gobsmacking. I cannot for the life of me understand what the creditors were thinking, but the game of finance is a next-level sport where destroying people, companies, and products to achieve victory is widely tolerated. Historical trends suggest that the losers in this chaos will not be Musk or the banks, but the public.

For an anchor point, consider the collapse of local news journalism. The myth that this was caused by Craigslist or Google drives me bonkers. Throughout the 80s and 90s, private equity firms and hedge funds gobbled up local news enterprises to extract their real estate. They didn’t give a shit about journalism; they just wanted prime real estate that they could develop. And news organizations had it in the form of buildings in the middle of town. So financiers squeezed the news orgs until there was no money to be squeezed and then they hung them out to dry. There was no configuration in which local news was going to survive, no magical upwards trajectory of revenue based on advertising alone. If it weren’t for Craigslist and Google, the financiers would’ve squeezed these enterprises for a few more years, but the end state was always failure. Failure was the profit strategy for the financiers. (It still boggles my mind how many people believe that the loss of news journalism is because of internet advertising. I have to give financiers credit for their tremendous skill at shifting the blame.)

[Image: Photo 55254243 © Romolo Tavani]

I highly doubt that Twitter is going to be a 100-year company. For better or worse, I think failure is the end state for Twitter. The question is not if, but when, how, and who will be hurt in the process.

Right now, what worries me are the people getting hurt. I’m sickened to watch “journalists” aid and abet efforts to publicly shame former workers (especially junior employees) in a sadistic game of “accountability” that truly perverts the concept. I’m terrified for the activists and vulnerable people around the world whose content exists in Twitter’s databases, whose private tweets and DMs can be used against them if they land in the wrong hands (either by direct action or hacked activity). I’m disgusted to think that this data will almost certainly be auctioned off.

Frankly, there’s a part of me that keeps wondering if there’s a way to end this circus faster to prevent even greater harms. (Dear Delaware courts, any advice?)

No one who creates a product wants to envision failure as an inevitable end state. Then again, humans aren’t so good at remembering that death is an inevitable end state either. But when someone doesn’t estate plan, their dependents are left with a mess. Too many of us have watched the devastating effects of dementia and, still, few of us plan for all that can go wrong when our minds fall apart and we lash out at the ones we love. Few companies die a graceful death either. And sadly, that’s what I expect we’re about to see. A manic, demented creature hurting everyone who loved it on its way out the door.

Closing Thoughts

I’m not omniscient. I don’t know where this story ends. But after spending the last few years obsessing over what constitutes failure, I can’t help but watch this situation with a rock in my stomach.

Failure isn’t a state, but a process. It can be a generative process. After all, some plants only grow after a forest fire. (And yes, yes, tech is currently obsessed with “fail fast.” But frankly, that’s more about a status game than actually learning.)

Failure should not always be the end goal. There’s much to be said about the journey, about living a worthy life, about growing and learning and being whole. Yet, what keeps institutions, systems, companies, and products whole stems from how they are configured within a network of people, practices, and perception. Radical shifts in norms, values, and commitments can rearrange how these networks are configured. This is why transitions are hard and require a well-thought-through strategy to prevent failure, especially if the goal is to remain ethically whole.

Watching this situation unfold, a little voice keeps nagging in my head. How should our interpretation of this situation shift if we come to believe that failure is the desired end goal? There’s a big difference between a natural forest fire and one that stems from the toxic mixture of arson and climate change.

[Image: Dead bird.]

Differential Perspectives

This update is to let you know about a new essay that’s now online in in-press form: “Differential Perspectives: Epistemic Disconnects Surrounding the US Census Bureau’s Use of Differential Privacy.” Click here to read the full essay.

When the U.S. Census Bureau announced its intention to modernize its disclosure avoidance procedures for the 2020 Census, it sparked a controversy that is still underway. The move to differential privacy introduced technical and procedural uncertainties, leaving stakeholders unable to evaluate the quality of the data. More importantly, this transformation exposed the statistical illusions and limitations of census data, weakening stakeholders’ trust in the data and in the Census Bureau itself.
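For readers unfamiliar with the technique: differential privacy protects individuals by adding calibrated random noise to published statistics. Here is a minimal sketch of the textbook core mechanism; the Census Bureau’s actual TopDown Algorithm is far more elaborate, and the counts below are hypothetical.

```python
# Laplace mechanism: adding or removing one person changes a count by at
# most `sensitivity`, so noise scaled to sensitivity / epsilon masks any
# individual's presence in the data.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(true_count, epsilon, sensitivity=1):
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

block_population = 47  # a hypothetical census block
for eps in (0.1, 1.0, 10.0):
    print(f"epsilon={eps:>4}: published count ≈ {dp_count(block_population, eps):.1f}")
# Smaller epsilon means stronger privacy and noisier counts: precisely the
# uncertainty that stakeholders found so hard to evaluate.
```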

Jayshree Sarathy and I have been trying to make sense of the epistemic currents of this controversy. In other words, how do divergent ways of sense-making shape people’s understanding of census data – and what does that tell us about how people deal with census data controversies?

We wrote an essay for an upcoming special issue of Harvard Data Science Review that will focus on differential privacy and the 2020 Census. While the special issue is not yet out, we were given permission to post our in-press essay online. And so I thought I’d share it here for those of you who relish geeky writings about census, privacy, politics, and controversies. This paper draws heavily on Science and Technology Studies (STS) theories and is based on ethnographic fieldwork. In it, we analyze the current controversy over differential privacy as a battle over uncertainty, trust, and legitimacy of the Census. We argue that rebuilding trust will require more than technical repairs or improved communication; it will require reconstructing what we identify as a ‘statistical imaginary.’ Check out our full argument here.

For those who prefer the tl;dr video version, I sketched out some of these ideas at the Microsoft Research Summit in the fall.

We are still working through these ideas, so by all means, feel free to share feedback or critiques; we relish them.

Crisis Text Line, from my perspective

Like everyone who cares about Crisis Text Line and the people we serve, I have spent the last few days reflecting on recent critiques about the organization’s practices. Having spent my career thinking about and grappling with tech ethics and privacy issues, I knew that – had I not been privy to the details and context that I know – I would be outraged by what folks heard this weekend. I would be doing what many of my friends and colleagues are doing, voicing anger and disgust. But as a founding board member of Crisis Text Line, who served as board chair from June 2020 until the beginning of January 2022, I also have additional information that shaped how I thought about these matters and informed my actions and votes over the last eight years.

As a director, I am currently working with others on the board and in the organization to chart a path forward. As was just announced, we have concluded that we were wrong to share texter data with Loris.ai and have ended our data-sharing agreement, effective immediately. We had not shared data since we changed leadership; the board had chosen to prioritize other organizational changes to support our staff, but this call-to-action was heard loud and clear and shifted our priorities. But that doesn’t mean that the broader questions being raised are resolved. 

Texters come to us in their darkest moments. What it means to govern the traces they leave behind looks different than what it means to govern other types of data. We are always asking ourselves when, how, and whether we should leverage individual conversations borne out of crisis to better help that individual, our counselors, and others who are suffering. These are challenging ethical questions with no easy answer.

What follows is how I personally thought through, balanced, and made decisions related to the trade-offs around data that we face every day at Crisis Text Line. This has been a journey for me and everyone else involved in this organization, precisely because we care so deeply. I owe it to the people we serve, the workers of Crisis Text Line, and the broader community who are challenging me to come forward to own my decisions and role in this conversation. This is my attempt to share both the role that I played and the framework that shaped my thinking. Since my peers are asking for this to be a case study in tech ethics, I am going into significant detail. For those not seeking such detail, I apologize for the length of this. 

Most of the current conversation is focused on the ethics of private-sector access to messages from texters in crisis. These are important issues that I will address, but I want to walk through how earlier decisions influenced that decision. I also want to share how the ethical struggles we face are not as simple as a binary around private-sector access. There are ethical questions all the way down.

What follows here is, I want to emphasize, my personal perspective, not the perspective of the organization or the board. As a director of Crisis Text Line, I have spent the last 8 years trying to put what I know about tech ethics into practice. I am grateful that those who care about tech ethics are passionate about us doing right by our texters. We have made changes based on what we have heard from folks this weekend. But those changes are not enough. We need to keep developing and honing guiding principles to govern our work. My goal has been and continues to be ensuring ethical practices while navigating the challenges of governing both an organization and data. Putting theory into practice continues to be more challenging than I ever imagined. Given what has unfolded, I would also love advice from those who care as I do about both mental health and tech ethics.

First: Why data?

Even before we launched the CTL service, I knew that data would play a significant role in the future of the organization. My experience with tech and youth culture was why I was asked to join the board. Delivering a service that involved asynchronous interactions via text would invariably result in the storage of data. Storing data would be needed to deliver the service; the entire system was necessarily designed to enable handoffs between counselors and to allow texters to pick up conversations hours (or days) later.

Storing data immediately prompted three key questions:

  1. How long would we store the data that users provided to us?
  2. Could we create a secure system?
  3. Under what conditions would we delete data?

As a board, we realized the operational necessity of stored data, which meant an investment in the creation of a secure system and deep debate over our data retention policies. We decided that anyone should have the right to remove their data at any point, a value I strongly agreed with. The implementation of this policy relied on training all crisis counselors how to share this info with texters if they asked for it; we chose to implement the procedure by introducing a codeword that users could share to trigger a deletion of their data. (This was also documented as part of the terms of service, which texters were pointed to when they first contacted us. I know that no one in crisis reads lawyer-speak to learn this, which is why I was more interested in ensuring that our counselors knew this.)

Conducting the service would require storing data, but addressing the needs of those in crises required grappling with how data would be used more generally. Some examples of how data are used in the service: 

  • When our counselors want to offer recommendations for external services, they draw on outside data to bring into the conversation; this involves using geographic information texters provide to us.
  • Our supervisors review conversations both to support counselors in real time and to give feedback later, with an eye towards always improving the quality of conversations.

Our initial training program was designed based on what we could learn from other services, academic literature, and guidance from those who had been trained in social work and psychology. Early on, we began to wonder how the conversations that took place on our platform could and should inform the training itself. We knew that counselors gained knowledge through experience, and that they regularly mentored new counselors on the platform. But could we construct our training so that all counselors got to learn from the knowledge developed by those who came before them? 

This would mean using texter data for a purpose that went beyond the care and support of that individual. Yes, the Terms of Service allowed this, but this is not just a legal question; it’s an ethical question. Given the trade-offs, I made a judgment call early on that not only was using texter data to strengthen training of counselors without their explicit consent ethical, but that to not do this would be unethical. Our mission is clear: help people in crisis. To do this, we need to help our counselors better serve texters. We needed to help counselors learn and grow and develop skills with which they can help others. I supported the decision to use our data in this way.

A next critical turning point concerned scale. My mantra at Crisis Text Line has always been to focus on responsible scaling, not just scaling for scaling’s sake. But we provide a service that requires a delicate balance of available counselors to meet the needs of incoming texters. This meant that we had to think about how to predict the need and how to incentivize counselors to help out at spike moments. And still, there were often spikes where the need exceeded the availability of counselors. This led us to think about our ethical responsibilities in these moments. And this led to another use of data:

  • When there are spikes in the service without enough counselors, we triage incoming requests to ensure that those most at physical risk get served fastest; this requires analyzing the incoming texts even before a conversation starts.

This may not seem like a huge deal, but it’s an ethical decision that I’ve struggled with for years. How do you know who is in most need from just intake messages? Yes, there are patterns, but we’ve also learned over the years that these are not always predictable. More harrowingly, we know retrospectively that these signals can be biased. Needless to say, I would simply prefer for us to serve everyone, immediately. But when that’s not possible, what’s our moral and ethical responsibility? Responding to incoming requests in order might meet some people’s definition of “fair,” but is that ethical? Especially when we know that when people are in the throes of a suicide attempt, time is of the essence? I came to the conclusion that we have an ethical responsibility to use our data to work to constantly improve the triage algorithm, to do the best we can to identify those for whom immediate responses can save a life. This means using people’s data without their direct consent, to leverage one person’s data to help another. 
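To make the shape of this decision concrete, here is a purely hypothetical sketch of severity-based triage. Crisis Text Line’s actual model is not public and is certainly more sophisticated; the keyword weights below are invented.

```python
# Hypothetical triage: score intake messages before a conversation starts,
# then serve the highest-risk texters first instead of first-come-first-served.
import heapq
import itertools

RISK_TERMS = {"pills": 3, "tonight": 2, "goodbye": 3, "alone": 1}  # invented

def risk_score(intake_message):
    return sum(RISK_TERMS.get(word, 0) for word in intake_message.lower().split())

arrival = itertools.count()  # tie-breaker preserves arrival order
queue = []

def enqueue(message):
    # heapq is a min-heap, so negate the score to pop the highest risk first
    heapq.heappush(queue, (-risk_score(message), next(arrival), message))

for msg in ["i feel alone", "took pills tonight goodbye", "rough day at school"]:
    enqueue(msg)

while queue:
    _, _, msg = heapq.heappop(queue)
    print("serve next:", msg)
```

Even this cartoon surfaces the ethical problem: whatever signals the scorer relies on encode assumptions about whose crisis looks most urgent.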

Responsible scaling has introduced a series of questions over the years. One that I’ve turned over in my head for years, though we’ve never implemented it: should we attempt to match need to expertise? In other words, should our counselors specialize? To date, we haven’t, but it’s something I think a lot about. But there are also questions that have been raised that we have intentionally abandoned. For example, there was once a board meeting where the question of automation came up. We already use some automation tools in training and for intake; should some conversations be automated? This was one of those board meetings where I put my foot down. Absolutely not. Data could be used to give our counselors superpowers, but centering this service on humans was essential. In this context, my mantra has always been augmentation, not automation. The board and organization embraced this mantra, and I’m glad for it.

Next: Data for Research

From early on, researchers came to Crisis Text Line asking for access to data. This prompted even more reflection. We had significant data and we were seeing trends that had significant implications for far more than our service. We started reporting out key trends, highlighting patterns that we then published on our website. I supported this effort because others in the ecosystem told us it helped them to learn from the patterns that we were seeing. This then led to the more complicated issue of whether or not to allow external researchers to study our data with an eye towards scholarship. 

I’m a scholar. I know how important research is and can be. I knew how little data exists in the mental health space, how much we had tried to learn from others, how beneficial knowledge could be to others working in the mental health ecosystem. I also knew that people who came to us in crisis were not consenting to be studied. Yes, there was a terms of service that could contractually permit such use, but I knew darn straight that no one would read it, and advised everyone involved to proceed as such. 

I have also tracked the use of corporate data for research for decades, speaking up against some of Facebook’s experiments. Academic researchers often want to advance knowledge by leveraging corporate data, but they do not necessarily grapple with the consequences of using data beyond IRB requirements. There have been heated debates in my field about whether or not it is ethical to use corporate trace data without the consent of users to advance scientific knowledge. I have had a range of mixed feelings about this, but have generally come out in opposition to private trace data being used for research.

So when faced with a similar question at Crisis Text Line, I had to do a lot of soul searching. Our mission is to help people. Our texters come to us in their darkest hours. Our data was opening up internal questions right and left about how to best support them. We don’t have the internal resources to analyze the data to answer all of our questions, to improve our knowledge base in ways that can help texters. I knew that having additional help from researchers could help us learn in ways that would improve training of counselors and help people down the line. I also knew that what we were learning internally might be useful to other service providers in the mental health space and I felt queasy that we were not sharing what we had learned to help others.

Our organization does not exist for researchers to research. Our texters do not come to us to be research subjects. But our texters do come to us for help. And we do help them by leveraging what we learn from helping others, including with the help of researchers. Texters may not come to us to pay it forward for the next person in need, but in effect, that’s what their engagement with us was enabling. I see that as an ethical use of data, one predicated on helping counselors and texters through experience mediated by data. The question in my mind then was: what is the relationship of research to this equation?

I elected to be the board member overseeing the research efforts. We have explored – and continue to explore – the right way to engage researchers in our work. We know that they are seeking data for their own interests, but our interest is clear: can their learnings benefit our texters and counselors, in addition to other service providers and the public health and mental health ecosystem? To this end, we have always vetted research proposals and focused on research that could help our mission, not just satisfy researcher curiosity.

Needless to say, privacy was a major concern from day one. Privacy was a concern even before we talked about research; we built privacy processes even for internal analyses of data. But when research is involved, privacy concerns are next-level. Lots of folks have accused us of being naive about reidentification over the last few days, which I must admit has been painful to hear given how much time I spend thinking about and dealing with reidentification in other contexts. I know that reidentification is possible, and that was at the heart and soul of our protocols. Researchers have constrained access to scrubbed data under contract precisely because, even with our scrubbing procedures, reidentification might still be possible. But we limited data to minimize reidentification risks and added contractual procedures to explicitly prevent reidentification.
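As a toy illustration of what “scrubbing” means here (this is not CTL’s actual pipeline, and real de-identification goes well beyond pattern matching):

```python
# Strip direct identifiers before analysts or researchers see a transcript.
# Pattern matching alone cannot rule out reidentification, which is why the
# contractual controls described above exist.
import re

def scrub(text):
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)
    text = re.sub(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b", "[EMAIL]", text)
    text = re.sub(r"\b\d{1,5}\s+\w+\s+(Street|St|Ave|Road|Rd)\b", "[ADDRESS]",
                  text, flags=re.I)
    return text

print(scrub("call me at 415-555-0132 or jo@example.com, I'm at 12 Oak Street"))
# -> call me at [PHONE] or [EMAIL], I'm at [ADDRESS]
```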

When designing these protocols, my goal was to create the conditions where we could learn from people in crisis to help others in crisis without ever, in any way, adding to someone’s crisis. And this means privacy-first.

More generally though, the research question opened up a broader set of issues in my mind. Our service can directly help individuals. What can and should we do to advance mental health more generally? What can and should we be providing to the field? What is our responsibility to society outside our organization?

Next: Training as a Service

Our system is based on volunteers who we train to give counsel. As is true in any volunteer-heavy context, volunteers come and go. Training is resource intensive, but essential for the service. Repeatedly, volunteers approached us as a board to tell us about the secondary benefits of the training. Yes, the training was designed to empower a counselor to communicate with a person who was in crisis, but these same skills were beneficial at work and in personal relationships. Our counselors kept telling us that crisis management training has value in the world outside our doors. This prompted us to reflect on the potential benefit of training far more people to manage crises, even if they did not want to volunteer for our service.

The founder of Crisis Text Line saw an opportunity and came to the board. We did not have the resources to simply train anyone who was interested. But HR teams at companies had both the need for, and the resources for, larger training systems. The founder proposed building a service that could provide us with a needed revenue stream. I don’t remember every one of the options we discussed, but I do know that we talked about building a separate unit in the organization to conduct training for a fee. That raised the worry that such a unit would be a distraction from our core focus. We did all see training as mission-aligned, but we needed to focus on the core service CTL was providing.

We were also struggling, as all non-profits do, with how to be sustainable. Non-profit fundraising is excruciating and fraught. We were grateful for all of the philanthropic organizations who made starting the organization possible, but sustaining philanthropic funding is challenging and has significant burdens. Program officers always want grantees to find other sources of money. There are traditional sources: foundations, individual donors, corporate social responsibility donations. In some contexts, there’s government funding, though at that time, government was slashing funding, not increasing it. Funding in the mental health space is always scarce. And yet, as a board, we always had a fiduciary responsibility to think about sustainability.

Many of the options in front of us concerned me deeply. We could pursue money by billing insurance companies, but this had a lot of obvious downsides to it. Many of the people we serve do not have access to insurance. Moreover, what insurers really want is our data, which we were strongly against. They weren’t alone – many groups wanted to buy our data outright. We were strongly against those opportunities as well. No selling of data, period. 

Big tech companies and other players were increasingly relying on CTL as their first response for people in crisis, without committing commensurate (or sometimes, any) resources to help offset that burden. This was especially frustrating because they had the resources to support those in crisis but had chosen not to, preferring to outsource the work but not support it. They believed that traffic was a good enough gift.

This was why we, as a board, were reflecting on whether or not we could build a revenue stream out of training people based on what we learned from training counselors. In the end, we opted not to run such an effort from within Crisis Text Line, to reduce the likelihood of distracting from our mission. Instead, we gave the founder of Crisis Text Line permission to start a new organization, with us retaining a significant share in the company; we also retained the right to a board seat. This new entity was structured as a for-profit company designed to provide a service to businesses, leveraging what we had learned helping people. This company is called Loris.ai.

Loris.ai planned on learning from us to build training tools for people who were not going to serve as volunteers for our service. Yet, the company was a separate entity and the board rejected any plan that involved full access to our systems. Instead, we opted to create a data-sharing agreement that paralleled the agreement we had created with researchers: controlled access to scrubbed data solely to build models for training that would improve mental health more broadly. We knew that it did not make sense for them to directly import our training modules; they would be training people in a different context. Yet, both they and we believed that there were lessons to be learned from our experiences, both qualitatively and quantitatively.

I struggled with this decision at the time and ever since. I could see both benefits and risks in sharing our data with another organization, regardless of how mission-aligned we were. We debated this in the boardroom; I pushed back around certain proposals. In the end, some of the board members at the time saw this decision through the lens of a potential financial risk reduction. If the for-profit company did well, we could receive dividends or sell our stake in order to fund the crisis work we were doing. I voted in favor of creating Loris.ai for a different reason.  If another entity could train more people to develop the skills our crisis counselors were developing, perhaps the need for a crisis line would be reduced. After all, I didn’t want our service to be needed; the fact that it is stems from a system that is deeply flawed. If we could build tools that combat the cycles of pain and suffering, we could pay forward what we were learning from those we served. I wanted to help others develop and leverage empathy. 

This decision weighed heavily on me, but I did vote in favor of it. Knowing what I know now, I would not have. But hindsight is always clearer.

Existential Crisis

In June of 2020, our employees came to us with grave concerns about the state of the organization. This triggered many changes to the organization and a reckoning as a board. I stepped in as board chair. As we focused on addressing the issues raised by employees, I felt as though we needed to prioritize what they were telling us. My priority was to listen to our staff, center the needs of our workers and texters, learn from them, and focus on our team, core business, and organizational processes. We also needed to hire a permanent CEO. The concerns we received were varied and diverse, requiring us to prioritize what to focus on when. 

Data practices were not among the dominant concerns, but they were among the issues raised. The most significant data concern raised to us was whether our data practices were as strong as the board believed them to be. This prompted three separate, interlocking audits. We had already conducted a privacy and security audit, but we revisited it in greater depth. We also hired two additional independent teams to conduct audits around 1) data governance and 2) ethical use of and bias in data. I was the board member overseeing this work, pushing each of these efforts to probe more deeply, engaging a range of stakeholders along the way (including counselors, staff, partners, and domain experts).

I quickly learned that as much as scholars talk about the need to do audits of ethics/biases, there is not a good roadmap out there for doing this work, especially in the context of a fairly large-scale organization. As someone who cares deeply about this, I was glad to be pushing the edges and interrogating every process, but I also wanted us to have guidance on how to strengthen our efforts even further. There is always room to improve, and there isn’t yet a community of practice for people doing this in real-time while people are depending on an organization’s work. Still, we got great feedback from the audits and set about prioritizing the changes that needed to be implemented.

Aside from the data audits, most of our changes over the last 18 months have been organizational and infrastructural, focused on strengthening our team, processes, and tools. As the board chair, I deliberately chose not to prioritize any changes to our contractual relationship with Loris.ai, in favor of prioritizing the human concerns raised by our staff. We focused our energies internally and on our core mission. When Loris asked the Crisis Text Line founder to leave the board, we chose not to offer up a replacement. Our most proactive stance over the last 18 months was to freeze the agreement with Loris, with an explicit commitment to reconsider the relationship in 2022 once a new CEO was in place. As a result of these decisions, we have not shared any data since the change in leadership. 

Governance 

The practice of non-profit governance requires collectively grappling with trade-off after trade-off. I have been a volunteer director of the board of Crisis Text Line for 8 years both because I believe in the mission and because I have been grateful to govern alongside amazing directors from whom I constantly learn. This doesn’t mean it’s been easy, and it definitely doesn’t mean we always agree. But we do push each other, and I learn a lot in the process. We strive to govern ethically, but that doesn’t mean others will see our decisions as such. We also make decisions that do not pan out as expected, requiring us to own our mistakes even as we change course. Sometimes, we can be fully transparent about our decisions; in other situations – especially when personnel matters are involved – we simply can’t. That is the hardest part of governance, both for our people and for myself personally.

I want to own my decisions as a director of Crisis Text Line. I voted in favor of our internal uses of data, our collaborations with researchers, and our decision to contribute to the founding of Loris.ai. I did so based on a calculation of ethical trade-offs informed by my research and experiences. I want to share some aspects of the rubric in my mind: 

1. Consent. My view of consent is more complex now than the simpler one I held before I began volunteering at CTL. I believe in the ideal of informed consent, which has informed my research. (A ToS is not consent.) But I have also learned from our clinical team about the limits of consent and when consent undermines ethical action. I have also come to believe that there are times when other ethical values must be prioritized against an ideal of consent. For example, I support Crisis Text Line’s decision to activate Public Safety Answering Points (PSAPs) when a texter presents an imminent life-or-death risk to themselves or to someone else, even when they have not consented to such an activation. Many members of our staff and volunteers are mandatory reporters who have the legal as well as ethical obligation to report. At the same time, I also support our ongoing work to reduce reliance on PSAPs and our policy efforts to have PSAPs center mental health more.

2. Present and future. Our mission is to help individuals who come to us in need and to improve the state of mental health for people more generally. I would like to create a world in which we are not needed. To that end, I am always thinking about what benefits individuals and the collective. I’m also thinking about future individuals. What can we learn now that will help the next person who comes to us? And what can we do now so that fewer people need us? I believe in a moral imperative of paying it forward and I approach data ethics with this in mind. There is undeniably a tension between the obligation to the individual and the obligation to the collective, one that I regularly reflect on.

3. The field matters. We are a non-profit and part of a broader ecosystem of mental health services. We cannot serve everyone; even for those whom we do serve in crisis, we cannot be their primary mental health provider. We want there to be an entire ecosystem of support for people in crisis, of which we play just one part. We have a responsibility to the individual in the moment of crisis and we have a responsibility to learn from and strengthen the field to help individuals downstream. To this end, I think we have an ethical responsibility to give back to the ecosystem, not just to the individual in the moment. But we need to balance this imperative with respect for the individuals during their darkest moments.

4. Improve over time. Much of our data begins as conversations, involving data from both texters and counselors. As you might imagine, when our counselors’ attempts to help someone need improvement, it weighs deeply on our entire staff. Both counselors and texters benefit when counselors learn from reviewing their conversations, from reviewing what worked or didn’t work in others’ conversations, and from lessons learned being fed back into training. My eye is always on what will improve those conversations. (This is why an obsession at the board level is quality over quantity.)

The responsibility of CTL is a heavy one, in ways that may not be obvious to those who haven’t worked in this field or seen the sometimes-counterintuitive challenges of serving people in crisis. I use the needs and prioritizations of our texters and team as my first and most important filter when judging what decisions to make. I see helping counselors and staff succeed as key to helping serve people in need. This sometimes requires thinking about how texter data can help strengthen our counselors; this sometimes requires asking if conducting research will help them grow; and this sometimes requires asking what is needed to strengthen the broader ecosystem.

When it comes to thinking about texters, I’m focused on the quality of the conversation and the safety of the texter. When it comes to safety, I’m often confronted with non-knowledge, which is harrowing. (Did someone who was attempting suicide survive the night? Emergency responders don’t necessarily tell us, so we rely on hearing back from the texter, but what’s the healthiest way to follow up with a texter?) I still don’t know the best way to measure quality; I have scoured the literature and sought advice from many to guide my thinking, but I am still struggling there and in conversation with others to try to crack this nut. I’m also thankful that there’s an entire team at Crisis Text Line dedicated to thinking about, evaluating, and improving conversation quality.

I regularly hear from both texters and counselors, whose experiences shape my thinking, but I also know that these are but a few perspectives. I read the feedback from our surveys, trying to grapple with the limitations and biases of those responses. There is no universal texter or counselor experience, which means that I have to constantly remind myself about the diversity of perspectives among texters and counselors. I cannot govern by focusing on the average; I must strive to think holistically about the diversity of viewpoints. When it comes to governance, I am always making trade-offs – often with partial information – which is hard. I also know that I sometimes get it wrong and I try to learn from those mistakes. 

These are some of the factors that go through my head when I’m thinking about our data practices. And of course, I’m also thinking about our legal and fiduciary responsibilities. But the decisions I make regarding our data start from thinking through the ethics and then I factor in financial or legal considerations. 

As I listen and learn from how people are responding to this conversation and from the decisions that I contributed to, it is clear to me that we have not done enough to share what we are doing with data and why. It’s also clear to me that I made mistakes and change is necessary. I know that after the challenges of the last year, I have erred on the side of doing the work inside the organization rather than grappling with the questions raised by our arrangement with Loris.ai.

In order to continue serving Crisis Text Line, I need to figure out what we – and I – can do better. I am fascinated by my peers calling to make this a case study in tech ethics. I think that’s quite interesting, and I hope that my detailing this thinking can contribute to that effort. I hope to learn from whatever case study emerges.

To that end, to my peers and colleagues, I also have some honest questions for all of you who are frustrated, angry, disappointed, or simply unsure about us: 

  • What is the best way to balance the implicit consent of users in crisis with other potentially beneficial uses of data which they likely will not have intentionally consented to but which can help them or others? 
  • Given that people come to us in their darkest moments, can/should we enable research on the traces that they produce? If so, how should this be structured? 
  • Is there any structure in which lessons learned from a non-profit service provider can be transferred to a for-profit entity? Also, how might this work with partner organizations, foundations, government agencies, sponsors, or subsidiaries, and are the answers different?
  • Given the data we have, how do we best serve our responsibility to others in the mental health ecosystem?
  • What can better community engagement and participatory decision-making in this context look like? How do we engage people to think holistically about the risks to life that we are balancing and that are shaping our decisions? (And how do we avoid abdicating our governance responsibilities in favor of performing ethics, as we’ve seen play out in other contexts?)

There are also countless other questions that I struggle with that go beyond the data issues, but also shape them. For example, as always, I will continue to push up against the persistent and endemic question that plagues all non-profits: How can we build a financially sustainable service organization that is able to scale to meet people’s needs? I also struggle every day with broader dynamics in which tech, data, ethics, and mental health are entangled. For example, how do we collectively respond to mental health crises that are amplified by decisions made by for-profit entities? What is our collective responsibility in a society where mental health access is so painfully limited? 

These questions aren’t just important for a case study. These are questions I struggle with every day in practice and I would be grateful to learn from others’ journeys. I know I will make mistakes, but I hope that I can learn from them and, with your guidance, make fewer.

I’m grateful to everyone who cares enough about the texters we serve to engage in this conversation. I’m particularly grateful to be in a community that will call in anyone whom they feel isn’t exercising proper care with people’s data and privacy. And most of all, I am thankful for the counsel, guidance, and clarity of our workers at Crisis Text Line, who do the hard work of caring for texters every day, while also providing clear feedback to help drive the future of the organization. I can only hope that my decisions help them succeed at the hard work they do.

I warmly welcome any advice from all of you who’ve been watching the conversation and who care about seeing CTL succeed in its mission.

The Muddled Speech of Numbers: Blood clots, COVID-19 vaccines, and statistical risk

Earlier this week, the CDC paused the roll-out of the Johnson & Johnson COVID-19 vaccine after six women experienced serious blood clots. Their caution has merit, given that the FDA has been approving vaccines in advance of the typical large-scale evaluations because speed is seen as so crucial. Reasonably, there is a desire to know more about these blood clots before more might appear. Yet, there was also sheer frustration from many in the medical community because the choice to pause the roll-out suggested that there was a serious issue, that the vaccine was dangerous. In a context in which vaccine hesitancy is likely to undermine herd immunity, any suggestion that the vaccine might have consequences can be twisted and contorted.

Across many mailing lists and Twitter streams, I kept seeing data points trying to ground the seriousness of the blood clots in the J&J vaccine. Most referenced the frequency of blood clots that women experience while taking the birth control pill, roughly 1/1000. People also highlighted how common blood clots are for those who are in the throes of COVID-19. These were meant to highlight just how rare and statistically insignificant blood clots are when taking the J&J vaccine. 
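The arithmetic being traded around looked roughly like this (figures as reported at the time of the pause, about 6 clot cases among some 6.8 million administered doses; treat all of these as approximations):

```python
# Back-of-the-envelope comparison of the risk figures circulating that week.
jj_clots, jj_doses = 6, 6.8e6   # reported cases / doses administered (approx.)
pill_rate = 1 / 1000            # rough annual clot risk on the birth control pill

jj_rate = jj_clots / jj_doses
print(f"J&J vaccine:        ~1 in {1 / jj_rate:,.0f}")    # ~1 in 1,133,333
print(f"birth control pill: ~1 in {1 / pill_rate:,.0f}")  # ~1 in 1,000
print(f"the pill's clot risk is roughly {pill_rate / jj_rate:,.0f}x higher")
```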

Yet, as these attempts to ground the conversation unfolded, a different kind of outrage formed. A handful of people highlighted women they knew who had died of blood clots most likely related to birth control. Many more women who took hormonal birth control expressed frustration that they had no idea that they were at increased risk of a blood clot. Sure, it’s part of the fine print of that printout you get from CVS when picking up your pill, but this wasn’t something doctors emphasized. Unlike the J&J vaccine situation, the relationship between birth control and blood clots – or even COVID-19 and blood clots – hasn’t been front page news.

As I was processing the back-and-forth about statistical risk and who was responsible for sharing what with whom, and at what level of amplitude, I couldn’t help but think about all of the scholarship into the politics of numbers. We’re living at a time when politicians are simultaneously espousing the need for “evidence-based policymaking” and working to diligently undermine, contort, or weaponize evidence. This is what scholars of “agnotology” mean when they talk about the manufacturing of ignorance through the seeding of doubt. Or what other scholars highlight as the “weaponization of transparency.” 

I couldn’t help but feel empathy for the scientists at J&J and the FDA who have been working around the clock trying to make a vaccine available to the public, trying to be responsible stewards of information and statistical risk in a context where their desire for caution can be turned on its head to undermine the legitimacy of their work. I also found myself feeling empathy for journalists who recognize the importance of reporting on this development, even as they know that their reporting can easily morph into misinformation that undermines the vaccine roll-out. Working with numbers is itself political.

To work in the world of medicine and science, of statistics and probabilities, is to grapple with trade-offs at a macro level, which present ethical conundrums even in the best of times. After all, that one terrible death from a blood clot could perhaps have been prevented by not taking the vaccine. But this is where we enter into the world of trade-offs, of unknowns, of morality. Without a vaccine rollout, many more people will die of blood clots from COVID-19. Had that woman been infected with COVID-19, she might have still succumbed to a blood clot. Medicine alters the dimensionality of risk. So how do ethics get negotiated? And by whom? This is the story of public health.

Those complexities underpinning the advancement of science are complicated further by a politicized context such as the one surrounding the COVID-19 vaccine. Each act of communication can be twisted and contorted to convey different agendas, different values, different goals. Amplified transparency of risk is itself a political act. Sprinkle in our current society’s expectation that individuals will make informed decisions for themselves, their families, and their communities, and we have a recipe for disaster. This is what the production of ignorance – aka misinformation, information disorder, agnotology, etc. – looks like in practice. The very acts of scientific transparency, which are intended to help inform decision-making, are turned on their head, serving to undermine the legitimacy of scientific work and the coordination of a public that must work together to address a deadly disease.

I keep wondering what it will take for the public to trust scientific information. But, perhaps, a better question might be: What kind of information is needed to help a fragmented public work together to solve societal-level challenges?

Note to the reader: These are questions that I’m struggling with. If you have thoughts, ideas (or even reading recommendations!), don’t hesitate to reach out: zephoria [at] zephoria [dot] org.

Behind every algorithm, there be politics.

In my first class in computer science, I was taught that an algorithm is simply a way of expressing formal rules given to a computer. Computers like rules. They follow them. Turns out that bureaucracy and legal systems like rules too. The big difference is that, in the world of computing, we call those who are trying to find ways to circumvent the rules “hackers” but in the world of government, this is simply the mundane work of politicking and lawyering. 

When Dan Bouk (and I, as an earnest student of his) embarked on a journey to understand the history of the 1920 census, we both expected to encounter all sorts of politicking and lawyering. As scholars fascinated by the census, we’d heard the basics of the story: Congress failed to reapportion itself after receiving data from the Census Bureau because of racist and xenophobic attitudes mixed with political self-interest. In other words, politics. 

As we dove into this history, the first thing we realized was that one justification for non-apportionment centered on a fight about math. Politicians seemed to be arguing with each other over which algorithm was the right one for apportioning the House. In the end, they basically said that apportionment should wait until mathematicians could figure out what the “right” algorithm was. (Ha!) The House didn’t manage to pass an apportionment bill until 1929, when political negotiations had made this possible. (This story anchors our essay on “Democracy’s Data Infrastructure.”)

Dan kept going, starting with what seemed like a simple question: what makes Congress need an algorithm in the first place? I bet you can’t guess what the answer is! Wait for it… wait for it… Politics! Yes, that’s right, Congress wanted to cement an algorithm into its processes in a vain attempt to de-politicize the reapportionment process. With a century of extra experience with algorithms, this is patently hysterical. Algorithms as a tool to de-politicize something!?!? Hahahah. But, that’s where they had gotten to. And now the real question was: why?

In Dan’s newest piece – “House Arrest: How an Automated Algorithm Constrained Congress for a Century” – he peels back the layers of history with beautiful storytelling and skilled analysis to reveal why our contemporary debates about algorithmic systems aren’t so very new. Turns out that there were a variety of political actors deeply invested in ensuring that the People’s House stopped growing. Some of their logics were rooted in ideas about efficiency, but some were rooted in much older ideas of power and control. (Don’t forget that the electoral college is tethered to the size of the House too!) I like to imagine power-players sitting around rubbing their hands and saying mwah-ha-ha-ha as they strategize over constraining the growth of the House. They wanted to do this long before 1920, but it didn’t get locked in then because they couldn’t agree, which is why they fought over the algorithm. By 1929, everyone was fed up and just wanted Congress to properly apportion, and so they passed a law, a law that did two things: it stabilized the size of the House at 435 and it automated the apportionment process. Those two things – the size of the House and the algorithm – were totally entangled. After all, an automated apportionment couldn’t happen without the key variables being defined.
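For the curious, here is a minimal sketch in Python of what an apportionment algorithm actually looks like. This is the method of equal proportions (Huntington-Hill), the algorithm Congress eventually settled on in 1941 and still uses today; it’s an illustration of the math, not the Census Bureau’s implementation, and the state populations in the example are made up.

```python
import heapq
from math import sqrt

def apportion(populations, house_size=435):
    """Method of equal proportions (Huntington-Hill).

    Every state starts with one seat; each remaining seat goes to the
    state with the highest priority value P / sqrt(n * (n + 1)), where
    P is the state's population and n is its current seat count.
    """
    seats = {state: 1 for state in populations}
    # Max-heap via negated priorities; each entry holds the priority
    # of a state's *next* seat (initially its second).
    heap = [(-pop / sqrt(1 * 2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(house_size - len(populations)):
        _, state = heapq.heappop(heap)
        seats[state] += 1
        n = seats[state]
        heapq.heappush(heap, (-populations[state] / sqrt(n * (n + 1)), state))
    return seats

# Made-up populations: ten seats split 6/3/1, tracking the 6:3:1 ratio.
print(apportion({"A": 6_000_000, "B": 3_000_000, "C": 1_000_000}, house_size=10))
```

Notice that house_size is just a parameter. The arithmetic runs exactly the same whether you feed it 435 or 4,350; fixing that variable at 435 was a political choice, not a mathematical one, which is the whole point.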

Of course, that’s not the whole story. That 1929 bill was just a law. Up until then, Congress had passed a new law every decade to determine how apportionment would work for that decade. But when the 1940 census came around, they were focused on other things. And then, in effect, Congress forgot. They forgot that they have the power to determine the size of the House. They forgot that they have control over that one critical variable. The algorithm became infrastructure and the variable was summarily ignored.

Every decade, when the Census data are delivered, there are people who speak out about the need to increase the size of the House. After all, George Washington only spoke once during the Constitutional Convention. He spoke up to say that we couldn’t possibly have Congresspeople represent 40,000 people each, because then the people wouldn’t trust the government! The constitutional writers listened to him and set the minimum at 30,000; today, our representatives each represent more than 720,000 of us.

After the 1790 census, there were 105 representatives in Congress. Every decade, that would increase. Even though it wasn’t exact, there was an implicit algorithm in that size increase. In short, increase the size of the House so that no sitting member would lose his seat. After all, Congress had to pass that bill and this was the best way to get everyone to vote on it. The House didn’t increase at the same ratio as the size of the population, but it did increase every decade until 1910. And then it stopped (with extra seats given to new states before being brought back to the zero-sum game at the next census). 

One of the recommendations of the Commission on the Practice of Democratic Citizenship (for which I was a commissioner) was to increase the size of the House. When we were discussing this as a commission, everyone spoke of how radical this proposition was, how completely impossible it would be politically. This wasn’t one of my proposals – I wasn’t even on that subcommittee – so I listened with rapt curiosity. Why was it so radical? Dan taught me the answer to that. The key to political power is to turn politicking into infrastructure. After all, those who try to break a technical system, to work around an algorithm, they’re called hackers. And hackers are radical. 

Want more like this?

  1. Read “House Arrest: How an Automated Algorithm Constrained Congress for a Century” by Dan Bouk. There’s drama! And intrigue! And algorithms!
  2. Read “Democracy’s Data Infrastructure” by Dan Bouk and me. It might shape your view about public fights over math.
  3. Sign up for my newsletter. More will be coming, I promise!

The US Federal Government Needs a VP of Engineering, not a CTO

If you look at the roster of the Biden-Harris transition team, it’s quickly apparent that the incoming administration is tech-forward. Given the systematic dismantlement of the federal government over the last four years, and the significant logistical and scientific needs underpinning a large-scale vaccine roll-out, it is unsurprising to hear that the new team is looking to bring in tech talent. Under the Obama Administration, the White House invested significantly in shoring up the Office of Science and Technology Policy, an office that has for all intents and purposes lain dormant for four years under the current Administration. The Obama Administration also hired the first Chief Technology Officer (CTO) to help envision what a tech-forward US government might look like. As the Biden-Harris transition builds its plans for January 20th, many people in my networks are abuzz, wondering who might be the next CTO.

My advice to the transition team is this: You need a VP of Engineering even more than you need a CTO.

To the non-geeks of the world, these two titles might be meaningless or perhaps even interchangeable. The roles and responsibilities associated with each are often co-mingled, especially in start-ups. But in more mature tech companies, they signal distinct qualifications and responsibilities. Moreover, they signal different ideas of what is top priority. In their ideal incarnation, a CTO is a visionary, a thought leader, a big picture thinker. The right CTO sees how tech can fit into the big picture of a complex organization and sits in the C-suite to integrate tech into the strategy. A tech-forward White House would want such a person precisely to help envision a technocratic government structure that could do great things. Yet, a CTO is nothing more than a figurehead if the organizational infrastructure is dysfunctional. This can prompt organizations to build new tech separately inside an “office of the CTO” rather than doing the hard work of fixing the core organizational infrastructure to ensure that larger visions can work. When it comes to government, we’ve learned the hard way how easily a tech-forward effort located exclusively inside the White House can be swept away.

Inside tech companies, there is often a more important but less visible role when it comes to getting things done. To those on the outside, a VP title appears far less powerful and far less important than a C-suite title; if you’re not a tech geek, a VP of Engineering might seem less significant than a CTO. But in my experience, when a system is broken, finding the right VP of Engineering is more essential than getting a high-profile CTO. A VP-Eng is a fixer, someone who looks at broken infrastructure with a debugger’s eye and recognizes that the key to success is ensuring that the organizational and technical systems function hand-in-hand. While CTOs are often public figures in industry, a VP-Eng tends to shy away from public attention, focusing most of their effort on empowering their team to do great things. VP-Engs have technical chops, but their superpower comes from their ability to manage large technical teams, to really understand the forest and see what’s getting in the way of achieving a goal so that they can unblock it and ensure that their team thrives. A VP-Eng also understands that finding and nurturing the right talent is key to success, which is why they tend to spend an extraordinary amount of time recruiting, hiring, training, and mentoring.

When structured well, the CTO faces outwards while the VP-Eng faces inwards. They can and should be extraordinarily complementary roles. Yet, even though the Obama Administration invested in a CTO and built numerous programs to bring tech talent into the White House and sprinkle tech workers throughout all of the agencies, that tech-forward team never invested in a VP-Eng. They never invested in people whose job it was to truly debug the underlying problems that prevent government agencies from successfully building and deploying technical systems.

As I listen to friends and peers in Silicon Valley talk about all of the ways in which tech people are going to go east to “fix government,” I must admit that I’m cringing. Government functions very differently than industry, by design. In industry, our job is to serve customers. Yes, our companies might want more customers, but we have the luxury of focusing on those who have money and those who want to use our tools. Government must serve everyone. Much to the chagrin of capitalists, the vast majority of government resources goes to the hardest problems, to ensuring that whatever the government implements can serve everyone.

I have spent 20 years calling bullshit on “the pipeline problem” as industry’s excuse for its under-investment in hiring and retaining BIPOC and non-male talent. Even as tech workers are slowly starting to wake up to the realization that justice, equity, diversity, and inclusion are essential to the long-term health of tech, I’m watching the flawed logics that underpin the narrative about pipeline problems infuse the conversation about why government tech is broken. Government tech isn’t broken because government lacks talent. Government tech is broken because there are a range of stakeholders who are actively invested in ensuring that the federal government cannot execute, who are actively working to ensure that when the government is required to execute, it does so through upholding capitalist interests. Moreover, there are a range of stakeholders who would rather systematically undermine and hurt the extraordinarily diverse federal talent than invest in them.

If Silicon Valley waltzes into the federal government in January with its “I’ve got a submarine for that” mindset, thinking that it can sprinkle tech fairy dust all over the agencies, we’re screwed. The undermining of the federal government’s tech infrastructure began decades ago. What has happened in the last four years has just sped up a trend that was well underway before this administration. And it’s getting worse by the day. The issue at play isn’t the lack of tech-forward vision. It’s the lack of organizational, human capital, and communications infrastructure that’s necessary for a complex “must-reach-everyone” organization to transform. Rather than coming in with hubris and focusing on grand vision, we need a new administration that is willing to dive deep and understand the cracks in the infrastructure that make a tech-forward agenda impossible. And this is why we need a federal VP-Eng whose job it is to engage in deep debugging. Cuz the bugs aren’t in the newest layer of code; they’re down deep in the libraries that no one has examined for years.

If the new administration is willing to invest in infrastructural repair, my ethnographic work in and around government has led me to three core areas that I would prioritize first. Two are esoteric structural barriers that prevent basic functioning. The third is a political weakness.

1. Procurement. Government outsourcing to industry is modern-day patronage. You don’t need Tammany Hall when you have a swarm of governmental contractors buzzing about. When politicians talk about “small government,” what they really mean is “no federal employees.” Don’t let talk of “efficiency” fool you either. The cost of greasing the hands of Big Business through procurement procedures theoretically designed for efficiency is extraordinarily expensive. Not only is the financial cost of outsourcing to industry mind-boggling and bloated, but there are additional costs to morale, institutional memory, and mission that are not captured in the economic models. Government procurement infrastructure is also designed for failure, to ensure that government agencies are unable to deliver, which, in turn, prompts Congress (regardless of who is in power) to reduce funding and increase scrutiny, tightening the screws on a tightly coupled system to increase the scale and speed of failure. It is a vicious cycle. Government procurement infrastructure is filled with strategically designed inefficiencies, frictions, and insanely corrupt incentives that undermine every aspect of government. The key here is not to replicate industry; the structures of contracting, outsourcing, and supply chains within a capitalist system do not make sense in government — and for good reason. A VP-Eng and a tech-forward government should begin by understanding the damage and ripple effects caused by OMB Circular A-76, which fundamentally shapes tech procurement.

2. Human Resources. Too many people in the tech industry think that HR is a waste of space… that is, until they find that recruiter who makes everything easier. As such, in industry, we often talk about “people operations” or “talent management” instead of HR. We recognize the importance of investing in talent over the long-term, even if we reject HR. In government, HR is the lifeblood of how work happens, and it was redesigned by progressives in the 20th century to ensure a more equitable approach to hiring and talent development. For decades, government created opportunities for women and Black communities when industry did not. Unfortunately, this aspect of the “deep state” was not at all appreciated by those invested in maintaining America’s caste system. Those invested in racist hierarchies didn’t need to be explicit about their agendas; they could rely on the language of capitalism to systematically undermine the talent of the federal government. Just as outsourcing in government has disproportionately taken jobs from Black federal workers and given them to white contractors, a range of HR policies have been designed to make working in government hellacious. Those who have stuck around — out of duty, out of necessity — have become enrolled in an existentially broken system. Some have chosen to sit back and not do their jobs, waiting to be fired. Others took the opposite approach, masochistically throwing themselves at the problem. People come from the outside and complain that government workers are lazy, stupid, incompetent. But it is the system that has produced these conditions. The system has been starved; the policies and protocols are corroded. It is through the purposeful torturing of HR that an executive branch hellbent on destroying the federal government can wreak the greatest damage; this has been underway for 40 years, but the proverbial frogs are now sitting in boiling water. HR will require a lot of repair-work, not quick-fix policy changes. An untended HR system in government becomes a bottleneck unimaginable to those in industry, and that’s where we are. Existing talent will require nurturing, and this investment is crucial because their institutional knowledge is profound. Any administration that wants to build a government that can respond to crises as grand as a pandemic or climate change will need to create the conditions for government to be a healthy workplace, not just for the next four years but for decades to come. They will need a “people ops” mindset toward HR. A VP-Eng should start with a listening tour of those who work on tech projects in agencies.

3. Communications. It never ceases to amaze me that the top communications professional in every federal agency is a political appointee. And every incoming administration — regardless of partisan affiliation — tends to fill these positions with campaign comms people who helped them win the election. Unfortunately, the type of comms that’s needed to win an election (which requires appealing to only some people) is not the same as the type of comms that’s needed to be accountable to the public as a whole, 365 days per year. Over and over, the comms people that White Houses install focus on speaking to their political base and to Congress. This is all well and good if the only comms need is to negotiate policy outcomes. But the partisan perversion of comms within agencies has another outcome — it delegitimizes the agency among members of the public who are not affiliated with that political party, not to mention the wide swath of the public that is outright disgusted by all partisan tomfoolery. If your political interest is to eliminate the federal government, undermining the legitimacy of federal agencies benefits you. If that’s not your goal, you need to rethink your approach to communications. Right now, every agency needs a crisis comms expert at the helm to regain control over the agency’s narrative. When things are more stable, they need strategic comms professionals who can build a plan for re-legitimization. Each agency also needs an org comms expert whose job, like a VP-Eng’s, is to repair internal communications infrastructure so that information can effectively flow. Most politicians and government watchdogs think that the key to greater transparency is to increase oversight, just as progressives did after Nixon. But given how broken comms is in all of these agencies, turning up the heat through FOIA, GAO, and Congressional hearings will not increase accountability right now; it will increase breakage. Inside tech companies, comms is often seen as soft, squishy, irrational work, an afterthought that should not be prioritized. But comms, like HR, is the infrastructure that makes other things possible. A VP-Eng needs a comms counterpart working alongside them to achieve any organizational transformation.

Addressing these three seemingly non-tech issues would do more to enable a tech-forward government than any new-fangled shiny tech object. There is so much repair work to be done inside government. Yet, as I listen to those I know in Silicon Valley talk about all of the ways they wish to “fix government,” I fear that we will see a significant flood of solutionism when what’s needed most is humility and curiosity. Humility to understand that the structure of governmental agencies exists in response to the never-ending flow of solutionist interventions. And curiosity to understand how and why roadblocks and barriers exist — and which ones to strategically eradicate to empower civil servants who are devoted to ensuring that government functions for the long-term, regardless of who is in power. Grand visioning has its role, but when infrastructure is breaking all around us, we need debuggers and maintenance people first and foremost. We need people who find joy in the invisible work of just making a system function, of recognizing that technical systems require the right organizational structures to thrive. This is the mindset a VP-Eng brings to the table.

In conclusion… If you are working on the transition or planning to jump into government in January, please spend some time understanding why the system is the way it is. If you are a tech person, do not presume you know based on your experience with other broken systems or based on what you read in the news; take the time to learn. If you are not a tech person, do not assume that tech can fix what politics can’t; this is a classic mistake with a long history. If the goal is truly to “build back better,” it requires starting with repairing the infrastructure. Without this, you are building on quicksand.

Teens Are Addicted to Socializing, Not Screens

Screenagers in the time of coronavirus.

(This was originally written for OneZero.)

If you’re a parent trying to corral your children into attending “school” online, you’ve probably had the joy of witnessing a complete meltdown. Tantrums are no longer the domain of two-year-olds; 15-year-olds are also kicking and screaming. Needless to say, so are the fortysomethings. Children are begging to go outside. Teenagers desperately want to share physical space with their friends. And parents are begging their kids to go online so that they themselves can get some downtime. These are just some of the ways in which today’s reality seems upside down.

I started studying teenagers’ use of social media in the early 2000s when Xanga and LiveJournal were cool. I watched as they rode the waves of MySpace and Facebook, into the realms of Snap and Instagram. My book It’s Complicated: The Social Lives of Networked Teens unpacks some of the most prevalent anxieties adults have about children’s use of technology, including the nonstop fear-inducing message that children are “addicted” to their phones, computers, and the internet. Needless to say, I never imagined how conditions might change when a global pandemic unfolded.

I cannot remember a period in my research when parents weren’t wringing their hands about kids’ use of screens. The tone that parents took paralleled the tone their parents took over heavy metal and rock music, the same one their grandparents had when they spoke of the evils of comic books. Moral panics are consistent — but the medium that the panic centers on changes. Still, as with each wave of moral panic, there’s supposedly something intrinsic to the new medium that makes it especially horrible for young people. Cognizant of this history and having gone deep on social media activities with hundreds of teenagers, I pushed back and said that it wasn’t the technology teens were addicted to; it was their friends. Adults rolled their eyes at me, just as their teens rolled their eyes at them.

Now, nearly a month into screen-based schooling en masse, I’ve gotten to witness a global natural experiment like none I ever expected. What have we learned? The majority of young people are going batshit crazy living a life wholly online. I can’t help but think that Covid-19 will end up teaching all of us how important human interaction in physical space is. If this goes on long enough, might this cohort end up going further and hating screens?

Until the world started sheltering in place, most teens spent the majority of their days in school, playing sports, and participating in other activities, almost always in physical spaces with lots of humans co-present. True physical privacy is a luxury for most young people whose location in space is heavily monitored and controlled. Screens represented a break from the mass social. They also represented privacy from parents, an opportunity to socialize without parents lurking even when their physical bodies were forced to be at home. Parents hated the portals that kids held in their hands because their children seemed to disappear from the living room into some unknown void. That unknown void was those children’s happy place — the place where they could hang out with their friends, play games, and negotiate a life of their own.

Now, with Covid-19, school is taught through video. Friends are through video. Activities are through video. There are even videos for gym and physical sport. Religious gatherings are through video. Well-intended adults are volunteering to step in and provide more video-based opportunities for young people. Video may have killed the radio star, but Zoom and Google Hangouts are going to kill the delight and joy in spending all day in front of screens.

The majority of young people are going batshit crazy living a life wholly online.

Fatigue is setting in. Sure, making a TikTok video with friends is still fun, but there’s a limit to how much time anyone can spend on any app — even teens. Give it another month and there will be kids dropping out of school or throwing their computers against the wall. (Well, I know of two teens who have already done the latter with their iPads.) Young people are begging to go outside, even if that means playing sports with their parents. Such things might not be surprising for a seven-year-old, but when your 15-year-old asks to play soccer with you, do it! As a child of the ‘80s, I was stunned during my fieldwork to learn that most contemporary kids didn’t find ways to sneak out of the house once their parents were asleep because going online was so much easier. I can’t help but wonder if sneaking out is becoming a thing once again.

As we’re all stuck at home, teens are still doing everything possible to escape into their devices to maintain relationships, socialize, and have fun. Their shell-shocked parents are ignoring any and all screen time limitations as they too crave escapism (people who study fortysomethings: explain Animal Crossing to me!!?). But when physical distancing is no longer required, we’ll get to see that social closeness often involves meaningful co-presence with other humans. Adults took this for granted, but teens had few other options outside of spaces heavily controlled by adults. They went online not because the technology is especially alluring, but because it has long been the most viable option for having meaningful connections with friends given the way that their lives have been structured. Maybe now adults will start recognizing what my research showed: youth are “addicted” to sociality, not technology for technology’s sake.

Joyfully Geeking Out

2020 US Census: Everybody counts!

In 2015, I was invited to join the Commerce Department’s Data Advisory Council. Truth be told, I was kinda oblivious to what this was all about. I didn’t know much about how the government functioned. I didn’t know what a “FACA” was. (Turns out that the “Federal Advisory Committee Act” is a formal government thing.) Heck, I only had the most cursory understanding of the various agencies and bureaus associated with the Commerce Department. But I did understand one thing: the federal government has some of the most important data infrastructure out there. Long before discussions about our current tech industry, government agencies have been trying to wrangle data to help both the public and industry. The Weather Channel wouldn’t be able to do its work without NOAA (the National Oceanic and Atmospheric Administration). Standards would go haywire without NIST (the National Institute of Standards and Technology). And we wouldn’t be able to apportion our representatives without Census.

Over the last few years, I have fallen madly in love with the data puzzles that underpin the census. Thanks to Margo Anderson’s “The American Census,” I learned that the history of the census is far far far messier than I ever could’ve imagined.  An amazing network of people dedicated to helping ensure that people are represented have given me a crash course into the longstanding battle over collecting the best data possible. As the contours of the 2020 census became more visible, it also became clear that it would be the perfect networked fieldsite for trying to understand two questions that have been tickling my brain: 

  1. What makes data legitimate?
  2. What does it take to secure data infrastructure? 

(For any STS scholar reading this, add scare-quotes to all of the words that make you want to scream.)

Over the last two years, I’ve been learning as much as I could possibly learn about the census. I’ve also been dipping my toe into archival work and trying to strengthen my theoretical toolkit to handle the study of organizations and large scale operations. And now we’re a matter of days away from when everyone in the country will receive their invitation to participate in the census, and so I’m throwing myself into what is bound to be a whirlwind in order to fully understand how an operation of this magnitude unfolds.  

While I have produced a living document to explain how differential privacy is part of the 2020 census, I’ve mostly not been writing much about the research I’m doing. To be honest, I’m relishing taking the time to deeply understand something and to do the deep reflection I haven’t had the privilege of doing in almost a decade. 

If I’ve learned anything from the world of census junkies, it’s that this decadal process is raw insanity, full of unexpected twists and turns. Yet, what I can say is that it’s also filled with some of the most civic-minded people that I’ve ever encountered. There are so many different stakeholders working to get a good count in order to guarantee that everyone in this country is counted, represented, and acknowledged. This is important, not just for Congressional apportionment and redistricting, but also to make sure that funding is properly allocated, that social science research can inform important decision-making processes, and that laws designed to combat discrimination are enforced.

I’m sharing this now, not because I have new thinking to offer, but because I want folks to understand why I might be rather unresponsive to non-census-obsessives over the next few months. I want to dive head-first into this research and relish the opportunity to be surrounded by geeks engaged in a phenomenal civic effort. For those who aren’t thinking full-time about the census, please understand that I’m going to turn down requests for my time this spring and my email response time may also falter. 

Of course, if you want to make me smile, send me photographs of cool census stuff happening in your community! Or interesting census content that comes through your feeds! And if you want to go hog wild, get involved. Census is hiring. Or you could make census-related content to encourage others to participate. Or at the very least, tell everyone you know to participate; they’ll get their official invitation starting March 12.

The US census has been taking place every 10 years since 1790. It is our democracy’s data infrastructure. And it was “big data” before there was big data. It’s also the cornerstone of countless advances in statistics and social scientific knowledge. Understanding the complexity of the census is part and parcel of understanding where our data-driven world is headed. When this is all over, I hope that I’ll have a lot more to contribute to that conversation. In the meantime, forgive me for relishing my obsessive focus.

Facing the Great Reckoning Head-On

I was recently honored by the Electronic Frontier Foundation. Alongside Oakland Privacy and William Gibson, I received a 2019 Barlow/Pioneer Award. I was asked to give a speech. As I reflected on what got me to this place, I realized I needed to reckon with how I have benefited from men whose actions have helped uphold a patriarchal system that has hurt so many people. I needed to face my past in order to find a way to create space to move forward.

This is the speech I gave in accepting the award. I hope sharing it can help others who are struggling to make sense of current events, as well as those who want to make the tech industry do better.

— —

I cannot begin to express how honored I am to receive this award. My awe of the Electronic Frontier Foundation dates back to my teenage years. EFF has always inspired me to think deeply about what values should shape the internet. And so I want to talk about values tonight, and what happens when those values are lost, or violated, as we have seen recently in our industry and institutions.

But before I begin, I would like to ask you to join me in a moment of silence out of respect to all of those who have been raped, trafficked, harassed, and abused. For those of you who have been there, take this moment to breathe. For those who haven’t, take a moment to reflect on how the work that you do has enabled the harm of others, even when you never meant to.

<silence>

The story of how I got to be standing here is rife with pain and I need to expose part of my story in order to make visible why we need to have a Great Reckoning in the tech industry. This award may be about me, but it’s also not. It should be about all of the women and other minorities who have been excluded from tech by people who thought they were helping.

The first blog post I ever wrote was about my own sexual assault. It was 1997 and my audience was two people. I didn’t even know what I was doing would be called blogging. Years later, when many more people started reading my blog, I erased many of those early blog posts because I didn’t want strangers to have to respond to those vulnerable posts. I obfuscated my history to make others more comfortable.

I was at the MIT Media Lab from 1999–2002. At the incoming student orientation dinner, an older faculty member sat down next to me. He looked at me and asked if love existed. I raised my eyebrow as he talked about how love was a mirage, but that sex and pleasure were real. That was my introduction to Marvin Minsky and to my new institutional home.

My time at the Media Lab was full of contradictions. I have so many positive memories of people and conversations. I can close my eyes and flash back to laughter and late night conversations. But my time there was also excruciating. I couldn’t afford my rent and did some things that still bother me in order to make it all work. I grew numb to the worst parts of the Demo or Die culture. I witnessed so much harassment, so much bullying that it all started to feel normal. Senior leaders told me that “students need to learn their place” and that “we don’t pay you to read, we don’t pay you to think, we pay you to do.” The final straw for me was when I was pressured to work with the Department of Defense to track terrorists in 2002.

After leaving the Lab, I channeled my energy into V-Day, an organization best known for producing “The Vagina Monologues,” but whose daily work is focused on ending violence against women and girls. I found solace in helping build online networks of feminists who were trying to help combat sexual assault and a culture of abuse. To this day, I work on issues like trafficking and combating the distribution of images depicting the commercial sexual abuse of minors on social media.

By 2003, I was in San Francisco, where I started meeting tech luminaries, people I had admired so deeply from afar. One told me that I was “kinda smart for a chick.” Others propositioned me. But some were really kind and supportive. Joi Ito became a dear friend and mentor. He was that guy who made sure I got home OK. He was also that guy who took being called-in seriously, changing his behavior in profound ways when I challenged him to reflect on the cost of his actions. That made me deeply respect him.

I also met John Perry Barlow around the same time. We became good friends and spent lots of time together. Here was another tech luminary who had my back when I needed him to. A few years later, he asked me to forgive a friend of his, a friend whose sexual predation I had witnessed first hand. He told me it was in the past and he wanted everyone to get along. I refused, unable to convey to him just how much his ask hurt me. Our relationship frayed and we only talked a few times in the last few years of his life.

So here we are… I’m receiving this award, named after Barlow, less than a week after Joi resigned from an institution that nearly destroyed me after he socialized with and took money from a known pedophile. Let me be clear — this is deeply destabilizing for me. I am here today in no small part because I benefited from the generosity of men who tolerated and, in effect, enabled unethical, immoral, and criminal men. And because of that privilege, I managed to keep moving forward even as the collateral damage of patriarchy stifled the voices of so many others around me. I am angry and sad, horrified and disturbed because I know all too well that this world is not meritocratic. I am also complicit in helping uphold these systems.

What’s happening at the Media Lab right now is emblematic of a broader set of issues plaguing the tech industry and society more generally. Tech prides itself on being better than other sectors. But often it’s not. As an employee of Google in 2004, I watched from the second floor as my male colleagues ogled women coming into our building’s cafeteria, making lewd comments. When I first visited TheFacebook in Palo Alto, I was greeted by a hyper-sexualized mural and a knowing look from the admin, one of the only women around. So many small moments seared into my brain, building up to a story of normalized misogyny. Fast forward fifteen years and there are countless stories of executive misconduct and purposeful suppression of the voices of women and sooooo many others whose bodies and experiences exclude them from the powerful elite. These are the toxic logics that have infested the tech industry. And, as an industry obsessed with scale, these are the toxic logics that the tech industry has amplified and normalized. The human costs of these logics continue to grow. Why are we tolerating sexual predators and sexual harassers in our industry? That’s not what inclusion means.

I am here today because I learned how to survive and thrive in a man’s world, to use my tongue wisely, watch my back, and dodge bullets. I am being honored because I figured out how to remove a few bricks in those fortified walls so that others could look in. But this isn’t enough.

I am grateful to EFF for this honor, but there are so many underrepresented and under-acknowledged voices out there trying to be heard who have been silenced. And they need to be here tonight and they need to be at tech’s tables. Around the world, they are asking for those in Silicon Valley to take their moral responsibilities seriously. They are asking everyone in the tech sector to take stock of their own complicity in what is unfolding and actively invite others in.

And so, if my recognition means anything, I need it to be a call to arms. We need to all stand up together and challenge the status quo. The tech industry must start to face The Great Reckoning head-on. My experiences are all too common for women and other marginalized peoples in tech. And it is also all too common for well-meaning guys to do shitty things that make it worse for those they believe they’re trying to support.

If change is going to happen, values and ethics need to have a seat in the boardroom. Corporate governance goes beyond protecting the interests of capitalism. Change also means that the ideas and concerns of all people need to be a part of the design phase and the auditing of systems, even if this slows down the process. We need to bring back and reinvigorate the profession of quality assurance so that products are not launched without systematic consideration of the harms that might occur. Call it security or call it safety, but it requires focusing on inclusion. After all, whether we like it or not, the tech industry is now in the business of global governance.

“Move fast and break things” is an abomination if your goal is to create a healthy society. Taking short-cuts may be financially profitable in the short-term, but the cost to society is too great to be justified. In a healthy society, we accommodate differently abled people through accessibility standards, not because it’s financially prudent but because it’s the right thing to do. In a healthy society, we make certain that the vulnerable amongst us are not harassed into silence because that is not the value behind free speech. In a healthy society, we strategically design to increase social cohesion because binaries are machine logic not human logic.

The Great Reckoning is in front of us. How we respond to the calls for justice will shape the future of technology and society. We must hold accountable all who perpetuate, amplify, and enable hate, harm, and cruelty. But accountability without transformation is simply spectacle. We owe it to ourselves and to all of those who have been hurt to focus on the root of the problem. We also owe it to them to actively seek to not build certain technologies because the human cost is too great.

My ask of you is to honor me and my story by stepping back and reckoning with your own contributions to the current state of affairs. No one in tech — not you, not me — is an innocent bystander. We have all enabled this current state of affairs in one way or another. Thus, it is our responsibility to take action. How can you personally amplify underrepresented voices? How can you intentionally take time to listen to those who have been injured and understand their perspective? How can you personally stand up to injustice so that structural inequities aren’t further calcified? The goal shouldn’t be to avoid being evil; it should be to actively do good. But it’s not enough to say that we’re going to do good; we need to collectively define — and hold each other to — shared values and standards.

People can change. Institutions can change. But doing so requires all who harmed — and all who benefited from harm — to come forward, admit their mistakes, and actively take steps to change the power dynamics. It requires everyone to hold each other accountable, but also to aim for reconciliation not simply retribution. So as we leave here tonight, let’s stop designing the technologies envisioned in dystopian novels. We need to heed the warnings of artists, not race head-on into their nightmares. Let’s focus on hearing the voices and experiences of those who have been harmed because of the technologies that made this industry so powerful. And let’s collaborate with and design alongside those communities to fix these wrongs, to build just and empowering technologies rather than those that reify the status quo.

Many of us are aghast to learn that a pedophile had this much influence in tech, science, and academia, but so many more people face the personal and professional harm of exclusion, the emotional burden of never-ending subtle misogyny, the exhaustion from dodging daggers, and the nagging feeling that you’re going crazy as you try to get through each day. Let’s change the norms. Please help me.

Thank you.

 

we’re all taught how to justify history as it passes by
and it’s your world that comes crashing down
when the big boys decide to throw their weight around
but he said just roll with it baby make it your career
keep the home fires burning till america is in the clear

i think my body is as restless as my mind
and i’m not gonna roll with it this time
no, i’m not gonna roll with it this time
— Ani DiFranco

Agnotology and Epistemological Fragmentation

On April 17, 2019, I gave a talk at the Digital Public Library of America conference (DPLAfest). This is the transcript of that talk.

Illustration by Jim Cooke

I love the librarian community. You all are deeply committed to producing, curating, and enabling access to knowledge. Many of you embraced the internet with glee, recognizing the potential to help so many more people access critical information. Many of you also saw the democratic and civic potential of this new technology, not to mention the importance of an informed citizenry in a democratic world. Yet, slowly and systematically, a virus has spread, using technology to tear at the social fabric of public life.

This shouldn’t be surprising. After all, most of Silicon Valley in the late 90s and early aughts was obsessed with Neal Stephenson’s Snow Crash. How did they not recognize that this book was dystopian?

Slowly and systematically, a virus has spread, using technology to tear at the social fabric of public life.

Epistemology is the term that describes how we know what we know. Most people who think about knowledge think about the processes of obtaining it. Ignorance is often assumed to be not-yet-knowledgeable. But what if ignorance is strategically manufactured? What if the tools of knowledge production are perverted to enable ignorance? In 1995, Robert Proctor and Iain Boal coined the term “agnotology” to describe the strategic and purposeful production of ignorance. In an edited volume called Agnotology, Proctor and Londa Schiebinger collect essays detailing how agnotology is achieved. Whether we’re talking about the erasure of history or the undoing of scientific knowledge, agnotology is a tool of oppression by the powerful.

Swirling all around us are conversations about how social media platforms must get better at content management. Last week, Congress held hearings on the dynamics of white supremacy online and the perception that technology companies engage in anti-conservative bias. Many people who are steeped in history and committed to evidence-based decision-making are experiencing a collective sense of being gaslit—the concept that emerges from a film on domestic violence to explain how someone’s sense of reality can be intentionally destabilized by an abuser. How do you process a Black conservative commentator testifying before the House that the Southern strategy never happened and that white nationalism is an invention of the Democrats to “scare black people”? Keep in mind that this commentator was intentionally trolled by the terrorist in Christchurch; she responded to this atrocity with tweets containing “LOL” and “HAHA.”

Speaking of Christchurch, let’s talk about Christchurch. We all know the basic narrative. A terrorist espousing white nationalist messages livestreamed himself brutally murdering 50 people worshipping in a New Zealand mosque. The video was framed like a first-person shooter from a video game. Beyond the atrocity itself, what else was happening?

He produced a media spectacle. And he learned how to do it by exploiting the information ecosystem we’re currently in.

This terrorist understood the vulnerabilities of both social media and news media. The message he posted on 8chan announcing his intention included links to his manifesto and other sites, but it did not include a direct link to Facebook; he didn’t want Facebook to know that the traffic came from 8chan. The video included many minutes of him driving around, presumably to build audience but also, quite likely, in an effort to evade any content moderators that might be looking. He titled his manifesto with a well-known white nationalist call sign, knowing that the news media would cover the name of the manifesto, which in turn, would prompt people to search for that concept. And when they did, they’d find a treasure trove of anti-Semitic and white nationalist propaganda. This is the exploitation of what’s called a “data void.” He also trolled numerous people in his manifesto, knowing full well that the media would shine a spotlight on them and create distractions and retractions and more news cycles. He produced a media spectacle. And he learned how to do it by exploiting the information ecosystem we’re currently in. Afterwards, every social platform was inundated with millions and millions of copies and alterations of the video uploaded through a range of fake accounts, either to burn the resources of technology companies, shame them, or test their guardrails for future exploits.

What’s most notable about this terrorist is that he’s explicit in his white nationalist commitments. Most of those who are propagating white supremacist logics are not. Whether we’re talking about the so-called “alt-right” who simply ask questions like “Are Jews people?” or the range of people who argue online for racial realism based on long-debunked fabricated science, there’s an increasing number of people who are propagating conspiracy theories or simply asking questions as a way of enabling and magnifying white supremacy. This is agnotology at work.

What’s at stake right now is not simply about hate speech vs. free speech or the role of state-sponsored bots in political activity. It’s much more basic. It’s about purposefully and intentionally seeding doubt to fragment society. To fragment epistemologies. This is a tactic that was well-honed by propagandists. Consider this Russia Today poster.

But what’s most profound is how it’s being done en masse now. Teenagers aren’t only radicalized by extreme sites on the web. It now starts with a simple YouTube query. Perhaps you’re a college student trying to learn a concept like “social justice” that you’ve heard in a classroom. The first result you encounter is from PragerU, a conservative organization that is committed to undoing so-called “leftist” ideas that are taught at universities. You watch the beautifully produced video, which promotes many of the tenets of media literacy. Ask hard questions. Follow the money. The video offers a biased and slightly conspiratorial take on what “social justice” is, suggesting that it’s not real, but instead a manufactured attempt to suppress you. After you watch this, you watch more videos of this kind from people who are professors and other apparent experts. This all makes you think differently about this term in your reading. You ask your professor a question raised by one of the YouTube influencers. She reacts in horror and silences you. The videos all told you to expect this. So now you want to learn more. You go deeper into a world of people who are actively anti-“social justice warriors.” You’re introduced to anti-feminism and racial realism. How far does the rabbit hole go?

One of the best ways to seed agnotology is to make sure that doubtful and conspiratorial content is easier to reach than scientific material.

YouTube is the primary search engine for people under 25. It’s where high school and college students go to do research. Digital Public Library of America works with many phenomenal partners who are all working to curate and make available their archives. Yet, how much of that work is available on YouTube? Most of DPLA’s partners want their content on their own sites. They want to be destination sites that people visit. Much of this content is visual and textual, but are there explainers about it on YouTube? How many scientific articles have video explainers associated with them?

Herein lies the problem. One of the best ways to seed agnotology is to make sure that doubtful and conspiratorial content is easier to reach than scientific material. And then to make sure that what scientific information is available, is undermined. One tactic is to exploit “data voids.” These are areas within a search ecosystem where there’s no relevant data; those who want to manipulate media purposefully exploit these. Breaking news is one example of this. Another is to co-opt a term that was left behind, like social justice. But let me offer you another. Some terms are strategically created to achieve epistemological fragmentation. In the 1990s, Frank Luntz was the king of doing this with terms like partial-birth abortion, climate change, and death tax. Every week, he coordinated congressional staffers and told them to focus on the term of the week and push it through the news media. All to create a drumbeat.

Today’s drumbeat happens online. The goal is no longer just to go straight to the news media. It’s to first create a world of content and then to push the term through to the news media at the right time so that people search for that term and receive specific content. Terms like caravan, incel, crisis actor. By exploiting the data void, or the lack of viable information, media manipulators can help fragment knowledge and seed doubt.

Media manipulators are also very good at messing with structure. Yes, they optimize search engines, just like marketers. But they also look to create networks that are hard to undo. YouTube has great scientific videos about the value of vaccination, but countless anti-vaxxers have systematically trained YouTube to make sure that people who watch the Centers for Disease Control and Prevention’s videos also watch videos asking questions about vaccinations or videos of parents who are talking emotionally about what they believe to be the result of vaccination. They comment on both of these videos, they watch them together, they link them together. This is the structural manipulation of media. Journalists often get caught up in telling “both sides,” but the creation of sides is a political project.

The creation of sides is a political project.

And this is where you come in. You all believe in knowledge. You believe in making sure the public is informed. You understand that knowledge emerges out of contestation, debate, scientific pursuit, and new knowledge replacing old knowledge. Scholars are obsessed with nuance. Producers of knowledge are often obsessed with credit and ownership. All of this is being exploited to undo knowledge today. You will not achieve an informed public simply by making sure that high quality content is publicly available and presuming that credibility is enough while you wait for people to come find it. You have to understand the networked nature of the information war we’re in, actively be there when people are looking, and blanket the information ecosystem with the information people need to make informed decisions.

Thank you!