Author Archives: zephoria

Differential Perspectives

This update is to let you know about a new essay that’s now available online as an in-press version: “Differential Perspectives: Epistemic Disconnects Surrounding the US Census Bureau’s Use of Differential Privacy.” Click here to read the full essay.

When the U.S. Census Bureau announced its intention to modernize its disclosure avoidance procedures for the 2020 Census, it sparked a controversy that is still underway. The move to differential privacy introduced technical and procedural uncertainties, leaving stakeholders unable to evaluate the quality of the data. More importantly, this transformation exposed the statistical illusions and limitations of census data, weakening stakeholders’ trust in the data and in the Census Bureau itself.

Jayshree Sarathy and I have been trying to make sense of the epistemic currents of this controversy. In other words, how do divergent ways of sense-making shape people’s understanding of census data – and what does that tell us about how people deal with census data controversies?

We wrote an essay for an upcoming special issue of Harvard Data Science Review that will focus on differential privacy and the 2020 Census. While the special issue is not yet out, we were given permission to post our in-press essay online. And so I thought I’d share it here for those of you who relish geeky writings about census, privacy, politics, and controversies. This paper draws heavily on Science and Technology Studies (STS) theories and is based on ethnographic fieldwork. In it, we analyze the current controversy over differential privacy as a battle over uncertainty, trust, and legitimacy of the Census. We argue that rebuilding trust will require more than technical repairs or improved communication; it will require reconstructing what we identify as a ‘statistical imaginary.’ Check out our full argument here.

For those who prefer the tl;dr video version, I sketched out some of these ideas at the Microsoft Research Summit in the fall.

We are still working through these ideas, so by all means feel free to share feedback or critiques; we relish them.

Crisis Text Line, from my perspective

Like everyone who cares about Crisis Text Line and the people we serve, I have spent the last few days reflecting on recent critiques about the organization’s practices. Having spent my career thinking about and grappling with tech ethics and privacy issues, I knew that – had I not been privy to the details and context that I know – I would be outraged by what folks heard this weekend. I would be doing what many of my friends and colleagues are doing, voicing anger and disgust. But as a founding board member of Crisis Text Line, who served as board chair from June 2020 until the beginning of January 2022, I also have additional information that shaped how I thought about these matters and informed my actions and votes over the last eight years.

As a director, I am currently working with others on the board and in the organization to chart a path forward. As was just announced, we have concluded that we were wrong to share texter data with Loris and have ended our data-sharing agreement, effective immediately. We had not shared data since we changed leadership; the board had chosen to prioritize other organizational changes to support our staff, but this call-to-action was heard loud and clear and shifted our priorities. But that doesn’t mean that the broader questions being raised are resolved.

Texters come to us in their darkest moments. What it means to govern the traces they leave behind looks different than what it means to govern other types of data. We are always asking ourselves when, how, and whether we should leverage individual conversations borne out of crisis to better help that individual, our counselors, and others who are suffering. These are challenging ethical questions with no easy answer.

What follows is how I personally thought through, balanced, and made decisions related to the trade-offs around data that we face every day at Crisis Text Line. This has been a journey for me and everyone else involved in this organization, precisely because we care so deeply. I owe it to the people we serve, the workers of Crisis Text Line, and the broader community who are challenging me to come forward to own my decisions and role in this conversation. This is my attempt to share both the role that I played and the framework that shaped my thinking. Since my peers are asking for this to be a case study in tech ethics, I am going into significant detail. For those not seeking such detail, I apologize for the length of this. 

Most of the current conversation is focused on the ethics of private-sector access to messages from texters in crisis. These are important issues that I will address, but I want to walk through how earlier decisions influenced that decision. I also want to share how the ethical struggles we face are not as simple as a binary around private-sector access. There are ethical questions all the way down.

What follows here is, I want to emphasize, my personal perspective, not the perspective of the organization or the board. As a director of Crisis Text Line, I have spent the last 8 years trying to put what I know about tech ethics into practice. I am grateful that those who care about tech ethics are passionate about us doing right by our texters. We have made changes based on what we have heard from folks this weekend. But those changes are not enough. We need to keep developing and honing guiding principles to govern our work. My goal has been and continues to be ensuring ethical practices while navigating the challenges of governing both an organization and data. Putting theory into practice continues to be more challenging than I ever imagined. Given what has unfolded, I would also love advice from those who care as I do about both mental health and tech ethics.

First: Why data?

Even before we launched the CTL service, I knew that data would play a significant role in the future of the organization. My experience with tech and youth culture was why I was asked to join the board. Delivering a service that involved asynchronous interactions via text would invariably result in the storage of data. Storing data would be needed to deliver the service; the entire system was necessarily designed to enable handoffs between counselors and to allow texters to pick up conversations hours (or days) later.

Storing data immediately prompted three key questions:

  1. How long would we store the data that users provided to us?
  2. Could we create a secure system?
  3. Under what conditions would we delete data?

As a board, we realized the operational necessity of stored data, which meant an investment in the creation of a secure system and deep debate over our data retention policies. We decided that anyone should have the right to remove their data at any point, a value I strongly agreed with. The implementation of this policy relied on training all crisis counselors to share this info with texters if they asked for it; we chose to implement the procedure by introducing a codeword that users could share to trigger a deletion of their data. (This was also documented as part of the terms of service, which texters were pointed to when they first contacted us. I know that no one in crisis reads lawyer-speak to learn this, which is why I was more interested in ensuring that our counselors knew this.)

Conducting the service would require storing data, but addressing the needs of those in crises required grappling with how data would be used more generally. Some examples of how data are used in the service: 

  • When our counselors want to offer recommendations for external services, they draw on outside data to bring into the conversation; this involves using geographic information texters provide to us.
  • Our supervisors review conversations both to support counselors in real time and to give feedback later, with an eye towards always improving the quality of conversations.

Our initial training program was designed based on what we could learn from other services, academic literature, and guidance from those who had been trained in social work and psychology. Early on, we began to wonder how the conversations that took place on our platform could and should inform the training itself. We knew that counselors gained knowledge through experience, and that they regularly mentored new counselors on the platform. But could we construct our training so that all counselors got to learn from the knowledge developed by those who came before them? 

This would mean using texter data for a purpose that went beyond the care and support of that individual. Yes, the Terms of Service allowed this, but this is not just a legal question; it’s an ethical question. Given the trade-offs, I made a judgment call early on that not only was using texter data to strengthen training of counselors without their explicit consent ethical, but that to not do this would be unethical. Our mission is clear: help people in crisis. To do this, we need to help our counselors better serve texters. We needed to help counselors learn and grow and develop skills with which they can help others. I supported the decision to use our data in this way.

A next critical turning point concerned scale. My mantra at Crisis Text Line has always been to focus on responsible scaling, not just scaling for scaling’s sake. But we provide a service that requires a delicate balance of available counselors to meet the needs of incoming texters. This meant that we had to think about how to predict the need and how to incentivize counselors to help out at spike moments. And still, there were often spikes where the need exceeded the availability of counselors. This led us to think about our ethical responsibilities in these moments. And this led to another use of data:

  • When there are spikes in the service without enough counselors, we triage incoming requests to ensure that those most at physical risk get served fastest; this requires analyzing the incoming texts even before a conversation starts.

This may not seem like a huge deal, but it’s an ethical decision that I’ve struggled with for years. How do you know who is in most need from just intake messages? Yes, there are patterns, but we’ve also learned over the years that these are not always predictable. More harrowingly, we know retrospectively that these signals can be biased. Needless to say, I would simply prefer for us to serve everyone, immediately. But when that’s not possible, what’s our moral and ethical responsibility? Responding to incoming requests in order might meet some people’s definition of “fair,” but is that ethical? Especially when we know that when people are in the throes of a suicide attempt, time is of the essence? I came to the conclusion that we have an ethical responsibility to use our data to work to constantly improve the triage algorithm, to do the best we can to identify those for whom immediate responses can save a life. This means using people’s data without their direct consent, to leverage one person’s data to help another. 

Responsible scaling has introduced a series of questions over the years. One that I’ve reflected on for years, but that we’ve never implemented, is: Should we attempt to match need to expertise? In other words, should our counselors specialize? To date, we haven’t, but it’s something I think a lot about. But there are also questions that have been raised that we have intentionally abandoned. For example, there was once a board meeting where the question of automation came up. We already use some automation tools in training and for intake; should some conversations be automated? This was one of those board meetings where I put my foot down. Absolutely not. Data could be used to give our counselors superpowers, but centering this service on humans was essential. In this context, my mantra has always been augmentation not automation. The board and organization embraced this mantra, and I’m glad for it.

Next: Data for Research

From early on, researchers came to Crisis Text Line asking for access to data. This prompted even more reflection. We had significant data and we were seeing trends that had significant implications for far more than our service. We started reporting out key trends, highlighting patterns that we then published on our website. I supported this effort because others in the ecosystem told us it helped them to learn from the patterns that we were seeing. This then led to the more complicated issue of whether or not to allow external researchers to study our data with an eye towards scholarship. 

I’m a scholar. I know how important research is and can be. I knew how little data exists in the mental health space, how much we had tried to learn from others, how beneficial knowledge could be to others working in the mental health ecosystem. I also knew that people who came to us in crisis were not consenting to be studied. Yes, there was a terms of service that could contractually permit such use, but I knew darn straight that no one would read it, and advised everyone involved to proceed as such. 

I have also tracked the use of corporate data for research for decades, speaking up against some of Facebook’s experiments. Academic researchers often want to advance knowledge by leveraging corporate data, but they do not necessarily grapple with the consequences of using data beyond IRB requirements. There have been heated debates in my field about whether or not it is ethical to use corporate trace data without the consent of users to advance scientific knowledge. I have had a range of mixed feelings about this, but have generally come out in opposition to private trace data being used for research.

So when faced with a similar question at Crisis Text Line, I had to do a lot of soul searching. Our mission is to help people. Our texters come to us in their darkest hours. Our data was opening up internal questions right and left about how to best support them. We don’t have the internal resources to analyze the data to answer all of our questions, to improve our knowledge base in ways that can help texters. I knew that having additional help from researchers could help us learn in ways that would improve training of counselors and help people down the line. I also knew that what we were learning internally might be useful to other service providers in the mental health space and I felt queasy that we were not sharing what we had learned to help others.

Our organization does not exist for researchers to research. Our texters do not come to us to be research subjects. But our texters do come to us for help. And we do help them by leveraging what we learn helping others, including researchers. Texters may not come to us to pay it forward for the next person in need, but in effect, that’s what their engagement with us was enabling. I see that as an ethical use of data, one predicated on helping counselors and texters through experience mediated by data. The question in my mind then was: what is the relationship of research to this equation?

I elected to be the board member overseeing the research efforts. We have explored – and continue to explore – the right way to engage researchers in our work. We know that they are seeking data for their own interests, but our interest is clear: can their learnings benefit our texters and counselors, in addition to other service providers and the public health and mental health ecosystem? To this end, we have always vetted research proposals and focused on research that could help our mission, not just satisfy researcher curiosity.

Needless to say, privacy was a major concern from day one. Privacy was a concern even before we talked about research; we built privacy processes even for internal analyses of data. But when research is involved, privacy concerns are next-level. Lots of folks have accused us of being naive about reidentification over the last few days, which I must admit has been painful to hear given how much time I spend thinking about and dealing with reidentification in other contexts. I know that reidentification is possible and that was at the heart and soul of our protocols. Researchers have constrained access to scrubbed data under contract precisely because there’s a possibility that, even with our scrubbing procedures, reidentification might be possible. But, we limited data to minimize reidentification risks and added contractual procedures to explicitly prevent reidentification.

When designing these protocols, my goal was to create the conditions where we could learn from people in crisis to help others in crisis without ever, in any way, adding to someone’s crisis. And this means privacy-first.

More generally though, the research question opened up a broader set of issues in my mind. Our service can directly help individuals. What can and should we do to advance mental health more generally? What can and should we be providing to the field? What is our responsibility to society outside our organization?

Next: Training as a Service

Our system is based on volunteers who we train to give counsel. As is true in any volunteer-heavy context, volunteers come and go. Training is resource intensive, but essential for the service. Repeatedly, volunteers approached us as a board to tell us about the secondary benefits of the training. Yes, the training was designed to empower a counselor to communicate with a person who was in crisis, but these same skills were beneficial at work and in personal relationships. Our counselors kept telling us that crisis management training has value in the world outside our doors. This prompted us to reflect on the potential benefit of training far more people to manage crises, even if they did not want to volunteer for our service.

The founder of Crisis Text Line saw an opportunity and came to the board. We did not have the resources to simply train anyone who was interested. But HR teams at companies had both the need for, and the resources for, larger training systems. The founder proposed building a service that could provide us with a needed revenue stream. I don’t remember every one of the options we discussed, but I do know that we talked about building a separate unit in the organization to conduct training for a fee. This raised the worry that such a unit would be a distraction from our core focus. We did all see training as mission-aligned, but we needed to focus on the core service CTL was providing.

We were also struggling, as all non-profits do, with how to be sustainable. Non-profit fundraising is excruciating and fraught. We were grateful for all of the philanthropic organizations who made starting the organization possible, but sustaining philanthropic funding is challenging and has significant burdens. Program officers always want grantees to find other sources of money. There are traditional sources: foundations, individual donors, corporate social responsibility donations. In some contexts, there’s government funding, though at that time, government was slashing funding not increasing it. Funding in the mental health space is always scarce. And yet, as a board, we always had a fiduciary responsibility to think about sustainability.  

Many of the options in front of us concerned me deeply. We could pursue money by billing insurance companies, but this had a lot of obvious downsides to it. Many of the people we serve do not have access to insurance. Moreover, what insurers really want is our data, which we were strongly against. They weren’t alone – many groups wanted to buy our data outright. We were strongly against those opportunities as well. No selling of data, period. 

Big tech companies and other players were increasingly relying on CTL as their first response for people in crisis, without committing commensurate (or sometimes, any) resources to help offset that burden. This was especially frustrating because they had the resources to support those in crisis but had chosen not to, preferring to outsource the work but not support it. They believed that traffic was a good enough gift.

This was why we, as a board, were reflecting on whether or not we could build a revenue stream out of training people based on what we learned from training counselors. In the end, we opted not to run such an effort from within Crisis Text Line, to reduce the likelihood of distracting from our mission. Instead, we gave the founder of Crisis Text Line permission to start a new organization, with us retaining a significant share in the company; we also retained the right to a board seat. This new entity was structured as a for-profit company designed to provide a service to businesses, leveraging what we had learned helping people. This company, called Loris, planned on learning from us to build training tools for people who were not going to serve as volunteers for our service. Yet, the company was a separate entity and the board rejected any plan that involved full access to our systems. Instead, we opted to create a data-sharing agreement that paralleled the agreement we had created with researchers: controlled access to scrubbed data solely to build models for training that would improve mental health more broadly. We knew that it did not make sense for them to directly import our training modules; they would be training people in a different context. Yet, both they and we believed that there were lessons to be learned from our experiences, both qualitatively and quantitatively.

I struggled with this decision at the time and ever since. I could see both benefits and risks in sharing our data with another organization, regardless of how mission-aligned we were. We debated this in the boardroom; I pushed back on certain proposals. In the end, some of the board members at the time saw this decision through the lens of a potential financial risk reduction. If the for-profit company did well, we could receive dividends or sell our stake in order to fund the crisis work we were doing. I voted in favor of creating Loris for a different reason. If another entity could train more people to develop the skills our crisis counselors were developing, perhaps the need for a crisis line would be reduced. After all, I didn’t want our service to be needed; the fact that it is stems from a system that is deeply flawed. If we could build tools that combat the cycles of pain and suffering, we could pay forward what we were learning from those we served. I wanted to help others develop and leverage empathy.

This decision weighed heavily on me, but I did vote in favor of it. Knowing what I know now, I would not have. But hindsight is always clearer.

Existential Crisis

In June of 2020, our employees came to us with grave concerns about the state of the organization. This triggered many changes to the organization and a reckoning as a board. I stepped in as board chair. As we focused on addressing the issues raised by employees, I felt as though we needed to prioritize what they were telling us. My priority was to listen to our staff, center the needs of our workers and texters, learn from them, and focus on our team, core business, and organizational processes. We also needed to hire a permanent CEO. The concerns we received were varied and diverse, requiring us to prioritize what to focus on when. 

Data practices were not among the dominant concerns, but they were among the issues raised. The most significant data concern raised to us was whether our data practices were as strong as the board believed them to be. This prompted three separate, interlocking audits. We had already conducted a privacy and security audit, but we revisited it in greater depth. We also hired two additional independent teams to conduct audits around 1) data governance and 2) ethical use of and bias in data. I was the board member overseeing this work, pushing each of these efforts to probe more deeply, engaging a range of stakeholders along the way (including counselors, staff, partners, and domain experts).

I quickly learned that as much as scholars talk about the need to do audits of ethics/biases, there is not a good roadmap out there for doing this work, especially in the context of a fairly large-scale organization. As someone who cares deeply about this, I was glad to be pushing the edges and interrogating every process, but I also wanted us to have guidance on how to strengthen our efforts even further. There is always room to improve, and there isn’t yet a community of practice for people doing this in real-time while people are depending on an organization’s work. Still, we got great feedback from the audits and set about to prioritize the changes that needed to be implemented.

Aside from the data audits, most of our changes over the last 18 months have been organizational and infrastructural, focused on strengthening our team, processes, and tools. As the board chair, I deliberately chose not to prioritize any changes to our contractual relationship with Loris, in favor of prioritizing the human concerns raised by our staff. We focused our energies internally and on our core mission. When Loris asked the Crisis Text Line founder to leave the board, we chose not to offer up a replacement. Our most proactive stance over the last 18 months was to freeze the agreement with Loris, with an explicit commitment to reconsider the relationship in 2022 once a new CEO was in place. As a result of these decisions, we have not shared any data since the change in leadership.


The practice of non-profit governance requires collectively grappling with trade-off after trade-off. I have been a volunteer director of the board of Crisis Text Line for 8 years both because I believe in the mission and because I have been grateful to govern alongside amazing directors from whom I constantly learn. This doesn’t mean it’s been easy and it definitely doesn’t mean we always agree. But we do push each other and I learn a lot in the process. We strived to govern ethically, but that doesn’t mean others would see our decisions as such. We also make decisions that do not pan out as expected, requiring us to own our mistakes even as we change course. Sometimes, we can be fully transparent about our decisions; in other situations – especially when personnel matters are involved – we simply can’t. That is the hardest part of governance, both for our people and for myself personally. 

I want to own my decisions as a director of Crisis Text Line. I voted in favor of our internal uses of data, our collaborations with researchers, and our decision to contribute to the founding of Loris. I did so based on a calculation of ethical trade-offs informed by my research and experiences. I want to share some aspects of the rubric in my mind:

1. Consent. Consent, in my mind, exists in a more complex context than the simpler view I had before I began volunteering at CTL. I believe in the ideal of informed consent, which has informed my research. (A ToS is not consent.) But I have also learned from our clinical team about the limits of consent and when consent undermines ethical action. I have also come to believe that there are times when other ethical values must be prioritized over an ideal of consent. For example, I support Crisis Text Line’s decision to activate Public Safety Answering Points (PSAPs) when a texter presents an imminent life-or-death risk to themselves or to someone else, even when they have not consented to such an activation. Many members of our staff and volunteers are mandatory reporters who have the legal as well as ethical obligation to report. At the same time, I also support our ongoing work to reduce reliance on PSAPs and our policy efforts to have PSAPs center mental health more.

2. Present and future. Our mission is to help individuals who come to us in need and to improve the state of mental health for people more generally. I would like to create a world in which we are not needed. To that end, I am always thinking about what benefits individuals and the collective. I’m also thinking about future individuals. What can we learn now that will help the next person who comes to us? And what can we do now so that fewer people need us? I believe in a moral imperative of paying it forward and I approach data ethics with this in mind. There is undeniably a tension between the obligation to the individual and the obligation to the collective, one that I regularly reflect on.

3. The field matters. We are a non-profit and part of a broader ecosystem of mental health services. We cannot serve everyone; even for those whom we do serve in crisis, we cannot be their primary mental health provider. We want there to be an entire ecosystem of support for people in crisis, of which we play just one part. We have a responsibility to the individual in the moment of crisis and we have a responsibility to learn from and strengthen the field to help individuals downstream. To this end, I think we have an ethical responsibility to give back to the ecosystem, not just to the individual in the moment. But we need to balance this imperative with respect for the individuals during their darkest moments.

4. Improve over time. Much of our data begins as conversations, involving data from both texters and counselors. As you might imagine, when our counselors’ attempts to help someone need improvement, it weighs deeply on our entire staff. Both counselors and texters benefit when counselors learn from reviewing their conversations, from reviewing what worked or didn’t work in others’ conversations, and from lessons learned being fed back into training. My eye is always on what will improve those conversations. (This is why an obsession at the board level is quality over quantity.)

The responsibility of CTL is a heavy one, in ways that may not be obvious to those who haven’t worked in this field or seen the sometimes-counterintuitive challenges of serving people in crisis. I use the needs and prioritizations of our texters and team as my first and most important filter when judging what decisions to make. I see helping counselors and staff succeed as key to helping serve people in need. This sometimes requires thinking about how texter data can help strengthen our counselors; this sometimes requires asking if conducting research will help them grow; and this sometimes requires asking what is needed to strengthen the broader ecosystem.

When it comes to thinking about texters, I’m focused on the quality of the conversation and the safety of the texter. When it comes to safety, I’m often confronted with non-knowledge, which is harrowing. (Did someone who was attempting suicide survive the night? Emergency responders don’t necessarily tell us, so we rely on hearing back from the texter, but what’s the healthiest way to follow up with a texter?) I still don’t know the best way to measure quality; I have scoured the literature and sought advice from many to guide my thinking, but I am still struggling there and in conversation with others to try to crack this nut. I’m also thankful that there’s an entire team at Crisis Text Line dedicated to thinking about, evaluating, and improving conversation quality.

I regularly hear from both texters and counselors, whose experiences shape my thinking, but I also know that these are but a few perspectives. I read the feedback from our surveys, trying to grapple with the limitations and biases of those responses. There is no universal texter or counselor experience, which means that I have to constantly remind myself about the diversity of perspectives among texters and counselors. I cannot govern by focusing on the average; I must strive to think holistically about the diversity of viewpoints. When it comes to governance, I am always making trade-offs – often with partial information – which is hard. I also know that I sometimes get it wrong and I try to learn from those mistakes. 

These are some of the factors that go through my head when I’m thinking about our data practices. And of course, I’m also thinking about our legal and fiduciary responsibilities. But the decisions I make regarding our data start from thinking through the ethics and then I factor in financial or legal considerations. 

As I listen and learn from how people are responding to this conversation and from the decisions that I contributed to, it is clear to me that we have not done enough to share what we are doing with data and why. It’s also clear to me that I made mistakes and that change is necessary. I know that, after the challenges of the last year, I have erred on the side of doing the work inside the organization rather than grappling with the questions raised by our arrangement with

In order to continue serving Crisis Text Line, I need to figure out what we – and I – can do better. I am intrigued by my peers’ calls to make this a case study in tech ethics. I hope that detailing my thinking here can contribute to that effort, and I hope to learn from whatever case study emerges.

To that end, to my peers and colleagues, I also have some honest questions for all of you who are frustrated, angry, disappointed, or simply unsure about us: 

  • What is the best way to balance the implicit consent of users in crisis with other potentially beneficial uses of data which they likely will not have intentionally consented to but which can help them or others? 
  • Given that people come to us in their darkest moments, can/should we enable research on the traces that they produce? If so, how should this be structured? 
  • Is there any structure in which lessons learned from a non-profit service provider can be transferred to a for-profit entity? Also, how might this work with partner organizations, foundations, government agencies, sponsors, or subsidiaries, and are the answers different?
  • Given the data we have, how do we best serve our responsibility to others in the mental health ecosystem?
  • What can better community engagement and participatory decision-making in this context look like? How do we engage people to think holistically about the risks to life that we are balancing and that are shaping our decisions?  (And how do we not absolve our governance responsibilities to perform ethics, as we’ve seen play out in other contexts?)

There are also countless other questions that I struggle with that go beyond the data issues, but also shape them. For example, as always, I will continue to push up against the persistent and endemic question that plagues all non-profits: How can we build a financially sustainable service organization that is able to scale to meet people’s needs? I also struggle every day with broader dynamics in which tech, data, ethics, and mental health are entangled. For example, how do we collectively respond to mental health crises that are amplified by decisions made by for-profit entities? What is our collective responsibility in a society where mental health access is so painfully limited? 

These questions aren’t just important for a case study. These are questions I struggle with every day in practice and I would be grateful to learn from others’ journeys. I know I will make mistakes, but I hope that I can learn from them and, with your guidance, make fewer.

I’m grateful to everyone who cares enough about the texters we serve to engage in this conversation. I’m particularly grateful to be in a community that will call in anyone whom they feel isn’t exercising proper care with people’s data and privacy. And most of all, I am thankful for the counsel, guidance, and clarity of our workers at Crisis Text Line, who do the hard work of caring for texters every day while also providing clear feedback to help drive the future of the organization. I can only hope that my decisions help them succeed at the hard work they do.

I warmly welcome any advice from all of you who’ve been watching the conversation and who care about seeing CTL succeed in its mission.

The Muddled Speech of Numbers: Blood clots, COVID-19 vaccines, and statistical risk

Earlier this week, the CDC paused the roll-out of the Johnson & Johnson COVID-19 vaccine after six women experienced serious blood clots. The caution has merit: the FDA has been approving vaccines in advance of the typical large-scale evaluations because speed is seen as so crucial, and there is a reasonable desire to know more about these blood clots before more might appear. Yet there was also sheer frustration from many in the medical community, because the choice to pause the roll-out suggested that there was a serious issue – that the vaccine was dangerous. In a context in which vaccine hesitancy is likely to undermine herd immunity, any suggestion that the vaccine might have consequences can be twisted and contorted.

Across many mailing lists and Twitter streams, I kept seeing data points meant to ground the seriousness of the blood clots linked to the J&J vaccine. Most referenced the frequency of blood clots that women experience while taking the birth control pill, roughly 1 in 1,000. People also highlighted how common blood clots are for those in the throes of COVID-19. These comparisons were meant to highlight just how rare blood clots appear to be among those receiving the J&J vaccine.
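The arithmetic behind those circulating comparisons is easy to reproduce. A rough sketch is below; note that the roughly 7-million-dose denominator is my own assumption for illustration (a figure widely reported around the time of the pause) – only the six cases and the ~1-in-1,000 pill rate appear above, and the two rates aren’t even measured over the same time window, which is part of why such comparisons mislead.

```python
# Back-of-the-envelope comparison of the rates being traded around.
# Assumption: ~7 million J&J doses administered at the time of the pause.
# Caveat: jj_rate is per dose; pill_rate is per user per YEAR. The units
# don't match, which is exactly the kind of slippage these debates hide.
jj_rate = 6 / 7_000_000   # reported clot cases per dose administered
pill_rate = 1 / 1_000     # approximate clot risk per pill user per year

print(f"J&J:  ~1 in {round(1 / jj_rate):,} doses")
print(f"Pill: ~1 in {round(1 / pill_rate):,} users per year")
print(f"Naive ratio: pill figure is ~{pill_rate / jj_rate:.0f}x the J&J figure")
```

The point of the sketch is not the ratio itself but how much work the unstated assumptions (denominator, time window, population) are doing.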

Yet, as these attempts to ground the conversation unfolded, a different kind of outrage formed. A handful of people highlighted women they knew who had died of blood clots most likely related to birth control. Many more women who took hormonal birth control expressed frustration that they had no idea that they were at increased risk of a blood clot. Sure, it’s part of the fine print of that printout you get from CVS when picking up your pill, but this wasn’t something doctors emphasized. Unlike the J&J vaccine situation, the relationship between birth control and blood clots – or even COVID-19 and blood clots – hasn’t been front page news.

As I was processing the back-and-forth about statistical risk and who was responsible for sharing what with whom, and at what level of amplitude, I couldn’t help but think about all of the scholarship into the politics of numbers. We’re living at a time when politicians are simultaneously espousing the need for “evidence-based policymaking” and working to diligently undermine, contort, or weaponize evidence. This is what scholars of “agnotology” mean when they talk about the manufacturing of ignorance through the seeding of doubt. Or what other scholars highlight as the “weaponization of transparency.” 

I couldn’t help but feel empathy for the scientists at J&J and the FDA who have been working around the clock trying to make a vaccine available to the public, trying to be responsible stewards of information and statistical risk in a context where their desire for caution can be turned on its head to undermine the legitimacy of their work. I also found myself feeling empathy for journalists who recognize the importance of reporting on this development, even as they know that their reporting can easily evolve into misinformation that undermines the vaccine roll-out. Working with numbers is itself political.

To work in the world of medicine and science, statistics and probabilities is to grapple with trade-offs at a macro level, which present ethical conundrums even in the best of times. After all, that one terrible death from a blood clot could perhaps have been prevented by not taking the vaccine. But this is where we enter into the world of trade-offs, of unknowns, of morality. Without a vaccine rollout, many more people will die of blood clots from COVID-19. Had that woman been infected with COVID-19, she might have still succumbed to a blood clot. Medicine alters the dimensionality of risk. So how do ethics get negotiated? And by whom? This is the story of public health. 

Those complexities underpinning the advancement of science are complicated further by a politicized context such as the one surrounding the COVID-19 vaccine. Each act of communication can be twisted and contorted to convey different agendas, different values, different goals. Amplified transparency of risk is itself a political act. Sprinkle in our current society’s expectation that individuals make informed decisions for themselves, their families, and their communities, and we have a recipe for disaster. This is what the production of ignorance – aka misinformation, information disorder, agnotology, etc. – looks like in practice. The very acts of scientific transparency, which are intended to help inform decision-making, are turned on their head, serving to undermine the legitimacy of scientific work and the coordination of a public that must work together to address a deadly disease.

I keep wondering what it will take for the public to trust scientific information. But, perhaps, a better question might be: What kind of information is needed to help a fragmented public work together to solve societal-level challenges?

Note to the reader: These are questions that I’m struggling with. If you have thoughts, ideas (or even reading recommendations!), don’t hesitate to reach out: zephoria [at] zephoria [dot] org.

Behind every algorithm, there be politics.

In my first class in computer science, I was taught that an algorithm is simply a way of expressing formal rules given to a computer. Computers like rules. They follow them. Turns out that bureaucracy and legal systems like rules too. The big difference is that, in the world of computing, we call those who are trying to find ways to circumvent the rules “hackers” but in the world of government, this is simply the mundane work of politicking and lawyering. 

When Dan Bouk (and I, as an earnest student of his) embarked on a journey to understand the history of the 1920 census, we both expected to encounter all sorts of politicking and lawyering. As scholars fascinated by the census, we’d heard the basics of the story: Congress failed to reapportion itself after receiving data from the Census Bureau because of racist and xenophobic attitudes mixed with political self-interest. In other words, politics. 

As we dove into this history, the first thing we realized was that one justification for non-apportionment centered on a fight about math. Politicians seemed to be arguing with each other over which algorithm was the right algorithm with which to apportion the House. In the end, they basically said that apportionment should wait until mathematicians could figure out what the “right” algorithm was. (Ha!) The House didn’t manage to pass an apportionment bill until 1929 when political negotiations had made this possible. (This story anchors our essay on “Democracy’s Data Infrastructure.”)

Dan kept going, starting with what seemed like a simple question: what makes Congress need an algorithm in the first place? I bet you can’t guess what the answer is! Wait for it… wait for it… Politics! Yes, that’s right: Congress wanted to cement an algorithm into its processes in a vain attempt to de-politicize the reapportionment process. With a century of extra experience with algorithms, this is patently hysterical. Algorithms as a tool to de-politicize something!?!? Hahahah. But that’s where they had gotten to. And now the real question was: why?

In Dan’s newest piece – “House Arrest: How an Automated Algorithm Constrained Congress for a Century” – Dan peels back the layers of history with beautiful storytelling and skilled analysis to reveal why our contemporary debates about algorithmic systems aren’t so very new. Turns out that there were a variety of political actors deeply invested in ensuring that the People’s House stopped growing. Some of their logics were rooted in ideas about efficiency, but some were rooted in much older ideas of power and control. (Don’t forget that the electoral college is tethered to the size of the House too!) I like to imagine power-players sitting around playing with their hands and saying mwah-ha-ha-ha as they strategize over constraining the growth of the size of the House. They wanted to do this long before 1920, but it didn’t get locked in then because they couldn’t agree, which is why they fought over the algorithm. By 1929, everyone was fed up and just wanted Congress to properly apportion and so they passed a law, a law that did two things: it stabilized the size of the House at 435 and it automated the apportionment process. Those two things – the size of the House and the algorithm – were totally entangled. After all, an automated apportionment couldn’t happen without the key variables being defined. 
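For the curious, the automated apportionment that got locked in can be sketched in a few lines. The formula Congress eventually settled on (fixed permanently in 1941) is the “method of equal proportions,” also known as Huntington-Hill: every state gets one guaranteed seat, and each remaining seat goes to whichever state has the highest priority value, its population divided by the square root of n(n+1), where n is the number of seats it already holds. The code below is my own illustrative sketch of that procedure, not the Census Bureau’s implementation, and the toy populations are made up.

```python
import heapq
import math

def apportion(populations, house_size):
    """Sketch of the 'method of equal proportions' (Huntington-Hill).

    populations: dict mapping state name -> population
    house_size:  total number of seats to distribute (e.g., 435)
    """
    # Constitutionally, every state gets at least one seat.
    seats = {state: 1 for state in populations}

    # Max-heap (via negated values) of priority values for each
    # state's NEXT seat: pop / sqrt(n * (n + 1)) with n = 1 initially.
    heap = [(-pop / math.sqrt(1 * 2), state) for state, pop in populations.items()]
    heapq.heapify(heap)

    # Hand out the remaining seats one at a time to the top priority.
    for _ in range(house_size - len(populations)):
        _, state = heapq.heappop(heap)
        seats[state] += 1
        n = seats[state]
        heapq.heappush(heap, (-populations[state] / math.sqrt(n * (n + 1)), state))

    return seats

# Toy example: three imaginary states sharing a 10-seat house.
print(apportion({"A": 600, "B": 300, "C": 100}, 10))
```

Notice what the sketch makes plain: `house_size` is just a parameter. The algorithm will happily apportion 435 seats or 4,350; fixing that one variable, not the math, is what did the political work.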

Of course, that’s not the whole story. That 1929 bill was just a law. Up until then, Congress had passed a new law every decade to determine how apportionment would work for that decade. But when the 1940 census came around, they were focused on other things. And then, in effect, Congress forgot. They forgot that they have the power to determine the size of the House. They forgot that they have control over that one critical variable. The algorithm became infrastructure and the variable was summarily ignored.

Every decade, when the Census data are delivered, there are people who speak out about the need to increase the size of the House. After all, George Washington only spoke once during the Constitutional Convention. He spoke up to say that we couldn’t possibly have Congresspeople represent 40,000 people because then they wouldn’t trust government! The constitutional writers listened to him and set the minimum at 30,000; today, our representatives each represent more than 720,000 of us. 

After the 1790 census, there were 105 representatives in Congress. Every decade, that would increase. Even though it wasn’t exact, there was an implicit algorithm in that size increase. In short, increase the size of the House so that no sitting member would lose his seat. After all, Congress had to pass that bill and this was the best way to get everyone to vote on it. The House didn’t increase at the same ratio as the size of the population, but it did increase every decade until 1910. And then it stopped (with extra seats given to new states before being brought back to the zero-sum game at the next census). 

One of the recommendations of the Commission on the Practice of Democratic Citizenship (for which I was a commissioner) was to increase the size of the House. When we were discussing this as a commission, everyone spoke of how radical this proposition was, how completely impossible it would be politically. This wasn’t one of my proposals – I wasn’t even on that subcommittee – so I listened with rapt curiosity. Why was it so radical? Dan taught me the answer to that. The key to political power is to turn politicking into infrastructure. After all, those who try to break a technical system, to work around an algorithm, they’re called hackers. And hackers are radical. 

Want more like this?

  1. Read “House Arrest: How an Automated Algorithm Constrained Congress for a Century” by Dan Bouk. There’s drama! And intrigue! And algorithms!
  2. Read “Democracy’s Data Infrastructure” by Dan Bouk and me. It might shape your view about public fights over math.
  3. Sign up for my newsletter. More will be coming, I promise!

The US Federal Government Needs a VP of Engineering, not a CTO

If you look at the roster of the Biden-Harris transition team, it’s quickly apparent that the incoming administration is tech-forward. Given the systematic dismantlement of the federal government over the last four years, and the significant logistical and scientific needs underpinning a large-scale vaccine roll-out, it is unsurprising to hear that the new team is looking to bring in tech talent. Under the Obama Administration, the White House invested significantly in shoring up the Office of Science and Technology Policy, an office that has for all intents and purposes lain dormant for four years under the current Administration. The Obama Administration also hired the first Chief Technology Officer (CTO) to help envision what a tech-forward US government might look like. As the Biden-Harris transition builds its plans for January 20th, many people in my networks are abuzz, wondering who might be the next CTO.

My advice to the transition team is this: You need a VP of Engineering even more than you need a CTO.

To the non-geeks of the world, these two titles might be meaningless or perhaps even interchangeable. The roles and responsibilities associated with each are often co-mingled, especially in start-ups. But in more mature tech companies, they signal distinct qualifications and responsibilities. Moreover, they signal different ideas of what is top priority. In their ideal incarnation, a CTO is a visionary, a thought leader, a big picture thinker. The right CTO sees how tech can fit into the big picture of a complex organization and sits in the C-suite to integrate tech into the strategy. A tech-forward White House would want such a person precisely to help envision a technocratic government structure that could do great things. Yet, a CTO is nothing more than a figurehead if the organizational infrastructure is dysfunctional. This can prompt organizations to build new tech separately inside an “office of the CTO” rather than doing the hard work of fixing the core organizational infrastructure to ensure that larger visions can work. When it comes to government, we’ve learned the hard way how easily a tech-forward effort located exclusively inside the White House can be swept away.

Inside tech companies, there is often a more important but less visible role when it comes to getting things done. To those on the outside, a VP title appears far less powerful, far less important than a C-Suite title. If you’re not a tech geek, a VP of Engineering might appear less important than a CTO. But in my experience, finding the right VP of Engineering is more essential than getting a high profile CTO when a system is broken. A VP-Eng is a fixer, someone who looks at broken infrastructure with a debugger’s eye and recognizes that the key to success is ensuring that the organizational and technical systems function hand-in-hand. While CTOs are often public figures in industry, a VP-Eng tends to shy away from public attention, focusing most of their effort on empowering their team to do great things. VP-Engs have technical chops, but their superpower comes from their ability to manage large technical teams, to really understand the forest and see what’s getting in the way of achieving a goal so that they can unblock that and ensure that their team thrives. A VP-Eng also understands that finding and nurturing the right talent is key to success, which is why they tend to spend an extraordinary amount of time recruiting, hiring, training, and mentoring.

When structured well, the CTO faces outwards while the VP-Eng faces inwards. They can and should be extraordinarily complementary roles. Yet, even though the Obama Administration invested in a CTO and built numerous programs to bring tech talent into the White House and sprinkle tech workers throughout all of the agencies, that tech-forward team never invested in a VP-Eng. They never invested in people whose job it was to truly debug the underlying problems that prevent government agencies from successfully building and deploying technical systems.

As I listen to friends and peers in Silicon Valley talk about all of the ways in which tech people are going to go east to “fix government,” I must admit that I’m cringing. Government functions very differently than industry, by design. In industry, our job is to serve customers. Yes, our companies might want more customers, but we have the luxury of focusing on those who have money and those who want to use our tools. Government must serve everyone. Much to the chagrin of capitalists, the vast majority of government resources goes to the hardest problems, to ensuring that whatever the government implements can serve everyone.

I have spent 20 years calling bullshit on “the pipeline problem” as industry’s excuse for its under-investment in hiring and retaining BIPOC and non-male talent. Even as tech workers are slowly starting to wake up to the realization that justice, equity, diversity, and inclusion are essential to the long-term health of tech, I’m watching the flawed logics that underpin the narrative about pipeline problems infuse the conversation about why government tech is broken. Government tech isn’t broken because government lacks talent. Government tech is broken because there are a range of stakeholders who are actively invested in ensuring that the federal government cannot execute, who are actively working to ensure that when the government is required to execute, it does so through upholding capitalist interests. Moreover, there are a range of stakeholders who would rather systematically undermine and hurt the extraordinarily diverse federal talent than invest in them.

If Silicon Valley waltzes into the federal government in January with its “I’ve got a submarine for that” mindset, thinking that it can sprinkle tech fairy dust all over the agencies, we’re screwed. The undermining of the federal government’s tech infrastructure began decades ago. What has happened in the last four years has just sped up a trend that was well underway before this administration. And it’s getting worse by the day. The issue at play isn’t the lack of tech-forward vision. It’s the lack of organizational, human capital, and communications infrastructure that’s necessary for a complex “must-reach-everyone” organization to transform. Rather than coming in with hubris and focusing on grand vision, we need a new administration that is willing to dive deep and understand the cracks in the infrastructure that make a tech-forward agenda impossible. And this is why we need a federal VP-Eng whose job it is to engage in deep debugging. Cuz the bugs aren’t in the newest layer of code; they’re down deep in the libraries that no one has examined for years.

If the new administration is willing to invest in infrastructural repair, my ethnographic work in and around government has led me to three core areas that I would prioritize first. Two are esoteric structural barriers that prevent basic functioning. The third is a political weakness.

1. Procurement. Government outsourcing to industry is modern-day patronage. You don’t need Tammany Hall when you have a swarm of governmental contractors buzzing about. When politicians talk about “small government,” what they really mean is “no federal employees.” Don’t let talk of “efficiency” fool you either. Greasing the hands of Big Business through procurement procedures theoretically designed for efficiency is extraordinarily expensive. Not only is the financial cost of outsourcing to industry mind-boggling and bloated, but there are additional costs to morale, institutional memory, and mission that are not captured in the economic models. Government procurement infrastructure is also designed for failure, to ensure that government agencies are unable to deliver, which, in turn, prompts Congress (regardless of who is in power) to reduce funding and increase scrutiny, tightening the screws on a tightly coupled system to increase the scale and speed of failure. It is a vicious cycle. Government procurement infrastructure is filled with strategically designed inefficiencies, frictions, and insanely corrupt incentives that undermine every aspect of government. The key here is not to replicate industry; the structures of contracting, outsourcing, and supply chains within a capitalist system do not make sense in government — and for good reason. A VP-Eng and a tech-forward government should begin by understanding the damage and ripple effects caused by OMB Directive A-76, which fundamentally shapes tech procurement.

2. Human Resources. Too many people in the tech industry think that HR is a waste of space… that is, until they find that recruiter who makes everything easier. As such, in industry, we often talk about “people operations” or “talent management” instead of HR. We recognize the importance of investing in talent over the long term, even if we reject HR. In government, HR is the lifeblood of how work happens; it was redesigned by progressives in the 20th century to ensure a more equitable approach to hiring and talent development. For decades, government created opportunities for women and Black communities when industry did not. Unfortunately, this aspect of the “deep state” was not at all appreciated by those invested in maintaining America’s caste system. Those invested in racist hierarchies didn’t need to be explicit about their agendas; they could rely on the language of capitalism to systematically undermine the talent of the federal government. Just as outsourcing in government has systematically taken jobs from Black federal workers and given them to white contractors, a range of HR policies have been designed to make working in government hellacious. Those who have stuck around — out of duty, out of necessity — have become enrolled in an existentially broken system. Some have chosen to sit back and not do their jobs, waiting to be fired. Others took the opposite approach, masochistically throwing themselves at the problem. People come from the outside and complain that government workers are lazy, stupid, incompetent. But it is the system that has produced these conditions. The system has been starved; the policies and protocols are corroded. It is through the purposeful torturing of HR that an executive branch hellbent on destroying the federal government can wage the greatest damage; this has been underway for 40 years, but the proverbial frogs are now sitting in boiling water. HR will require a lot of repair work, not quick-fix policy changes.
An untended HR system in government becomes a bottleneck unimaginable to those in industry and that’s where we are. Existing talent will require nurturing, and this investment is crucial because their institutional knowledge is profound. Any administration who wants to build a government that can respond to crises as grand as a pandemic or climate change will need to create the conditions for government to be a healthy workplace not just for the next four years but for decades to come. They will need a “people ops” mindset to HR. A VP-Eng should start with a listening tour of those who work on tech projects in agencies.

3. Communications. It never ceases to amaze me that the top communications professional in every federal agency is a political appointee. And every incoming administration — regardless of partisan affiliation — tends to fill these positions with campaign comms people who helped them win the election. Unfortunately, the type of comms that’s needed to win an election (which requires appealing to only some people) is not the same as the type of comms that’s needed to be accountable to the public as a whole, 365 days per year. Over and over, the comms people that White Houses install focus on speaking to their political base and to Congress. This is all well and good if the only comms need is to negotiate policy outcomes. But the partisan perversion of comms within agencies has another outcome — it delegitimizes the agency among members of the public who are not affiliated with that political party, not to mention the wide swath of the public that is outright disgusted by all partisan tomfoolery. If your political interest is to eliminate the federal government, undermining the legitimacy of federal agencies benefits you. If that’s not your goal, you need to rethink your approach to communications. Right now, every agency needs a crisis comms expert at the helm to regain control over the agency’s narrative. When things are more stable, they need strategic comms professionals who can build a plan for re-legitimization. Each agency also needs an org comms expert whose job, like a VP-Eng’s, is to repair internal communications infrastructure so that information can effectively flow. Most politicians and government watchdogs think that the key to greater transparency is to increase oversight, just as progressives did after Nixon. But given how broken comms is in all of these agencies, turning up the heat through FOIA, GAO, and Congressional hearings will not increase accountability right now; it will increase breakage.
Inside tech companies, comms is often seen as soft, squishy, irrational work, an afterthought that should not be prioritized. But comms, like HR, is the infrastructure that makes other things possible. A VP-Eng needs a comms counterpart working alongside them to achieve any organizational transformation.

Addressing these three seemingly non-tech issues would do more to enable a tech-forward government than any new-fangled shiny tech object. There is so much repair work to be done inside government. Yet, as I listen to those I know in Silicon Valley talk about all of the ways they wish to “fix government,” I fear that we will see a significant flood of solutionism when what’s needed most is humility and curiosity. Humility to understand that the structure of governmental agencies exists in response to the never-ending flow of solutionist interventions. And curiosity to understand how and why roadblocks and barriers exist — and which ones to strategically eradicate to empower civil servants who are devoted to ensuring that government functions for the long term, regardless of who is in power. Grand visioning has its role, but when infrastructure is breaking all around us, we need debuggers and maintenance people first and foremost. We need people who find joy in the invisible work of just making a system function, of recognizing that technical systems require the right organizational structures to thrive. This is the mindset a VP-Eng brings to the table.

In conclusion… If you are working on the transition or planning to jump into government in January, please spend some time understanding why the system is the way it is. If you are a tech person, do not presume you know based on your experience with other broken systems or based on what you read in the news; take the time to learn. If you are not a tech person, do not assume that tech can fix what politics can’t; this is a classic mistake with a long history. If the goal is truly to “build back better,” it requires starting with repairing the infrastructure. Without this, you are building on quicksand.

Teens Are Addicted to Socializing, Not Screens

Screenagers in the time of coronavirus.

(This was originally written for OneZero.)

If you’re a parent trying to corral your children into attending “school” online, you’ve probably had the joy of witnessing a complete meltdown. Tantrums are no longer the domain of two-year-olds; 15-year-olds are also kicking and screaming. Needless to say, so are the fortysomethings. Children are begging to go outside. Teenagers desperately want to share physical space with their friends. And parents are begging their kids to go online so that they themselves can get some downtime. These are just some of the ways in which today’s reality seems upside down.

I started studying teenagers’ use of social media in the early 2000s when Xanga and LiveJournal were cool. I watched as they rode the waves of MySpace and Facebook, into the realms of Snap and Instagram. My book It’s Complicated: The Social Lives of Networked Teens unpacks some of the most prevalent anxieties adults have about children’s use of technology, including the nonstop fear-inducing message that children are “addicted” to their phones, computers, and the internet. Needless to say, I never imagined how conditions might change when a global pandemic unfolded.

I cannot remember a period in my research when parents weren’t wringing their hands about kids’ use of screens. The tone that parents took paralleled the tone their parents took over heavy metal and rock music, the same one their grandparents had when they spoke of the evils of comic books. Moral panics are consistent — but the medium that the panic centers on changes. Still, as with each wave of moral panic, there’s supposedly something intrinsic to the new medium that makes it especially horrible for young people. Cognizant of this history and having gone deep on social media activities with hundreds of teenagers, I pushed back and said that it wasn’t the technology teens were addicted to; it was their friends. Adults rolled their eyes at me, just as their teens rolled their eyes at them.

Now, nearly a month into screen-based schooling en masse, I’ve gotten to witness a global natural experiment like none I ever expected. What have we learned? The majority of young people are going batshit crazy living a life wholly online. I can’t help but think that Covid-19 will end up teaching all of us how important human interaction in physical space is. If this goes on long enough, might this cohort end up going further and hating screens?

Until the world started sheltering in place, most teens spent the majority of their days in school, playing sports, and participating in other activities, almost always in physical spaces with lots of humans co-present. True physical privacy is a luxury for most young people whose location in space is heavily monitored and controlled. Screens represented a break from the mass social. They also represented privacy from parents, an opportunity to socialize without parents lurking even when their physical bodies were forced to be at home. Parents hated the portals that kids held in their hands because their children seemed to disappear from the living room into some unknown void. That unknown void was those children’s happy place — the place where they could hang out with their friends, play games, and negotiate a life of their own.

Now, with Covid-19, school is taught through video. Friends are through video. Activities are through video. There are even videos for gym and physical sports. Religious gatherings are through video. Well-intentioned adults are volunteering to step in and provide more video-based opportunities for young people. TV may have killed the radio star, but Zoom and Google Hangouts are going to kill the delight and joy in spending all day in front of screens.

The majority of young people are going batshit crazy living a life wholly online.

Fatigue is setting in. Sure, making a TikTok video with friends is still fun, but there’s a limit to how much time anyone can spend on any app — even teens. Give it another month and there will be kids dropping out of school or throwing their computers against the wall. (Well, I know of two teens who have already done the latter with their iPads.) Young people are begging to go outside, even if that means playing sports with their parents. Such things might not be surprising for a seven-year-old, but when your 15-year-old asks to play soccer with you, do it! As a child of the ‘80s, I was stunned during my fieldwork to learn that most contemporary kids didn’t find ways to sneak out of the house once their parents were asleep because going online was so much easier. I can’t help but wonder if sneaking out is becoming a thing once again.

As we’re all stuck at home, teens are still doing everything possible to escape into their devices to maintain relationships, socialize, and have fun. Their shell-shocked parents are ignoring any and all screen time limitations as they too crave escapism (people who study fortysomethings: explain Animal Crossing to me!!?). But when physical distancing is no longer required, we’ll get to see that social closeness often involves meaningful co-presence with other humans. Adults took this for granted, but teens had few other options outside of spaces heavily controlled by adults. They went online not because the technology is especially alluring, but because it has long been the most viable option for having meaningful connections with friends given the way that their lives have been structured. Maybe now adults will start recognizing what my research showed: youth are “addicted” to sociality, not technology for technology’s sake.

Joyfully Geeking Out

2020 US Census: Everybody counts!

In 2015, I was invited to join the Commerce Department’s Data Advisory Council. Truth be told, I was kinda oblivious to what this was all about. I didn’t know much about how the government functioned. I didn’t know what a “FACA” was. (Turns out that the “Federal Advisory Committee Act” is a formal government thing.) Heck, I had only the most cursory understanding of the various agencies and bureaus associated with the Commerce Department. But I did understand one thing: the federal government has some of the most important data infrastructure out there. Long before discussions about our current tech industry, government agencies have been trying to wrangle data to help both the public and industry. The Weather Channel wouldn’t be able to do its work without NOAA (National Oceanic and Atmospheric Administration). Standards would go haywire without NIST (National Institute of Standards and Technology). And we wouldn’t be able to apportion our representatives without Census.

Over the last few years, I have fallen madly in love with the data puzzles that underpin the census. Thanks to Margo Anderson’s “The American Census,” I learned that the history of the census is far far far messier than I ever could’ve imagined. An amazing network of people dedicated to helping ensure that people are represented has given me a crash course in the longstanding battle over collecting the best data possible. As the contours of the 2020 census became more visible, it also became clear that it would be the perfect networked fieldsite for trying to understand two questions that have been tickling my brain:

  1. What makes data legitimate?
  2. What does it take to secure data infrastructure? 

(For any STS scholar reading this, add scare-quotes to all of the words that make you want to scream.)

Over the last two years, I’ve been learning as much as I could possibly learn about the census. I’ve also been dipping my toe into archival work and trying to strengthen my theoretical toolkit to handle the study of organizations and large scale operations. And now we’re a matter of days away from when everyone in the country will receive their invitation to participate in the census, and so I’m throwing myself into what is bound to be a whirlwind in order to fully understand how an operation of this magnitude unfolds.  

While I have produced a living document to explain how differential privacy is part of the 2020 census, I mostly haven’t been writing about the research I’m doing. To be honest, I’m relishing taking the time to deeply understand something and to do the deep reflection I haven’t had the privilege of doing in almost a decade.

If I’ve learned anything from the world of census junkies, it’s that this decadal process is raw insanity, full of unexpected twists and turns. Yet, what I can say is that it’s also filled with some of the most civic-minded people that I’ve ever encountered. There are so many different stakeholders working to ensure a complete count so that everyone in this country is counted, represented, and acknowledged. This is important, not just for Congressional apportionment and redistricting, but also to make sure that funding is properly allocated, that social science research can inform important decision-making processes, and that laws designed to combat discrimination are enforced.

I’m sharing this now, not because I have new thinking to offer, but because I want folks to understand why I might be rather unresponsive to non-census-obsessives over the next few months. I want to dive head-first into this research and relish the opportunity to be surrounded by geeks engaged in a phenomenal civic effort. For those who aren’t thinking full-time about the census, please understand that I’m going to turn down requests for my time this spring and my email response time may also falter. 

Of course… if you want to make me smile, send me photographs of cool census stuff happening in your community! Or interesting census content that comes through your feeds! And if you want to go hog wild, get involved. Census is hiring. Or you could make census-related content to encourage others to participate. Or at the very least, tell everyone you know to participate; they’ll get their official invitation starting March 12.

The US census has been taking place every 10 years since 1790. It is our democracy’s data infrastructure. And it was “big data” before there was big data. It’s also the cornerstone of countless advances in statistics and social scientific knowledge. Understanding the complexity of the census is part and parcel of understanding where our data-driven world is headed. When this is all over, I hope that I’ll have a lot more to contribute to that conversation. In the meantime, forgive me for relishing my obsessive focus.

Facing the Great Reckoning Head-On

I was recently honored by the Electronic Frontier Foundation. Alongside Oakland Privacy and William Gibson, I received a 2019 Barlow/Pioneer Award. I was asked to give a speech. As I reflected on what got me to this place, I realized I needed to reckon with how I have benefited from men whose actions have helped uphold a patriarchal system that has hurt so many people. I needed to face my past in order to find a way to create space to move forward.

This is the speech I gave in accepting the award. I hope sharing it can help others who are struggling to make sense of current events. And those who want to make the tech industry do better.

— —

I cannot begin to express how honored I am to receive this award. My awe of the Electronic Frontier Foundation dates back to my teenage years. EFF has always inspired me to think deeply about what values should shape the internet. And so I want to talk about values tonight, and what happens when those values are lost, or violated, as we have seen recently in our industry and institutions.

But before I begin, I would like to ask you to join me in a moment of silence out of respect to all of those who have been raped, trafficked, harassed, and abused. For those of you who have been there, take this moment to breathe. For those who haven’t, take a moment to reflect on how the work that you do has enabled the harm of others, even when you never meant to.


The story of how I got to be standing here is rife with pain and I need to expose part of my story in order to make visible why we need to have a Great Reckoning in the tech industry. This award may be about me, but it’s also not. It should be about all of the women and other minorities who have been excluded from tech by people who thought they were helping.

The first blog post I ever wrote was about my own sexual assault. It was 1997 and my audience was two people. I didn’t even know what I was doing would be called blogging. Years later, when many more people started reading my blog, I erased many of those early blog posts because I didn’t want strangers to have to respond to those vulnerable posts. I obfuscated my history to make others more comfortable.

I was at the MIT Media Lab from 1999–2002. At the incoming student orientation dinner, an older faculty member sat down next to me. He looked at me and asked if love existed. I raised my eyebrow as he talked about how love was a mirage, but that sex and pleasure were real. That was my introduction to Marvin Minsky and to my new institutional home.

My time at the Media Lab was full of contradictions. I have so many positive memories of people and conversations. I can close my eyes and flash back to laughter and late night conversations. But my time there was also excruciating. I couldn’t afford my rent and did some things that still bother me in order to make it all work. I grew numb to the worst parts of the Demo or Die culture. I witnessed so much harassment, so much bullying that it all started to feel normal. Senior leaders told me that “students need to learn their place” and that “we don’t pay you to read, we don’t pay you to think, we pay you to do.” The final straw for me was when I was pressured to work with the Department of Defense to track terrorists in 2002.

After leaving the Lab, I channeled my energy into V-Day, an organization best known for producing “The Vagina Monologues,” but whose daily work is focused on ending violence against women and girls. I found solace in helping build online networks of feminists who were trying to help combat sexual assault and a culture of abuse. To this day, I work on issues like trafficking and combating the distribution of images depicting the commercial sexual abuse of minors on social media.

By 2003, I was in San Francisco, where I started meeting tech luminaries, people I had admired so deeply from afar. One told me that I was “kinda smart for a chick.” Others propositioned me. But some were really kind and supportive. Joi Ito became a dear friend and mentor. He was that guy who made sure I got home OK. He was also that guy who took being called-in seriously, changing his behavior in profound ways when I challenged him to reflect on the cost of his actions. That made me deeply respect him.

I also met John Perry Barlow around the same time. We became good friends and spent lots of time together. Here was another tech luminary who had my back when I needed him to. A few years later, he asked me to forgive a friend of his, a friend whose sexual predation I had witnessed first hand. He told me it was in the past and he wanted everyone to get along. I refused, unable to convey to him just how much his ask hurt me. Our relationship frayed and we only talked a few times in the last few years of his life.

So here we are… I’m receiving this award, named after Barlow, less than a week after Joi resigned from an institution that nearly destroyed me after he socialized with and took money from a known pedophile. Let me be clear — this is deeply destabilizing for me. I am here today in no small part because I benefited from the generosity of men who tolerated and, in effect, enabled unethical, immoral, and criminal men. And because of that privilege, I managed to keep moving forward even as the collateral damage of patriarchy stifled the voices of so many others around me. I am angry and sad, horrified and disturbed because I know all too well that this world is not meritocratic. I am also complicit in helping uphold these systems.

What’s happening at the Media Lab right now is emblematic of a broader set of issues plaguing the tech industry and society more generally. Tech prides itself on being better than other sectors. But often it’s not. As an employee of Google in 2004, I watched my male colleagues ogle women coming to the cafeteria in our building from the second floor, making lewd comments. When I first visited TheFacebook in Palo Alto, I was greeted by a hyper-sexualized mural and a knowing look from the admin, one of the only women around. So many small moments seared into my brain, building up to a story of normalized misogyny. Fast forward fifteen years and there are countless stories of executive misconduct and purposeful suppression of the voices of women and sooooo many others whose bodies and experiences exclude them from the powerful elite. These are the toxic logics that have infested the tech industry. And, as an industry obsessed with scale, these are the toxic logics that the tech industry has amplified and normalized. The human costs of these logics continue to grow. Why are we tolerating sexual predators and sexual harassers in our industry? That’s not what inclusion means.

I am here today because I learned how to survive and thrive in a man’s world, to use my tongue wisely, watch my back, and dodge bullets. I am being honored because I figured out how to remove a few bricks in those fortified walls so that others could look in. But this isn’t enough.

I am grateful to EFF for this honor, but there are so many underrepresented and under-acknowledged voices out there trying to be heard who have been silenced. And they need to be here tonight and they need to be at tech’s tables. Around the world, they are asking for those in Silicon Valley to take their moral responsibilities seriously. They are asking everyone in the tech sector to take stock of their own complicity in what is unfolding and actively invite others in.

And so, if my recognition means anything, I need it to be a call to arms. We need to all stand up together and challenge the status quo. The tech industry must start to face The Great Reckoning head-on. My experiences are all-too common for women and other marginalized peoples in tech. And it is also all too common for well-meaning guys to do shitty things that make it worse for those that they believe they’re trying to support.

If change is going to happen, values and ethics need to have a seat in the boardroom. Corporate governance goes beyond protecting the interests of capitalism. Change also means that the ideas and concerns of all people need to be a part of the design phase and the auditing of systems, even if this slows down the process. We need to bring back and reinvigorate the profession of quality assurance so that products are not launched without systematic consideration of the harms that might occur. Call it security or call it safety, but it requires focusing on inclusion. After all, whether we like it or not, the tech industry is now in the business of global governance.

“Move fast and break things” is an abomination if your goal is to create a healthy society. Taking short-cuts may be financially profitable in the short-term, but the cost to society is too great to be justified. In a healthy society, we accommodate differently abled people through accessibility standards, not because it’s financially prudent but because it’s the right thing to do. In a healthy society, we make certain that the vulnerable amongst us are not harassed into silence because that is not the value behind free speech. In a healthy society, we strategically design to increase social cohesion because binaries are machine logic not human logic.

The Great Reckoning is in front of us. How we respond to the calls for justice will shape the future of technology and society. We must hold accountable all who perpetuate, amplify, and enable hate, harm, and cruelty. But accountability without transformation is simply spectacle. We owe it to ourselves and to all of those who have been hurt to focus on the root of the problem. We also owe it to them to actively seek to not build certain technologies because the human cost is too great.

My ask of you is to honor me and my story by stepping back and reckoning with your own contributions to the current state of affairs. No one in tech — not you, not me — is an innocent bystander. We have all enabled this current state of affairs in one way or another. Thus, it is our responsibility to take action. How can you personally amplify underrepresented voices? How can you intentionally take time to listen to those who have been injured and understand their perspective? How can you personally stand up to injustice so that structural inequities aren’t further calcified? The goal shouldn’t be to avoid being evil; it should be to actively do good. But it’s not enough to say that we’re going to do good; we need to collectively define — and hold each other to — shared values and standards.

People can change. Institutions can change. But doing so requires all who harmed — and all who benefited from harm — to come forward, admit their mistakes, and actively take steps to change the power dynamics. It requires everyone to hold each other accountable, but also to aim for reconciliation not simply retribution. So as we leave here tonight, let’s stop designing the technologies envisioned in dystopian novels. We need to heed the warnings of artists, not race head-on into their nightmares. Let’s focus on hearing the voices and experiences of those who have been harmed because of the technologies that made this industry so powerful. And let’s collaborate with and design alongside those communities to fix these wrongs, to build just and empowering technologies rather than those that reify the status quo.

Many of us are aghast to learn that a pedophile had this much influence in tech, science, and academia, but so many more people face the personal and professional harm of exclusion, the emotional burden of never-ending subtle misogyny, the exhaustion from dodging daggers, and the nagging feeling that you’re going crazy as you try to get through each day. Let’s change the norms. Please help me.

Thank you.


we’re all taught how to justify history as it passes by
and it’s your world that comes crashing down
when the big boys decide to throw their weight around
but he said just roll with it baby make it your career
keep the home fires burning till america is in the clear

i think my body is as restless as my mind
and i’m not gonna roll with it this time
no, i’m not gonna roll with it this time
— Ani DiFranco

Agnotology and Epistemological Fragmentation

On April 17, 2019, I gave a talk at the Digital Public Library of America conference (DPLAfest). This is the transcript of that talk.

Illustration by Jim Cooke

I love the librarian community. You all are deeply committed to producing, curating, and enabling access to knowledge. Many of you embraced the internet with glee, recognizing the potential to help so many more people access critical information. Many of you also saw the democratic and civic potential of this new technology, not to mention the importance of an informed citizenry in a democratic world. Yet, slowly and systematically, a virus has spread, using technology to tear at the social fabric of public life.

This shouldn’t be surprising. After all, most of Silicon Valley in the late 90s and early aughts was obsessed with Neal Stephenson’s Snow Crash. How did they not recognize that this book was dystopian?

Slowly and systematically, a virus has spread, using technology to tear at the social fabric of public life.

Epistemology is the term that describes how we know what we know. Most people who think about knowledge think about the processes of obtaining it. Ignorance is often assumed to be not-yet-knowledgeable. But what if ignorance is strategically manufactured? What if the tools of knowledge production are perverted to enable ignorance? In 1995, Robert Proctor and Iain Boal coined the term “agnotology” to describe the strategic and purposeful production of ignorance. In an edited volume called Agnotology, Proctor and Londa Schiebinger collect essays detailing how agnotology is achieved. Whether we’re talking about the erasure of history or the undoing of scientific knowledge, agnotology is a tool of oppression by the powerful.

Swirling all around us are conversations about how social media platforms must get better at content management. Last week, Congress held hearings on the dynamics of white supremacy online and the perception that technology companies engage in anti-conservative bias. Many people who are steeped in history and committed to evidence-based decision-making are experiencing a collective sense of being gaslit — a concept that emerged from a film about domestic violence to explain how someone’s sense of reality can be intentionally destabilized by an abuser. How do you process a black conservative commentator testifying before the House that the Southern strategy never happened and that white nationalism is an invention of the Democrats to “scare black people”? Keep in mind that this commentator was intentionally trolled by the terrorist in Christchurch; she responded to this atrocity with tweets containing “LOL” and “HAHA.”

Speaking of Christchurch, let’s talk about Christchurch. We all know the basic narrative. A terrorist espousing white nationalist messages livestreamed himself brutally murdering 50 people worshipping in a New Zealand mosque. The video was framed like a first-person shooter from a video game. Beyond the atrocity itself, what else was happening?

He produced a media spectacle. And he learned how to do it by exploiting the information ecosystem we’re currently in.

This terrorist understood the vulnerabilities of both social media and news media. The message he posted on 8chan announcing his intention included links to his manifesto and other sites, but it did not include a direct link to Facebook; he didn’t want Facebook to know that the traffic came from 8chan. The video included many minutes of him driving around, presumably to build audience but also, quite likely, in an effort to evade any content moderators that might be looking. He titled his manifesto with a well-known white nationalist call sign, knowing that the news media would cover the name of the manifesto, which in turn, would prompt people to search for that concept. And when they did, they’d find a treasure trove of anti-Semitic and white nationalist propaganda. This is the exploitation of what’s called a “data void.” He also trolled numerous people in his manifesto, knowing full well that the media would shine a spotlight on them and create distractions and retractions and more news cycles. He produced a media spectacle. And he learned how to do it by exploiting the information ecosystem we’re currently in. Afterwards, every social platform was inundated with millions and millions of copies and alterations of the video uploaded through a range of fake accounts, either to burn the resources of technology companies, shame them, or test their guardrails for future exploits.

What’s most notable about this terrorist is that he’s explicit in his white nationalist commitments. Most of those who are propagating white supremacist logics are not. Whether we’re talking about the so-called “alt-right” who simply ask questions like “Are Jews people?” or the range of people who argue online for racial realism based on long-debunked fabricated science, there’s an increasing number of people who are propagating conspiracy theories or simply asking questions as a way of enabling and magnifying white supremacy. This is agnotology at work.

What’s at stake right now is not simply about hate speech vs. free speech or the role of state-sponsored bots in political activity. It’s much more basic. It’s about purposefully and intentionally seeding doubt to fragment society. To fragment epistemologies. This is a tactic that was well-honed by propagandists. Consider this Russia Today poster.

But what’s most profound is how it’s being done en masse now. Teenagers aren’t only radicalized by extreme sites on the web. It now starts with a simple YouTube query. Perhaps you’re a college student trying to learn a concept like “social justice” that you’ve heard in a classroom. The first result you encounter is from PragerU, a conservative organization that is committed to undoing so-called “leftist” ideas that are taught at universities. You watch the beautifully produced video, which promotes many of the tenets of media literacy. Ask hard questions. Follow the money. The video offers a biased and slightly conspiratorial take on what “social justice” is, suggesting that it’s not real, but instead a manufactured attempt to suppress you. After you watch this, you watch more videos of this kind from people who are professors and other apparent experts. This all makes you think differently about this term in your reading. You ask your professor a question raised by one of the YouTube influencers. She reacts in horror and silences you. The videos all told you to expect this. So now you want to learn more. You go deeper into a world of people who are actively anti-“social justice warriors.” You’re introduced to anti-feminism and racial realism. How far does the rabbit hole go?

One of the best ways to seed agnotology is to make sure that doubtful and conspiratorial content is easier to reach than scientific material.

YouTube is the primary search engine for people under 25. It’s where high school and college students go to do research. Digital Public Library of America works with many phenomenal partners who are all working to curate and make available their archives. Yet, how much of that work is available on YouTube? Most of DPLA’s partners want their content on their site. They want to be a destination site that people visit. Much of this is visual and textual, but are there explainers made about this content that are on YouTube? How many scientific articles have video explainers associated with them?

Herein lies the problem. One of the best ways to seed agnotology is to make sure that doubtful and conspiratorial content is easier to reach than scientific material. And then to make sure that what scientific information is available is undermined. One tactic is to exploit “data voids.” These are areas within a search ecosystem where there’s no relevant data; those who want to manipulate media purposefully exploit these. Breaking news is one example of this. Another is to co-opt a term that was left behind, like social justice. But let me offer you another. Some terms are strategically created to achieve epistemological fragmentation. In the 1990s, Frank Luntz was the king of doing this with terms like partial-birth abortion, climate change, and death tax. Every week, he coordinated congressional staffers and told them to focus on the term of the week and push it through the news media. All to create a drumbeat.

Illustration by Jim Cooke

Today’s drumbeat happens online. The goal is no longer just to go straight to the news media. It’s to first create a world of content and then to push the term through to the news media at the right time so that people search for that term and receive specific content. Terms like caravan, incel, crisis actor. By exploiting the data void, or the lack of viable information, media manipulators can help fragment knowledge and seed doubt.

Media manipulators are also very good at messing with structure. Yes, they optimize search engines, just like marketers. But they also look to create networks that are hard to undo. YouTube has great scientific videos about the value of vaccination, but countless anti-vaxxers have systematically trained YouTube to make sure that people who watch the Centers for Disease Control and Prevention’s videos also watch videos asking questions about vaccinations or videos of parents who are talking emotionally about what they believe to be the result of vaccination. They comment on both of these videos, they watch them together, they link them together. This is the structural manipulation of media. Journalists often get caught up in telling “both sides,” but the creation of sides is a political project.

The creation of sides is a political project.

And this is where you come in. You all believe in knowledge. You believe in making sure the public is informed. You understand that knowledge emerges out of contestation, debate, scientific pursuit, and new knowledge replacing old knowledge. Scholars are obsessed with nuance. Producers of knowledge are often obsessed with credit and ownership. All of this is being exploited to undo knowledge today. You will not achieve an informed public simply by making sure that high quality content is publicly available and presuming that credibility is enough while you wait for people to come find it. You have to understand the networked nature of the information war we’re in, actively be there when people are looking, and blanket the information ecosystem with the information people need to make informed decisions.

Thank you!

The Messy Fourth Estate

(This post was originally posted on Medium.)

For the second time in a week, my phone buzzed with a New York Times alert, notifying me that another celebrity had died by suicide. My heart sank. I tuned into the Crisis Text Line Slack channel to see how many people were waiting for a counselor’s help. Volunteer crisis counselors were pouring in, but the queue kept growing.

Celebrity suicides trigger people who are already on edge to wonder whether or not they too should seek death. Since the Werther effect study, in 1974, countless studies have conclusively and repeatedly shown that how the news media reports on suicide matters. The World Health Organization has a detailed set of recommendations for journalists and news media organizations on how to responsibly report on suicide so as to not trigger copycats. Yet in the past few years, few news organizations have bothered to abide by them, even as recent data shows that the reporting on Robin Williams’ death triggered an additional 10 percent increase in suicide and a 32 percent increase in people copying his method of death. The recommendations aren’t hard to follow — they focus on how to convey important information without adding to the problem.

Crisis counselors at the Crisis Text Line are on the front lines. As a board member, I’m in awe of their commitment and their willingness to help those who desperately need support and can’t find it anywhere else. But it pains me to watch as elite media amplifiers make counselors’ lives more difficult under the guise of reporting the news or entertaining the public.

Through data, we can see the pain triggered by 13 Reasons Why and the New York Times. We see how salacious reporting on method prompts people to consider that pathway of self-injury. Our volunteer counselors are desperately trying to keep people alive and get them help, while for-profit companies rake in dollars and clicks. If we’re lucky, the outlets triggering unstable people write off their guilt by providing a link to our services, with no consideration of how much pain they’ve caused or the costs we must endure.


I want to believe in journalism. I want to believe in the idealized mandate of the fourth estate. I want to trust that editors and journalists are doing their best to responsibly inform the public and help create a more perfect union. But my faith is waning.

Many Americans — especially conservative Americans — do not trust contemporary news organizations. This “crisis” is well-trod territory, but the focus on fact-checking, media literacy, and business models tends to obscure three features of the contemporary information landscape that I think are poorly understood:

  1. Differences in worldview are being weaponized to polarize society.
  2. We cannot trust organizations, institutions, or professions when they’re abstracted away from us.
  3. Economic structures built on value extraction cannot enable healthy information ecosystems.

Let me begin by apologizing for the heady article, but the issues that we’re grappling with are too heady for a hot take. Please read this to challenge me, debate me, offer data to show that I’m wrong. I think we’ve got an ugly fight in front of us, and I think we need to get more sophisticated about our thinking, especially in a world where foreign policy is being boiled down to 140 characters.

1. Your Worldview Is Being Weaponized

I was a teenager when I showed up at a church wearing jeans and a T-shirt to see my friend perform in her choir. The pastor told me that I was not welcome because this was a house of God, and we must dress in a manner that honors Him. Not good at following rules, I responded flatly, “God made me naked. Should I strip now?” Needless to say, I did not get to see my friend sing.

Faith is an anchor for many people in the United States, but the norms that surround religious institutions are man-made, designed to help people make sense of the world in which we operate. Many religions encourage interrogation and questioning, but only within a well-established framework. Children learn those boundaries, just as they learn what is acceptable in secular society. They learn that talking about race is taboo and that questioning the existence of God may leave them ostracized.

Like many teenagers before and after me, I was obsessed with taboos and forbidden knowledge. I sought out the music Tipper Gore hated, read the books my school banned, and tried to get answers to any question that made adults gasp. Anonymously, I spent late nights engaged in conversations on Usenet, determined to push boundaries and make sense of adult hypocrisy.

Following a template learned in Model UN, I took on strong positions in order to debate and learn. Having already lost faith in the religious leaders in my community, I saw no reason to respect the dogma of any institution. And because I made a hobby out of proving teachers wrong, I had little patience for the so-called experts in my hometown. I was intellectually ravenous, but utterly impatient with, if not outright cruel to, the adults around me. I rebelled against hierarchy and was determined to carve my own path at any cost.

I have an amazing amount of empathy for those who do not trust the institutions that elders have told them they must respect. Rage against the machine. We don’t need no education, no thought control. I’m also fully aware that you don’t garner trust in institutions through coercion or rational discussion. Instead, trust often emerges from extreme situations.

Many people have a moment where they wake up and feel like the world doesn’t really work like they once thought or like they were once told. That moment of cognitive reckoning is overwhelming. It can be triggered by any number of things — a breakup, a death, depression, a humiliating experience. Everything comes undone, and you feel like you’re in the middle of a tornado, unable to find the ground. This is the basis of countless literary classics, the crux of humanity. But it’s also a pivotal feature in how a society comes together to function.

Everyone needs solid ground, so that when your world has just been destabilized, what comes next matters. Who is the friend that picks you up and helps you put together the pieces? What institution — or its representatives — steps in to help you organize your thinking? What information do you grab onto in order to make sense of your experiences?


Countless organizations and movements exist to pick you up during your personal tornado and provide structure and a framework. Take a look at how Alcoholics Anonymous works. Other institutions and social bodies know how to trigger that instability and then help you find ground. Check out the dynamics underpinning military basic training. Organizations, movements, and institutions that can manipulate psychological tendencies toward a sociological end have significant power. Religious organizations, social movements, and educational institutions all play this role, whether or not they want to understand themselves as doing so.

Because there is power in defining a framework for people, there is good reason to be wary of any body that pulls people in when they are most vulnerable. Of course, that power is not inherently malevolent. There is fundamental goodness in providing structures to help those who are hurting make sense of the world around them. Where there be dragons is when these processes are weaponized, when these processes are designed to produce societal hatred alongside personal stability. After all, one of the fastest ways to bond people and help them find purpose is to offer up an enemy.

And here’s where we’re in a sticky spot right now. Many large institutions — government, the church, educational institutions, news organizations — are brazenly asserting their moral authority without grappling with their own shit. They’re ignoring those among them who are using hate as a tool, and they’re ignoring their own best practices and ethics, all to help feed a bottom line. Each of these institutions justifies itself by blaming someone or something to explain why they’re not actually that powerful, why they’re actually the victim. And so they’re all poised to be weaponized in a cultural war rooted in how we stabilize American insecurity. And if we’re completely honest with ourselves, what we’re really up against is how we collectively come to terms with a dying empire. But that’s a longer tangent.

Any teacher knows that it only takes a few students to completely disrupt a classroom. Forest fires spark easily under certain conditions, and the ripple effects are huge. As a child, when I raged against everyone and everything, it was my mother who held me into the night. When I was a teenager chatting my nights away on Usenet, the two people who most memorably picked me up and helped me find stable ground were a deployed soldier and a transgender woman, both of whom held me as I asked insane questions. They absorbed the impact and showed me a different way of thinking. They taught me the power of strangers counseling someone in crisis. As a college freshman, when I was spinning out of control, a computer science professor kept me solid and taught me how profoundly important a true mentor could be. Everyone needs someone to hold them when their world spins, whether that person be a friend, family, mentor, or stranger.

Fifteen years ago, when parents and the news media were panicking about online bullying, I saw a different risk. I saw countless kids crying out online in pain only to be ignored by those who preferred to prevent teachers from engaging with students online or to create laws punishing online bullies. We saw the suicides triggered as youth tried to make “It Gets Better” videos to find community, only to be further harassed at school. We saw teens studying the acts of Columbine shooters, seeking out community among those with hateful agendas and relishing the power of lashing out at those they perceived to be benefiting at their expense. But it all just seemed like a peculiar online phenomenon, proof that the internet was cruel. Too few of us tried to hold those youth who were unquestionably in pain.

Teens who are coming of age today are already ripe for instability. Their parents are stressed; even if they have jobs, nothing feels certain or stable. There doesn’t seem to be a path toward economic stability that doesn’t involve college, but there doesn’t seem to be a path toward college that doesn’t involve mind-bending debt. Opioids seem like a reasonable way to numb the pain in far too many communities. School doesn’t seem like a safe place, so teenagers look around and whisper among friends about who they believe to be the most likely shooter in their community. As Stephanie Georgopulos notes, the idea that any institution can offer security seems like a farce.

When I look around at who’s “holding” these youth, I can’t help but notice the presence of people with a hateful agenda. And they terrify me, in no small part because I remember an earlier incarnation.

In 1995, when I was trying to make sense of my sexuality, I turned to various online forums and asked a lot of idiotic questions. I was adopted by the aforementioned transgender woman and numerous other folks who heard me out, gave me pointers, and helped me think through what I felt. In 2001, when I tried to figure out what the next generation did, I realized that struggling youth were more likely to encounter a Christian gay “conversion therapy” group than a supportive queer peer. Queer folks were sick of being attacked by anti-LGBT groups, and so they had created safe spaces on private mailing lists that were hard for lost queer youth to find. And so it was that in their darkest hours, these youth were getting picked up by those with a hurtful agenda.


Fast-forward 15 years, and teens who are trying to make sense of social issues aren’t finding progressive activists willing to pick them up. They’re finding the so-called alt-right. I can’t tell you how many youth we’ve seen asking questions like I asked being rejected by people identifying with progressive social movements, only to find camaraderie among hate groups. What’s most striking is how many people with extreme ideas are willing to spend time engaging with folks who are in the tornado.

Spend time reading the comments below the YouTube videos of youth struggling to make sense of the world around them. You’ll quickly find comments by people who spend time in the manosphere or subscribe to white supremacist thinking. They are diving in and talking to these youth, offering a framework to make sense of the world, one rooted in deeply hateful ideas. These self-fashioned self-help actors are grooming people to see that their pain and confusion isn’t their fault, but the fault of feminists, immigrants, people of color. They’re helping them believe that the institutions they already distrust — the news media, Hollywood, government, school, even the church — are actually working to oppress them.

Most people who encounter these ideas won’t embrace them, but some will. Still, even those who don’t will never let go of the doubt that has been instilled in the institutions around them. It just takes a spark.

So how do we collectively make sense of the world around us? There isn’t one universal way of thinking, but even the act of constructing knowledge is becoming polarized. Responding to the uproar in the news media over “alternative facts,” Cory Doctorow noted:

We’re not living through a crisis about what is true, we’re living through a crisis about how we know whether something is true. We’re not disagreeing about facts, we’re disagreeing about epistemology. The “establishment” version of epistemology is, “We use evidence to arrive at the truth, vetted by independent verification (but trust us when we tell you that it’s all been independently verified by people who were properly skeptical and not the bosom buddies of the people they were supposed to be fact-checking).”

The “alternative facts” epistemological method goes like this: “The ‘independent’ experts who were supposed to be verifying the ‘evidence-based’ truth were actually in bed with the people they were supposed to be fact-checking. In the end, it’s all a matter of faith, then: you either have faith that ‘their’ experts are being truthful, or you have faith that we are. Ask your gut, what version feels more truthful?”

Doctorow creates these oppositional positions to make a point and to highlight that there is a war over epistemology, or the way in which we produce knowledge.

The reality is much messier, because what’s at stake isn’t simply about resolving two competing worldviews. Rather, what’s at stake is that there is no universal way of knowing, and we have reached a stage in our political climate where there is more power in seeding doubt, destabilizing knowledge, and encouraging others to distrust other systems of knowledge production.

Contemporary propaganda isn’t about convincing someone to believe something, but convincing them to doubt what they think they know. And once people’s assumptions have come undone, who is going to pick them up and help them create a coherent worldview?

2. You Can’t Trust Abstractions

Deeply committed to democratic governance, George Washington believed that a representative government could only work if the public knew their representatives. As a result, our Constitution set a ratio of no more than one representative for every 30,000 constituents. When we stopped adding additional representatives to the House in 1913 (frozen at 435), each member represented roughly 225,000 constituents. Today, the ratio of congresspeople to constituents is more than 700,000:1. Most people will never meet their representative, and few feel as though Washington truly represents their interests. The democracy that we have is representational only in ideal, not in practice.

As our Founding Fathers knew, it’s hard to trust an institution when it feels inaccessible and abstract. All around us, institutions are increasingly divorced from the community in which they operate, with often devastating costs. Thanks to new models of law enforcement, police officers don’t typically come from the community they serve. In many poor communities, teachers also don’t come from the community in which they teach. The volunteer U.S. military hardly draws from all communities, and those who don’t know a soldier are less likely to trust or respect the military.

Journalism can only function as the fourth estate when it serves as a tool to voice the concerns of the people and to inform those people of the issues that matter. Throughout the 20th century, communities of color challenged mainstream media’s limitations and highlighted that few newsrooms represented the diverse backgrounds of their audiences. As such, we saw the rise of ethnic media and a challenge to newsrooms to be smarter about their coverage. But let’s be real — even as news organizations articulate a commitment to the concerns of everyone, newsrooms have done a dreadful job of becoming more representative. Over the past decade, we’ve seen racial justice activists challenge newsrooms for their failure to cover Ferguson, Standing Rock, and other stories that affect communities of color.

Meanwhile, local journalism has nearly died. The success of local journalism didn’t just matter because those media outlets reported the news, but because it meant that many more people were likely to know journalists. It’s easier to trust an institution when it has a human face that you know and respect. And as fewer and fewer people know journalists, they trust the institution less and less. Meanwhile, the rise of social media, blogging, and new forms of talk radio has meant that countless individuals have stepped in to cover issues not being covered by mainstream news, often using a style and voice that is quite unlike that deployed by mainstream news media.

We’ve also seen the rise of celebrity news hosts. These hosts help push the boundaries of parasocial interactions, allowing the audience to feel deep affinity toward these individuals, as though they are true friends. Tabloid papers have long capitalized on people’s desire to feel close to celebrities by helping people feel like they know the royal family or the Kardashians. Talking heads capitalize on this, in no small part by how they communicate with their audiences. So, when people watch Rachel Maddow or listen to Alex Jones, they feel more connected to the message than they would when reading a news article. They begin to trust these people as though they are neighbors. They feel real.


People want to be informed, but who they trust to inform them is rooted in social networks, not institutions. The trust of institutions stems from trust in people. The loss of the local paper means a loss of trusted journalists and a connection to the practices of the newsroom. As always, people turn to their social networks to get information, but what flows through those social networks is less and less likely to be mainstream news. But here’s where you also get an epistemological divide.

As Francesca Tripodi points out, many conservative Christians have developed a media literacy practice that emphasizes the “original” text rather than an intermediary. Tripodi points out that the same type of scriptural inference that Christians apply in Bible study is often also applied to reading the Constitution, tax reform bills, and Google results. This approach is radically different than the approach others take when they rely on intermediaries to interpret news for them.

As the institutional construction of news media becomes more and more proximately divorced from the vast majority of people in the United States, we can and should expect trust in news to decline. No amount of fact-checking will make up for a widespread feeling that coverage is biased. No amount of articulated ethical commitments will make up for the feeling that you are being fed clickbait headlines.

No amount of drop-in journalism will make up for the loss of journalists within the fabric of local communities. And while the population who believes that CNN and the New York Times are “fake news” are not demographically representative, the questionable tactics that news organizations use are bound to increase distrust among those who still have faith in them.

3. The Fourth Estate and Financialization Are Incompatible

If you’re still with me at this point, you’re probably deeply invested in scholarship or journalism. And, unless you’re one of my friends, you’re probably bursting at the seams to tell me that the reason journalism is all screwed up is because the internet screwed news media’s business model. So I want to ask a favor: Quiet that voice in your head, take a deep breath, and let me offer an alternative perspective.

There are many types of capitalism. After all, the only thing that defines capitalism is the private control of industry (as opposed to government control). Most Americans have been socialized into believing that all forms of capitalism are inherently good (which, by the way, was a propaganda project). But few are encouraged to untangle the different types of capitalism and different dynamics that unfold depending on which structure is operating.

I grew up in mom-and-pop America, where many people dreamed of becoming small business owners. The model was simple: Go to the bank and get a loan to open a store or a company. Pay back that loan at a reasonable interest rate — knowing that the bank was making money — until eventually you owned the company outright. Build up assets, grow your company, and create something of value that you could pass on to your children.

In the 1980s, franchises became all the rage. Wannabe entrepreneurs saw a less risky path to owning their own business. Rather than having to figure it out alone, you could open a franchise with a known brand and a clear process for running the business. In return, you had to pay some overhead to the parent company. Sure, there were rules to follow and you could only buy supplies from known suppliers and you didn’t actually have full control, but it kinda felt like you did. Like being an Uber driver, it was the illusion of entrepreneurship that was so appealing. And most new franchise owners didn’t know any better, nor were they able to read the writing on the wall when the water all around them started boiling their froggy self. I watched my mother nearly drown, and the scars are still visible all over her body.

I will never forget the U.S. Savings & Loan crisis, not because I understood it, but because it was when I first realized that my Richard Scarry impression of how banks worked was way wrong. Only two decades later did I learn to see the FIRE industries (Finance, Insurance, and Real Estate) as extractive ones. They aren’t there to help mom-and-pop companies build responsible businesses, but to extract value from their naiveté. Like today’s post-college youth are learning, loans aren’t there to help you be smart, but to bend your will.

It doesn’t take a quasi-documentary to realize that McDonald’s is not a fast-food franchise; it’s a real estate business that uses a franchise structure to extract capital from naive entrepreneurs. Go talk to a wannabe restaurant owner in New York City and ask them what it takes to start a business these days. You can’t even get a bank loan or lease in 2018 without significant investor backing, which means that the system isn’t set up for you to build a business and pay back the bank, pay a reasonable rent, and develop a valuable asset. You are simply a pawn in a financialized game between your investors, the real estate companies, the insurance companies, and the bank, all of which want to extract as much value from your effort as possible. You’re just another brick in the wall.

Now let’s look at the local news ecosystem. Starting in the 1980s, savvy investors realized that many local newspapers owned prime real estate in the center of key towns. These prized assets would make for great condos and office rentals. Throughout the country, local news shops started getting eaten up by private equity and hedge funds — or consolidated by organizations controlled by the same forces. Media conglomerates sold off their newsrooms as they felt increased pressure to increase profits quarter over quarter.

Building a sustainable news business was hard enough when the news had a wealthy patron who valued the goals of the enterprise. But the finance industry doesn’t care about sustaining the news business; it wants a return on investment. And the extractive financiers who targeted the news business weren’t looking to keep the news alive. They wanted to extract as much value from those businesses as possible. Taking a page out of McDonald’s playbook, they forced the newsrooms to sell their real estate. Often, news organizations had to rent from new landlords who wanted obscene sums, often forcing them to move out of their buildings. News outlets were forced to reduce staff, produce more junk content, sell more ads, and find countless ways to cut costs. Of course the news suffered — the goal was to push news outlets into bankruptcy or sell, especially if the companies had pensions or other costs that couldn’t be excised.

Yes, the fragmentation of the advertising industry due to the internet hastened this process. And let’s also be clear that business models in the news business have never been clean. But no amount of innovative new business models will make up for the fact that you can’t sustain responsible journalism within a business structure that requires newsrooms to make more money quarter over quarter to appease investors. This does not mean that you can’t build a sustainable news business, but if the news is beholden to investors trying to extract value, it’s going to be impossible. And if news companies have no assets to rely on (such as their now-sold real estate), they are fundamentally unstable and likely to engage in unhealthy business practices out of economic desperation.

Untangling our country from this current version of capitalism is going to be as difficult as curbing our addiction to fossil fuels. I’m not sure it can be done, but as long as we look at companies and blame their business models without looking at the infrastructure in which they are embedded, we won’t even begin taking the first steps. Fundamentally, both the New York Times and Facebook are public companies, beholden to investors and desperate to increase their market cap. Employees in both organizations believe themselves to be doing something important for society.

Of course, journalists don’t get paid well, while Facebook’s employees can easily threaten to walk out if the stock doesn’t keep rising, since they’re also investors. But we also need to recognize that the vast majority of Americans have a stake in the stock market. Pension plans, endowments, and retirement plans all depend on stocks going up — and those public companies depend on big investors investing in them. Financial managers don’t invest in news organizations that are happy to be stable break-even businesses. Heck, even Facebook is in deep trouble if it can’t continue to increase ROI, whether through attracting new customers (advertisers and users), increasing revenue per user, or diversifying its businesses. At some point, it too will get desperate, because no business can increase ROI forever.

ROI capitalism isn’t the only version of capitalism out there. We take it for granted and tacitly accept its weaknesses by creating binaries, as though the only alternative is Cold War Soviet Union–styled communism. We’re all frogs in an ocean that’s quickly getting warmer. Two degrees will affect a lot more than oceanfront properties.

Reclaiming Trust

In my mind, we have a hard road ahead of us if we actually want to rebuild trust in American society and its key institutions (which, TBH, I’m not sure is everyone’s goal). There are three key higher-order next steps, all of which are at the scale of the New Deal.

  1. Create a sustainable business structure for information intermediaries (like news organizations) that allows them to be profitable without the pressure of ROI. In the case of local journalism, this could involve subsidized rent, restrictions on types of investors or takeovers, or a smartly structured double bottom-line model. But the focus should be on strategically building news organizations as a national project to meet the needs of the fourth estate. It means moving away from a journalism model that is built on competition for scarce resources (ads, attention) to one that’s incentivized by societal benefits.
  2. Actively and strategically rebuild the social networks of America. Create programs beyond the military that incentivize people from different walks of life to come together and achieve something great for this country. This could be connected to job training programs or rooted in community service, but it cannot be done through the government alone or, perhaps, at all. We need the private sector, religious organizations, and educational institutions to come together and commit to designing programs that knit together America while also providing the tools of opportunity.
  3. Find new ways of holding those who are struggling. We don’t have a social safety net in America. For many, the church provides the only accessible net when folks are lost and struggling, but we need a lot more. We need to work together to build networks that can catch people when they’re falling. We’ve relied on volunteer labor for a long time in this domain — women, churches, volunteer civic organizations — but our current social configuration makes this extraordinarily difficult. We’re in the middle of an opiate crisis for a reason. We need to think smartly about how these structures or networks can be built and sustained so that we can collectively reach out to those who are falling through the cracks.

Fundamentally, we need to stop triggering one another because we’re facing our own perceived pain. This means we need to build large-scale cultural resilience. While we may be teaching our children “social-emotional learning” in the classroom, we also need to start taking responsibility at scale. Individually, we need to step back and empathize with others’ worldviews and reach out to support those who are struggling. But our institutions also have important work to do.

At the end of the day, if journalistic ethics means anything, newsrooms cannot justify creating spectacle out of their reporting on suicide or other topics just because they feel pressure to create clicks. They have the privilege of choosing what to amplify, and they should focus on what is beneficial. If they can’t operate by those values, they don’t deserve our trust. While I strongly believe that technology companies have a lot of important work to do to be socially beneficial, I hold news organizations to a higher standard because of their own articulated commitments and expectations that they serve as the fourth estate. And if they can’t operationalize ethical practices, I fear the society that must be knitted together to self-govern is bound to fragment even further.

Trust cannot be demanded. It’s only earned by being there at critical junctures when people are in crisis and need help. You don’t earn trust when things are going well; you earn trust by being a rock during a tornado. The winds are blowing really hard right now. Look around. Who is helping us find solid ground?