Crisis Text Line, from my perspective

Like everyone who cares about Crisis Text Line and the people we serve, I have spent the last few days reflecting on recent critiques about the organization’s practices. Having spent my career thinking about and grappling with tech ethics and privacy issues, I knew that – had I not been privy to the details and context that I know – I would be outraged by what folks heard this weekend. I would be doing what many of my friends and colleagues are doing, voicing anger and disgust. But as a founding board member of Crisis Text Line, who served as board chair from June 2020 until the beginning of January 2021, I also have additional information that shaped how I thought about these matters and informed my actions and votes over the last eight years. 

As a director, I am currently working with others on the board and in the organization to chart a path forward. As was just announced, we have concluded that we were wrong to share texter data with Loris.ai and have ended our data-sharing agreement, effective immediately. We had not shared data since we changed leadership; the board had chosen to prioritize other organizational changes to support our staff, but this call-to-action was heard loud and clear and shifted our priorities. But that doesn’t mean that the broader questions being raised are resolved. 

Texters come to us in their darkest moments. What it means to govern the traces they leave behind looks different than what it means to govern other types of data. We are always asking ourselves when, how, and whether we should leverage individual conversations born out of crisis to better help that individual, our counselors, and others who are suffering. These are challenging ethical questions with no easy answer.

What follows is how I personally thought through, balanced, and made decisions related to the trade-offs around data that we face every day at Crisis Text Line. This has been a journey for me and everyone else involved in this organization, precisely because we care so deeply. I owe it to the people we serve, the workers of Crisis Text Line, and the broader community who are challenging me to come forward to own my decisions and role in this conversation. This is my attempt to share both the role that I played and the framework that shaped my thinking. Since my peers are asking for this to be a case study in tech ethics, I am going into significant detail. For those not seeking such detail, I apologize for the length of this. 

Most of the current conversation is focused on the ethics of private-sector access to messages from texters in crisis. These are important issues that I will address, but I want to walk through how earlier decisions influenced that decision. I also want to share how the ethical struggles we face are not as simple as a binary around private-sector access. There are ethical questions all the way down.

What follows here is, I want to emphasize, my personal perspective, not the perspective of the organization or the board. As a director of Crisis Text Line, I have spent the last 8 years trying to put what I know about tech ethics into practice. I am grateful that those who care about tech ethics are passionate about us doing right by our texters. We have made changes based on what we have heard from folks this weekend. But those changes are not enough. We need to keep developing and honing guiding principles to govern our work. My goal has been and continues to be ensuring ethical practices while navigating the challenges of governing both an organization and data. Putting theory into practice continues to be more challenging than I ever imagined. Given what has unfolded, I would also love advice from those who care as I do about both mental health and tech ethics.

First: Why data?

Even before we launched the CTL service, I knew that data would play a significant role in the future of the organization. My experience with tech and youth culture was why I was asked to join the board. Delivering a service that involved asynchronous interactions via text would invariably require storing data: the entire system was designed to enable handoffs between counselors and to allow texters to pick up conversations hours (or days) later.

Storing data immediately prompted three key questions:

  1. How long would we store the data that users provided to us?
  2. Could we create a secure system?
  3. Under what conditions would we delete data?

As a board, we recognized the operational necessity of stored data, which meant an investment in the creation of a secure system and deep debate over our data retention policies. We decided that anyone should have the right to remove their data at any point, a value I strongly agreed with. Implementing this policy relied on training all crisis counselors to share this information with texters who asked for it; in practice, we introduced a codeword that texters could send to trigger the deletion of their data. (This was also documented as part of the terms of service, which texters were pointed to when they first contacted us. I know that no one in crisis reads lawyer-speak to learn this, which is why I was more interested in ensuring that our counselors knew this.)
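For concreteness, here is a minimal sketch of what a codeword-triggered deletion flow could look like. Everything in it is hypothetical: the codeword, the function names, and the storage interface are placeholders I am introducing for illustration, not a description of Crisis Text Line’s actual system.

```python
# Illustrative sketch only. The codeword, function names, and storage layer
# here are hypothetical placeholders, not Crisis Text Line's actual system.

DELETION_CODEWORD = "ERASE"  # placeholder; the real codeword is set by the service


def handle_incoming_message(texter_id: str, message: str, store) -> bool:
    """Check each incoming message for the deletion codeword before any other
    processing. Returns True if the texter's stored data was purged."""
    if message.strip().upper() == DELETION_CODEWORD:
        store.delete_conversations(texter_id)   # hypothetical data-store call
        store.record_deletion_event(texter_id)  # audit trail with no content
        return True
    # Otherwise, route the message into the normal conversation flow.
    return False
```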

Conducting the service would require storing data, but addressing the needs of those in crisis required grappling with how data would be used more generally. Some examples of how data are used in the service:

  • When our counselors want to offer recommendations for external services, they pull on outside data to bring into the conversation; this involves using geographic information texters provide to us.
  • Our supervisors review conversations both to support counselors in real time and to give feedback later, with an eye towards always improving the quality of conversations.

Our initial training program was designed based on what we could learn from other services, academic literature, and guidance from those who had been trained in social work and psychology. Early on, we began to wonder how the conversations that took place on our platform could and should inform the training itself. We knew that counselors gained knowledge through experience, and that they regularly mentored new counselors on the platform. But could we construct our training so that all counselors got to learn from the knowledge developed by those who came before them? 

This would mean using texter data for a purpose that went beyond the care and support of that individual. Yes, the Terms of Service allowed this, but this is not just a legal question; it’s an ethical question. Given the trade-offs, I made a judgment call early on that not only was using texter data to strengthen training of counselors without their explicit consent ethical, but that to not do this would be unethical. Our mission is clear: help people in crisis. To do this, we need to help our counselors better serve texters. We needed to help counselors learn and grow and develop skills with which they can help others. I supported the decision to use our data in this way.

A next critical turning point concerned scale. My mantra at Crisis Text Line has always been to focus on responsible scaling, not just scaling for scaling’s sake. But we provide a service that requires a delicate balance of available counselors to meet the needs of incoming texters. This meant that we had to think about how to predict the need and how to incentivize counselors to help out at spike moments. And still, there were often spikes where the need exceeded the availability of counselors. This led us to think about our ethical responsibilities in these moments. And this led to another use of data:

  • When there are spikes in the service without enough counselors, we triage incoming requests to ensure that those most at physical risk get served fastest; this requires analyzing the incoming texts even before a conversation starts.

This may not seem like a huge deal, but it’s an ethical decision that I’ve struggled with for years. How do you know who is in most need from just intake messages? Yes, there are patterns, but we’ve also learned over the years that these are not always predictable. More harrowingly, we know retrospectively that these signals can be biased. Needless to say, I would simply prefer for us to serve everyone, immediately. But when that’s not possible, what’s our moral and ethical responsibility? Responding to incoming requests in order might meet some people’s definition of “fair,” but is that ethical? Especially when we know that when people are in the throes of a suicide attempt, time is of the essence? I came to the conclusion that we have an ethical responsibility to use our data to work to constantly improve the triage algorithm, to do the best we can to identify those for whom immediate responses can save a life. This means using people’s data without their direct consent, to leverage one person’s data to help another. 
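To make the shape of this trade-off concrete, here is a minimal sketch of severity-based triage, written for illustration only. The risk_score heuristic and the queue interface are assumptions I am introducing for the example; they are not Crisis Text Line’s actual triage model, which is precisely the thing the data work described above is meant to keep improving and de-biasing.

```python
# Illustrative only: risk_score() and its toy heuristic are hypothetical,
# not Crisis Text Line's actual triage model.
import heapq
import itertools


def risk_score(intake_message: str) -> float:
    """Hypothetical stand-in for a learned model estimating imminent physical
    risk (0-1) from the intake message alone. Any real model would need
    continual auditing for the kinds of bias described above."""
    high_risk_terms = ("pills", "gun", "tonight", "goodbye")  # toy heuristic
    hits = sum(term in intake_message.lower() for term in high_risk_terms)
    return min(1.0, hits / 2)


class TriageQueue:
    """Serve the highest-estimated-risk texter first; break ties by arrival order."""

    def __init__(self) -> None:
        self._heap = []  # entries: (-risk, arrival order, texter_id)
        self._arrival = itertools.count()

    def add(self, texter_id: str, intake_message: str) -> None:
        # Negate the score so the highest risk pops first from the min-heap.
        heapq.heappush(
            self._heap,
            (-risk_score(intake_message), next(self._arrival), texter_id),
        )

    def next_texter(self) -> str:
        return heapq.heappop(self._heap)[2]
```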

Responsible scaling has introduced a series of questions over the years. One that I’ve turned over in my head for years, but that we’ve never acted on: should we attempt to match need to expertise? In other words, should our counselors specialize? To date, we haven’t, but it’s something I think a lot about. There are also ideas that have been raised that we intentionally abandoned. For example, there was once a board meeting where the question of automation came up. We already use some automation tools in training and for intake; should some conversations be automated? This was one of those board meetings where I put my foot down. Absolutely not. Data could be used to give our counselors superpowers, but centering this service on humans was essential. In this context, my mantra has always been augmentation, not automation. The board and organization embraced this mantra, and I’m glad for it.

Next: Data for Research

From early on, researchers came to Crisis Text Line asking for access to data. This prompted even more reflection. We had significant data and we were seeing trends that had significant implications for far more than our service. We started reporting out key trends, highlighting patterns that we then published on our website. I supported this effort because others in the ecosystem told us it helped them to learn from the patterns that we were seeing. This then led to the more complicated issue of whether or not to allow external researchers to study our data with an eye towards scholarship. 

I’m a scholar. I know how important research is and can be. I knew how little data exists in the mental health space, how much we had tried to learn from others, how beneficial knowledge could be to others working in the mental health ecosystem. I also knew that people who came to us in crisis were not consenting to be studied. Yes, there was a terms of service that could contractually permit such use, but I knew darn well that no one would read it, and advised everyone involved to proceed accordingly.

I have also tracked the use of corporate data for research for decades, speaking up against some of Facebook’s experiments. Academic researchers often want to advance knowledge by leveraging corporate data, but they do not necessarily grapple with the consequences of using data beyond IRB requirements. There have been heated debates in my field about whether or not it is ethical to use corporate trace data without the consent of users to advance scientific knowledge. I have had mixed feelings about this, but have generally come out in opposition to private trace data being used for research.

So when faced with a similar question at Crisis Text Line, I had to do a lot of soul searching. Our mission is to help people. Our texters come to us in their darkest hours. Our data was opening up internal questions right and left about how to best support them. We don’t have the internal resources to analyze the data to answer all of our questions, to improve our knowledge base in ways that can help texters. I knew that having additional help from researchers could help us learn in ways that would improve training of counselors and help people down the line. I also knew that what we were learning internally might be useful to other service providers in the mental health space and I felt queasy that we were not sharing what we had learned to help others.

Our organization does not exist for researchers to research. Our texters do not come to us to be research subjects. But our texters do come to us for help. And we do help them by leveraging what we learn helping others, including researchers. Texters may not come to us to pay it forward for the next person in need, but in effect, that’s what their engagement with us was enabling. I see that as an ethical use of data, one predicated on helping counselors and texters through experience mediated by data. The question in my mind then was: what is the relationship of research to this equation?

I elected to be the board member overseeing the research efforts. We have explored – and continue to explore – the right way to engage researchers in our work. We know that they are seeking data for their own interests, but our interest is clear: can their learnings benefit our texters and counselors, in addition to other service providers and the public health and mental health ecosystem? To this end, we have always vetted research proposals and focused on research that could help our mission, not just satisfy researcher curiosity.

Needless to say, privacy was a major concern from day one. Privacy was a concern even before we talked about research; we built privacy processes even for internal analyses of data. But when research is involved, privacy concerns are next-level. Lots of folks have accused us of being naive about reidentification over the last few days, which I must admit has been painful to hear given how much time I spend thinking about and dealing with reidentification in other contexts. I know that reidentification is possible, and that was at the heart and soul of our protocols. Researchers have constrained access to scrubbed data under contract precisely because there’s a possibility that, even with our scrubbing procedures, reidentification might be possible. But we limited data to minimize reidentification risks and added contractual procedures to explicitly prevent reidentification.

When designing these protocols, my goal was to create the conditions where we could learn from people in crisis to help others in crisis without ever, in any way, adding to someone’s crisis. And this means privacy-first.
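As a rough illustration of what “scrubbed data” can mean in practice, here is a toy de-identification pass. The patterns and function names are hypothetical, not CTL’s actual pipeline; real protocols pair automated redaction with data minimization and contractual controls precisely because no automated scrubbing step can guarantee that reidentification is impossible.

```python
# Illustrative only: a toy de-identification pass, not Crisis Text Line's
# actual scrubbing pipeline.
import re

REDACTIONS = [
    # Phone numbers like 555-123-4567 or 555 123 4567.
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    # Email addresses.
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    # Simple street addresses.
    (re.compile(r"\b\d{1,5}\s+\w+\s+(Street|St|Avenue|Ave|Road|Rd)\b", re.I), "[ADDRESS]"),
]


def scrub(text: str) -> str:
    """Redact obvious direct identifiers from a message before it enters a
    restricted, contract-governed research dataset."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```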

More generally though, the research question opened up a broader set of issues in my mind. Our service can directly help individuals. What can and should we do to advance mental health more generally? What can and should we be providing to the field? What is our responsibility to society outside our organization?

Next: Training as a Service

Our system is based on volunteers whom we train to give counsel. As is true in any volunteer-heavy context, volunteers come and go. Training is resource intensive, but essential for the service. Repeatedly, volunteers approached us as a board to tell us about the secondary benefits of the training. Yes, the training was designed to empower a counselor to communicate with a person who was in crisis, but these same skills were beneficial at work and in personal relationships. Our counselors kept telling us that crisis management training has value in the world outside our doors. This prompted us to reflect on the potential benefit of training far more people to manage crises, even if they did not want to volunteer for our service.

The founder of Crisis Text Line saw an opportunity and came to the board. We did not have the resources to simply train anyone who was interested. But HR teams at companies had both the need for, and the resources for, larger training systems. The founder proposed building a service that could provide us with a needed revenue stream. I don’t remember every one of the options we discussed, but I do know that we talked about building a separate unit in the organization to conduct training for a fee. This raised the worry that such a unit would distract from our core focus. We did all see training as mission-aligned, but we needed to focus on the core service CTL was providing.

We were also struggling, as all non-profits do, with how to be sustainable. Non-profit fundraising is excruciating and fraught. We were grateful for all of the philanthropic organizations who made starting the organization possible, but sustaining philanthropic funding is challenging and has significant burdens. Program officers always want grantees to find other sources of money. There are traditional sources: foundations, individual donors, corporate social responsibility donations. In some contexts, there’s government funding, though at that time, government was slashing funding not increasing it. Funding in the mental health space is always scarce. And yet, as a board, we always had a fiduciary responsibility to think about sustainability.  

Many of the options in front of us concerned me deeply. We could pursue money by billing insurance companies, but this had a lot of obvious downsides. Many of the people we serve do not have access to insurance. Moreover, what insurers really want is our data, which we were strongly against. They weren’t alone – many groups wanted to buy our data outright. We were strongly against those opportunities as well. No selling of data, period.

Big tech companies and other players were increasingly relying on CTL as their first response for people in crisis, without committing commensurate (or sometimes, any) resources to help offset that burden. This was especially frustrating because they had the resources to support those in crisis but had chosen not to, preferring to outsource the work but not support it. They believed that traffic was a good enough gift.

This was why we, as a board, were reflecting on whether or not we could build a revenue stream out of training people based on what we learned from training counselors. In the end, we opted not to run such an effort from within Crisis Text Line, to reduce the likelihood of distracting from our mission. Instead, we gave the founder of Crisis Text Line permission to start a new organization, with us retaining a significant share in the company; we also retained the right to a board seat. This new entity was structured as a for-profit company designed to provide a service to businesses, leveraging what we had learned helping people. This company is called Loris.ai.

Loris.ai planned on learning from us to build training tools for people who were not going to serve as volunteers for our service. Yet, the company was a separate entity and the board rejected any plan that involved full access to our systems. Instead, we opted to create a data-sharing agreement that paralleled the agreement we had created with researchers: controlled access to scrubbed data solely to build models for training that would improve mental health more broadly. We knew that it did not make sense for them to directly import our training modules; they would be training people in a different context. Yet, both they and we believed that there were lessons to be learned from our experiences, both qualitatively and quantitatively.

I struggled with this decision at the time and ever since. I could see both benefits and risks in sharing our data with another organization, regardless of how mission-aligned we were. We debated this in the boardroom; I pushed back on certain proposals. In the end, some of the board members at the time saw this decision through the lens of a potential financial risk reduction. If the for-profit company did well, we could receive dividends or sell our stake in order to fund the crisis work we were doing. I voted in favor of creating Loris.ai for a different reason. If another entity could train more people to develop the skills our crisis counselors were developing, perhaps the need for a crisis line would be reduced. After all, I didn’t want our service to be needed; the fact that it is stems from a system that is deeply flawed. If we could build tools that combat the cycles of pain and suffering, we could pay forward what we were learning from those we served. I wanted to help others develop and leverage empathy.

This decision weighed heavily on me, but I did vote in favor of it. Knowing what I know now, I would not have. But hindsight is always clearer.

Existential Crisis

In June of 2020, our employees came to us with grave concerns about the state of the organization. This triggered many changes to the organization and a reckoning as a board. I stepped in as board chair. As we focused on addressing the issues raised by employees, I felt as though we needed to prioritize what they were telling us. My priority was to listen to our staff, center the needs of our workers and texters, learn from them, and focus on our team, core business, and organizational processes. We also needed to hire a permanent CEO. The concerns we received were varied and diverse, requiring us to prioritize what to focus on when. 

Data practices were not among the dominant concerns, but they were among the issues raised. The most significant data concern raised to us was whether our data practices were as strong as the board believed them to be. This prompted three separate, interlocking audits. We had already conducted a privacy and security audit, but we revisited it in greater depth. We also hired two additional independent teams to conduct audits around 1) data governance and 2) ethical use of and bias in data. I was the board member overseeing this work, pushing each of these efforts to probe more deeply, engaging a range of stakeholders along the way (including counselors, staff, partners, and domain experts).

I quickly learned that as much as scholars talk about the need to do audits of ethics/biases, there is not a good roadmap out there for doing this work, especially in the context of a fairly large-scale organization. As someone who cares deeply about this, I was glad to be pushing the edges and interrogating every process, but I also wanted us to have guidance on how to strengthen our efforts even further. There is always room to improve, and there isn’t yet a community of practice for people doing this in real time while people are depending on an organization’s work. Still, we got great feedback from the audits and set about prioritizing the changes that needed to be implemented.

Aside from the data audits, most of our changes over the last 18 months have been organizational and infrastructural, focused on strengthening our team, processes, and tools. As the board chair, I deliberately chose not to prioritize any changes to our contractual relationship with Loris.ai, in favor of prioritizing the human concerns raised by our staff. We focused our energies internally and on our core mission. When Loris asked the Crisis Text Line founder to leave the board, we chose not to offer up a replacement. Our most proactive stance over the last 18 months was to freeze the agreement with Loris, with an explicit commitment to reconsider the relationship in 2022 once a new CEO was in place. As a result of these decisions, we have not shared any data since the change in leadership. 

Governance 

The practice of non-profit governance requires collectively grappling with trade-off after trade-off. I have been a volunteer director of the board of Crisis Text Line for eight years both because I believe in the mission and because I have been grateful to govern alongside amazing directors from whom I constantly learn. This doesn’t mean it’s been easy and it definitely doesn’t mean we always agree. But we do push each other and I learn a lot in the process. We strive to govern ethically, but that doesn’t mean others will see our decisions as such. We also make decisions that do not pan out as expected, requiring us to own our mistakes even as we change course. Sometimes, we can be fully transparent about our decisions; in other situations – especially when personnel matters are involved – we simply can’t. That is the hardest part of governance, both for our people and for myself personally.

I want to own my decisions as a director of Crisis Text Line. I voted in favor of our internal uses of data, our collaborations with researchers, and our decision to contribute to the founding of Loris.ai. I did so based on a calculation of ethical trade-offs informed by my research and experiences. I want to share some aspects of the rubric in my mind: 

1. Consent. My view of consent is more complex now than it was before I began volunteering at CTL. I believe in the ideal of informed consent, an ideal that has shaped my research. (A ToS is not consent.) But I have also learned from our clinical team about the limits of consent and when consent undermines ethical action. I have also come to believe that there are times when other ethical values must be prioritized over the ideal of consent. For example, I support Crisis Text Line’s decision to activate Public Safety Answering Points (PSAPs) when a texter presents an imminent life-or-death risk to themselves or to someone else, even when they have not consented to such an activation. Many members of our staff and volunteers are mandatory reporters who have the legal as well as ethical obligation to report. At the same time, I also support our ongoing work to reduce reliance on PSAPs and our policy efforts to have PSAPs center mental health more.

2. Present and future. Our mission is to help individuals who come to us in need and to improve the state of mental health for people more generally. I would like to create a world in which we are not needed. To that end, I am always thinking about what benefits individuals and the collective. I’m also thinking about future individuals. What can we learn now that will help the next person who comes to us? And what can we do now so that fewer people need us? I believe in a moral imperative of paying it forward and I approach data ethics with this in mind. There is undeniably a tension between the obligation to the individual and the obligation to the collective, one that I regularly reflect on.

3. The field matters. We are a non-profit and part of a broader ecosystem of mental health services. We cannot serve everyone; even for those whom we do serve in crisis, we cannot be their primary mental health provider. We want there to be an entire ecosystem of support for people in crisis, of which we play just one part. We have a responsibility to the individual in the moment of crisis and we have a responsibility to learn from and strengthen the field to help individuals downstream. To this end, I think we have an ethical responsibility to give back to the ecosystem, not just to the individual in the moment. But we need to balance this imperative with respect for the individuals during their darkest moments.

4. Improve over time. Much of our data begins as conversations, involving data from both texters and counselors. As you might imagine, when our counselors’ attempts to help someone need improvement, it weighs deeply on our entire staff. Both counselors and texters benefit when counselors learn from reviewing their conversations, from reviewing what worked or didn’t work in others’ conversations, and from lessons learned being fed back into training. My eye is always on what will improve those conversations. (This is why an obsession at the board level is quality over quantity.)

The responsibility of CTL is a heavy one, in ways that may not be obvious to those who haven’t worked in this field or seen the sometimes-counterintuitive challenges of serving people in crisis. I use the needs and prioritizations of our texters and team as my first and most important filter when judging what decisions to make. I see helping counselors and staff succeed as key to helping serve people in need. This sometimes requires thinking about how texter data can help strengthen our counselors; this sometimes requires asking if conducting research will help them grow; and this sometimes requires asking what is needed to strengthen the broader ecosystem.

When it comes to thinking about texters, I’m focused on the quality of the conversation and the safety of the texter. When it comes to safety, I’m often confronted with non-knowledge, which is harrowing. (Did someone who was attempting suicide survive the night? Emergency responders don’t necessarily tell us, so we rely on hearing back from the texter, but what’s the healthiest way to follow up with a texter?) I still don’t know the best way to measure quality; I have scoured the literature and sought advice from many to guide my thinking, but I am still struggling there and in conversation with others to try to crack this nut. I’m also thankful that there’s an entire team at Crisis Text Line dedicated to thinking about, evaluating, and improving conversation quality.

I regularly hear from both texters and counselors, whose experiences shape my thinking, but I also know that these are but a few perspectives. I read the feedback from our surveys, trying to grapple with the limitations and biases of those responses. There is no universal texter or counselor experience, which means that I have to constantly remind myself about the diversity of perspectives among texters and counselors. I cannot govern by focusing on the average; I must strive to think holistically about the diversity of viewpoints. When it comes to governance, I am always making trade-offs – often with partial information – which is hard. I also know that I sometimes get it wrong and I try to learn from those mistakes. 

These are some of the factors that go through my head when I’m thinking about our data practices. And of course, I’m also thinking about our legal and fiduciary responsibilities. But the decisions I make regarding our data start from thinking through the ethics and then I factor in financial or legal considerations. 

As I listen and learn from how people are responding to this conversation and from the decisions that I contributed to, it is clear to me that we have not done enough to share what we are doing with data and why. It’s also clear to me that I made mistakes and that change is necessary. I know that after the challenges of the last year, I have erred on the side of doing the work inside the organization rather than grappling with the questions raised by our arrangement with Loris.ai.

In order to continue serving Crisis Text Line, I need to figure out what we – and I – can do better. I am intrigued by my peers’ calls to make this a case study in tech ethics, and I hope that my detailing this thinking can contribute to that effort. I hope to learn from whatever case study emerges.

To that end, to my peers and colleagues, I also have some honest questions for all of you who are frustrated, angry, disappointed, or simply unsure about us: 

  • What is the best way to balance the implicit consent of users in crisis against other potentially beneficial uses of data that they likely have not intentionally consented to but that can help them or others?
  • Given that people come to us in their darkest moments, can/should we enable research on the traces that they produce? If so, how should this be structured? 
  • Is there any structure in which lessons learned from a non-profit service provider can be transferred to a for-profit entity? Also, how might this work with partner organizations, foundations, government agencies, sponsors, or subsidiaries, and are the answers different?
  • Given the data we have, how do we best serve our responsibility to others in the mental health ecosystem?
  • What can better community engagement and participatory decision-making in this context look like? How do we engage people to think holistically about the risks to life that we are balancing and that are shaping our decisions? (And how do we avoid abdicating our governance responsibilities in favor of performing ethics, as we’ve seen play out in other contexts?)

There are also countless other questions that I struggle with that go beyond the data issues, but also shape them. For example, as always, I will continue to push up against the persistent and endemic question that plagues all non-profits: How can we build a financially sustainable service organization that is able to scale to meet people’s needs? I also struggle every day with broader dynamics in which tech, data, ethics, and mental health are entangled. For example, how do we collectively respond to mental health crises that are amplified by decisions made by for-profit entities? What is our collective responsibility in a society where mental health access is so painfully limited? 

These questions aren’t just important for a case study. These are questions I struggle with every day in practice and I would be grateful to learn from others’ journeys. I know I will make mistakes, but I hope that I can learn from them and, with your guidance, make fewer.

I’m grateful to everyone who cares enough about the texters we serve to engage in this conversation. I’m particularly grateful to be in a community that will call in anyone whom they feel isn’t exercising proper care with people’s data and privacy. And most of all I am thankful for the counsel, guidance, and clarity of our workers at Crisis Text Line, who do the hard work of caring for texters every day, while also providing clear feedback to help drive the future of the organization. I can only hope that my decisions help them succeed at the hard work they do.

I warmly welcome any advice from all of you who’ve been watching the conversation and who care about seeing CTL succeed in its mission.
