
Joyfully Geeking Out

2020 US Census: Everybody counts!

In 2015, I was invited to join the Commerce Department’s Data Advisory Council. Truth be told, I was kinda oblivious to what this was all about. I didn’t know much about how the government functioned. I didn’t know what a “FACA” was. (Turns out that the “Federal Advisory Committee Act” is a formal government thing.) Heck, I only had the most cursory understanding of the various agencies and bureaus associated with the Commerce Department. But I did understand one thing: the federal government has some of the most important data infrastructure out there. Long before discussions about our current tech industry, government agencies have been trying to wrangle data to help both the public and industry. The Weather Channel wouldn’t be able to do its work without NOAA (National Oceanic and Atmospheric Administration). Standards would go haywire without NIST (National Institute of Standards and Technology). And we wouldn’t be able to apportion our representatives without the Census.

Over the last few years, I have fallen madly in love with the data puzzles that underpin the census. Thanks to Margo Anderson’s “The American Census,” I learned that the history of the census is far far far messier than I ever could’ve imagined. An amazing network of people dedicated to ensuring that people are represented has given me a crash course in the longstanding battle over collecting the best data possible. As the contours of the 2020 census became more visible, it also became clear that it would be the perfect networked fieldsite for trying to understand two questions that have been tickling my brain:

  1. What makes data legitimate?
  2. What does it take to secure data infrastructure? 

(For any STS scholar reading this, add scare-quotes to all of the words that make you want to scream.)

Over the last two years, I’ve been learning as much as I could possibly learn about the census. I’ve also been dipping my toe into archival work and trying to strengthen my theoretical toolkit to handle the study of organizations and large scale operations. And now we’re a matter of days away from when everyone in the country will receive their invitation to participate in the census, and so I’m throwing myself into what is bound to be a whirlwind in order to fully understand how an operation of this magnitude unfolds.  

While I have produced a living document to explain how differential privacy is part of the 2020 census, I’ve mostly not been writing much about the research I’m doing. To be honest, I’m relishing taking the time to deeply understand something and to do the deep reflection I haven’t had the privilege of doing in almost a decade. 

If I’ve learned anything from the world of census junkies, it’s that this decennial process is raw insanity, full of unexpected twists and turns. Yet, what I can say is that it’s also filled with some of the most civic-minded people that I’ve ever encountered. There are so many different stakeholders working to get a good count in order to guarantee that everyone in this country is counted, represented, and acknowledged. This is important, not just for Congressional apportionment and redistricting, but also to make sure that funding is properly allocated, that social science research can inform important decision-making processes, and that laws designed to combat discrimination are enforced.

I’m sharing this now, not because I have new thinking to offer, but because I want folks to understand why I might be rather unresponsive to non-census-obsessives over the next few months. I want to dive head-first into this research and relish the opportunity to be surrounded by geeks engaged in a phenomenal civic effort. For those who aren’t thinking full-time about the census, please understand that I’m going to turn down requests for my time this spring and my email response time may also falter. 

Of course… if you want to make me smile, send me photographs of cool census stuff happening in your community! Or interesting census content that comes through your feeds! And if you want to go hog wild, get involved: the Census Bureau is hiring. Or you could make census-related content to encourage others to participate. Or at the very least, tell everyone you know to participate; they’ll get their official invitation starting March 12.

The US census has been taking place every 10 years since 1790. It is our democracy’s data infrastructure. And it was “big data” before there was big data. It’s also the cornerstone of countless advances in statistics and social scientific knowledge. Understanding the complexity of the census is part and parcel of understanding where our data-driven world is headed. When this is all over, I hope that I’ll have a lot more to contribute to that conversation. In the meantime, forgive me for relishing my obsessive focus.

New book: Participatory Culture in a Networked Era by Henry Jenkins, Mimi Ito, and me!

In 2012, Henry Jenkins approached Mimi Ito and me with a crazy idea that he’d gotten from talking to the folks at Polity. Would we like to sit down and talk through our research and use that as the basis of a book? I couldn’t think of anything more awesome than spending time with two of my mentors and teasing out the various strands of our interconnected research. I knew that there were places where we were aligned and places where we disagreed or, at least, where our emphases provided different perspectives. We’d all been running so fast in our own lives that we hadn’t had time to get to that level of nuance, and this crazy project would be the perfect opportunity to do precisely that.

We started by asking our various communities what questions they would want us to address. And then we sat down together, face-to-face, for two days at a time over a few months. And we talked. And talked. And talked. In the process, we started identifying themes and how our various areas of focus were woven together.

Truth be told, I never wanted it to end. Throughout our conversations, I kept flashing back to my years at MIT when Henry opened my eyes to fan culture and a way of understanding media that seeped deep inside my soul. I kept remembering my trips to LA where I’d crash in Mimi’s guest room, talking research late into the night and being woken in the early hours by a bouncy child who never understood why I didn’t want to wake up at 6AM. But above everything else, the sheer delight of brainjamming with two people whose ideas and souls I knew so well was ecstasy.

And then the hard part started. We didn’t want this project to be the output of self-indulgence and inside baseball. We wanted it to be something that helped others see how research happens, how ideas form, and how collaborations and disagreements strengthen seemingly independent work. And so we started editing. And editing. And editing. Getting help editing. And then editing some more.

The result is Participatory Culture in a Networked Era and it is unlike any project I’ve ever embarked on or read. The book is written as a conversation and it was the product of a conversation. Except we removed all of the umms and uhhs and other annoying utterances and edited it in an attempt to make the conversation make sense for someone who is trying to understand the social and cultural contexts of participation through and by media. And we tried to weed out the circular nature of conversation as we whittled down dozens of hours of recorded conversation into a tangible artifact that wouldn’t kill too many trees.

What makes this book neat is that it sheds light on all of the threads of conversation that helped the work around participatory culture, connected learning, and networked youth practices emerge. We wanted to make the practice of research as visible as our research and reveal the contexts in which we are operating alongside our struggles to negotiate different challenges in our work. If you’re looking for classic academic output, you’re going to hate this book. But if you want to see ideas in context, it sure is fun. And in the conversational product, you’ll learn new perspectives on youth practices, participatory culture, learning, civic engagement, and the commercial elements of new media.

OMG did I fall in love with Henry and Mimi all over again doing this project. Seeing how they think just tickles my brain in the best ways possible. And I suspect you’ll love what they have to say too.

The book doesn’t officially release for a few more weeks, but word on the street is that copies of this book are starting to ship. Check it out!

What does the Facebook experiment teach us?

I’m intrigued by the reaction that has unfolded around the Facebook “emotion contagion” study. (If you aren’t familiar with this, read this primer.) As others have pointed out, the practice of A/B testing content is quite common. And Facebook has a long history of experimenting on how it can influence people’s attitudes and practices, even in the realm of research. An earlier study showed that Facebook decisions could shape voters’ practices. But why is it that *this* study has sparked a firestorm?

In asking people about this, I’ve been given two dominant reasons:

  1. People’s emotional well-being is sacred.
  2. Research is different than marketing practices.

I don’t find either of these responses satisfying.

The Consequences of Facebook’s Experiment

Facebook’s research team is not truly independent of product. They have a license to do research and publish it, provided that it contributes to the positive development of the company. If Facebook knew that this research would spark the negative PR backlash, they never would’ve allowed it to go forward or be published. I can only imagine the ugliness of the fight inside the company now, but I’m confident that PR is demanding silence from researchers.

I do believe that the research was intended to be helpful to Facebook. So what was the intended positive contribution of this study? I get the sense from Adam Kramer’s comments that the goal was to determine whether content sentiment could affect people’s emotional response after being on Facebook. In other words, given that Facebook wants to keep people on Facebook, if people came away from Facebook feeling sadder, presumably they’d not want to come back to Facebook again. Thus, it’s in Facebook’s best interest to leave people feeling happier. And this study suggests that the sentiment of the content influences this. One applied take-away for product, then, is to downplay negative content. Presumably this is better for users and better for Facebook.

We can debate all day long as to whether or not this is what the study actually shows, but let’s work with it for a second. Let’s say that pre-study Facebook showed 1 negative post for every 3 positive ones and now, because of this study, Facebook shows 1 negative post for every 10 positive ones. If that’s the case, was the one-week treatment worth the outcome for longer-term content exposure? Who gets to make that decision?
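The hypothetical shift above is easy to put in exposure terms. These numbers come from the thought experiment, not from anything Facebook has disclosed:

```python
# Share of a user's feed that is negative under each hypothetical mix.
pre_study = 1 / (1 + 3)    # 1 negative per 3 positive posts -> 25% negative
post_study = 1 / (1 + 10)  # 1 negative per 10 positive posts -> ~9% negative

# Percentage-point reduction in negative exposure, every week, going forward:
reduction = pre_study - post_study
print(f"{pre_study:.0%} -> {post_study:.0%} negative ({reduction:.0%} fewer)")
```

A one-week treatment on a subset of users would then be traded off against a persistent, roughly 16-point change in exposure for everyone, which is the asymmetry the question above is pointing at.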

Folks keep talking about all of the potential harm that could’ve happened because of the study – the possibility of suicides, the mental health consequences. But what about the potential harm of negative content on Facebook more generally? Even if we believe that there were subtle negative costs to those who received the treatment, the ongoing exposure to negative content on Facebook in every week other than that one-week experiment must be more costly. How then do we account for the positive benefits to users if Facebook increased positive treatments en masse as a result of this study? Of course, the problem is that Facebook is a black box. We don’t know what they did with this study. The only thing we know is what is published in PNAS, and that ain’t much.

Of course, if Facebook did make the content that users see more positive, should we simply be happy? What would it mean that you’re more likely to see announcements from your friends when they are celebrating a new child or a fun night on the town, but less likely to see their posts when they’re offering depressive missives or angsting over a relationship in shambles? If Alice is happier when she is oblivious to Bob’s pain because Facebook chooses to keep that from her, are we willing to sacrifice Bob’s need for support and validation? This is a hard ethical choice at the crux of any decision of what content to show when you’re making choices. And the reality is that Facebook is making these choices every day without oversight, transparency, or informed consent.

Algorithmic Manipulation of Attention and Emotions

Facebook actively alters the content you see. Most people focus on the practice of marketing, but most of what Facebook’s algorithms do involve curating content to provide you with what they think you want to see. Facebook algorithmically determines which of your friends’ posts you see. They don’t do this for marketing reasons. They do this because they want you to want to come back to the site day after day. They want you to be happy. They don’t want you to be overwhelmed. Their everyday algorithms are meant to manipulate your emotions. What factors go into this? We don’t know.

Facebook is not alone in algorithmically predicting what content you wish to see. Any recommendation system or curatorial system is prioritizing some content over other content. But let’s compare what we glean from this study with standard practice. Most sites, from major news media to social media, have some algorithm that shows you the content that people click on the most. This is what drives media entities to produce listicles, flashy headlines, and car-crash news stories. What do you think garners more traffic – a detailed analysis of what’s happening in Syria or 29 pictures of the cutest members of the animal kingdom? Part of what media learned long ago is that fear and salacious gossip sell papers. 4chan taught us that grotesque imagery and cute kittens work too. What this means online is that stories about child abductions, dangerous islands filled with snakes, and celebrity sex tape scandals are often the most clicked on, retweeted, favorited, etc. So an entire industry has emerged to produce crappy clickbait content under the banner of “news.”

Guess what? When people are surrounded by fear-mongering news media, they get anxious. They fear the wrong things. Moral panics emerge. And yet, we as a society believe that it’s totally acceptable for news media – and their clickbait brethren – to manipulate people’s emotions through the headlines they produce and the content they cover. And we generally accept that algorithmic curators are well within their rights to prioritize heavily clicked content over other content, regardless of the psychological toll on individuals or society. What makes Facebook’s practice different? (Other than the fact that the media wouldn’t hold itself accountable for its own manipulative practices…)

Somehow, shrugging our shoulders and saying that we promoted content because it was popular is acceptable because those actors don’t voice that their intention is to manipulate your emotions so that you keep viewing their reporting and advertisements. And it’s also acceptable to manipulate people for advertising because that’s just business. But when researchers admit that they’re trying to learn if they can manipulate people’s emotions, they’re shunned. What this suggests is that the practice is acceptable, but admitting the intention and being transparent about the process is not.

But Research is Different!!

As this debate has unfolded, whenever people point out that these business practices are commonplace, folks respond by highlighting that research or science is different. What unfolds is a highbrow notion about the purity of research and its exclusive claims on ethical standards.

Do I think that we need to have a serious conversation about informed consent? Absolutely. Do I think that we need to have a serious conversation about the ethical decisions companies make with user data? Absolutely. But I do not believe that this conversation should ever apply only to that which is categorized as “research.” Nor do I believe that academe necessarily provides a gold standard.

Academe has many problems that need to be accounted for. Researchers are incentivized to figure out how to get through IRBs rather than to think critically and collectively about the ethics of their research protocols. IRBs are incentivized to protect the university rather than to truly work out an ethical framework for these issues. Journals relish corporate datasets even when replicability is impossible. And for that matter, even in a post-paper era, journals have ridiculous word count limits that discourage researchers from spelling out all of the gory details of their methods. But there are also broader structural issues. Academe is so stupidly competitive and peer review is so much of a game that researchers have little incentive to share their studies-in-progress with their peers for true feedback and critique. And the status games of academe reward those who get access to private coffers of data while prompting those who don’t to chastise those who do. And there’s generally no incentive for companies to play nice with researchers unless it helps their prestige, hiring opportunities, or product.

IRBs are an abysmal mechanism for actually accounting for ethics in research. By and large, they’re structured to make certain that the university will not be liable. Ethics aren’t a checklist. Nor are they a universal. Navigating ethics involves a process of working through the benefits and costs of a research act and making a conscientious decision about how to move forward. Reasonable people differ on what they think is ethical. And disciplines have different standards for how to navigate ethics. But we’ve trained an entire generation of scholars that ethics equals “that which gets past the IRB” which is a travesty. We need researchers to systematically think about how their practices alter the world in ways that benefit and harm people. We need ethics to not just be tacked on, but to be an integral part of how *everyone* thinks about what they study, build, and do.

There’s a lot of research that has serious consequences for the people who are part of the study. I think about the work that some of my colleagues do with child victims of sexual abuse. Getting children to talk about these awful experiences can take quite a psychological toll. Yet, better understanding what they experienced has huge benefits for society. So we make our trade-offs and we do research that can have consequences. But what warms my heart is how my colleagues work hard to help those children by providing counseling immediately following the interview (and, in some cases, follow-up counseling). They think long and hard about each question they ask, and how they go about asking it. And yet most IRBs wouldn’t let them do this work because no university wants to touch anything that involves kids and sexual abuse. Doing research involves trade-offs, and finding an ethical path forward requires effort and risk.

It’s far too easy to say “informed consent” and then not take responsibility for the costs of the research process, just as it’s far too easy to point to an IRB as proof of ethical thought. For any study that involves manipulation – common in economics, psychology, and other social science disciplines – people are only so informed about what they’re getting themselves into. You may think that you know what you’re consenting to, but do you? And then there are studies like discrimination audit studies in which we purposefully don’t inform people that they’re part of a study. So what are the right trade-offs? When is it OK to eschew consent altogether? What does it mean to truly be informed? When is being informed not enough? These aren’t easy questions and there aren’t easy answers.

I’m not necessarily saying that Facebook made the right trade-offs with this study, but I think that the scholarly reaction – that research is only acceptable with an IRB plus informed consent – is disingenuous. Of course, a huge part of what’s at stake has to do with the fact that what counts as a contract legally is not the same as consent. Most people haven’t consented to all of Facebook’s terms of service. They’ve agreed to a contract because they feel as though they have no other choice. And this really upsets people.

A Different Theory

The more I read people’s reactions to this study, the more that I’ve started to think that the outrage has nothing to do with the study at all. There is a growing amount of negative sentiment towards Facebook and other companies that collect and use data about people. In short, there’s anger at the practice of big data. This paper provided ammunition for people’s anger because it’s so hard to talk about harm in the abstract.

For better or worse, people imagine that Facebook is offered by a benevolent dictator, that the site is there to enable people to better connect with others. In some senses, this is true. But Facebook is also a company. And a public company for that matter. It has to find ways to become more profitable with each passing quarter. This means that it designs its algorithms not just to market to you directly but to convince you to keep coming back over and over again. People have an abstract notion of how that operates, but they don’t really know, or even want to know. They just want the hot dog to taste good. Whether it’s couched as research or operations, people don’t want to think that they’re being manipulated. So when they find out what soylent green is made of, they’re outraged. This study isn’t really what’s at stake. What’s at stake is the underlying dynamic of how Facebook runs its business, operates its system, and makes decisions that have nothing to do with how its users want Facebook to operate. It’s not about research. It’s a question of power.

I get the anger. I personally loathe Facebook and I have for a long time, even as I appreciate and study its importance in people’s lives. But on a personal level, I hate the fact that Facebook thinks it’s better than me at deciding which of my friends’ posts I should see. I hate that I have no meaningful mechanism of control on the site. And I am painfully aware of how my sporadic use of the site has confused their algorithms so much that what I see in my newsfeed is complete garbage. And I resent the fact that because I barely use the site, the only way that I could actually get a message out to friends is to pay to have it posted. My minimal use has made me an algorithmic pariah and if I weren’t technologically savvy enough to know better, I would feel as though I’ve been shunned by my friends rather than simply deemed unworthy by an algorithm. I also refuse to play the game to make myself look good before the altar of the algorithm. And every time I’m forced to deal with Facebook, I can’t help but resent its manipulations.

There’s also a lot that I dislike about the company and its practices. At the same time, I’m glad that they’ve started working with researchers and started publishing their findings. I think that we need more transparency in the algorithmic work done by these kinds of systems and their willingness to publish has been one of the few ways that we’ve gleaned insight into what’s going on. Of course, I also suspect that the angry reaction from this study will prompt them to clamp down on allowing researchers to be remotely public. My gut says that they will naively respond to this situation as though the practice of research is what makes them vulnerable rather than their practices as a company as a whole. Beyond what this means for researchers, I’m concerned about what increased silence will mean for a public who has no clue of what’s being done with their data, who will think that no new report of terrible misdeeds means that Facebook has stopped manipulating data.

Information companies aren’t the same as pharmaceuticals. They don’t need to do clinical trials before they put a product on the market. They can psychologically manipulate their users all they want without being remotely public about exactly what they’re doing. And as the public, we can only guess what the black box is doing.

There’s a lot that needs to be reformed here. We need to figure out how to have a meaningful conversation about corporate ethics, regardless of whether it’s couched as research or not. But it’s not so simple as saying that the lack of a corporate IRB or the lack of gold-standard “informed consent” means that a practice is unethical. Almost all of the manipulations these companies engage in occur without either one. And they go unchecked because they aren’t published or public.

Ethical oversight isn’t easy and I don’t have a quick and dirty solution to how it should be implemented. But I do have a few ideas. For starters, I’d like to see any company that manipulates user data create an ethics board. Not an IRB that approves research studies, but an ethics board that has visibility into all proprietary algorithms that could affect users. For public companies, this could be done through the ethics committee of the Board of Directors. But rather than simply consisting of board members, I think that it should consist of scholars and users. I also think that there needs to be a mechanism for whistleblowing regarding ethics from within companies because I’ve found that many employees of companies like Facebook are quite concerned by certain algorithmic decisions, but feel as though there’s no path to responsibly report concerns without going fully public. This wouldn’t solve all of the problems, nor am I convinced that most companies would do so voluntarily, but it is certainly something to consider. More than anything, I want to see users have the ability to meaningfully influence what’s being done with their data and I’d love to see a way for their voices to be represented in these processes.

I’m glad that this study has prompted an intense debate among scholars and the public, but I fear that it’s turned into a simplistic attack on Facebook over this particular study rather than a nuanced debate over how we create meaningful ethical oversight in research and practice. The lines between research and practice are always blurred and information companies like Facebook make this increasingly salient. No one benefits by drawing lines in the sand. We need to address the problem more holistically. And, in the meantime, we need to hold companies accountable for how they manipulate people across the board, regardless of whether or not it’s couched as research. If we focus too much on this study, we’ll lose track of the broader issues at stake.

Is the Oculus Rift sexist? (plus response to criticism)

Last week, I wrote a provocative opinion piece for Quartz called “Is the Oculus Rift sexist?” I’m reposting it on my blog for posterity, but also because I want to address some of the critiques that I received. First, the piece itself:

Is the Oculus Rift sexist?

In the fall of 1997, my university built a CAVE (Cave Automatic Virtual Environment) to help scientists, artists, and archeologists embrace 3D immersion to advance the state of those fields. Ecstatic at seeing a real-life instantiation of the Metaverse, the virtual world imagined in Neal Stephenson’s Snow Crash, I donned a set of goggles and jumped inside. And then I promptly vomited.

I never managed to overcome my nausea. I couldn’t last more than a minute in that CAVE and I still can’t watch an IMAX movie. Looking around me, I started to notice something. By and large, my male friends and colleagues had no problem with these systems. My female peers, on the other hand, turned green.

What made this peculiar was that we were all computer graphics programmers. We could all render a 3D scene with ease. But when asked to do basic tasks like jump from Point A to Point B in a Nintendo 64 game, I watched my female friends fall short. What could explain this?

At the time, any notion that there might be biological differences underpinning how people respond to computing systems was deemed heretical. Discussions of gender and computing centered on companies like Purple Moon, a software company trying to entice girls into gaming and computing. And yet, what I was seeing gnawed at me.

That’s when a friend of mine stumbled over a footnote in an esoteric army report about simulator sickness in virtual environments. Sure enough, military researchers had noticed that women seemed to get sick at higher rates in simulators than men. While they seemed to be able to eventually adjust to the simulator, they would then get sick again when switching back into reality.

Being an activist and a troublemaker, I walked straight into the office of the head CAVE researcher and declared the CAVE sexist. He turned to me and said: “Prove it.”

The gender mystery

Over the next few years, I embarked on one of the strangest cross-disciplinary projects I’ve ever worked on. I ended up in a gender clinic in Utrecht, in the Netherlands, interviewing both male-to-female and female-to-male transsexuals as they began hormone therapy. Many reported experiencing strange visual side effects. Like adolescents going through puberty, they’d reach for doors—only to miss the door knob. But unlike adolescents, the length of their arms wasn’t changing—only their hormonal composition.

Scholars in the gender clinic were doing fascinating research on tasks like spatial rotation. They found that people taking androgens (steroid hormones such as testosterone) improved at tasks that required them to rotate Tetris-like shapes in their minds to determine whether one shape was simply a rotation of another. Meanwhile, male-to-female transsexuals saw a decline in performance during their hormone replacement therapy.

Along the way, I also learned that there are more sex hormones in the retina than anywhere else in the body except for the gonads. Studies on macular degeneration showed that hormone levels mattered for the retina. But why? And why would people undergoing hormonal transitions struggle with basic depth-based tasks?

Two kinds of depth perception

Back in the US, I started running visual psychology experiments. I created artificial situations where different basic depth cues—the kinds of information we pick up that tell us how far away an object is—could be put into conflict. As the work proceeded, I narrowed in on two key depth cues – “motion parallax” and “shape-from-shading.”

Motion parallax has to do with the apparent size of an object. If you put a soda can in front of you and then move it closer, it will get bigger in your visual field. Your brain assumes that the can didn’t suddenly grow and concludes that it has simply moved closer to you.
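The size cue described above is simple trigonometry. A quick sketch, where the can’s dimensions and distances are illustrative values, not figures from the study:

```python
import math

def angular_size_deg(width_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object of a given width at a given distance."""
    return math.degrees(2 * math.atan((width_m / 2) / distance_m))

# A ~6.6 cm soda can at arm's length, then at half that distance:
far = angular_size_deg(0.066, 0.60)   # about 6.3 degrees
near = angular_size_deg(0.066, 0.30)  # about 12.6 degrees
# The retinal image roughly doubles; the brain infers "closer," not "grew."
```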

Shape-from-shading is a bit trickier. If you stare at a point on an object in front of you and then move your head around, you’ll notice that the shading of that point changes ever so slightly depending on the lighting around you. The funny thing is that your eyes actually flicker constantly, picking up tiny differences in shading, and your brain uses that information to judge how far away the object is.
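One way to see why those tiny shading shifts carry information is the standard Lambertian (diffuse) shading model, where brightness is the cosine of the angle between the surface normal and the light direction. This is a generic illustration of that model, not the setup used in the experiments:

```python
import math

def lambert_shade(normal, light_dir):
    """Diffuse brightness: dot product of unit surface normal and unit light
    direction, clamped at zero for surfaces facing away from the light."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

surface_normal = (0.0, 0.0, 1.0)  # a patch facing the viewer

# Shift the effective light direction by 10 degrees, as a small head or eye
# movement would, and the brightness changes by a small, geometry-dependent amount:
head_on = lambert_shade(surface_normal, (0.0, 0.0, 1.0))
shifted = lambert_shade(
    surface_normal,
    (math.sin(math.radians(10)), 0.0, math.cos(math.radians(10))),
)
# head_on is 1.0; shifted is cos(10 degrees), roughly 0.985
```

That small difference, sampled continuously as the eyes flicker, is the signal the brain can use to recover surface shape and distance.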

In the real world, both these cues work together to give you a sense of depth. But in virtual reality systems, they’re not treated equally.

The virtual-reality shortcut

When you enter a 3D immersive environment, the computer tries to calculate where your eyes are in order to show you how the scene should look from that position. Binocular systems calculate slightly different images for your right and left eyes. And really good systems, like good glasses, will assess not just where your eye is, but where your retina is, and make the computation more precise.

It’s super easy—if you determine the focal point and do your linear matrix transformations accurately, which for a computer is a piece of cake—to render motion parallax properly. Shape-from-shading is a different beast. Although techniques for shading 3D models have greatly improved over the last two decades—a computer can now render an object as if it were lit by a complex collection of light sources of all shapes and colors—what they can’t do is simulate how that tiny, constant flickering of your eyes affects the shading you perceive. As a result, 3D graphics does a terrible job of truly emulating shape-from-shading.
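To make that concrete, here’s a minimal sketch of why parallax is cheap for a renderer: in a simple pinhole-camera model, projected image coordinates scale as 1/depth, so an object’s apparent position and size fall straight out of the transform. (The `project` function and its numbers are my own illustration, not any particular engine’s API.)

```python
def project(x, y, z, focal=1.0):
    """Pinhole-camera projection: image coordinates shrink as 1/depth (z)."""
    return (focal * x / z, focal * y / z)

# The same off-center point rendered at two depths: halving its distance
# from the viewer doubles its offset from the image center. This 1/z
# behavior is exactly the parallax cue a headset recomputes every frame.
far_img = project(0.5, 0.0, 4.0)   # point far from the viewer
near_img = project(0.5, 0.0, 2.0)  # same point, twice as close

print(far_img[0], near_img[0])  # 0.125 0.25
```

Shading enjoys no such shortcut: every light source has to be modeled explicitly, and nothing in this pipeline accounts for how micro-movements of the eye resample the shading gradient.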

Tricks of the light

In my experiment, I tried to trick people’s brains. I created scenarios in which motion parallax suggested an object was at one distance, and shape-from-shading suggested it was further away or closer. The idea was to see which of these conflicting depth cues the brain would prioritize. (The brain prioritizes between conflicting cues all the time; for example, if you hold out your finger and stare at it through one eye and then the other, it will appear to be in different positions, but if you look at it through both eyes, it will be on the side of your “dominant” eye.)

What I found was startling (pdf). Although there was variability across the board, biological men were significantly more likely to prioritize motion parallax. Biological women relied more heavily on shape-from-shading. In other words, men are more likely to use the cues that 3D virtual reality systems rely on.

This, if broadly true, would explain why I, being a woman, vomited in the CAVE: My brain simply wasn’t picking up on signals the system was trying to send me about where objects were, and this made me disoriented.

My guess is that this has to do with the level of hormones in my system. If that’s true, someone undergoing hormone replacement therapy, like the people in the Utrecht gender clinic, would start to prioritize a different cue as their therapy progressed.

We need more research

However, I never did go back to the clinic to find out. The problem with this type of research is that you’re never really sure of your findings until they can be reproduced. A lot more work is needed to understand what I saw in those experiments. It’s quite possible that I wasn’t accounting for other variables that could explain the differences I was seeing. And there are certainly limitations to doing vision experiments with college-aged students in a field whose foundational studies are based almost exclusively on college-age males. But what I saw among my friends, what I heard from transsexual individuals, and what I observed in my simple experiment led me to believe that we need to know more about this.

I’m excited to see Facebook invest in Oculus, the maker of the Rift headset. No one is better poised to implement Neal Stephenson’s vision of the Metaverse. But if we’re going to see serious investments in building it, there are questions to be asked. I’d posit that the problems of nausea and simulator sickness that many people report when using VR headsets go deeper than pixel persistence and latency rates.

What I want to know, and what I hope someone will help me discover, is whether or not biology plays a fundamental role in shaping people’s experience with immersive virtual reality. In other words, are systems like Oculus fundamentally (if inadvertently) sexist in their design?

Response to Criticism

1. “Things aren’t sexist!”

Not surprisingly, most people who responded negatively to my piece were up in arms about the title. Some people directed that at Quartz, which was somewhat unfair. Although they originally altered the title, they reverted to my title within a few hours. My title was intentionally, “Is the Oculus Rift sexist?” This is both a genuine question and a provocation. I knew that people would react strongly to the question, just as my advisor did when I declared VR sexist almost two decades ago. But I want people to take that question seriously precisely because more research needs to be done.

Sexism is prejudice or discrimination on the basis of sex (typically against women). For sexism to exist, there does not need to be an actor intending to discriminate. People, systems, and organizations can operate in sexist manners without realizing it. This is the basis of implicit or hidden biases. Addressing sexism starts by recognizing bias within systems and discrimination as a product of systems in society.

What was interesting about what I found, and what I want people to investigate further, is that the discrimination I identified is not intentional on the part of scientists or engineers, nor simply the product of cultural values. It is a byproduct of a research and innovation cycle that has significant consequences as society deploys the resultant products. The discriminatory potential of deployment will be magnified if people don’t actively seek to address it, which is precisely why I dredged up this ancient work at this moment in time.

I don’t think that the creators of Oculus Rift have any intention to discriminate against women (let alone the quite broad range of people who currently get nauseated using their system), but I think that if they don’t pay attention to the depth cue prioritization issues that I’m highlighting, or if they fail to actively seek technological redress, they’re going to have a problem. More importantly, many of us are going to have a problem. All too often, systems get shipped with discriminatory byproducts, and people throw their hands in the air and say, “oops, we didn’t intend that.”

I think that we have a responsibility to identify and call attention to discrimination in all of its forms. Perhaps I should’ve titled the piece “Is Oculus Rift unintentionally discriminating on the basis of sex?” but, frankly, that’s nothing more than an attempt to ask the question I asked in a more politically correct manner. And the irony of this is that the people who most frequently complained to me about my titling are those who loathe political correctness in other situations.

I think it’s important to grapple with the ways in which sexism is not always intentional but lies at the very basis of our organizations and infrastructure, as well as our cultural practices.

2. The language of gender

I ruffled a few queer feathers by using the terms “transsexual” and “biological male.” I completely understand why contemporary transgender activists (especially in the American context) would react strongly to that language, but I also think it’s important to remember that I’m referring to a study from 1997 in a Dutch gender clinic. The term “cisgender” didn’t even exist. And at that time, in that setting, the women and men that I met adamantly deplored the “transgender” label. They wanted to make it crystal clear that they were transsexual, not transgender. To them, the latter signaled a choice.

I made a choice in this essay to use the language of my informants. When referring to men and women who had not undergone any hormonal treatment (whether they be cisgender or not), I added the label of “biological.” This was the language of my transsexually-identified informants (who, admittedly, often shortened it to “bio boys” and “bio girls”). I chose this route because the informants for my experiment identified as female and male without any awareness of the contested dynamics of these identifiers.

Finally, for those who are not enmeshed in the linguistic contestations over gender and sex, I want to clarify that I am purposefully using the language of “sex” and not “gender” because what’s at stake has to do with the biological dynamics surrounding sex, not the social construction of gender.

Get angry, but reflect and engage

Critique me, challenge me, tell me that I’m a bad human for even asking these questions. That’s fine. I want people to be provoked, to question their assumptions, and to reflect on the unintentional instantiation of discrimination. More than anything, I want those with the capacity to take what I started forward. There’s no doubt that my pilot studies are the beginning, not the end of this research. If folks really want to build the Metaverse, make sure that it’s not going to unintentionally discriminate on the basis of sex because no one thought to ask if the damn thing was sexist.

Parentology: The first parenting book I actually liked

As a researcher and parent, I quickly learned that I have no patience for parenting books. When I got pregnant, I started trying to read parenting books and I threw more than my fair share of them across the room. I either get angry at the presentation of the science or annoyed at the dryness of the writing. Worse, the prescriptions make me furious because anyone who tells you that there’s a formula to parenting is lying. My hatred of parenting books was really disappointing because I didn’t want to have to do a literature review whenever I wanted to know what research said about XYZ. I actually want to understand what the science says about key issues of child development, childrearing, and parenting. But I can’t stomach the tone of what I normally encounter.

So when I learned that Dalton Conley was writing a book on parenting, my eyebrows went up. I’ve always been a huge fan of his self-deprecating autobiographical book Honky because it does such a fantastic job of showcasing research on race and class. This made me wonder what he was going to do with a book on parenting.

Conley did not disappoint. His new book Parentology is the first parenting book that I’ve read that I actually enjoyed and am actively recommending to others. Conley’s willingness to detail his own failings, neuroses, and foolish logic (and to smack himself upside the head with research data in the process) showcases the trials and tribulations of parenting. Even experts make a mess of everything, but watching them do so this spectacularly lets us all off the hook. If you read this book, you will learn a lot about parenting, even though it doesn’t present the material in a how-to fashion. Instead, this book highlights the chaos that ensues when you try to implement science on the ground. Needless to say, hilarity follows.

If you need some comedy relief, pick up this book. It’s a fantastic traversal of contemporary research presented in a fashion that will have you rolling on the floor laughing. Lesson #1: If you buy your children pet guinea pigs to increase their exposure to allergens, make sure that they’re unable to mate.

What’s Behind the Free PDF of “It’s Complicated” (no, no, not malware…)

As promised, I put a free PDF copy of “It’s Complicated” on my website the day the book officially launched. But as some folks noticed, I didn’t publicize this when I did so. For those who are curious as to why, I want to explain. And I want you to understand the various issues at play for me as an author and a youth advocate.

I didn’t write this book to make money. I wrote this book to reach as wide an audience as I possibly could. This desire to get as many people engaged as possible drove every decision I made throughout this process. One of the things that drew me to Yale was their willingness to let me put a freely downloadable CC-licensed copy of the book online on the day the book came out. I knew that trade presses wouldn’t let a first-time author pull that one off. Heck, they still get mad at Paulo Coelho for releasing his books online, and he’s sold more books worldwide than almost anyone else!

As I prepared for publication, it became clear that I really needed other people’s help in getting the word out. I needed journalistic enterprises to cover the book. I needed booksellers to engage with the book. I needed people to collectively signal that this book was important. I needed people to be willing to take a bet on me. When one of those allies asked me to wait a week before publicizing the free book, I agreed.

If you haven’t published a book before, it’s pretty unbelievable to see all of the machinery that goes into getting the book out once the book exists in physical form. News organizations want to promote books that will be influential or spark a conversation, but they are also anxious about having their stories usurped by others. Booksellers make risky decisions about how many copies they think they can sell ahead of time and order accordingly. (And then there’s the world of paying for placement which I simply didn’t do.) Booksellers’ orders – as well as actual presales – are influential in shaping the future of a book, just like first weekend movie sales matter. For example, these sales influence bestseller and recommendation lists. These lists are key to getting broader audiences’ attention (and for getting the attention of certain highly influential journalistic enterprises). And, as an author trying to get a message out, I realized that I needed to engage with this ecosystem and I needed all of these actors to believe in my book.

The bestseller aspect of this is the part that I struggle with the most. I don’t actually care whether or not my book _sells_ a lot; I care whether or not it’s _read_ a lot. But there’s no bestread-ed list (except maybe Goodreads). And while many books that are widely sold aren’t widely read, most books that are widely read are widely sold. My desire to be widely read is why I wanted to make the book freely available from the get-go. I get that not everyone can afford to buy the book. I get that it’s not available in certain countries. I get that people want to check it out first. I get that we haven’t figured out how to implement ‘grep’ in physical books. So I really truly get the importance of making the book accessible.

But what I started to realize is that when people purchase the book, they signal to outside folks that the book is important. This is one of the reasons that I asked people who value this book to buy it. For them or for others. I love it when people buy the book and give it away to a poor grad student, struggling parent, or library. I don’t know if I’ll make any bestseller list, but the reason I decided to try is because sales rankings – especially in the first few weeks of a book’s life – really do help attract more attention, which is key to getting the word out. And so I’ve begged and groveled, asking people to buy my book even though it makes me feel squeamish, solely because I know that the message I want to offer is important. So, to be honest, if you are going to buy the book at some point, I’d really appreciate it if you’d do so sooner rather than later. Your purchasing decisions help me signal to the powers that be that this book is important, that the message in the book is valuable.

That said, if you don’t have the resources or simply don’t want to, don’t buy it. I’m cool with that. I’m beyond delighted to give the book away for free to anyone who wants to read it, assign it in their classes, or otherwise engage with it. If you choose to download it, thank you! I’m glad you find it valuable!

If you feel like giving back, I have a request. Please help support all of the invisible people and organizations that helped get word of my book out there. I realize that there are folks out there who want to “support the author,” but my ask of you is to help me support the whole ecosystem that made this possible.

Go buy a different book from Yale University Press to thank them for being willing to publish me. Buy a random book from an independent bookseller to say thank you (especially if you live near Harvard Book Store, Politics & Prose, or Book People). Visit The Guardian and click on their ads to thank them for running a first serial. Donate to NPR for their unbelievable support in getting the word out. Buy a copy or click on the ads of BoingBoing, Cnet, Fast Company, Financial Times, The Globe & Mail, LA Times, Salon, Slate, Technology Review, The Telegraph, USA Today, Wired, and the other journalistic venues whose articles aren’t yet out to thank them for being so willing to cover this book. Watch the ads on Bloomberg and MSNBC to send them a message of thanks. And take the time to retweet the tweets or write a comment on the blogs of the hundreds of folks who have been so kind to write about this book in order to get the word out. I can’t tell you how grateful I am to all of the amazing people and organizations who have helped me share what I’ve learned. Please shower them in love.

If you want to help me, spread the message of my book as wide as you possibly can. I wrote this book so that more people will step back, listen, and appreciate the lives of today’s teenagers. I want to start a conversation so that we can think about the society that we’re creating. I will be forever grateful for anything that you can do to get that message out, especially if you can help me encourage people to calm down and let teenagers have some semblance of freedom.

More than anything, thank *you* soooo much for your support over the years!!! I am putting this book up online as a gift to all of the amazing people who have been so great to me for so long, including you. Thank you thank you thank you.

{{hug}}

PS: Some folks have noticed that Amazon seems to not have any books in stock. There was a hiccup but more are coming imminently. You could wait or you could support IndieBound, Powell’s, Barnes & Noble, or your local bookstore.

Data & Society: Call for Fellows

Over the last six months, I’ve been working to create the Data & Society Research Institute to address the social, technical, ethical, legal, and policy issues that are emerging because of data-centric technological development.  We’re still a few months away from launching the Institute, but we’re looking to identify the inaugural class of fellows. If you know innovative thinkers and creators who have a brilliant idea that needs a good home and are excited by the possibility of helping shape a new Institute, can you let them know about this opportunity?

The Data & Society Research Institute is a new think/do tank in New York City dedicated to addressing social, technical, ethical, legal, and policy issues that are emerging because of data-centric technological development.

Data & Society is currently looking to assemble its inaugural class of fellows. The fellowship program is intended to bring together an eclectic network of researchers, entrepreneurs, activists, policy creators, journalists, geeks, and public intellectuals who are interested in engaging one another on the key issues introduced by the increasing availability of data in society. We are looking for a diverse group of people who can see both the opportunities and challenges presented by access to data and who have a vision for a project that can inform the public or shape the future of society.

Applications for fellowships are due January 24, 2014. To learn more about this opportunity, please see our call for fellows.

On a separate, but related note, I lurve my employer; my ability to create this Institute is only possible because of a generous gift from Microsoft.

why I’m quitting Mendeley (and why my employer has nothing to do with it)

Earlier this week, Mendeley was bought by Elsevier. I posted the announcement on Twitter to state that I would be quitting Mendeley. This tweet sparked a conversation between me and the head of academic outreach at Mendeley (William Gunn) that could only go so far in 140 character chunks. I was trying to highlight that, while I respected the Mendeley team’s decision to do what’s best for them, I could not support them as a customer knowing that this would empower a company that I think undermines scholarship, scholars, and the future of research.

Today, Gunn posted the following tweet: “All you folks retweeting @zephoria know who she works for, right?” before justifying his implied critique by highlighting that he personally respects MSR.

I feel the need to respond to this implicit attack on my character and affiliation. When I’m critical of Elsevier, I’m speaking as a scholar, not on behalf of Microsoft or even Microsoft Research. That said, I get that everyone’s associations shape how they’re perceived. But I’m not asking people to buy my “product” or even the products of my employer. I’m making a public decision as a scholar who is committed to the future of research. I believe in making my research publicly available through open-access initiatives, and I’m proud to work for and be associated with an organization that is committed to transforming scholarly publishing. I’m also committed to boycotting organizations that undermine research, scholarship, libraries, and the production of knowledge.

I also think that it’s important to explain that there are huge differences between Microsoft and Elsevier. I fully recognize that I work for a company that many people think is evil. When I joined Microsoft four years ago, I did a lot of poking around and personal soul-searching. Like many other geeks of my age, I spent my formative years watching an arrogant Microsoft engage in problematic activities only to be humiliated by an antitrust case. Then I watched the same company, with its tail between its legs, grow up. The company I was looking to join four years ago was not the company that I boycotted in college. It had been a decade since United States v. Microsoft, and even though many of my peers are never going to forgive my employer for its activities in the 90s, I am willing to accept that companies change.

There are many aspects of Microsoft that I absolutely love. For starters, Microsoft Research (MSR) is heaven on earth. Overall, MSR offers more freedom, flexibility, and opportunities to scholars than even the best academic institutions. They share my values regarding making scholarship widely accessible (see: Tony Hey’s 6-part series on open access). And, unlike research entities at other major corporations, Microsoft Research has supported me in doing research that’s critical of Microsoft (even when I get nastygrams from corporate executives). Beyond my home division, there are other sparkly beacons of awesome. I love that Microsoft has made privacy a central value, even as it struggles to ethically negotiate the opportunities presented by data mining. I have been in awe of some of the thoughtful and innovative approaches taken by the folks at Bing, in mobile, and in Xbox. Even more than the work that everyone sees, I get excited by some of the visioning that happens behind closed doors.

Don’t get me wrong. Like all big companies, Microsoft still screws up. I’ve facepalmed on plenty of occasions, embarrassed to be associated with particular company decisions, messages, or tactics. But I genuinely believe that the overall company means well and is pointed in a positive, productive, and ethical direction. Sure, there are some strategies that don’t excite me, but I think that the leadership is trying to move the company to a future I can buy into. I’m proud of where the company is going even if I can’t justify its past.

I cannot say the same thing for Elsevier. As most academics and many knowledge activists know, Elsevier has engaged in some pretty evil maneuvers. Elsevier published fake journals until it got caught. Its parent company was involved in the arms trade until it got caught. Elsevier played an unrepentant and significant role in advancing SOPA/PIPA/RWA and continues to lobby on issues that undermine scholarship. Elsevier currently and actively screws over academic libraries and scholars through its bundling practices. There is no sign that the future of Elsevier is pro-researcher. There is zero indication that Mendeley’s acquisition is anything other than an attempt to placate the academics who are refusing to do free labor for Elsevier (editorial boards, reviewers, academics). There’s no attempt at penance, no apology, not even a promise of a future direction. Just the acquisition of a beloved company, as though that makes up for all of the ways in which Elsevier has screwed over scholars in the past _and continues to_.

Elsevier’s practices make me deeply deeply angry. While academic publishing as a whole is pretty flawed, Elsevier takes the most insidious practices further at each and every turn, always at the expense of those of us who are trying to produce, publish, and distribute research. Their prices are astronomical, bankrupting libraries and siloing knowledge for private profit off of free labor. As a result, many mathematicians and other scientists have begun stepping off of their editorial boards in protest. Along with over 13,000 other scholars, I too signed the Cost of Knowledge boycott.

I see no indication of a reformed Elsevier, no indication of a path forward that is actually respectful of scholars, scholarship, librarians, or universities. All I see is a company looking to make a profit in an unethical manner and trying to assuage angry customers and laborers with small tokens.

Mendeley’s leadership is aware of how many academics despise Elsevier. In their announcement of the sale, they justify Elsevier by pointing to some of the technologies it has developed. There’s no indication that the “partnership” is going to make Elsevier more thoughtful towards academics. Mendeley’s reps try to explain that Elsevier is a “large, complex organization” full of good people, as though this should relieve those of us who are tired of having our labor and ideas abused for profit.

All companies have good people in them. All companies are complex. This is not enough. What matters is the direction of the leadership and what kind of future a company is trying to create. People may not like either Microsoft’s or Elsevier’s past, but what about the future?

In Mendeley’s post, they indicate overlap between their vision and Elsevier’s vision as a company. This does not make me more hopeful about Elsevier; it makes me even more dubious of Mendeley. Elsevier has a long track record with no indication of change. It is the parent company. Startups don’t get bought by big companies to blow up the core company. New division presidents or vice presidents do not have ultimate power in big companies, particularly not when their revenue pales in comparison to the parent company’s. I wish Mendeley employees the best, but I think that they’re naive if they believe that they can start a relationship with the devil hoping he’ll change his ways because of their goodness. This isn’t a Disney fairy tale. This is business.

I genuinely like Mendeley as a product, but I will not support today’s Elsevier no matter how good a product of theirs is. Perhaps they’ll change. I wouldn’t bet on it, but I am open to the possibility. But right now, I don’t believe in the ethics and commitments of the company, nor do I believe that they’re on the precipice of meaningful change. As minimally symbolic as it is, I refuse to strengthen them with my data or money. This means that I will quit Mendeley now that they’re part of Elsevier. In the same vein, I respect people who disagree with my view on the future of Microsoft and choose not to use their products. I believe in consumer choice. I’m just startled that a head of academic outreach would try to brush off my critique of his new employer by implicating mine. I guess that’s the way things work.

I believe that the next place for me is probably Zotero, but I’m trying to figure out how to get my data (including the PDFs) over there. I’m hopeful that someone will write the scripts soon so that I don’t have to do this manually. If you’ve got other suggestions or advice, I’m all ears.
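For what it’s worth, a migration script would presumably start by pulling title-and-PDF-path pairs out of Mendeley’s local SQLite store. The sketch below runs against a tiny in-memory stand-in; the table and column names (`Documents`, `Files`, `DocumentFiles`, `localUrl`) are my guesses at the real schema and would need to be checked against an actual Mendeley database before use.

```python
import sqlite3

# Hypothetical miniature of Mendeley's local SQLite store; the real
# table and column names must be verified against an actual database.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE Documents (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE Files (hash TEXT PRIMARY KEY, localUrl TEXT);
CREATE TABLE DocumentFiles (documentId INTEGER, hash TEXT);
INSERT INTO Documents VALUES (1, 'It''s Complicated');
INSERT INTO Files VALUES ('abc', 'file:///papers/complicated.pdf');
INSERT INTO DocumentFiles VALUES (1, 'abc');
""")

# Pull (title, pdf path) pairs -- the minimum needed to re-attach
# PDFs to entries in another reference manager such as Zotero.
rows = db.execute("""
    SELECT d.title, f.localUrl
    FROM Documents d
    JOIN DocumentFiles df ON df.documentId = d.id
    JOIN Files f ON f.hash = df.hash
""").fetchall()
print(rows)
```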

The Kinder & Braver World Project: Research Series – Eight Papers on The Role of Youth Organizations and Youth Movements for Social Change

The Berkman Center for Internet & Society at Harvard University is pleased to announce the publication of eight new papers in The Kinder & Braver World Project: Research Series (danah boyd, John Palfrey, and Dena Sacco, editors) as part of its collaboration with the Born This Way Foundation (BTWF), generously supported by the John D. & Catherine T. MacArthur Foundation. The Kinder & Braver World Project: Research Series comprises short papers intended to help synthesize research and provide research-grounded insight for the variety of stakeholders working on issues related to youth empowerment and action towards creating a kinder, braver world.

The eight new papers focus on The Role of Youth Organizations and Youth Movements for Social Change, and were selected from submissions to a call for papers that the Berkman Center put out in June 2012. They include:

In addition to being published on The Kinder & Braver World Project: Research Series site, the eight new papers will soon be published on SSRN as part of the Berkman Center’s Working Paper Series. Stay tuned for details.

In early 2012 we published a group of papers related to Meanness and Cruelty, including:

We welcome ongoing conversations about these topics.

Best,

danah boyd, John Palfrey, and Dena Sacco

“Socially Mediated Publicness”: an open-access issue of JOBEM

I love being a scholar, but one thing that really depresses me about research is that so much of what scholars produce is rendered inaccessible to so many people who might find it valuable, inspiring, or thought-provoking. This is at the root of what drives my commitment to open access. When Zizi Papacharissi asked Nancy Baym and me if we’d be willing to guest edit the Journal of Broadcasting & Electronic Media (JOBEM), we agreed under one condition: the issue had to be open-access (OA). Much to our surprise and delight, Taylor & Francis agreed to “test” that strange and peculiar OA phenomenon by allowing us to make this issue OA.

Nancy and I decided to organize the special issue around “socially mediated publicness,” both because we find that topic to be of great interest and because we felt like there was something fun about talking about publicness in truly public form. We weren’t sure what the response to our call would be, but were overwhelmed with phenomenal submissions and had to reject many interesting articles.

But we are completely delighted to publish a collection of articles that we think are timely, interesting, insightful, and downright awesome. If you would like to get a sense of the arguments made in these articles, make sure to check out our introduction. The seven pieces in this guest-edited issue of JOBEM are:

We hope that you’ll find them fun to read and that you’ll share them with others that might enjoy them too!