Is the Oculus Rift sexist? (plus response to criticism)

Last week, I wrote a provocative opinion piece for Quartz called “Is the Oculus Rift sexist?” I’m reposting it on my blog for posterity, but also because I want to address some of the critiques that I received. First, the piece itself:

Is the Oculus Rift sexist?

In the fall of 1997, my university built a CAVE (Cave Automatic Virtual Environment) to help scientists, artists, and archeologists embrace 3D immersion to advance the state of those fields. Ecstatic at seeing a real-life instantiation of the Metaverse, the virtual world imagined in Neal Stephenson’s Snow Crash, I donned a set of goggles and jumped inside. And then I promptly vomited.

I never managed to overcome my nausea. I couldn’t last more than a minute in that CAVE and I still can’t watch an IMAX movie. Looking around me, I started to notice something. By and large, my male friends and colleagues had no problem with these systems. My female peers, on the other hand, turned green.

What made this peculiar was that we were all computer graphics programmers. We could all render a 3D scene with ease. But when they were asked to do basic tasks like jumping from Point A to Point B in a Nintendo 64 game, I watched my female friends fall short. What could explain this?

At the time, any notion that biological differences might underpin how people experienced computing systems was deemed heretical. Discussions of gender and computing centered on companies like Purple Moon, a software company trying to entice girls into gaming and computing. And yet, what I was seeing gnawed at me.

That’s when a friend of mine stumbled over a footnote in an esoteric army report about simulator sickness in virtual environments. Sure enough, military researchers had noticed that women seemed to get sick at higher rates in simulators than men. While they seemed to be able to eventually adjust to the simulator, they would then get sick again when switching back into reality.

Being an activist and a troublemaker, I walked straight into the office of the head CAVE researcher and declared the CAVE sexist. He turned to me and said: “Prove it.”

The gender mystery

Over the next few years, I embarked on one of the strangest cross-disciplinary projects I’ve ever worked on. I ended up in a gender clinic in Utrecht, in the Netherlands, interviewing both male-to-female and female-to-male transsexuals as they began hormone therapy. Many reported experiencing strange visual side effects. Like adolescents going through puberty, they’d reach for doors—only to miss the door knob. But unlike adolescents, the length of their arms wasn’t changing—only their hormonal composition.

Scholars in the gender clinic were doing fascinating research on tasks like spatial rotation skills. They found that people taking androgens (the class of steroid hormones that includes testosterone) improved at tasks that required them to rotate Tetris-like shapes in their mind to determine whether one shape was simply a rotation of another. Meanwhile, male-to-female transsexuals saw a decline in performance during their hormone replacement therapy.

Along the way, I also learned that there are more sex hormones in the retina than anywhere else in the body except the gonads. Studies on macular degeneration showed that hormone levels mattered for the retina. But why? And why would people undergoing hormonal transitions struggle with basic depth-based tasks?

Two kinds of depth perception

Back in the US, I started running visual psychology experiments. I created artificial situations where different basic depth cues—the kinds of information we pick up that tell us how far away an object is—could be put into conflict. As the work proceeded, I zeroed in on two key depth cues: “motion parallax” and “shape-from-shading.”

Motion parallax has to do with the apparent size of an object. If you put a soda can in front of you and then move it closer, it will get bigger in your visual field. Your brain assumes that the can didn’t suddenly grow and concludes that it has simply moved closer to you.
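To put rough numbers on that intuition: an object of size s at distance d subtends a visual angle of about θ = 2·arctan(s/(2d)), which for small angles is roughly s/d. Halve the distance to the can and its image on your retina roughly doubles in size; your brain reads that change as motion in depth rather than as the can growing.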

Shape-from-shading is a bit trickier. If you stare at a point on an object in front of you and then move your head around, you’ll notice that the shading of that point changes ever so slightly depending on the lighting around you. The funny thing is that your eyes actually flicker constantly, recalculating the tiny differences in shading, and your brain uses that information to judge how far away the object is.
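For the curious, the textbook computational formulation (a simplification of whatever our visual systems actually do) treats the brightness of a matte surface point as roughly I ≈ ρ (n · l), where ρ is how reflective the surface is, n is the direction the surface faces at that point, and l is the direction of the light. Recovering shape from shading means running that relationship in reverse: inferring the surface orientation, and from it the 3D shape, from the pattern of brightness across the object.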

In the real world, both these cues work together to give you a sense of depth. But in virtual reality systems, they’re not treated equally.

The virtual-reality shortcut

When you enter a 3D immersive environment, the computer tries to calculate where your eyes are in order to show you how the scene should look from that position. Binocular systems calculate slightly different images for your right and left eyes. And really good systems, like good glasses, will assess not just where your eye is, but where your retina is, and make the computation more precise.

It’s super easy—if you determine the focal point and do your linear matrix transformations accurately, which for a computer is a piece of cake—to render motion parallax properly. Shape-from-shading is a different beast. Although techniques for shading 3D models have greatly improved over the last two decades—a computer can now render an object as if it were lit by a complex collection of light sources of all shapes and colors—what they can’t do is simulate how that tiny, constant flickering of your eyes affects the shading you perceive. As a result, 3D graphics does a terrible job of truly emulating shape-from-shading.
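To make the easy half concrete, here is a toy sketch under simplifying assumptions (a single pinhole camera per eye, no lens distortion, nothing resembling a real headset’s rendering pipeline): projecting a point for each eye is little more than subtracting the eye position and dividing by depth, and the nearer an object is, the more it shifts between the two eyes and the more it moves when your head does.

```python
# Toy pinhole-camera projection: a simplified sketch, not any real VR SDK's API.
import numpy as np

def project(point, eye, focal_length=1.0):
    """Project a 3D point (metres) onto the image plane of a pinhole camera
    sitting at `eye` and looking down the -z axis (a simplifying assumption)."""
    rel = np.asarray(point, float) - np.asarray(eye, float)
    # Perspective divide: the same point projects farther from centre as it gets closer.
    return focal_length * rel[:2] / -rel[2]

eye_left  = np.array([-0.03, 0.0, 0.0])   # eyes roughly 6 cm apart
eye_right = np.array([ 0.03, 0.0, 0.0])
can_near  = np.array([0.0, 0.0, -0.5])    # soda can half a metre away
can_far   = np.array([0.0, 0.0, -1.0])    # same can, twice as far

for can in (can_near, can_far):
    left, right = project(can, eye_left), project(can, eye_right)
    print(left, right, "disparity:", left[0] - right[0])
# The nearer can shows roughly twice the disparity between the two eyes:
# parallax falls out of a few lines of linear algebra.
```

Shape-from-shading has no equivalently tidy shortcut, which is the asymmetry at issue here.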

Tricks of the light

In my experiment, I tried to trick people’s brains. I created scenarios in which motion parallax suggested an object was at one distance, and shape-from-shading suggested it was further away or closer. The idea was to see which of these conflicting depth cues the brain would prioritize. (The brain arbitrates between conflicting cues all the time; for example, if you hold out your finger and stare at it through one eye and then the other, it will appear to be in different positions, but if you look at it through both eyes, its apparent position will follow your “dominant” eye.)

What I found was startling (pdf). Although there was variability across the board, biological men were significantly more likely to prioritize motion parallax. Biological women relied more heavily on shape-from-shading. In other words, men are more likely to use the cues that 3D virtual reality systems rely on.

This, if broadly true, would explain why I, being a woman, vomited in the CAVE: My brain simply wasn’t picking up on signals the system was trying to send me about where objects were, and this made me disoriented.

My guess is that this has to do with the level of hormones in my system. If that’s true, someone undergoing hormone replacement therapy, like the people in the Utrecht gender clinic, would start to prioritize a different cue as their therapy progressed.

We need more research

However, I never did go back to the clinic to find out. The problem with this type of research is that you’re never really sure of your findings until they can be reproduced. A lot more work is needed to understand what I saw in those experiments. It’s quite possible that I wasn’t accounting for other variables that could explain the differences I was seeing. And there are certainly limitations to doing vision experiments with college-aged students in a field whose foundational studies were conducted almost exclusively with college-aged males. But what I saw among my friends, what I heard from transsexual individuals, and what I observed in my simple experiment led me to believe that we need to know more about this.

I’m excited to see Facebook invest in Oculus, the maker of the Rift headset. No one is better poised to implement Stephenson’s vision. But if we’re going to see serious investments in building the Metaverse, there are questions to be asked. I’d posit that the problems of nausea and simulator sickness that many people report when using VR headsets go deeper than pixel persistence and latency rates.

What I want to know, and what I hope someone will help me discover, is whether or not biology plays a fundamental role in shaping people’s experience with immersive virtual reality. In other words, are systems like Oculus fundamentally (if inadvertently) sexist in their design?

Response to Criticism

1. “Things aren’t sexist!”

Not surprisingly, most people who responded negatively to my piece were up in arms about the title. Some people directed that at Quartz, which was somewhat unfair. Although they originally altered the title, they reverted to mine within a few hours. My title was, intentionally, “Is the Oculus Rift sexist?” This is both a genuine question and a provocation. I’m not naive enough to think that people wouldn’t react strongly to the question, just as my advisor did when I declared VR sexist almost two decades ago. But I want people to take that question seriously precisely because more research needs to be done.

Sexism is prejudice or discrimination on the basis of sex (typically against women). For sexism to exist, there does not need to be an actor intending to discriminate. People, systems, and organizations can operate in sexist manners without realizing it. This is the basis of implicit or hidden biases. Addressing sexism starts by recognizing bias within systems and discrimination as a product of systems in society.

What was interesting about what I found, and what I want people to investigate further, is that the discrimination I identified is not intended by scientists or engineers, nor is it simply the product of cultural values. It is a byproduct of a research and innovation cycle that has significant consequences as society deploys the resultant products. The discriminatory potential of deployment will be magnified if people don’t actively seek to address it, which is precisely why I dredged up this ancient work at this moment in time.

I don’t think that the creators of Oculus Rift have any intention to discriminate against women (let alone the quite broad range of people who currently get nauseated using their system), but I think that if they don’t pay attention to the depth cue prioritization issues that I’m highlighting, or if they fail to actively seek technological redress, they’re going to have a problem. More importantly, many of us are going to have a problem. All too often, systems get shipped with discriminatory byproducts and people throw their hands in the air and say, “oops, we didn’t intend that.”

I think that we have a responsibility to identify and call attention to discrimination in all of its forms. Perhaps I should’ve titled the piece “Is Oculus Rift unintentionally discriminating on the basis of sex?” but, frankly, that’s nothing more than an attempt to ask the question I asked in a more politically correct manner. And the irony of this is that the people who most frequently complained to me about my titling are those who loathe political correctness in other situations.

I think it’s important to grapple with the ways in which sexism is not always intentional but lies at the very basis of our organizations and infrastructure, as well as our cultural practices.

2. The language of gender

I ruffled a few queer feathers by using the terms “transsexual” and “biological male.” I completely understand why contemporary transgender activists (especially in the American context) would react strongly to that language, but I also think it’s important to remember that I’m referring to a study from 1997 in a Dutch gender clinic. The term “cisgender” didn’t even exist. And at that time, in that setting, the women and men that I met adamantly deplored the “transgender” label. They wanted to make it crystal clear that they were transsexual, not transgender. To them, the latter signaled a choice.

I made a choice in this essay to use the language of my informants. When referring to men and women who had not undergone any hormonal treatment (whether they be cisgender or not), I added the label of “biological.” This was the language of my transsexually-identified informants (who, admittedly, often shortened it to “bio boys” and “bio girls”). I chose this route because the informants for my experiment identified as female and male without any awareness of the contested dynamics of these identifiers.

Finally, for those who are not enmeshed in the linguistic contestations over gender and sex, I want to clarify that I am purposefully using the language of “sex” and not “gender” because what’s at stake has to do with the biological dynamics surrounding sex, not the social construction of gender.

Get angry, but reflect and engage

Critique me, challenge me, tell me that I’m a bad human for even asking these questions. That’s fine. I want people to be provoked, to question their assumptions, and to reflect on the unintentional instantiation of discrimination. More than anything, I want those with the capacity to take what I started forward. There’s no doubt that my pilot studies are the beginning, not the end of this research. If folks really want to build the Metaverse, make sure that it’s not going to unintentionally discriminate on the basis of sex because no one thought to ask if the damn thing was sexist.

Designing for Social Norms (or How Not to Create Angry Mobs)

In his seminal book “Code”, Larry Lessig argued that social systems are regulated by four forces: 1) the market; 2) the law; 3) social norms; and 4) architecture or code. In thinking about social media systems, plenty of folks think about monetization. Likewise, as issues like privacy pop up, we regularly see legal regulation become a factor. And, of course, folks are always thinking about what the code enables or not. But it’s depressing to me how few people think about the power of social norms. In fact, social norms are usually only thought of as a regulatory process when things go terribly wrong. And then they’re out of control and reactionary and confusing to everyone around. We’ve seen this with privacy issues and we’re seeing this with the “real name” policy debates. As I read through the discussion that I provoked on this issue, I couldn’t help but think that we need a more critical conversation about the importance of designing with social norms in mind.

Good UX designers know that they have the power to shape certain kinds of social practices by how they design systems. And engineers often fail to give UX folks credit for the important work that they do. But designing the system itself is only a fraction of the design challenge when thinking about what unfolds. Social norms aren’t designed into the system. They don’t emerge by telling people how they should behave. And they don’t necessarily follow market logic. Social norms emerge as people – dare we say “users” – work out how a technology makes sense and fits into their lives. Social norms take hold as people bring their own personal values and beliefs to a system and help frame how future users can understand the system. And just as “first impressions matter” for social interactions, I cannot overstate the importance of early adopters. Early adopters configure the technology in critical ways and they play a central role in shaping the social norms that surround a particular system.

How a new social media system rolls out is of critical importance. Your understanding of a particular networked system will be heavily shaped by the people who introduce you to that system. When a system unfolds slowly, there’s room for the social norms to slowly bake, for people to work out what the norms should be. When a system unfolds quickly, there’s a whole lot of chaos in terms of social norms. Whenever a networked system unfolds, there are inevitably competing norms that arise from people who are disconnected from one another. (I can’t tell you how much I loved watching Friendster when the gay men, Burners, and bloggers were oblivious to one another.) Yet, the faster things move, the faster those collisions occur, and the more confusing it is for the norms to settle.

The “real name” culture on Facebook didn’t unfold because of the “real name” policy. It unfolded because the norms were set by early adopters and most people saw that and reacted accordingly. Likewise, the handle culture on MySpace unfolded because people saw what others did and reproduced those norms. When social dynamics are allowed to unfold organically, social norms are a stronger regulatory force than any formalized policy. At that point, you can often formalize the dominant social norms without too much pushback, particularly if you leave wiggle room. Yet, when you start with a heavy-handed regulatory policy that is not driven by social norms – as Google Plus did – the backlash is intense.

Think back to Friendster for a moment… Remember Fakester? (I wrote about them here.) Friendster spent ridiculous amounts of time playing whack-a-mole, killing off “fake” accounts and pissing off some of the most influential of its userbase. The “Fakester genocide” prompted an amazing number of people to leave Friendster and head over to MySpace, most notably bands, all because they didn’t want to be configured by the company. The notion of Fakesters died down on MySpace, but the practice at its core – the ability for groups (bands) to have recognizable representations – ended up being one of MySpace’s most central features.

People don’t like to be configured. They don’t like to be forcibly told how they should use a service. They don’t want to be told to behave like the designers intended them to be. Heavy-handed policies don’t make for good behavior; they make for pissed off users.

This doesn’t mean that you can’t or shouldn’t design to encourage certain behaviors. Of course you should. The whole point of design is to help create an environment where people engage in the most fruitful and healthy way possible. But designing a system to encourage the growth of healthy social norms is fundamentally different than coming in and forcefully telling people how they must behave. No one likes being spanked, especially not a crowd of opinionated adults.

Ironically, most people who were adopting Google Plus early on were using their real names, out of habit, out of understanding how they thought the service should work. A few weren’t. Most of those who weren’t were using a recognizable pseudonym, not even trying to trick anyone. Going after them was just plain stupid. It was an act of force and people felt disempowered. And they got pissed. And at this point, it’s no longer about whether or not the “real names” policy was a good idea in the first place; it’s now an act of oppression. Google Plus would’ve been ten bazillion times better off had they subtly encouraged the policy without making a big deal out of it, had they chosen to only enforce it in the most egregious situations. But now they’re stuck between a rock and a hard place. They either have to stick with their policy and deal with the angry mob or let go of their policy as a peace offering in the hopes that the anger will calm down. It didn’t have to be this way though and it wouldn’t have been had they thought more about encouraging the practices they wanted through design rather than through force.

Of course there’s a legitimate reason to want to encourage civil behavior online. And of course trolls wreak serious havoc on a social media system. But a “real names” policy doesn’t stop an unrepentant troll; it’s just another hurdle that the troll will love mounting. In my work with teens, I see textual abuse (“bullying”) every day among people who know exactly who they’re dealing with on Facebook. The identities of many trolls are known. But that doesn’t solve the problem. What matters is how the social situation is configured, the norms about what’s appropriate, and the mechanisms by which people can regulate them (through social shaming and/or technical intervention). A culture where people can build reputation through their online presence (whether “real” names or pseudonyms) goes a long way in combating trolls (although it is by no means a foolproof solution). But you don’t get that culture by force; you get it by encouraging the creation of healthy social norms.

Companies that build systems that people use have power. But they have to be very very very careful about how they assert that power. It’s really easy to come in and try to configure the user through force. It’s a lot harder to work diligently to design and build the ecosystem in which healthy norms emerge. Yet, the latter is of critical importance to the creation of a healthy community. Cuz you can’t get to a healthy community through force.