Do Systems Create Harassment?

It’s no surprise that people generally think the internet has made our society more violent and toxic. Perhaps it’s not the internet so much as the anonymity it provides, but either way. Hate, misogyny, racism, homophobia, and all kinds of fear-based, difference-based abuse have always existed, of course. We didn’t need the internet to know that many, many people are full of hate. However, the internet, and games by extension, has… let’s say amplified the hate. On the other hand, some games create incredibly welcoming communities. Some have fairly neutral communities. And some are as toxic as it gets. The question is: why? Why do some games attract the most toxic players? OR, why do some games create the most toxic players?

I simply don’t buy the argument that violent games create violent people. The studies that have claimed as much are dubious, to say the least. But if that’s the case, then why are some gamers constantly harassing and threatening women in the industry and the community? Why is swatting a thing? Why are racist and homophobic epithets used so freely online when they would likely not be used in the non-virtual world? “Play” is the answer some people give. “Oh, they’re just playing, it’s not real. It doesn’t matter if he says he will rape and murder the other player because it’s just a game.” Akin to the “boys will be boys” argument, this perspective is not only wrong but actually makes the situation worse. I don’t buy the argument that play naturally creates violence. Nor does play, even intense competition, create harassment and toxic environments. I’ve managed to play every day of my adult life without harassing or threatening rape against anyone. So what is it?

A recent article on Develop reports some interesting findings: when players are abusive online, warning them about the behavior within 10 minutes reduces racist/sexist/toxic behavior by 50%. Sending evidence of the toxicity along with the warning reduces it by a shocking 70%. In other words, developers have uncovered a system-driven way to reduce harassment and toxicity online. This may not sound like much, but for those of us who constantly face this kind of toxicity when we try to participate in online communities, it is remarkable. The approach doesn’t rely on convincing abusers that their behavior is bad because of X, Y, and Z. It forces them, in the heat of battle and in the moment of anger and violence, to look at what they actually wrote and accept that it is unacceptable. It’s pretty brilliant, actually.
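The article doesn’t describe how such a system is built, but the mechanism it reports, flag the abusive message, warn the offender quickly, and attach the message itself as evidence, could be sketched roughly like this. Everything here is invented for illustration: the class names, the naive keyword-matching trigger, and the 10-minute window constant (taken from the reported finding, not from any real implementation).

```python
from dataclasses import dataclass


@dataclass
class ChatMessage:
    player: str
    text: str
    timestamp: float  # seconds since match start


class ToxicityWarner:
    """Sketch of a fast-feedback moderation loop: warn the offender
    within a short window and quote their own message back as evidence."""

    WINDOW_SECONDS = 600  # the 10-minute window the findings describe

    def __init__(self, flagged_terms):
        # Real systems use far more sophisticated classifiers than
        # keyword matching; this is a stand-in for illustration.
        self.flagged_terms = {t.lower() for t in flagged_terms}
        self.warnings = []  # (player, warning_text) pairs delivered

    def handle(self, msg: ChatMessage, now: float) -> bool:
        """Return True if a warning (with evidence) was sent."""
        if not any(term in msg.text.lower() for term in self.flagged_terms):
            return False
        if now - msg.timestamp > self.WINDOW_SECONDS:
            # Too late: the findings suggest delayed warnings lose
            # much of their effect, so this sketch skips them.
            return False
        warning = (msg.player,
                   f'Your message violated the code of conduct: "{msg.text}"')
        self.warnings.append(warning)
        return True
```

The key design point, per the findings, is the evidence string: the warning quotes the player’s own words back at them rather than citing an abstract rule.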

League of Legends is known for having an incredibly vitriolic community. Riot has tried, and mostly failed, to create systems that keep players from being racist, homophobic, sexist, and threatening. It instituted the now-famous Tribunal system in 2011, though the system was disabled in 2014. The idea was that cases of abuse would be sent to a board of peers to vote on whether the player should be banned or otherwise sanctioned. Of course, this assumes that there are some cases when it’s ok to tell someone you are going to rape them to death. Meanwhile, opposite systems are being put in place by pioneers like Bonnie Ross. Ross essentially defined the Halo series and is now responsible for lifetime bans on user accounts on Xbox Live. The system bans the Xbox itself from ever using Xbox Live again, essentially turning it into a fancy DVD player. One would expect a lot of backlash to this policy. One would expect Ross to be burned alive. Yet people, surprisingly, often put the blame where it is deserved: on the person who got banned. Has this “system” made the environment less toxic? It remains to be seen, but it certainly opens up an interesting discussion.

The authors of the book Values at Play in Digital Games, Helen Nissenbaum and Mary Flanagan, talk about how values are hard-coded into technologies. Each piece of technology, video games included, is a record of the deeply held beliefs of the people who made it, with their assumptions about culture often being the most salient. If this is true, and I believe it is, I wonder what values are perpetuated by the most toxic online technologies. Twitter, of course, comes to mind first. That technology values free speech and the exchange of ideas far above respect and intelligent discussion. In fact, the medium seems to preclude intelligent discussion altogether, limiting us to buzzword-laden, attention-grabbing, misleading, and easily criticized 140-character train wrecks. A technology like Yik Yak, which has been used again and again to threaten women and people of color, values proximity-based communication and anonymity. Anonymity > safety.

My hope is that creating more systems that force toxic players to face the things they write will create players who are more cognizant of the long-term damage their language and threats can do to people. It feels too pessimistic to say that people always have been and always will be racist/homophobic/sexist, so let’s just build systems to silence them. And I don’t think that’s what’s going on with these system changes. I think what’s happening, on a deeper level, is that players are forced to recognize that the things they say in the heat of battle do not simply disappear and have no effect on people. The opposite idea, that online words vanish without consequence, is what leads to things like creating fake accounts on Twitter to harass people, or writing terroristic threats to airlines online “as a joke.” The separation between the virtual and the real exists in people’s minds much more than it does in reality. Our lives ARE online, thus harassment and abuse that happens online is very much real. I hope developers continue to research this and make better systems for us all.