In March, a right-wing pundit called for "transgenderism" to be "eradicated," claiming it was part of "the left's attempt to erase biological women from modern society."
His words did not fit the platform's definition of hate speech. But in my latest for New York Times Opinion, I argue his words were more dangerous: fear-inducing speech that can stoke violence.
Hate speech gets all the attention. But fear is what leaders use to inspire violence. Fear that the election has been stolen. Fear that the trans movement is erasing women. Fear that children are being groomed by pedophiles. This is dangerous speech.
Susan Benesch of the Dangerous Speech Project says its key feature is that it persuades "people to perceive other members of a group as a terrible threat. That makes violence seem acceptable, necessary or even virtuous."
https://dangerousspeech.org/
Fear speech is hard for automated systems to identify because it doesn't always rely on the slurs and derogatory words that characterize hate speech, Rutgers University professor Kiran Garimella found in the first large-scale quantitative study of fear speech. https://t.co/8fOXlmk8D5
In his second study of fear speech, Garimella found that it prompts more engagement on social media than hate speech, and users who post fear speech garner more followers. https://t.co/jRRmsuR5Yb
One innovative approach to quashing the popularity of fear speech comes in a new paper from former Facebook engineer Ravi Iyer, Jonathan Stray and Helena Puig Larrauri. They say platforms can reduce 'destructive conflict' by relying less on 'engagement' metrics that boost posts with high numbers of comments, shares or time spent.
Instead, they argue, platforms could boost posts that users explicitly indicate they found valuable. https://t.co/rB97KHcLUu
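The difference between the two ranking philosophies can be sketched in a toy example. The field names, weights, and numbers below are my own illustration of the general idea, not the paper's actual model:

```python
from dataclasses import dataclass

@dataclass
class Post:
    comments: int
    shares: int
    seconds_viewed: float
    marked_valuable: int   # explicit "this was worth my time" responses
    impressions: int

def engagement_score(p: Post) -> float:
    """Engagement-style ranking: rewards whatever provokes reactions,
    regardless of whether viewers found the post worthwhile."""
    return p.comments + 2 * p.shares + p.seconds_viewed / 60

def value_score(p: Post) -> float:
    """Value-style ranking: the share of viewers who explicitly
    marked the post as valuable."""
    return p.marked_valuable / p.impressions if p.impressions else 0.0

# An outrage-bait post: huge engagement, few viewers call it valuable.
outrage = Post(comments=900, shares=400, seconds_viewed=50_000,
               marked_valuable=30, impressions=10_000)
# An informative post: quieter, but readers mark it as valuable.
useful = Post(comments=40, shares=25, seconds_viewed=6_000,
              marked_valuable=900, impressions=10_000)

# Engagement ranking boosts the outrage post; value ranking flips the order.
assert engagement_score(outrage) > engagement_score(useful)
assert value_score(useful) > value_score(outrage)
```

The point of the sketch: the same pair of posts trades places depending on which signal the ranker optimizes, which is exactly the lever the authors propose platforms pull.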
Facebook has just announced a change in how it promotes political content. In a blog post the company said it is "continuing to move away from ranking based on engagement" and is instead giving more weight to "learning what is informative, worth their time or meaningful" to users. https://t.co/DC6vomOoRK
But in the end, the algorithms alone aren't going to save us. We, the users of the platforms, also have a role to play in challenging dangerous speech by calling out fear-based incitement through what is called "counterspeech." https://dangerousspeech.org/counterspeech/
Fighting fear is not going to be easy. But it is possibly the most important work we can do to prevent online outrage from begetting real-life violence.