The European Union has a new code of conduct covering online hate speech. In an announcement this morning, the EU indicates Facebook, Twitter, YouTube and Microsoft have all agreed to adopt that code of conduct and to step up their efforts to review and take down hate speech within 24 hours of it being reported.
The latest agreement by the social media platforms follows a similar agreement that was implemented in Germany last year with Google, Facebook and Twitter all agreeing to delete hate speech from their platforms within 24 hours. According to Monika Bickert, Head of Global Policy Management at Facebook, “There’s no place for hate speech on Facebook.”
In the U.S., social media companies have focused more on “counter-narrative” tools to combat hate speech. EU governments, however, are grappling with a refugee crisis, terror attacks, and ISIS recruitment efforts to a far greater extent than most other parts of the world. EU Justice Commissioner Vera Jourova said,
“The recent terror attacks have reminded us of the urgent need to address illegal online hate speech. Social media is unfortunately one of the tools that terrorist groups use to radicalize young people.”
Although the other companies have not shared any figures, Twitter says it has suspended over 125,000 accounts that threatened or promoted terrorist acts, usually in connection with ISIS, since the middle of 2015.
Beyond the new code of conduct, which calls for a much more aggressive timeline for reviewing and taking down hate speech, the companies will continue to make it easy for users to report hateful content, train staff to assess those reports, partner with organizations that help monitor hate speech, and pursue “counter-narrative” messaging.
What do you think of social media platforms actively policing hate speech and extremist content on their sites?