To truly target hate speech, moderation must extend beyond civility – VentureBeat


Many Americans decry the decline of civility online, and platforms typically prohibit profane speech. Tech critics say the emphasis on civility alone is dangerous and that such thinking helps fuel the white supremacist movement, particularly on social media.

They're right.

Big Tech errs by treating content moderation as merely a content-matching problem. An emphasis on polite speech diverts attention from the substance of what white supremacists say and redirects it to tone. When content moderation relies too heavily on detecting profanity, it ignores how hate speech targets people who have been historically discriminated against. It overlooks the underlying purpose of hate speech: to punish, humiliate and control marginalized groups.

Prioritizing civility online has not only allowed civil but hateful speech to thrive; it has also normalized white supremacy. Most platforms analyze large bodies of speech containing small quantities of hate rather than known samples of extremist speech (a technological limitation). But platforms also don't recognize that white supremacist speech, even when not directly used to harass, is hate speech (a policy problem).

My team at the University of Michigan used machine learning to identify patterns in white supremacist speech that can be used to improve platforms' detection and moderation systems. We set out to teach algorithms to distinguish white supremacist speech from general speech on social media.

Our study, published by ADL (the Anti-Defamation League), reveals that white supremacists avoid using profane language to spread hate and weaponize civility against marginalized groups (especially Jews, immigrants and people of color). Automated moderation systems miss most white supremacist speech when they correlate hate with vulgar, toxic language. Instead, we analyzed how extremists differentiate and exclude racial, religious and sexual minorities.

White supremacists, for example, frequently center their whiteness by attaching "white" to many terms ("white children," "white women," "the white race"). Keyword searches and automated detection don't surface these linguistic patterns. By analyzing known samples of white supremacist speech specifically, we were able to detect such speech: sentiments such as "we should protect white children," or accusations that others, especially Jews, are "anti-white."
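The "white" + term pattern described above can be surfaced with simple collocation counting rather than single-keyword matching. The sketch below is illustrative only: the posts are invented, and the regular expression is a stand-in for the study's actual feature extraction.

```python
import re
from collections import Counter

def white_modifier_bigrams(posts):
    """Count bigrams where 'white' modifies a following word,
    plus 'anti-white' usages, across a collection of posts."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        counts.update(re.findall(r"\bwhite \w+", text))  # e.g., "white children"
        counts.update(re.findall(r"\banti-white\b", text))
    return counts

# Toy posts, invented for illustration (not real data)
posts = [
    "we must protect white children",
    "they are anti-white",
    "white women deserve better",
    "protect white children at all costs",
]
print(white_modifier_bigrams(posts).most_common(3))
```

Counting these collocations over a known extremist corpus, rather than scanning general speech for slurs, is what makes the pattern visible.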

Extremists are active on multiple social media platforms and quickly recreate their networks after being caught and banned. White supremacy, sociologist Jessie Daniels says, is "algorithmically amplified, sped up and circulated through networks to other White ethnonationalism movements around the world, ignored all the while by a tech industry that doesn't see race in the tools it creates."

Our team developed computational tools to detect white supremacist speech across three platforms from 2016 to 2020. Despite its outsized harm, hate speech is a small proportion of the vast quantity of speech online. It's difficult for machine learning systems to recognize hate speech based on large language models, systems trained on large samples of general online speech. We turned to a known source of explicit white supremacist speech: the far-right, white nationalist website Stormfront. We collected 275,000 posts from Stormfront and compared them to two other samples: tweets from users in a census of alt-right accounts and typical social media speech from Reddit's r/all (a compendium of discussions on Reddit). We trained algorithms to study the sentence structure of posts, identify specific phrases and spot broad, recurring themes and topics.
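The pipeline above, training on a known extremist sample versus a general-speech sample, can be sketched with a tiny bag-of-words naive Bayes classifier. This is not the team's actual model; the training samples below are invented stand-ins for the Stormfront and r/all corpora.

```python
import math
from collections import Counter

def train(labeled_posts):
    """Fit per-class word frequencies for a naive Bayes text classifier."""
    word_counts = {"extremist": Counter(), "general": Counter()}
    class_totals = Counter()
    for text, label in labeled_posts:
        word_counts[label].update(text.lower().split())
        class_totals[label] += 1
    return word_counts, class_totals

def classify(text, word_counts, class_totals):
    """Pick the class with the higher log posterior, using add-one smoothing."""
    vocab = set(word_counts["extremist"]) | set(word_counts["general"])
    best_label, best_score = None, -math.inf
    for label in word_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_totals[label] / sum(class_totals.values()))
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical labeled samples standing in for the real corpora
training = [
    ("protect the white race", "extremist"),
    ("white children need defending", "extremist"),
    ("they are anti-white", "extremist"),
    ("what a great game last night", "general"),
    ("new recipe for banana bread", "general"),
    ("my cat knocked over the plant", "general"),
]
wc, ct = train(training)
print(classify("defending white children", wc, ct))
```

The design point is the sampling strategy, not the model: because the positive class comes from a known extremist source, the classifier learns civil-sounding in-group phrasing rather than profanity.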

White supremacists come across as surprisingly polite across platforms and contexts. Along with adding "white" to many words, they often referred to racial or ethnic groups with plural nouns ("Blacks," "whites," "Jews," "gays"). They also racialized Jews through their speech patterns, framing them as racially inferior and as appropriate targets of violence and erasure. Their conversations about race and Jews overlapped, but their conversations about church, religion and Jews did not.

White supremacists talked frequently about white decline, conspiracy theories about Jews and Jewish power and pro-Trump messaging. The specific topics they discussed changed, but these broader grievances did not. Automated detection systems should look for these themes rather than specific terms.
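Theme-level detection of this kind can be approximated by scoring posts against small lexicons of related terms rather than flagging individual keywords. The theme names and lexicons below are hypothetical illustrations, not the study's actual topics.

```python
# Hypothetical theme lexicons (illustrative stand-ins, not the study's topics)
THEMES = {
    "white_decline": {"replacement", "demographics", "birthrate", "decline"},
    "conspiracy": {"globalist", "puppet", "control", "agenda"},
    "political": {"maga", "election", "ballots"},
}

def theme_hits(post, themes=THEMES):
    """Return the set of themes a post touches, rather than matching one keyword."""
    words = set(post.lower().split())
    return {name for name, lexicon in themes.items() if words & lexicon}

post = "the demographic decline proves the globalist agenda"
print(theme_hits(post))
```

A post that touches multiple grievance themes at once is a stronger signal than any single term, which is the sense in which themes outlast the specific vocabulary of the moment.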

White supremacist speech doesn't always involve explicit attacks against others. On the contrary, white supremacists in our study were just as likely to use distinctive speech to signal their identity to others, to recruit and radicalize, and to build in-group solidarity. Marking one's speech as white supremacist, for example, may be necessary for inclusion in these online spaces and extremist communities.

Platforms claim content moderation at scale is too difficult and expensive, but our team detected white supremacist speech with affordable tools available to most researchers, much less expensive than those available to platforms. By "affordable" we mean the laptops and central computing resources provided by our university and open-source Python code that's freely available.

Once white supremacists enter online spaces, as with offline ones, they threaten the safety of already marginalized groups and their ability to participate in public life. Content moderation should focus on proportionality: hateful speech compounds the harm to people who are already structurally disadvantaged. Treating all offensive language as equal ignores the inequalities undergirding American society.

Ultimately, research shows that social media platforms would do well to focus less on politeness and more on justice and equity. Civility be damned.

Libby Hemphill is an associate professor at the University of Michigan's School of Information and the Institute for Social Research.

