Welcome to TikTok's endless cycle of censorship and mistakes

It's not necessarily a surprise that these videos make news. People make these videos because they work. For years, getting views has been one of the more effective strategies for pushing a big platform to fix something. TikTok, Twitter, and Facebook have made it easier for users to report abuse and rule violations by other users. But when these companies appear to be breaking their own policies, people often find that the best route forward is simply to post about it on the platform itself, in the hope of going viral and getting attention that leads to some kind of resolution. Tyler's two videos on the Marketplace bios, for example, each have more than 1 million views.

"I probably get tagged in something about once a week," says Casey Fiesler, an assistant professor at the University of Colorado, Boulder, who studies technology ethics and online communities. She's active on TikTok, with more than 50,000 followers, but while not everything she sees feels like a legitimate concern, she says the app's regular parade of issues is real. TikTok has had several such errors over the past few months, all of which have disproportionately impacted marginalized groups on the platform.

MIT Technology Review has asked TikTok about each of these recent examples, and the responses are similar: after investigating, TikTok finds that the issue was created in error, emphasizes that the blocked content in question does not violate its policies, and points to the support the company offers such groups.

The question is whether that cycle (some technical or policy error, a viral response, and an apology) can be changed.

"There are two kinds of harms of this probably algorithmic content moderation that people are observing," Fiesler says. "One is false negatives. People are like, why is there so much hate speech on this platform and why isn't it being taken down?"

The other is a false positive. "Their content's getting flagged because they are someone from a marginalized group who is talking about their experiences with racism," she says. "Hate speech and talking about hate speech can look very similar to an algorithm."

Both of these categories, she notes, harm the same people: those who are disproportionately targeted for abuse end up being algorithmically censored for speaking out about it.
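The false-positive failure mode Fiesler describes is easy to see in miniature. What follows is a minimal, hypothetical sketch in Python of why a naive keyword-based filter treats hate speech and a victim's testimony about it identically; the blocklist, the `naive_filter` function, and the sample sentences are all invented for illustration, and none of it reflects TikTok's actual, undisclosed moderation system.

```python
# Hypothetical illustration of the false-positive problem Fiesler
# describes: a naive keyword filter flags any text containing a
# blocklisted term, so it cannot tell hate speech apart from speech
# *about* hate speech. This is not TikTok's actual moderation system.

FLAGGED_TERMS = {"slur"}  # stand-in for a real blocklist


def naive_filter(text: str) -> bool:
    """Flag text for review if it contains any blocklisted term."""
    words = {word.strip(".,!?").lower() for word in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)


attack = "You are a slur."                     # actual abuse
testimony = "Someone called me a slur today."  # describing that abuse

print(naive_filter(attack))     # True: correctly flagged
print(naive_filter(testimony))  # True: a false positive that censors the victim
```

Production systems use statistical classifiers rather than literal blocklists, but the failure mode is the same: the surface features of abuse and of testimony about abuse overlap heavily, so a model trained to catch the former often flags the latter.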

TikTok's mysterious recommendation algorithms are part of its success, but its unclear and constantly changing boundaries are already having a chilling effect on some users. Fiesler notes that many TikTok creators self-censor words on the platform in order to avoid triggering a review. And although she's not sure exactly how much this tactic accomplishes, Fiesler has also started doing it herself, just in case. Account bans, algorithmic mysteries, and bizarre moderation decisions are a constant part of the conversation on the app.
