Archive for the ‘Censorship’ Category

Facebook Needs to Be More Transparent About Why It Censors Speech – Fortune


The more Facebook tries to move beyond its original role as a social network for sharing family photos and other ephemera, the more it finds itself in an ethical minefield, torn between its desire to improve the world and its need to curb certain kinds of speech.

The tension between these two forces has never been more obvious than it is now, thanks to two recent examples of how Facebook's impulses can go wrong and the damage that can result. The first involves a Pulitzer Prize-winning journalist whose account was restricted, and the second relates to Facebook's leaked moderation guidelines.

In the first case, investigative reporter Matthew Caruana Galizia had his Facebook account suspended recently after he posted documents related to a story about a politician in Malta.

Caruana Galizia was part of a team that worked with the International Consortium of Investigative Journalists to break the story of the Panama Papers, a massive dump of documents that were leaked from an offshore law firm last year.

The politician, Maltese prime minister Joseph Muscat, was implicated in a scandal as a result of those leaked documents, which referred to shell companies set up by him and two other senior politicians in his administration.


Facebook not only suspended Caruana Galizia's account, it also removed a number of the documents that he had posted related to the story. It later restored his access to his account after The Guardian and a Maltese news outlet wrote about it, but some of the documents never reappeared.

The social network has rules that are designed to prevent people from posting personal information about other users, but it's not clear whether that's why the account was suspended.

Some of what Caruana Galizia posted contained screenshots of passports and other personal data, but many of these documents have remained available, while others have been removed. He is being sued by Muscat for libel, which has raised concerns about whether Facebook suspended the account because of pressure from officials in Malta.

A spokesman for Facebook told the Guardian that it was working with the reporter "so that he can publish what he needs to, without including unnecessary private details that could present safety risks. If we find that we have made errors, we will correct them."

Caruana Galizia said the incident was enlightening "because I realized how crippling and punitive this block is for a journalist." Incidents like this clearly reinforce the risks that journalists and media entities take when they decide to use the social network as a distribution outlet.

If nothing else, these and other similar incidents make it obvious that Facebook needs to do far more when it comes to being transparent about when and why it removes content, especially when that content is of a journalistic nature.

In an unrelated incident, the world got a glimpse into how the social network makes some of its content decisions, thanks to a leaked collection of guidelines and manuals for the 4,500 or so moderators it employs, which was posted by the Guardian.

Outlined in the documents are rules about what kinds of statements are considered too offensive to allow, how much violence the site allows in videos (including Facebook Live, which has been the subject of significant controversy recently), and what to do with sexually suggestive imagery.

Much like Twitter, Facebook appears to be trying to strike a balance between removing offensive behavior and leaving room for freedom of expression.

In the process, however, it has raised questions about why the giant social network makes some of the choices it does. Statements within the guidelines about violence towards women, for example, such as "To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat," are considered okay because they are not specific threats.

Facebook has already come under fire for some of its decisions around what to show on its live-streaming feature. There have been several cases in which people committed suicide and streamed it on Facebook Live, and in at least one case a man killed his child and then himself.

The guidelines say that while videos of violence and even death should be marked as disturbing, in many cases they do not have to be deleted because they can "help create awareness of issues such as mental illness," and because Facebook doesn't want to "censor or punish people in distress."

As a private corporation, Facebook is entitled to make whatever rules it wants about the type of speech that is permitted on its platform because the First Amendment only applies to the actions of governments. But when a single company plays such a huge role in the online behavior of more than a billion people, it's worth asking questions about the impact its rules have.

If Facebook censors certain kinds of speech, then for tens of millions of people that speech effectively ceases to exist, or becomes significantly less visible.

The risks of this kind of private control over speech are obvious when it comes to things like filter bubbles or the role that "fake news" plays in political movements. But there's a deeper risk as well, which is that thanks to the inscrutability of Facebook's algorithm, many people won't know what they are missing when information is removed.

Facebook may not want to admit that it is a media entity, but the reality is that it plays a huge role in how billions of people see the world around them. And part of the responsibility that comes with that kind of role is being more transparent about why and how you make decisions about what information people shouldn't be able to see.

Read more from the original source:
Facebook Needs to Be More Transparent About Why It Censors Speech - Fortune

Online Censorship and User Notification: Lessons from Thailand – EFF

For governments interested in suppressing information online, the old methods of direct censorship are getting less and less effective.

Over the past month, the Thai government has made escalating attempts to suppress critical information online. In the last week, faced with an embarrassing video of the Thai King, the government ordered Facebook to geoblock over 300 pages on the platform and even threatened to shut Facebook down in the country. This is on top of last month's announcement that the government had banned any online interaction with three individuals: two academics and one journalist, all three of whom are political exiles and prominent critics of the state. And just today, law enforcement representatives described their efforts to target those who simply view, not even create or share, content critical of the monarchy and the government.

The Thai government has several methods at its disposal to directly block large volumes of content. It could, as it has in the past, pressure ISPs to block websites. It could also hijack domain name queries, making sites harder to access. So why is it negotiating with Facebook instead of just blocking the offending pages itself? And what are Facebook's responsibilities to users when this happens?

The answer is, in part, HTTPS. When HTTPS encrypts your browsing, it doesn't just protect the contents of the communication between your browser and the websites you visit. It also protects the specific pages on those sites, preventing censors from seeing and blocking anything after the slash in a URL. This means that if a sensitive video of the King shows up on a website, government censors can't identify and block only the pages on which it appears. In an HTTPS world that makes such granularized censorship impossible, the government's only direct censorship option is to block the site entirely.
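
To make the "anything after the slash" point concrete, here is a minimal Python sketch; example.com and the path are placeholder values, not anything from EFF's article. It shows that an on-path censor watching the connection only learns the hostname, which leaks through the DNS lookup and the TLS handshake's Server Name Indication (SNI) field, while the request for the specific page travels inside the encrypted channel.

# A minimal, hypothetical sketch of why HTTPS hides the path: the hostname is
# sent in cleartext during the TLS handshake (SNI), but the request line
# containing "/the-sensitive-page" only travels inside the encrypted channel.
# "example.com" and the path are placeholders.
import socket
import ssl

host = "example.com"          # visible to an on-path observer (DNS lookup + TLS SNI)
path = "/the-sensitive-page"  # encrypted: an on-path censor cannot see or filter this

context = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw_sock:
    # The handshake below sends server_name=host in cleartext, which is
    # essentially all a censor on the wire can key its blocking on.
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        # Everything written from here on, including the path, is encrypted.
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request.encode("ascii"))
        print(tls_sock.recv(200).decode("ascii", errors="replace"))

A censor in this position can block example.com wholesale or not at all; it cannot block just the one offending page.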

That might still leave the government with tenable censorship options if critical speech and dissenting activity only happened on certain sites, like devoted blogs or message boards. A government could try to get away with blocking such sites wholesale without disrupting users outside a certain targeted political sphere.

But all sorts of user-generated content, from calls to revolution to cat pictures, are converging on social media websites like Facebook, which members of every political party use and rely on. This brings us to the second part of the answer as to why the government can't censor like it used to: mixed-use social media sites. When content is both HTTPS-encrypted and on a mixed-use social media site like Facebook, it can be too politically expensive to block the whole site. Instead, the only option left is pressuring Facebook to do targeted blocking at the government's request.

Government requests for targeted blocking happen when something is compliant with Facebook's community guidelines, but not with a country's domestic law. This comes to a head when social media platforms have large user bases in repressive, censorious states, a dynamic that certainly applies in Thailand, where a military dictatorship shares its capital city with a dense population of Facebook power-users and one of the most Instagrammed locations on earth.

In Thailand, the video of the King in question violated the country's overbroad lese majeste defamation laws against in any way insulting or criticizing the monarchy. So the Thai government requested that Facebook remove it, along with hundreds of other pieces of content, on legal grounds, and made an ultimately empty threat to shut down the platform in Thailand if Facebook did not comply.

Facebook did comply, geoblocking over 100 URLs for which it received warrants from the Thai government. This may not be surprising; although the government is likely not going to block Facebook entirely, it still has other ways to go after the company, including threatening any in-country staff. Indeed, Facebook put itself in a vulnerable position when it inexplicably opened a Bangkok office during high political tensions after the 2014 military coup.

If companies like Facebook do comply with government demands to remove content, these decisions must be transparent to their users and the general public. Otherwise, Facebook's compliance transforms its role from a victim of censorship to a company pressured to act as a government censor. The stakes are high, especially in unstable political environments like Thailand. There, the targets of takedown requests can often be journalists, activists, and dissidents, and requests to take down their content or block their pages often serve as an ominous prelude to further action or targeting.

With that in mind, Facebook and other companies responding to government requests must provide the fullest legally permissible notice to users whenever possible. This means timely, informative notifications, on the record, that give users information like what branch of government requested to take down their content, on what legal grounds, and when the request was made.

Facebook seems to be getting better at this, at least in Thailand. When journalist Andrew MacGregor Marshall had content of his geoblocked in January, he did not receive consistent notice. Worse, the page that his readers in Thailand saw when they tried to access his post implied that the block was an error, not a deliberate act of government-mandated removal.

More recently, however, we have been happy to see evidence of Facebook providing more detailed notices to users, like a notice that exiled dissident Dr. Somsak Jeamteerasakul received and then shared online.

In an ideal world, timely and informative user notice can help power the Streisand effect: that is, the dynamic in which attempts to suppress information actually backfire and draw more attention to it than ever before. (And that's certainly what's happening with the video of the King, which has garnered countless international media headlines.) With details, users are in a better position to appeal to Facebook directly as well as draw public attention to government targeting and censorship, ultimately making this kind of censorship a self-defeating exercise for the government.

In an HTTP environment where governments can passively spy on and filter Internet content, individual pages could disappear behind obscure and misleading error messages. Moving to an increasingly HTTPS-secured world means that if social media companies are transparent about the pressure they face, we may gain some visibility into government censorship. However, if they comply without informing creators or readers of blocked content, we could find ourselves in a much worse situation. Without transparency, tech giants could misuse their power not only to silence vulnerable speakers, but also to obscure how that censorship takes place, and who demanded it.

Have you had your content or account removed from a social media platform? At EFF, we've been shining a light on the expanse and breadth of content removal on social media platforms with OnlineCensorship.org, where we and our partners at Visualising Impact collect your stories about content and account deletions. Share your story here.

Read more:
Online Censorship and User Notification: Lessons from Thailand - EFF

The Censors’ Disappearing Vibrator – New York Times


I discovered later that the second half of this episode featured two segments with celebrity guests that did not survive the Singapore censors' scrutiny: Jane Fonda wielding a vibrator and Asia Kate Dillon discussing her nonbinary gender identity, both ...

Read more from the original source:
The Censors' Disappearing Vibrator - New York Times

Fight ‘fake news’ with education, not censorship – Iowa City Press Citizen

Rachel Zuckerman, Guest Opinion 6:34 p.m. CT May 19, 2017


Journalists have been distraught since the 2016 presidential campaign. We are struggling with how to deal with fake news and increased calls for censorship, and with what freedom of the press looks like in the digital age.

These conflicts are all important topics that must be debated. As journalists, we should be introspective about our role moving forward. However, while we negotiate the appropriate level of censorship or the best way to report on President Donald Trump's latest tweet, we miss the bigger picture.

Where are the critical discussions happening around education and media literacy?

Only about 1 in 3 American adults had a bachelor's degree or higher in 2015, according to census data. Nate Silver's FiveThirtyEight identified education, not income or other demographic factors, as the largest gap between Trump and Hillary Clinton voters. Clinton overwhelmingly outperformed Trump in counties where most people had at least a four-year degree.

The Trump campaign's fear-mongering and emotional appeals likely resonated more among people with lower educational levels than Clinton's policy-oriented message. Trump's appeal also contributed to his ability to sow distrust in the media among his less educated base.

Yet, journalists have still arrived at a place where we debate semantics: do we call false statements "lies" or "falsehoods"? Concurrent debates about censorship emerge. Is it beneficial to the public to censor hate speech and fake news that could perpetuate violence? Some journalists may feel the need to self-censor to avoid the criticism of a politically charged president.

As journalists, we fail to address societal problems when we become too self-centered. While we focus on how journalists should do their jobs better, we miss reporting on the fact that many of these issues would be mitigated with increased education and informed news consumption.

The editor-in-chief of The Daily Iowan, Lily Abromeit, agrees.

"The reason fake news is such a problem is because people believe it," she said. "I'm kind of starting to think that people don't really understand how to read a news article and what to look for to understand if it is legitimate."

A 2016 study from Stanford confirms Abromeit's analysis. The research found that students at almost all grade levels cannot recognize fake news online.

Therefore, rather than disputing the limits of censorship, our time would be better spent thinking about how to integrate media literacy training into the classroom in addition to making education more accessible to Americans. Increased rates of educational attainment would equip more of the U.S. population with the critical thinking skills necessary to navigate our complex modern media landscape.

In an era of fake news and alternative facts, journalists must be diligent. We should question how to do our jobs better, but we should also press the public to demand education for the millions of Americans who have not received sufficient opportunities.

"I realize it actually isn't probably very easy. But still important enough to be worthwhile," Abromeit said.

Rachel Zuckerman is a recent journalism and political science graduate from the University of Iowa who also served as student body president.


See the rest here:
Fight 'fake news' with education, not censorship - Iowa City Press Citizen

5 Authoritarian Regimes That Shape Facebook’s Censorship Policies – Breitbart News


Facebook's growth is slowing. It needs new markets and new audiences, which is why it is making a big push into foreign countries. However, some of these countries aren't happy with the idea of letting their citizens have access to free-speech-friendly platforms, and impose conditions on Facebook's operations within their borders.

So, does Zuckerberg's stated commitment to free speech trump the company's need to enter markets controlled by authoritarian, censorious governments? Readers can examine the following five examples, and judge for themselves.

1. China

Facebook was banned from China following riots in 2009 in Ürümqi and revelations that the Xinjiang independence activists behind the riots used the social network to organize. Facebook has been desperate to re-enter China's massive market ever since.

Mark Zuckerberg has met with Chinese president Xi Jinping as well as Chinese propaganda chief Liu Yunshan. The Facebook CEO has even learned Mandarin and delivered speeches (albeit clumsy ones, according to Quartz) in the language during his multiple trips to China. According to reports, Zuckerberg even asked the Chinese president to name his baby during a meeting at the White House, although the president refused.

But Facebook has done more than cosy up to Chinese officials. According to reports, it is also building a censorship tool to block banned news sources in China from users' timelines. Several Facebook employees have quit in protest at the development of the tool, which will reportedly give third parties like ISPs and governments the power to suppress posts.

Then again, Facebook is competing with domestic Chinese social networks, which pride themselves on blocking what they call "fake news."

2. Turkey

Turkey frequently censors its citizens on the internet. During the coup attempt against President Erdogan last year, all social media was blacked out across the country. Just last month, Turkey blocked access to Wikipedia.

Facebook has been working with Turkey to censor Kurdish militia in northern Syria. Although these groups are largely credited with rolling back the frontiers of the Islamic State, Turkey considers them terrorists and an extension of the Kurdistan Workers' Party (PKK), which has staged attacks inside the country. Turkey is even accused of allowing ISIS fighters to cross its southern border to fight the Kurds.

A document leaked in 2012 revealed even more censorship on behalf of Turkey: according to guidelines on IP blocks and international compliance given to an external Facebook contractor, moderators were told to consider a wide range of Turkey-critical content to be an abuse standards violation. These included attacks on Kemal Ataturk, the founder of modern Turkey, maps of Kurdistan, images depicting the burning of the Turkish flag, and any content related to Abdullah Ocalan, the most influential leader of the Kurdish independence movement.

3. Pakistan

Pakistan, also known as the Islamic Republic of Pakistan, is currently undertaking a massive crackdown against what it describes as "social media blasphemy." The state recently sent out a text message to millions of Pakistanis urging them to report their fellow citizens if they suspect them of blasphemous posting, effectively encouraging a citizen-led religious Stasi.

Much of the citizenry will be happy to oblige. Indeed, some Pakistanis would like to go beyond simply reporting blasphemers.

Pakistan has asked Facebook for help identifying blasphemers on social media, even those outside the country, so it can pursue their extradition. Facebook has not denied complying with the request, instead saying that the company "reviews all government requests carefully, with the goal of protecting the privacy and rights of our users."

What is known is that Facebook has dispatched a delegation to Pakistan to address the government's concerns. Moreover, government officials have claimed that the company has helped them remove 85 percent of blasphemous material on Facebook. This would make Facebook complicit in Pakistan's determination to quash religious dissent from its citizens, which includes a potential death penalty for the crime of blasphemy.

4. Russia

The media is determined to find evidence of collusion between President Trump and Russia, but there is considerably more evidence to be found of Facebook doing the bidding of the Russian government, which has intimidated the social network by threatening to ban it from the country.

The pressure seems to have paid off: in 2014, Facebook blocked a page supporting Alexei Navalny, described by the Washington Post as Putin's biggest critic.

5. Germany and the European Union

Not all authoritarian countries are non-western. In response to the migrant crisis and the subsequent crime and terrorism wave sweeping Europe, Germany has taken a keen interest in scrubbing criticism of its catastrophic mass migration policies from social media. German police have even raided homes over alleged Facebook hate speech, and one couple was taken to court and sentenced for criticizing mass migration on the platform.

In September 2015, German chancellor Angela Merkel was overheard asking Mark Zuckerberg if he was working on clamping down against allegedly hateful content on the platform, to which Zuckerberg replied "yeah." The German government has also threatened to fine Facebook if it does not clamp down on fake news, while the European Union has threatened non-legislative action if social networks like Facebook and YouTube do not tackle hate speech on their platforms.

Zuckerberg was true to his word. Following his overheard discussion with Merkel, Facebook signed up to an E.U. pledge to suppress illegal hate speech and use its power to promote counter-narratives. Facebook also launched its own Initiative for Civil Courage Online, a Europe-wide campaign to clamp down on alleged hate speech during the migrant crisis. In September 2016 alone, Facebook deleted over 100,000 posts in Germany for containing hate speech, a figure the German government attacked as too low.

Mark Zuckerberg is a strong supporter of Angela Merkel's refugee policies, and has called on the U.S. to follow Germany's lead.

You can follow Allum Bokhari on Twitter and add him on Facebook. Email tips and suggestions to abokhari@breitbart.com.

Go here to see the original:
5 Authoritarian Regimes That Shape Facebook's Censorship Policies - Breitbart News