Archive for the ‘Censorship’ Category

Terrorists can easily bypass Facebook censorship, leaked documents show – Telegraph.co.uk

Facebook's guidelines about when a post must be removed show that images accompanied by commentary or criticism can remain on the site, but support, praise or threats mean they must be removed.

A picture of a person being shot at close range can stay online, the documents show, if the caption is "More deaths" or "How sad". It must be removed if there is no accompanying text or if it says something like "A great day".

Images of leaders of terrorist organisations must be deleted if they are posted without a comment or with a supporting one, but can remain if the comment is neutral or condemning.

Posts celebrating terrorist attacks, groups and members must be removed.
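Reduced to its logic, the rule reads like a simple decision procedure. The following is a minimal sketch, assuming a literal phrase match; the function name and phrase sets are illustrative stand-ins for the judgement calls Facebook's human moderators actually make, not anything taken from the leaked documents themselves.

```python
# Toy sketch (not Facebook's actual code) of the caption rule described above:
# violent imagery may stay if the accompanying text is neutral or condemning,
# but must be removed if there is no text or the text praises or supports it.
from typing import Optional

CONDEMNING = {"more deaths", "how sad"}   # examples quoted in the article
SUPPORTIVE = {"a great day"}              # example quoted in the article

def must_remove(caption: Optional[str]) -> bool:
    """Apply the described rule to a violent image's caption."""
    if not caption:                # no accompanying text -> remove
        return True
    text = caption.strip().lower()
    if text in SUPPORTIVE:         # praise or support -> remove
        return True
    if text in CONDEMNING:         # criticism or condemnation -> keep
        return False
    return False                   # neutral commentary -> keep

assert must_remove(None) is True
assert must_remove("How sad") is False
assert must_remove("A great day") is True
```

As the headline suggests, a rule this literal is easy to game: attaching a nominally critical caption to supportive content slips past the letter of the policy.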

The leak contains a 44-page guide for moderators that includes the pictures and names of 646 terrorist leaders and their groups, the Guardian said. Most of the groups are recognised internationally as terrorist organisations. But some, including the Free Syrian Army and First Division Coastal Group, are supported as legitimate organisations by the US and UK.

Read more:
Terrorists can easily bypass Facebook censorship, leaked documents show - Telegraph.co.uk

The EU takes first step on slippery slope to internet censorship – Diginomica

SUMMARY:

The EU has taken its first steps towards greater regulation of the internet with proposed legislation targeting video content on social media platforms such as Facebook and WhatsApp. It's a slippery slope to leave to politicians.

The European Union (EU) has signed off on the first steps towards greater regulation of the internet with a vote to establish a universal set of video content censorship rules that companies like Facebook and Twitter would be forced to follow.

The ruling was part of revisions to the EU's Audiovisual Media Services Directive, issued a year ago, tackling extremism and hate speech online.

The EU Parliament will have to give the final nod for the proposal to become law, but it seems inevitable that this will happen. Vice-President for the Digital Single Market Andrus Ansip says:

It is essential to have one common set of audiovisual rules across the EU and avoid the complication of different national laws. We need to take into account new ways of watching videos, and find the right balance to encourage innovative services, promote European films, protect children and tackle hate speech in a better way.

Individual EU states have tackled the issue of online extremism in different ways. For example, Germany recently passed a bill that makes companies open to fines of up to $53 million if hate speech is not scrubbed from their platforms within 24 hours of being flagged. And an Austrian court ruled earlier this month that Facebook must delete hate posts about the leader of the country's Green Party.

Meanwhile in the UK, the question of how to manage the rise of extremist content online has become a policy issue in the forthcoming General Election on 8 June. The Conservative Party has been particularly forthright in warning of heavy financial penalties for online content platform providers who don't toe the line.

And following the appalling terrorist attack in Manchester on Monday evening, it's now being suggested that anti-terrorism legislation will be rushed through to force co-operation from social media firms as soon as the election is over, if the Tories win a majority as the current polls suggest.

According to reports in The Sun (an enthusiastic basher of all things Facebook, Twitter and the like), Technical Capability Orders would be put in place to allow the police and security services to insist that the likes of WhatsApp remove all encryption from suspect messages themselves for the first time. WhatsApp messages were sent by the perpetrator of the car terrorist attack on Parliament in March in advance of the atrocity, and police complained they were unable to see what the messages said without having the phone in their possession.

The timing of the vote came after Facebook documents, leaked to The Guardian, revealed how difficult the social network finds it to police its own audience of nearly 2 billion users. Monika Bickert, Facebook's head of global policy management, told the newspaper:

We have a really diverse global community and people are going to have very different ideas about what is okay to share. No matter where you draw the line there are always going to be some grey areas.

This leads to some interesting policy decisions. For example, online threats against a head of state, such as Donald Trump or Theresa May, would automatically be removed, but a threat against a normal citizen is left live unless the threat is judged to be credible.

On terrorist activities, the documents indicate that in one month last year Facebook moderators identified 1,340 posts that posed credible terrorist threats, but only removed 311.

The leak also provided an insight into how Facebook makes judgement calls on what constitutes a terrorist organisation, citing 646 terrorist leaders and their groups. But there are the inevitable problems of interpretation here. For example, the Facebook documents designate the Free Syrian Army (FSA) as a terrorist group. But the FSA is recognised as a legitimate anti-Assad opposition force by various Western governments, including the US and the UK.

Posts celebrating terrorist attacks, groups and members must be removed, and images of leaders of terrorist organisations must be deleted if they are posted without a comment or with a supporting one, but they can remain if the comment is felt to be either neutral or condemning. The guidelines state:

People must not praise, support or represent a member of a terrorist organization, or any organization that is primarily dedicated to intimidate a population, government or use violence to resist occupation of an internationally recognized state.

Things that have been censored on Facebook include images of breastfeeding and female nipples in general (male nipples are fine, apparently); plus-sized women; and burn victims. In most cases, these were errors that were subsequently corrected, but they are indicative of the pressure that the firm's 4,500 community managers are under.

Last year Facebook, Twitter, YouTube, and Microsoft signed up to a voluntary code of conduct in Europe, under which they agreed to review and remove content flagged as hateful within 24 hours. But according to a European Commission study, only around 40% of reported content has been removed within that time frame, rising to 80% after 48 hours.

I've said before that this is far too complex a matter to be left to politicians to tackle. The European Union's first step down the censorship slope is part of a wider protectionist stance that would force non-EU broadcasters, such as Netflix, to produce 20% of their content in Europe. So whatever the official line, this isn't just about social responsibility; it's also about stacking the deck for European media firms.

In the UK, if the Tories win the election, it will be all-out war with the social media firms. But then if Labour wins, it'll be pretty much the same story, given that some of the most vocal social media critics are part of the current main opposition party. Meanwhile in the US, the Trump administration maintains its stance that social media firms are not playing a big enough role in the war on terror.

The social media firms themselves do have to start taking more responsibility. This nonsense about not being media firms isn't going to stand, and there should be some urgent rethinking about relying on that as a defence against being more proactive. Yes, the likes of Facebook and Twitter are between a rock and a hard place on matters like extremist content and hate speech, but if they allow politicians to take the public moral high ground then they'll have to take what they get, and that's not going to be good for society in the long run.

Image credit - Freeimages.com

See more here:
The EU takes first step on slippery slope to internet censorship - Diginomica

Ducey Vetoes Bill Aimed At Protecting High School Journalists From Censorship – KJZZ


The legislation was meant to allow students more freedom in reporting, and stop school administrators from censoring stories from publication. Advocates of the bill say it would have allowed students to write about more hot-button issues and give a ...
Gov. Ducey limits power of student journalists - Arizona Daily Sun


Original post:
Ducey Vetoes Bill Aimed At Protecting High School Journalists From Censorship - KJZZ

Facebook Needs to Be More Transparent About Why It Censors Speech – Fortune

Photograph by Chris Ratcliffe, Bloomberg/Getty Images

The more Facebook tries to move beyond its original role as a social network for sharing family photos and other ephemera, the more it finds itself in an ethical minefield, torn between its desire to improve the world and its need to curb certain kinds of speech.

The tension between these two forces has never been more obvious than it is now, thanks to two recent examples of when its impulses can go wrong, and the potential damage that can be caused as a result. The first involves a Pulitzer Prize-winning journalist whose account was restricted, and the second relates to Facebook's leaked moderation guidelines.

In the first case, investigative reporter Matthew Caruana Galizia had his Facebook account suspended recently after he posted documents related to a story about a politician in Malta.

Caruana Galizia was part of a team that worked with the International Consortium of Investigative Journalists to break the story of the Panama Papers, a massive dump of documents that were leaked from an offshore law firm last year.

The politician, Maltese prime minister Joseph Muscat, was implicated in a scandal as a result of those leaked documents, which referred to shell companies set up by him and two other senior politicians in his administration.


Facebook not only suspended Caruana Galizia's account, it also removed a number of the documents that he had posted related to the story. It later restored his access to his account after The Guardian and a Maltese news outlet wrote about it, but some of the documents never reappeared.

The social network has rules that are designed to prevent people from posting personal information about other users, but it's not clear whether that's why the account was suspended.

Some of what Caruana Galizia posted contained screenshots of passports and other personal data, but many of these documents have remained available, while others have been removed. He is being sued by Muscat for libel, which has raised concerns about whether Facebook suspended the account because of pressure from officials in Malta.

A spokesman for Facebook told the Guardian that it was working with the reporter "so that he can publish what he needs to, without including unnecessary private details that could present safety risks. If we find that we have made errors, we will correct them."

Caruana Galizia said the incident was enlightening "because I realized how crippling and punitive this block is for a journalist." Incidents like this clearly reinforce the risks that journalists and media entities take when they decide to use the social network as a distribution outlet.

If nothing else, these and other similar incidents make it obvious that Facebook needs to do far more when it comes to being transparent about when and why it removes content, especially when that content is of a journalistic nature.

In an unrelated incident, the world got a glimpse into how the social network makes some of its content decisions, thanks to a leaked collection of guidelines and manuals for the 4,500 or so moderators it employs, which was posted by the Guardian.

Outlined in the documents are rules about what kinds of statements are considered too offensive to allow, how much violence the site allows in videos (including Facebook Live, which has been the subject of significant controversy recently), and what to do with sexually suggestive imagery.

Much like Twitter, Facebook appears to be trying to find a line between getting rid of offensive behavior and still leaving room for freedom of expression.

In the process, however, it has raised questions about why the giant social network makes some of the choices it does. Statements within the guidelines about violence towards women, for example, such as "To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat", are considered okay because they are not specific threats.

Facebook has already come under fire for some of its decisions around what to show on its live-streaming feature. There have been several cases in which people committed suicide and streamed it on Facebook Live, and in at least one case a man killed his child and then himself.

The guidelines say that while videos of violence and even death should be marked as disturbing, in many cases they do not have to be deleted because they can "help create awareness of issues such as mental illness," and because Facebook doesn't want to "censor or punish people in distress."

As a private corporation, Facebook is entitled to make whatever rules it wants about the type of speech that is permitted on its platform because the First Amendment only applies to the actions of governments. But when a single company plays such a huge role in the online behavior of more than a billion people, it's worth asking questions about the impact its rules have.

If Facebook censors certain kinds of speech, then for tens of millions of people that speech effectively ceases to exist, or becomes significantly less visible.

The risks of this kind of private control over speech are obvious when it comes to things like filter bubbles or the role that "fake news" plays in political movements. But there's a deeper risk as well, which is that thanks to the inscrutability of Facebook's algorithm, many people won't know what they are missing when information is removed.

Facebook may not want to admit that it is a media entity, but the reality is that it plays a huge role in how billions of people see the world around them. And part of the responsibility that comes with that kind of role is being more transparent about why and how you make decisions about what information people shouldn't be able to see.

Read more from the original source:
Facebook Needs to Be More Transparent About Why It Censors Speech - Fortune

Online Censorship and User Notification: Lessons from Thailand – EFF

For governments interested in suppressing information online, the old methods of direct censorship are getting less and less effective.

Over the past month, the Thai government has made escalating attempts to suppress critical information online. In the last week, faced with an embarrassing video of the Thai King, the government ordered Facebook to geoblock over 300 pages on the platform and even threatened to shut Facebook down in the country. This is on top of last month's announcement that the government had banned any online interaction with three individuals: two academics and one journalist, all three of whom are political exiles and prominent critics of the state. And just today, law enforcement representatives described their efforts to target those who simply view (not even create or share) content critical of the monarchy and the government.

The Thai government has several methods at its disposal to directly block large volumes of content. It could, as it has in the past, pressure ISPs to block websites. It could also hijack domain name queries, making sites harder to access. So why is it negotiating with Facebook instead of just blocking the offending pages itself? And what are Facebook's responsibilities to users when this happens?

The answer is, in part, HTTPS. When HTTPS encrypts your browsing, it doesn't just protect the contents of the communication between your browser and the websites you visit. It also protects the specific pages on those sites, preventing censors from seeing and blocking anything after the slash in a URL. This means that if a sensitive video of the King shows up on a website, government censors can't identify and block only the pages on which it appears. In an HTTPS world that makes such granularized censorship impossible, the government's only direct censorship option is to block the site entirely.
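To make that concrete, here is a minimal sketch, assuming the censor is a passive on-path observer: the hostname leaks through DNS lookups and the TLS SNI field, while everything after the slash travels inside the encrypted channel. The function and its labels are illustrative, not any real filtering tool.

```python
from urllib.parse import urlparse

def censor_view(url: str) -> dict:
    """Split an HTTPS URL into what a passive on-path censor can and cannot
    see (ignoring newer protections like encrypted SNI/ECH, which can hide
    even the hostname)."""
    parts = urlparse(url)
    return {
        # Visible: the hostname, via the DNS query and the TLS SNI field.
        "visible_to_censor": parts.hostname,
        # Hidden: the path and query ride inside the TLS tunnel.
        "hidden_by_https": parts.path + ("?" + parts.query if parts.query else ""),
    }

print(censor_view("https://example.com/videos/sensitive-clip?id=123"))
# {'visible_to_censor': 'example.com',
#  'hidden_by_https': '/videos/sensitive-clip?id=123'}
```

Because only the hostname is visible, the censor's choice collapses to blocking the whole site or nothing at all.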

That might still leave the government with tenable censorship options if critical speech and dissenting activity only happened on certain sites, like devoted blogs or message boards. A government could try to get away with blocking such sites wholesale without disrupting users outside a certain targeted political sphere.

But all sorts of user-generated content (from calls to revolution to cat pictures) are converging on social media websites like Facebook, which members of every political party use and rely on. This brings us to the second part of the answer as to why the government can't censor like it used to: mixed-use social media sites. When content is both HTTPS-encrypted and on a mixed-use social media site like Facebook, it can be too politically expensive to block the whole site. Instead, the only option left is pressuring Facebook to do targeted blocking at the government's request.

Government requests for targeted blocking happen when something is compliant with Facebook's community guidelines, but not with a country's domestic law. This comes to a head when social media platforms have large user bases in repressive, censorious states, a dynamic that certainly applies in Thailand, where a military dictatorship shares its capital city with a dense population of Facebook power-users and one of the most Instagrammed locations on earth.

In Thailand, the video of the King in question violated the country's overbroad lese majeste defamation laws against insulting or criticizing the monarchy in any way. So the Thai government requested that Facebook remove it, along with hundreds of other pieces of content, on legal grounds, and made an ultimately empty threat to shut down the platform in Thailand if Facebook did not comply.

Facebook did comply, geoblocking over 100 URLs for which it received warrants from the Thai government. This may not be surprising; although the government is likely not going to block Facebook entirely, it still has other ways to go after the company, including threatening any in-country staff. Indeed, Facebook put itself in a vulnerable position when it inexplicably opened a Bangkok office during high political tensions after the 2014 military coup.

If companies like Facebook do comply with government demands to remove content, these decisions must be transparent to their users and the general public. Otherwise, Facebook's compliance transforms its role from a victim of censorship to a company pressured to act as a government censor. The stakes are high, especially in unstable political environments like Thailand. There, the targets of takedown requests can often be journalists, activists, and dissidents, and requests to take down their content or block their pages often serve as an ominous prelude to further action or targeting.

With that in mind, Facebook and other companies responding to government requests must provide the fullest legally permissible notice to users whenever possible. This means timely, informative notifications, on the record, that give users information like what branch of government requested to take down their content, on what legal grounds, and when the request was made.
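As a hypothetical illustration of what such an on-the-record notice might contain, here is a small sketch; the class, field names, and sample values are assumptions for illustration, not Facebook's actual notification format.

```python
# Hypothetical shape of a takedown notice carrying the details EFF argues
# users should receive; all names and values below are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class TakedownNotice:
    requesting_authority: str  # which branch of government made the request
    legal_basis: str           # the law or court order cited
    request_date: date         # when the request was made
    affected_content: str      # what was blocked or removed
    scope: str                 # e.g. geoblocked in one country vs removed globally

notice = TakedownNotice(
    requesting_authority="Ministry of Digital Economy and Society",  # hypothetical
    legal_basis="Lese majeste defamation law",                       # hypothetical
    request_date=date(2017, 5, 10),                                  # hypothetical
    affected_content="https://facebook.com/<blocked-post>",
    scope="geoblocked in Thailand",
)
```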

Facebook seems to be getting better at this, at least in Thailand. When journalist Andrew MacGregor Marshall had some of his content geoblocked in January, he did not receive consistent notice. Worse, the page that his readers in Thailand saw when they tried to access his post implied that the block was an error, not a deliberate act of government-mandated removal.

More recently, however, we have been happy to see evidence of Facebook providing more detailed notices to users, such as one that exiled dissident Dr. Somsak Jeamteerasakul received and then shared online.

In an ideal world, timely and informative user notice can help power the Streisand effect: that is, the dynamic in which attempts to suppress information actually backfire and draw more attention to it than ever before. (And that's certainly what's happening with the video of the King, which has garnered countless international media headlines.) With details, users are in a better position to appeal to Facebook directly as well as draw public attention to government targeting and censorship, ultimately making this kind of censorship a self-defeating exercise for the government.

In an HTTP environment where governments can passively spy on and filter Internet content, individual pages could disappear behind obscure and misleading error messages. Moving to an increasingly HTTPS-secured world means that if social media companies are transparent about the pressure they face, we may gain some visibility into government censorship. However, if they comply without informing creators or readers of blocked content, we could find ourselves in a much worse situation. Without transparency, tech giants could misuse their power not only to silence vulnerable speakers, but also to obscure how that censorship takes place, and who demanded it.

Have you had your content or account removed from a social media platform? At EFF, we've been shining a light on the expanse and breadth of content removal on social media platforms with OnlineCensorship.org, where we and our partners at Visualising Impact collect your stories about content and account deletions. Share your story here.

Read more:
Online Censorship and User Notification: Lessons from Thailand - EFF