Archive for the ‘Censorship’ Category

Trump administration sued over climate change ‘censorship’ – Climate Home


The Trump administration's refusal to release public information about its climate censorship continues a dangerous and illegal pattern of anti-science denial, said Taylor McKinnon from the CBD. Just as censorship won't change climate science, ...

Continued here:
Trump administration sued over climate change 'censorship' - Climate Home

Terrorists can easily bypass Facebook censorship, leaked documents show – Telegraph.co.uk

Facebook's guidelines about when a post must be removed show that images captioned with commentary or criticism can remain on the site, but support, praise or threats mean they must be removed.

A picture of a person being shot at close range can stay online, the documents show, if the caption is "More deaths" or "How sad". It must be removed if there is no accompanying text or if it says something like "A great day".

Images of leaders of terrorist organisations must be deleted if they are posted without a comment or with a supporting one, but can remain if the comment is neutral or condemning.

Posts celebrating terrorist attacks, groups and members must be removed.

The leak contains a 44-page guide for moderators that includes the pictures and names of 646 terrorist leaders and their groups, the Guardian said. Most of the groups are recognised internationally as terrorist organisations. But some, including the Free Syrian Army and First Division Coastal Group, are supported as legitimate organisations by the US and UK.

Read more:
Terrorists can easily bypass Facebook censorship, leaked documents show - Telegraph.co.uk

The EU takes first step on slippery slope to internet censorship – Diginomica

SUMMARY:

The EU has taken its first steps towards greater regulation of the internet with proposed legislation targeting video content on social media platforms such as Facebook and WhatsApp. It's a slippery slope to leave to politicians.

The European Union (EU) has signed off on the first steps towards greater regulation of the internet with a vote to establish a universal set of video content censorship rules that companies like Facebook and Twitter would be forced to follow.

The ruling was part of revisions to the EU's Audiovisual Media Services Directive, issued a year ago, tackling extremism and hate speech online.

The EU Parliament will have to give the final nod for the proposal to become law, but it seems inevitable that this will happen. Vice-President for the Digital Single Market Andrus Ansip says:

It is essential to have one common set of audiovisual rules across the EU and avoid the complication of different national laws. We need to take into account new ways of watching videos, and find the right balance to encourage innovative services, promote European films, protect children and tackle hate speech in a better way.

Individual EU states have tackled the issue of online extremism in different ways. For example, Germany recently passed a bill that makes companies liable for fines of up to $53 million if hate speech is not scrubbed from their platforms within 24 hours of being flagged. And an Austrian court ruled earlier this month that Facebook must delete hate posts about the leader of the country's Green Party.

Meanwhile in the UK, the question of how to manage the rise of extremist content online has become a policy issue in the forthcoming General Election on 8 June. The Conservative Party has been particularly forthright in warning of heavy financial penalties for online content platform providers who don't toe the line.

And following the appalling terrorist attack in Manchester on Monday evening, it's now being suggested that anti-terrorism legislation will be rushed through to force co-operation from social media firms as soon as the election is over, if the Tories win a majority as the current polls suggest.

According to reports in The Sun, an enthusiastic basher of all things Facebook, Twitter and the like, Technical Capability Orders would be put in place to allow the police and security services to insist that the likes of WhatsApp remove all encryption from suspect messages themselves for the first time. WhatsApp messages were sent by the perpetrator of the car terrorist attack on Parliament in March in advance of the atrocity, and police complained they were unable to see what they said without having the phone in their possession.

The timing of the vote came after Facebook documents, leaked to The Guardian, revealed how difficult the social network finds it to police its own audience of nearly 2 billion users. Monika Bickert, Facebook's head of global policy management, told the newspaper:

We have a really diverse global community and people are going to have very different ideas about what is okay to share. No matter where you draw the line there are always going to be some grey areas.

This leads to some interesting policy decisions. For example, online threats against a head of state, such as Donald Trump or Theresa May, would automatically be removed, but a threat against a normal citizen is left live unless the threat being issued is judged to be credible.

On terrorist activities, the documents indicate that in one month last year Facebook moderators identified 1,340 posts that posed credible terrorist threats, but only removed 311.

The leak also provided an insight into how Facebook makes judgement calls on what constitutes a terrorist organisation, citing 646 terrorist leaders and their groups. But there are the inevitable problems of interpretation here. For example, the Facebook documents designate the Free Syrian Army (FSA) as a terrorist group. But the FSA is recognised as a legitimate anti-Assad opposition force by various Western governments, including the US and the UK.

Posts celebrating terrorist attacks, groups and members must be removed, and images of leaders of terrorist organisations must be deleted if they are posted without a comment or with a supporting one, but they can remain if the comment is felt to be either neutral or condemning. The guidelines state:

People must not praise, support or represent a member of a terrorist organization, or any organization that is primarily dedicated to intimidate a population, government or use violence to resist occupation of an internationally recognized state.

Things that have been censored on Facebook include images of breastfeeding and female nipples in general (male nipples are fine, apparently); plus-sized women; and burn victims. In most cases, these were errors that were subsequently corrected, but they are indicative of the pressure that the firm's 4,500 community managers are under.

Last year Facebook, Twitter, YouTube, and Microsoft signed up to a voluntary code of conduct in Europe, under which they agreed to review and remove content flagged as hateful within 24 hours. But according to a European Commission study, only around 40% of reported content has been removed within that time frame, rising to 80% after 48 hours.

I've said before that this is far too complex a matter to be left to politicians to tackle. The European Union's first step down the censorship slope is part of a wider protectionist stance that would force non-EU broadcasters, such as Netflix, to produce 20% of their content in Europe. So whatever the official line, this isn't just about social responsibility; it's also about stacking the deck for European media firms.

In the UK, if the Tories win the election, it will be all-out war with the social media firms. But then if Labour wins, it'll be pretty much the same story, given that some of the most vocal social media critics are part of the current main opposition party. Meanwhile in the US, the Trump administration maintains its stance that social media firms are not playing a big enough role in the war on terror.

The social media firms themselves do have to start taking more responsibility. This nonsense about not being media firms isn't going to stand, and there should be some urgent rethinking about using that as a defence against being expected to be more proactive. Yes, the likes of Facebook and Twitter are between a rock and a hard place on matters like extremist content and hate speech, but if they allow politicians to take the public moral high ground, then they'll have to take what they get, and that's not going to be good for society in the long run.

Image credit - Freeimages.com

See more here:
The EU takes first step on slippery slope to internet censorship - Diginomica

Ducey Vetoes Bill Aimed At Protecting High School Journalists From Censorship – KJZZ


The legislation was meant to allow students more freedom in reporting, and stop school administrators from censoring stories from publication. Advocates of the bill say it would have allowed students to write about more hot-button issues and give a ...
Gov. Ducey limits power of student journalists – Arizona Daily Sun


Original post:
Ducey Vetoes Bill Aimed At Protecting High School Journalists From Censorship - KJZZ

Facebook Needs to Be More Transparent About Why It Censors Speech – Fortune

Photograph by Chris Ratcliffe, Bloomberg/Getty Images

The more Facebook tries to move beyond its original role as a social network for sharing family photos and other ephemera, the more it finds itself in an ethical minefield, torn between its desire to improve the world and its need to curb certain kinds of speech.

The tension between these two forces has never been more obvious than it is now, thanks to two recent examples of when its impulses can go wrong, and the potential damage that can be caused as a result. The first involves a Pulitzer Prize-winning journalist whose account was restricted, and the second relates to Facebook's leaked moderation guidelines.

In the first case, investigative reporter Matthew Caruana Galizia had his Facebook account suspended recently after he posted documents related to a story about a politician in Malta.

Caruana Galizia was part of a team that worked with the International Consortium of Investigative Journalists to break the story of the Panama Papers, a massive dump of documents that were leaked from an offshore law firm last year.

The politician, Maltese prime minister Joseph Muscat, was implicated in a scandal as a result of those leaked documents, which referred to shell companies set up by him and two other senior politicians in his administration.


Facebook not only suspended Caruana Galizia's account, it also removed a number of the documents that he had posted related to the story. It later restored his access to his account after The Guardian and a Maltese news outlet wrote about it, but some of the documents never reappeared.

The social network has rules that are designed to prevent people from posting personal information about other users, but it's not clear whether that's why the account was suspended.

Some of what Caruana Galizia posted contained screenshots of passports and other personal data, but many of these documents have remained available, while others have been removed. He is being sued by Muscat for libel, which has raised concerns about whether Facebook suspended the account because of pressure from officials in Malta.

A spokesman for Facebook told the Guardian that it was working with the reporter "so that he can publish what he needs to, without including unnecessary private details that could present safety risks. If we find that we have made errors, we will correct them."

Caruana Galizia said the incident was enlightening "because I realized how crippling and punitive this block is for a journalist." Incidents like this clearly reinforce the risks that journalists and media entities take when they decide to use the social network as a distribution outlet.

If nothing else, these and other similar incidents make it obvious that Facebook needs to do far more when it comes to being transparent about when and why it removes content, especially when that content is of a journalistic nature.

In an unrelated incident, the world got a glimpse into how the social network makes some of its content decisions thanks to a leaked collection of guidelines and manuals for the 4,500 or so moderators it employs, which was posted by the Guardian.

Outlined in the documents are rules about what kinds of statements are considered too offensive to allow, how much violence the site allows in videos (including Facebook Live, which has been the subject of significant controversy recently), and what to do with sexually suggestive imagery.

Much like Twitter, Facebook appears to be trying to find a line between getting rid of offensive behavior while still leaving room for freedom of expression.

In the process, however, it has raised questions about why the giant social network makes some of the choices it does. Statements within the guidelines about violence towards women, for example, such as "To snap a bitch's neck, make sure to apply all your pressure to the middle of her throat", are considered okay because they are not specific threats.

Facebook has already come under fire for some of its decisions around what to show on its live-streaming feature. There have been several cases in which people committed suicide and streamed it on Facebook Live, and in at least one case a man killed his child and then himself.

The guidelines say that while videos of violence and even death should be marked as disturbing, in many cases they do not have to be deleted because they can "help create awareness of issues such as mental illness," and because Facebook doesn't want to "censor or punish people in distress."

As a private corporation, Facebook is entitled to make whatever rules it wants about the type of speech that is permitted on its platform because the First Amendment only applies to the actions of governments. But when a single company plays such a huge role in the online behavior of more than a billion people, it's worth asking questions about the impact its rules have.

If Facebook censors certain kinds of speech, then for tens of millions of people it effectively ceases to exist, or becomes significantly less obvious.

The risks of this kind of private control over speech are obvious when it comes to things like filter bubbles or the role that "fake news" plays in political movements. But there's a deeper risk as well, which is that thanks to the inscrutability of Facebook's algorithm, many people won't know what they are missing when information is removed.

Facebook may not want to admit that it is a media entity, but the reality is that it plays a huge role in how billions of people see the world around them. And part of the responsibility that comes with that kind of role is being more transparent about why and how you make decisions about what information people shouldn't be able to see.

Read more from the original source:
Facebook Needs to Be More Transparent About Why It Censors Speech - Fortune