Archive for the ‘Censorship’ Category

Concerns About Censorship Soar As Russia Detains Director – Forward

MOSCOW (Reuters) - A prominent Russian theater director who has lamented what he says is the lack of freedom and growing social conservatism in his country was detained on Tuesday and accused of embezzling state funds.

Russia's Investigative Committee said in a statement that it suspected Kirill Serebrennikov of embezzling at least 68 million rubles ($1.15 million) in state funds earmarked for an art project.

Serebrennikov, artistic director at Moscow's avant-garde Gogol Centre theater, denies wrongdoing. He faces up to 10 years in jail if found guilty.

Dmitry Kharitonov, a lawyer for Serebrennikov, said his client was detained in St. Petersburg where he was working on a film about a Soviet rock star.

Serebrennikov, an award-winning director whose father was Jewish, has used his work to criticize the authorities in the past, lashing out at what he sees as the pernicious growing role of the state and church in Russian society.

His detention shocked his supporters and the arts world.

"The arrest of the director before a trial is a clearly excessive measure," wrote Alexei Kudrin, a liberal economist and former finance minister, on social media.

In May, investigators searched Serebrennikov's home and office and questioned him as a witness in an embezzlement case.

His lawyer could not immediately say if Serebrennikov's detention was linked to the same case or a different one. The accountant and general director of Serebrennikov's theater have already been accused of stealing state funds.

As The New York Times reported, well-regarded Russian cultural figures spoke out on Serebrennikov's behalf following both the earlier searches and his arrest. When Russian President Vladimir V. Putin gave a state award to the actor Yevgeny V. Mironov in May, Mironov passed him a letter advocating for Serebrennikov. And the literary critic and television host Aleksandr Arkhangelsky posted a Facebook status that, in the Times's translation, was damning towards the authorities: "Those who do this cover themselves with shame," he wrote.

In July, the Bolshoi Theatre postponed the world premiere of Nureyev, an edgy, Serebrennikov-directed ballet about the famous Russian dancer.

The TASS news agency reported that Russia's minister for culture had a long conversation with the Bolshoi before it announced it was postponing the premiere.

But Vladimir Urin, the theatre's director general, said it had been pulled because rehearsals had shown it was not ready. He said it would be staged in May next year instead.


Measuring the Internet for Freedom – Project Syndicate

ROME – Last year, during a wave of deadly political protests in Ethiopia, the government blocked more than 15 media websites and the smartphone chat application WhatsApp. Sites promoting freedom of expression and LGBTQ+ rights, as well as those offering censorship-circumvention tools, such as Tor and Psiphon, were also suppressed.

All of this was uncovered through the use of software called ooniprobe, which is designed to measure networks and detect Internet censorship. Ooniprobe was developed more than five years ago by the Tor-supported Open Observatory of Network Interference (OONI), with which I work, in order to boost transparency, accountability, and oversight of Internet censorship. The software is free and open source, meaning that anyone can use it. And, indeed, tens of thousands of ooniprobe users from more than 190 countries have already done just that.

Those users have contributed to the collection of millions of network measurements, all of which are published on OONI Explorer, arguably the largest publicly available resource on Internet censorship. Thanks to their use of ooniprobe, we uncovered the extent of last year's wave of censorship in Ethiopia, as well as details of many other cases of censorship elsewhere in the world.

In Uganda, local groups used ooniprobe during last year's general election, when the government blocked social media. Ooniprobe's network-measurement data not only confirmed the government's action; it also uncovered which sites were blocked and the different methods used by Internet Service Providers (ISPs) to implement censorship.

Ooniprobe also came in handy in Malaysia in 2015. Facing accusations that he had transferred nearly $700 million from the state investment fund 1MDB to his personal bank accounts, Prime Minister Najib Razak attempted to block news outlets and blogs that reported on the scandal. It was ooniprobe's network-measurement software that enabled Malaysian civil-society groups to collect data that serve as evidence of the blocking.

Of course, censorship is not always carried out to protect the politically powerful; it can also be used to reinforce social and cultural norms. In Indonesia, for example, low social tolerance for homosexuality may have played a role in the blocking of numerous LGBTQ+ websites, even though the country does not officially restrict LGBTQ+ rights. Similar factors may have influenced efforts to block sites perceived as overly critical of Islam.

In Thailand, ISPs have, in the last three years, blocked access to a number of sites that are perceived to be offensive toward the country's royal family. But, here, there is a legal justification: Thailand's strict prohibition against lèse-majesté protects the royal family's most senior members from insult or threat. Other cases of legally justified Internet censorship include the blocking of sexually explicit websites in countries where pornography is prohibited.

Then there are cases where the motivation for censorship is unclear. Why, for example, has an online dating site been blocked in Malaysia? In some countries, ISPs appear to be censoring sites at their own discretion. According to ooniprobe data, multiple Thai ISPs simultaneously blocked access to different types of websites, from news outlets to WikiLeaks to pornography, indicating that they likely received vague orders from authorities.

Before ooniprobe, such censorship was difficult to detect, leading to a lack of accountability, with governments and ISPs often denying any and all involvement. Even in cases where governments announce official lists of blocked sites, they may leave some targets off. Likewise, ISPs may not always comply with official orders to lift blocks. Vimeo and Reddit, for example, were recently found to be blocked in some networks in Indonesia, even though the official ban on those sites was lifted more than two years ago.

With ooniprobe, users are not only able to expose Internet censorship; they can also acquire substantial detail about how, when, where, and by whom the censorship is being implemented. OONI's Web-Connectivity Test, for example, is designed to examine whether access to websites is blocked through DNS tampering, TCP/IP blocking, or a transparent HTTP proxy.
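To make the idea concrete, here is a minimal sketch in Python, not OONI's actual code, of a check in the same spirit: it compares the answers given by the local DNS resolver with those from an independent control resolver, then attempts an ordinary HTTP fetch. The control endpoint (Google's public DNS-over-HTTPS JSON API) and the tested URL are arbitrary choices for illustration; a real test such as ooniprobe's must also account for TCP/IP-level blocking, transparent proxies, and benign explanations like geo-targeted DNS.

# Illustrative sketch only: compare local vs. control DNS answers, then try an HTTP fetch.
# Divergent answers or a failed fetch *may* indicate interference, but CDNs, geo-DNS,
# and plain outages must be ruled out before drawing any conclusion.

import json
import socket
import urllib.request
from urllib.parse import urlsplit

CONTROL_DOH = "https://dns.google/resolve"  # control vantage point (assumed choice for this example)


def local_dns(hostname):
    """Resolve with the system's configured resolver."""
    try:
        return sorted(set(socket.gethostbyname_ex(hostname)[2]))
    except socket.gaierror:
        return []


def control_dns(hostname):
    """Resolve the same name via DNS-over-HTTPS, bypassing the local resolver."""
    url = f"{CONTROL_DOH}?name={hostname}&type=A"
    with urllib.request.urlopen(url, timeout=10) as resp:
        answers = json.load(resp).get("Answer", [])
    return sorted(a["data"] for a in answers if a.get("type") == 1)  # type 1 = A record


def check(url):
    host = urlsplit(url).hostname
    local, control = local_dns(host), control_dns(host)
    print(f"{url}\n  local DNS:   {local or 'resolution failed'}\n  control DNS: {control}")
    if local and control and not set(local) & set(control):
        print("  note: answers diverge (possible DNS tampering, or simply geo-DNS)")
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"  HTTP fetch: status {resp.status}, {len(resp.read())} bytes")
    except Exception as exc:
        print(f"  HTTP fetch failed: {exc}")


if __name__ == "__main__":
    check("https://www.torproject.org/")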

Other ooniprobe tests are designed to examine the accessibility of chat apps, namely WhatsApp, Telegram, and Facebook Messenger, within networks, as well as that of censorship-circumvention tools, such as Tor, Psiphon, and Lantern. OONI also provides software tests that uncover the presence of systems (middleboxes) that could potentially be responsible for censorship or surveillance.
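As a rough illustration of the middlebox idea, and only an illustration, since OONI's own tests compare traffic against dedicated control helpers, the sketch below (Python, with an arbitrary hostname and an intentionally invalid request method) sends a malformed HTTP request line over a raw TCP connection and scans whatever comes back for headers that often betray an intercepting proxy.

# Rough illustration, not OONI's implementation: an origin web server usually answers a
# malformed request line with a terse "400 Bad Request", while an intercepting proxy may
# inject its own error page or reveal itself in a Server/Via header. Absence of such
# clues proves nothing; presence is only a hint worth investigating.

import socket

def probe_for_middlebox(host, port=80, timeout=10):
    # "GET / HTTP/1.1" would be valid; this request line deliberately is not.
    bogus_request = b"GET / BOGUS/1.1\r\nHost: " + host.encode() + b"\r\n\r\n"
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        sock.sendall(bogus_request)
        chunks = []
        try:
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass
    reply = b"".join(chunks).decode("latin-1", errors="replace")
    print(f"--- raw reply from {host}:{port} ---")
    print(reply[:600] or "(no reply)")
    for line in reply.splitlines():
        if line.lower().startswith(("server:", "via:", "proxy-", "x-squid")):
            print("possible middlebox fingerprint:", line.strip())

if __name__ == "__main__":
    probe_for_middlebox("example.com")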

The depth of OONI data supports much-needed accountability and oversight. Lawyers can use OONI data to assess the legality of Internet censorship in their countries, and potentially introduce it as evidence in court cases. Journalists, researchers, and human-rights defenders can use the data to inform their work as well. And censorship-circumvention projects like Tor can use OONI findings on emergent censorship events to shape their tools and strategies.

OONI data can help enrich public discourse about the legality, necessity, and proportionality of Internet censorship. That makes it a critical tool for safeguarding human rights on the Internet and beyond.

Today's media landscape is littered with landmines: open hostility by US President Donald Trump, increased censorship in countries such as Hungary, Turkey, and Zambia, growing financial pressure, and the challenge of "fake news." In Press Released, Project Syndicate, in partnership with the European Journalism Centre, provides a truly global platform to frame and stimulate debate about the myriad challenges facing the press today.


Tech Companies and Censorship: Where Should We Draw The Line? – Inc.com

This has been a tough week.

It began with the terrible events of last weekend in Charlottesville, VA, where clashes involving neo-Nazi and white supremacist groups erupted into violence and led to the death of one protester.

Throughout the week, the story continued to gain steam as President Trump commented on the incident, then made a second comment, then held an unprecedented press conference that even members of his own party condemned.

As prominent CEOs on the President's manufacturing council began to drop out, several tech companies began or intensified their crackdown on hate speech and their banning of alt-right and neo-Nazi websites. According to PBS News, here are just a few big names and their actions:

Cloudflare, a company that provides security services to internet companies to protect them from hackers, also joined the movement by dropping The Daily Stormer from its network services. The move was a bit of a surprise because Matthew Prince, co-founder and CEO of Cloudflare, has long been an advocate of free speech, saying that "a website is speech, it is not a bomb."

Cloudflare took the action, however, because management determined that The Daily Stormer was harassing individuals who were reporting the site as abusive. Prince was also clear that he and the company found the content on the site "abhorrent and vile," and in a company memo he stated that "the tipping point for us making this decision was that the team behind Daily Stormer made the claim that we were secretly supporters of their ideology ... we could not remain neutral after these claims of secret support by Cloudflare."

While these actions by tech companies were seen by most as the proper and moral thing to do, some have rightfully questioned the ability of businesses in general to have such a significant influence on the fundamental right of free speech online -- censoring it or even removing it altogether.

Prince goes on to say that entrepreneurs -- and society at large -- need to ask themselves who should be responsible for policing and regulating online content. "I sit in a very privileged position," said Prince. "I see about 10 percent of all online traffic, and I can make a decision whether they can be online anymore. And I'm not sure I am the one who should be making that kind of decision."

The question for all of us is: who should be?

We have all been afforded the freedom of speech and expression -- a unique, precious and delicate gift. We have also been afforded, through the sacrifice of many generations, the right to life, liberty and the pursuit of happiness.

When these two rights intersect and conflict, we need a moral standard -- not the constitution -- to moderate.

Of course, the question then becomes who gets to decide the moral standard?

Luckily, we have a democratic system in place that allows the country's citizens to select representatives who serve as the lawmakers who mold this standard. Is our system flawed? Absolutely -- but as Winston Churchill astutely recognized, "Democracy is the worst form of government, except for all the others."

When it comes to tech companies -- or any company for that matter -- they have an obligation to follow the law -- and that is about it. As Prince contends, the right policy is for content providers to be "content neutral." The community can be policed by its users in the form of reporting reprehensible content, and companies have an obligation to engage experts and law enforcement authorities to determine what should be removed.

Of course, some companies may wish to write and maintain an internal set of codes, and as long as those codes do not infringe upon or otherwise break the law, a company has every right to do so. Customers who disagree can exercise their freedom of speech to voice their opinion or simply "protest with their wallets."

This debate will surely not end anytime soon, and by all indications, it is just getting started.

What do you think? Should censorship be under the management of companies, or should content continue to be protected under the right to free speech? Please share your (constructive and civil) comments below.


Daily Stormer ban opens door to government censorship, some say … – Washington Times

Major internet companies' rush to oust a white nationalist website last week could make it tougher for tech companies and open-net advocates to try to keep the government from censoring websites in the future, the CEO of one of the companies said.

GoDaddy, Google and Cloudflare, a company that protects sites from being knocked offline, all booted Daily Stormer from their services after the white nationalist website cheered the neo-Nazi rally in Charlottesville, Virginia, and mocked the 32-year-old woman killed in the aftermath.

Matthew Prince, CEO for Cloudflare, acknowledged the decision makes it harder for his company to fight against pressure by some governments to take down a website in the future.

"I don't know the right answer, but I do know that as we work it out it's critical we be clear, transparent, consistent and respectful of due process," Mr. Prince wrote in his statement.

At a time when open-internet advocates are pushing policies such as net neutrality, the quick moves to punish the online presence of rally participants or sympathizers worried activists, who said the companies appeared to be making up the rules as they went along.

"We think that there is a better route to making decisions that impact fundamental rights like freedom of expression than what appeared to be pretty ad hoc decisions being made right now," said Peter Micek, general counsel for Access Now.

Daily Stormer took the brunt of the online blowback last week, getting kicked off hosting sites. Twitter also banned an account that shared links to stories from the controversial site, while Facebook expunged all efforts to share the offending article that mocked the woman killed in Charlottesville.

But Facebook allowed the article to remain posted as long as it was accompanied by criticism of Daily Stormer or its white nationalist views.

Floyd Abrams, a prominent First Amendment lawyer, said he thinks it's a good thing for the Facebooks of the world to ban certain types of racist speech, although he admits that editorial intervention by these sites is not without concern.

"There is an inherent danger when so many people get so much of their information from, say, Facebook that when Facebook makes the decision not to carry something, the public is effectively deprived," said Mr. Abrams.

Meanwhile, OkCupid, an online dating site, banned one user who admitted to being a part of the white nationalist protests.

The kind of viewpoint refereeing the sites engaged in is likely legal because the sites are private, experts said.

"I don't see that as adding any exposure to the service provider because they already have the ability as a private actor and as a commercial provider to determine who they are going to work with, to contract with or, if you will, even to discipline," said Brigadier Gen. Michael McDaniel, a professor at WMU-Cooley Law School.

But Mr. Abrams said tension is created when these sites engage in editing but are still protected from liability under the law.

"That's something that all these companies must be thinking about carefully," he said.

A spokesperson for Google said the company ousted Daily Stormer because it feared Google's terms of use would be violated.

Twitter declined to comment, while GoDaddy and Facebook didnt respond to questions about their censorship decisions.

Mr. Prince at Cloudflare admitted to Gizmodo that he made an exception to the company's policy in canceling Daily Stormer but insisted he hadn't set a new precedent.

"I think we have to have a conversation over what part of the infrastructure stack is right to police content," he said.

The Electronic Frontier Foundation said what hosting companies such as GoDaddy and Cloudflare did was more worrisome than the social media companies' censorship.

"With a content host that is like a social media site, they can just take down one post or eliminate one bit of content, whereas Cloudflare and GoDaddy and so on, they can't," said Jeremy Malcolm, senior global policy analyst at the Electronic Frontier Foundation. "They had to take down an entire website, and that gives a lot more risk of taking down legitimate speech along with the problematic speech."


Protecting Democracy from Online Disinformation Requires Better Algorithms, Not Censorship – Council on Foreign Relations (blog)

Eileen Donahoe is Executive Director of the Global Digital Policy Incubator at Stanford University, and former U.S. ambassador to the UN Human Rights Council. You can follow her @EileenDonahoe.

Democracies face an existential threat: information is being weaponized against them with digital tools. Although propaganda is not new, the speed, scale and extraterritorial reach of digital disinformation make it different in kind from propaganda of old. Digital mechanisms of manipulation, from bot armies and clickbait to micro-targeting, are being mastered by authoritarian and anti-democratic forces, outpacing democratic societies' capacities to protect themselves.

Perhaps the most challenging aspect of this threat is that information itself is the weapon. Information has always been the lifeblood of democracy. For democracy to work, free and well-informed citizens must actively engage in civic discourse. Digital disinformation is destroying the prospect of democratic engagement by well-informed citizens.

Given the digital disinformation campaigns in the lead-up to Brexit and the recent U.S. and French presidential elections, democratic governments are now seized with defending against disinformation operations by foreign governments seeking to disrupt their democratic processes. Until recently, many national security experts were focused on cyber threats to critical infrastructure that could have physical consequences (e.g. a cyberattack causing something to blow up). Few anticipated that the target of cyberattack would be the civic infrastructure of our democracies: not only voting machines, but public discourse around our elections. Fewer envisioned that the preferred vector of cyberattack would be disinformation.

But an ominous risk also arises when democratic governments responding to digital disinformation undermine their own democratic values. Germany's new NetzDG law, also known as the Network Enforcement Act or social media law, aims to eradicate hate speech and propaganda on digital platforms. It imposes steep fines (up to €50 million) for failure to take down evidently criminal content within twenty-four hours. The motivation for this legislation was to protect the quality of discourse necessary to sustain democracy, but its unintended effects risk greater damage to democracy than the original threat.

As private sector platforms like Facebook, Google, and Twitter have become primary sources of information and vehicles for expression, they effectively function as the public square for civic engagement. Their algorithms affect their users access to information and how they form political opinions. This has created conceptual confusion about the roles and responsibilities of social media platforms in democracy. The German NetzDG Act manifests this confusion.

In one swoop, the German government handed over judicial authority for determining criminality to the private sector. It simultaneously encouraged censorship, by incentivizing platforms to err on the side of taking down flagged content even if not criminal. Finally, it eroded the core concept of limited platform liability for third-party speech, which has facilitated the free flow of information on the Internet and democratized distribution of content globally.

In effect, the German bill got the target wrong: platforms should not be liable for speech posted by users (though they should take down criminal speech on the basis of a court order). Platforms should be accountable for their own algorithms when they push information to users to monetize attention. The German approach retreats from governing responsibility and undermines Germany's own commitment to freedom of expression on the Internet.

This is especially true when Russia starts holding up the German law as a model for its own censorship efforts. Democratic values are at risk of serious erosion when Moscow looks to Berlin for inspiration on regulating internet content. Within two weeks of the adoption of the German law, the Russian Duma proposed a copy-cat bill, with multiple explicit references to the German law as its model. The Russian version, like the German original, compels social media companies to take down vaguely defined illegal content within twenty-four hours or face severe penalties. The official justification for the law was to prevent the use of digital networks for illegal purposes. In Russia, this can mean anything that challenges the authoritarian rule of Vladimir Putin. Russia's cynical use of Germany's example should raise alarm bells for all democratic actors.

Democratic governments concerned about new digital threats need to find better algorithms to defend democratic values in the global digital ecosystem. Democracy has always been hard. It requires an exquisite balance between freedom, security and democratic accountability. This is the profound challenge that confronts the world's liberal democracies as they grapple with foreign disinformation operations, as well as home-grown hate speech, extremism, and fake news. Fear and conceptual confusion do not justify walking away from liberal values, which are a source of security and stability in democratic society. Private sector and government actors must design algorithms for democracy that simultaneously optimize for freedom, security, and democratic accountability in our digital world.
