Archive for the ‘Censorship’ Category

AAUW speaker warns of rise in book censorship, ‘similar to a pandemic’ – Los Altos Town Crier

The American Association of University Women Silicon Valley Branch (AAUW Silicon Valley) hosted a virtual discussion titled "School Book Banning: A Primer for Readers of All Ages" with Jennifer Lynn Wolf, senior lecturer at Stanford University's Graduate School of Education and a former high school English teacher.

The March 14 discussion had more than 60 attendees.

According to PEN America, book banning is defined as "[a]ny action taken against a book based on its content and as a result of parent or community challenges, administrative decisions, or in response to direct or threatened action by lawmakers or other governmental officials, that leads to a previously accessible book being either completely removed from availability to students, or where access to a book is restricted or diminished."

Wolf focused on the particulars of book banning in schools. She said that the current surge in book banning is "similar to a pandemic" in the number of attempts (531 from Jan. 1 to Aug. 31, 2023, for example) involving 3,923 titles.

This surge is not new; attempts to ban books go back to the early part of the 20th century. Wolf cited a case study of books being burned by the Nazis at the urging of the German Student Union in 1933. In the 21st century, the controversy over books began with the McMinn County School Board in Tennessee banning the children's graphic novel Maus, which described the terrors of the Nazi regime.

The audience was encouraged to learn that in 2023, California passed AB 1078, which prohibits book bans.

According to Wolf, the current school book banning movement is being driven by Moms for Liberty and has a great impact on both children and families.

She pointed out that the American Library Association tracks and challenges attempts to ban books nationwide.

Wolf offered this advice on how to protect the right to read: read and gift banned books, use your public library, learn who's on your local school board, hold candidates' forums, and watch, listen to, or read documentaries, podcasts, or books on book banning.

In a question-and-answer session after her talk, one attendee said that San Jose's AAUW has already gone to board meetings of four school districts and learned that the true reason for book banning is to discredit public schools and to promote private parochial schools.

In response to another question, Wolf said that in her opinion it is impossible to learn and grow without some discomfort, so the fact that children do experience some unease through reading shouldn't be a reason to ban books.

Wolf concluded with the comment that currently there are more questions than answers about book banning, particularly with regard to who (parents, school boards, teachers, legislators, the courts, for example) should decide what children should learn and read.


NSF paid universities to develop AI censorship tools for social media, House report alleges – The College Fix

"Used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others": report

The National Science Foundation is paying universities using taxpayer money to create AI tools that can be used to censor Americans on various social media platforms, according to members of the House.

The University of Michigan, the University of Wisconsin-Madison, and MIT are among the universities cited in the interim report from the House Judiciary Committee and the Select Subcommittee on the Weaponization of the Federal Government.

It details the foundation's funding of AI-powered censorship and propaganda tools, and its repeated efforts to hide its actions and avoid political and media scrutiny.

"NSF has been issuing multi-million-dollar grants to university and non-profit research teams for the purpose of developing AI-powered technologies that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others," states the report, released last month.

Funding for the projects began in 2021 and was issued through the NSF's Convergence Accelerator grant program, which was initially launched in 2019 to develop interdisciplinary solutions to major challenges of national and societal importance, such as those pertaining to AI and quantum technology, it states.

In 2021, however, the NSF introduced Track F: Trust & Authenticity in Communication Systems.

The NSF's 2021 Convergence Accelerator program solicitation stated the goal of Track F projects was to "develop prototype(s) of novel research platforms forming integrated collection(s) of tools, techniques, and educational materials and programs to support increased citizen trust in public information of all sorts (health, climate, news, etc.), through more effectively preventing, mitigating, and adapting to critical threats in our communications systems."

Specifically, the grant solicitation singled out the threats posed by hackers and misinformation.

That September, the select subcommittee report notes, the NSF awarded twelve Track F teams $750,000 each (a total of $9 million) to develop and refine their project ideas and build partnerships. The following year, the NSF selected six of the 12 teams to receive an additional $5 million each for their respective projects, according to the report.

Projects from the University of Michigan, University of Wisconsin-Madison, MIT, and Meedan, a nonprofit that specializes in developing software to counter misinformation, are highlighted by the select subcommittee.

Collectively, these four projects received $13 million from the NSF, it states.

The University of Michigan intended to use the federal funding to develop its tool WiseDex, which could use AI technology to assess the veracity of content on social media and help large social media platforms decide what content should be removed or otherwise censored, it states.

The University of Wisconsin-Madison's Course Correct, which was featured in an article from The College Fix last year, was intended to aid reporters, public health organizations, election administration officials, and others in addressing so-called misinformation on topics such as U.S. elections and COVID-19 vaccine hesitancy.

MIT's Search Lit, as described in the select subcommittee's report, was developed as an intervention to help educate groups of Americans the researchers believed were most vulnerable to misinformation, such as conservatives, minorities, rural Americans, older adults, and military families.

Meedan, according to its website, used its funding to develop "easy-to-use, mobile-friendly tools [that] will allow AAPI [Asian-American and Pacific Islander] community members to forward potentially harmful content to tiplines and discover relevant context explainers, fact-checks, media literacy materials, and other misinformation interventions."

According to the select committee's report, "Once empowered with taxpayer dollars, the pseudo-science researchers wield the resources and prestige bestowed upon them by the federal government against any entities that resist their censorship projects."

In some instances, the report states, if a social media company fails to act fast enough to change a policy or remove what the researchers perceive to be misinformation on its platform, disinformation researchers will issue blog posts or formal papers to generate a "communications moment" (i.e., negative press coverage) for the platform, seeking to coerce it into compliance with their demands.

Efforts were made via email to contact senior members of the three university research teams, as well as a representative from Meedan, regarding the portrayal of their work in the select subcommittee's report.

Paul Resnick, who serves as the WiseDex project director at the University of Michigan, referred The College Fix to the WiseDex website.

"Social media companies have policies against harmful misinformation. Unfortunately, enforcement is uneven, especially for non-English content," states the site. "WiseDex harnesses the wisdom of crowds and AI techniques to help flag more posts [than humans can]. The result is more comprehensive, equitable, and consistent enforcement, significantly reducing the spread of misinformation."

A video on the site presents the tool as a means to help social media sites flag posts that violate platform policies and subsequently attach warnings to or remove the posts. Posts portraying approved COVID-19 vaccines as potentially dangerous are used as an example.

Michael Wagner from the University of Wisconsin-Madison also responded to The Fix, writing, "It is interesting to be included in a report that claims to be about censorship when our project censors exactly no one."

According to the select subcommittee report, some of the researchers associated with Track F and similar projects, however, privately acknowledged efforts to combat misinformation were inherently political and a form of censorship.

Yet, following negative coverage of Track F projects, depicting them as politically motivated and their products as government-funded censorship tools, the report notes, the NSF began discussing media and outreach strategy with grant recipients.

Notes from a pair of Track F media strategy planning sessions, included in Appendix B of the select subcommittee's report, recommended that researchers, when interacting with the media, focus on the pro-democracy and non-ideological nature of their work, "give examples of both sides," and "use sports metaphors."

The select subcommittee report also highlights that there were discussions of having a media blacklist, although at least one researcher from the University of Michigan objected to this, citing the potential optics.


EFF Opposes California Initiative That Would Cause Mass Censorship – EFF

In recent years, many proposed laws have purported to reduce harmful content on the internet, especially for kids. Some have good intentions. But the fact is, we can't censor our way to a healthier internet.

When it comes to online (or offline) content, people simply don't agree about what's harmful. And people make mistakes, even in content moderation systems that have extensive human review and appropriate appeals. The systems get worse when automated filters are brought into the mix, as increasingly occurs when moderating content at the vast scale of the internet.

Recently, EFF weighed in against an especially vague and poorly written proposal: California Ballot Initiative 23-0035, written by Common Sense Media. It would allow plaintiffs to sue an online information provider for damages of up to $1 million if it violates its "responsibility of ordinary care and skill to a child."

We sent a public comment to California Attorney General Rob Bonta regarding the dangers of this wrongheaded proposal. While the AG's office does not typically take action for or against ballot initiatives at this stage of the process, we wanted to register our opposition to the initiative as early as we could.

Initiative 23-0035 would result in broad censorship via a flood of lawsuits claiming that all manner of content online is harmful to a single child. While it is possible for children (and adults) to be harmed online, Initiative 23-0035's vague standard, combined with extraordinarily large statutory damages, will severely limit access to important online discussions for both minors and adults. Many online platforms will censor user content in order to avoid this legal risk.

People hold different views of what is harmful across many areas of culture, politics, and life, and in each of those areas this ballot initiative could cause removal of online content.

In addition, the proposed initiative would lead to mandatory age verification. It's wrong to force someone to show ID before they go online to search for information. It eliminates the right to speak or to find information anonymously, for both minors and adults.

This initiative, with its vague language, is arguably worse than the misnamed Kids Online Safety Act, a federal censorship bill that we are opposing. We hope the sponsors of this initiative choose not to move forward with this wrongheaded and unconstitutional proposal. If they do, we are prepared to oppose it.

You can read EFF's full letter to A.G. Bonta here.


Quote of the Day: Fifteen Months Later, Despite 50 Alterations and Deletions, Censors Have Yet to Approve This Film. – China Digital Times

Today's quote of the day comes from a CDT Chinese faux Dragon Seal visual about acclaimed sixth-generation filmmaker Wang Xiaoshuai's battle to get his latest film Above the Dust (Wòtǔ, "fertile soil") past the censors at China's National Film Bureau. The film will make its world premiere at the Berlin Film Festival on Saturday, minus the censors' seal of approval:

Large text: "This film was submitted to the censors in October 2022. Fifteen months later, despite 50 alterations and deletions, censors have yet to approve this film." Small text, at bottom: "According to Variety, director Wang Xiaoshuai's new film Above the Dust will premiere at the Berlin Film Festival without the Dragon Seal of approval from China's National Film Bureau. Chinese authorities have contacted the director and ordered him to withdraw from the festival or risk punishment."

Variety's Patrick Frater explored the film's historical subject matter and the director's commitment to having his film screened in Berlin:

With a young teen boy as the protagonist, the film depicts a hardscrabble family in a village in northwest China in 2009. While their neighbors slowly migrate to the city, the boys parents dig up the arid land in search of family heirlooms. Communicating with the ghost of his grandfather, the boy learns about the 1950s reforms that transferred peasant-owned land to the government and about the disastrous Great Leap Forward.

[…] Wang will go ahead with the screening of Above the Dust in Berlin without the Dragon Seal of approval from China's National Film Bureau. Without that pre-credits signifier, no film from China may legally play in Chinese theaters or show at an overseas festival.

[…] Chinese authorities have contacted Wang and ordered him to withdraw the film from the festival or risk severe consequences for both Wang and its Chinese production company. The movie is an unofficial co-production with the Netherlands' Lemming Film.

"There's pressure on the production company and myself. A lot of pressure. It is forbidden to show the film without a Dragon Seal in Berlin. But Berlin selected it. I'm happy about that," Wang tells Variety. "This is the film that I wanted to make. About China. About our lives. About Chinese history and reality." [Source]

In other film censorship news, documentary filmmaker Chen Pinlin has been arrested for making a documentary, released in late 2023, about the White Paper protests. The Committee to Protect Journalists (CPJ) provided more detail:

On February 18, Chinese authorities charged Chen, who published a documentary on anti-COVID restriction protests in late 2023, with "picking quarrels and provoking trouble," according to Chinese human rights news websites Minsheng Guancha and Weiquanwang. On January 5, Shanghai police arrested Chen, who published work under the pseudonym "Plato," and detained the filmmaker at the Baoshan Detention Center [in Shanghai].

[] The protests, also known as the White Paper Movement, started when a deadly apartment fire in the northwest region of Xinjiang killed at least 10 people in November 2022, and questions were raised about whether the governments stringent lockdown measures prevented the victims from escaping.

Chen posted the documentary "Not the Foreign Force" on the first anniversary of the White Paper Movement on YouTube and X, formerly Twitter, in late November 2023, according to those reports. The documentary compiled extensive protest footage, translated social media posts demanding freedom of expression, and reported that some protesters remained detained. Chen's X account and YouTube channel were deleted within that week. [Source]


Social Media Users say their Palestine Content is being Shadow-Banned — How to Know if it’s Happening to You – Informed Comment

By Carolina Are, Northumbria University, Newcastle

Imagine you share an Instagram post about an upcoming protest, but none of your hundreds of followers like it. Are none of your friends interested in it? Or have you been shadow banned?

Social media can be useful for political activists hoping to share information, calls to action and messages of solidarity. But throughout Israel's war on Gaza, social media users have suspected they are being censored through shadow banning for sharing content about Palestine.

Shadow banning describes loss of visibility, low engagement and poor account growth on platforms like Instagram, TikTok and X (formerly Twitter). Users who believe they are shadow banned suspect platforms may be demoting or not recommending their content and profiles to the main discovery feeds. People are not notified of shadow banning: all they see is the poor engagement they are getting.

Human Rights Watch, an international human rights advocacy non-governmental organisation, has recently documented what it calls systemic censorship of Palestine content on Facebook and Instagram. After several accusations of shadow banning, Meta (Facebook and Instagram's parent company) argued the issue was due to a bug and had nothing to do with the subject matter of the content.

I have been observing shadow bans both as a researcher and social media user since 2019. In addition to my work as an academic, I am a pole dancer and pole dance instructor. Instagram directly apologised to me and other pole dancers in 2019, saying they blocked a number of the hashtags we use in error. Based on my own experience, I conducted and published one of the very first academic studies on this practice.

Content moderation is usually automated, carried out by algorithms and artificial intelligence. These systems may also, inadvertently or by design, pick up borderline or controversial content when moderating at scale.


Most platforms are based in the US and govern even global content according to US law and values. Shadow banning is a case in point, typically targeting sex work, nudity and sexual expression prohibited by platforms' community guidelines.

Moderation of nudity and sexuality has become more stringent since 2018, after the introduction of two US laws, the Fight Online Sex Trafficking Act (Fosta) and Stop Enabling Sex Trafficking Act (Sesta), that aimed to crack down on online sex trafficking.

The laws followed campaigns by anti-pornography coalitions and made online platforms legally liable for enabling sex trafficking (a crime) and sex work (a job). Fearing legal action, platforms began over-censoring any content featuring nudity and sexuality around the world, including of legal sex work, to avoid breaching Fosta-Sesta.

Although censorship of nudity and sex work is heralded as a means to protect children and victims of non-consensual image sharing, it can have serious consequences for the livelihoods and wellbeing of sex workers and adult content creators, as well as for freedom of expression.

Platforms responses to these laws should have been a warning about what was to come for political speech.

Social media users reported conversations and information about Black Lives Matter protests were shadow banned in 2020. Now journalistic, activist and fact-checking content about Palestine also appears to be affected by this censorship technique.

Platforms are unlikely to admit to a shadow ban or bias in their content moderation. But their stringent moderation of terrorism and violent content may be leading to posts about Palestine that are neither incitement to violence nor terror-related getting caught in censorship's net.

For most social media users, shadow banning is difficult to prove. But as a researcher and a former social media manager, I was able to show it was happening to me.

As my passion for pole dancing (and posts about it) grew, I kept a record of my reach and follower numbers over several years. While my skills were improving and my follower count was growing, I noticed my posts were receiving fewer views. This decline came shortly after Fosta-Sesta was approved.
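The record-keeping described above, comparing views per follower before and after a suspected shadow ban, can be sketched as a toy calculation. This is a hypothetical illustration; all post counts and view figures below are invented, not taken from the author's data:

```python
# Hypothetical sketch of tracking per-follower reach over time.
# A growing follower count with falling views is the pattern the
# author describes; these numbers are invented for illustration.
posts = [
    # (follower count at posting time, views the post received)
    (1000, 450), (1100, 470), (1200, 480),  # before the suspected ban
    (1300, 210), (1400, 200), (1500, 190),  # after the suspected ban
]

reach = [views / followers for followers, views in posts]
before, after = reach[:3], reach[3:]
avg_before = sum(before) / len(before)
avg_after = sum(after) / len(after)

drop = 1 - avg_after / avg_before
print(f"Average per-follower reach fell by {drop:.0%}")  # prints 66%
```

A sustained fall in reach per follower while the follower count keeps growing is suggestive, but on its own it cannot distinguish platform demotion from ordinary changes in algorithmic ranking.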

It wasn't just me. Other pole dancers noticed that content from our favourite dancers was no longer appearing in our Instagram discovery feeds. Shadow banning appeared to also apply to swathes of pole-dancing-related hashtags.

I was also able to show that when content surrounding one hashtag is censored, algorithms restrict similar content and words. This is one reason why some creators use algospeak: editing content to trick the algorithm into not picking up words it would normally censor, as seen in anti-vaccine content throughout the pandemic.

TikTok and Twitter do not notify users that their account is shadow banned, but, as of 2022, Instagram does. By checking your account status in the apps settings, you can see if your content has been marked as non-recommendable due to potential violations of Instagrams content rules. This is also noticeable if other users have to type your full profile name for you to appear in search. In short, you are harder to find. In August 2023, X owner Elon Musk said that the company was working on a way for users to see if they had been affected by shadow bans, but no such function has been introduced. (The Conversation has contacted X for comment.)

The ability to see and appeal a shadow ban is a positive change, but mainly a cosmetic tweak to a freedom of expression problem that mostly targets marginalised groups. While Instagram may now be disclosing its decisions, the effect is the same: users posting about nudity, LGBTQ+ expression, protests and Palestine are often the ones to claim they are shadow banned.

Social media platforms are not just for fun; they're a source of work and political organising, and a way to spread important information to a large audience. When these companies censor content, it can affect the mental health and the livelihoods of the people who use them.

These latest instances of shadow banning show that platforms can pick a side in active crises, and may affect public opinion by hiding or showing certain content. This power over what is visible and what is not should concern us all.

Carolina Are, Innovation Fellow, Northumbria University, Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.
