KOSA Moves Forward in Congress, Threatening Free Speech and Encryption

In last Wednesday's newsletter, I discussed the Kids Online Safety Act (KOSA), noting that this censorship bill disguised as a child protection measure was scheduled to move forward imminently. On Thursday, the Senate held a cloture vote (necessary to move a bill to a full-floor vote when unanimous consent to do so can't be reached) and voted 86 to 1 to move forward with the bill.

Senators are now slated to vote Tuesday on whether KOSA should become law.

The one "no" in the cloture vote came from Kentucky Republican Sen. Rand Paul. In a "Dear Colleague" letter, Paul urged his colleagues to reject the bill and protested against the "duty of care" that KOSA would impose on internet platforms:

While proponents of the bill claim that it is not designed to regulate content, imposing a "duty of care" on online platforms to mitigate harms associated with mental health can only lead to one outcome: the stifling of First Amendment protected speech.

Should platforms stop children from seeing climate-related news because climate change is one of the leading sources of anxiety amongst younger generations? Should they stop children from seeing coverage of international conflicts because it could lead to depression? Should pro-life groups have their content censored because platforms worry that it could impact the mental well-being of teenage mothers? This bill opens the door to nearly limitless content regulation.

The bill contains a number of vague provisions and undefined terms. The text does not explain what it means for a platform to 'prevent and mitigate' harm, nor does it define 'addiction-like behaviors.' Additionally, the bill does not explicitly define the term 'mental health disorder.' Instead, it references the Fifth Edition of the Diagnostic and Statistical Manual of Mental Disorders or 'the most current successor edition.' As such, the definition could change without any input from Congress.

Paul went on to call the bill "a Trojan Horse."

Alas, Paul's letter is "unlikely to have even the slightest effect," writes Techdirt's Mike Masnick. "KOSA has 70 cosponsors, all of whom want to get nonsense headlines in their local papers about how they voted to 'protect the children' even as the bill will actually do real harm to children."

It's possible the bill's only opponents in the Senate are Paul and the Oregon Democrat Ron Wyden. (Any time Paul and Wyden are the only two senators against a bill, you can bet it's some civil-liberties-squelching nonsense.)

Wyden voted yes in the cloture vote but has said he will vote no when it comes to actually passing the bill. While he thinks the final version of the bill is an improvement over earlier versions, "these improvements remain insufficient," Wyden posted on X.

"I fear KOSA could be used to sue services that offer privacy technologies like encryption or anonymity features that kids rely on to communicate securely and privately without being spied on by predators online," Wyden added.

Some changes made recently include 1) explicitly stating that nothing in the bill expands or limits the scope of the internet liability law known as Section 230, and 2) changing some language related to the duty-of-care requirement.

Before, the bill said "a covered platform shall take reasonable measures in the design and operation of any product, service, or feature that the covered platform knows is used by minors" (emphasis mine). Now it limits the duty of care to "the creation and implementation of any design feature."

The latter change theoretically limits the duty of care to product design decisions and not the entirety of its operation or services. And applying KOSA's duty of care only to design, not to content, is theoretically good. But because product design decisions (how algorithms work; how content is displayed, suppressed, filtered, etc.) are so intimately tied up in what content gets seen and what doesn't, the practical difference might be nil.

Matthew Lane, senior policy counsel at Fight for the Future, has a lengthy post explaining why "in KOSA's case, it's proven impossible so far to separate the design of content recommendation systems from the speech itself." He writes:

The difference between the aspirations of KOSA and its inevitable impacts works like this: KOSA wants systems engineers to design algorithms that put safety first and not user engagement. While some companies are already pivoting away from pure engagement-focused algorithms, doing so can be really hard and expensive because algorithms aren't that smart. Purely engagement-focused algorithms only need to answer one question: did the user engage? By asking that one question, and testing different inferences, the algorithms can get very good at delivering content to a user that they will engage with.

But when it comes to multi-purpose algorithms, like those that want to only serve positive content and avoid harmful content, the task is much harder and the algorithms are unreliable. Algorithms don't understand what the content they are ranking or excluding is or how it will impact the mental health and well-being of the user. Even human beings can struggle to predict what content will cause the kinds of harm described by KOSA.

To comply with KOSA, tech companies will have to show that they are taking reasonable steps to make sure their personalized recommendation systems aren't causing harm to minors' mental health and well-being. The only real way to do that is to test the algorithms to see if they are serving "harmful" content. But what is "harmful" content? KOSA leans on the [Federal Trade Commission] and a government-created Kids Online Safety Council to signal what that content might be. This means that Congress will have significant influence over categorizing harmful speech and platforms will use those categories to implement keywords, user tags, and algorithmically-estimated tags to flag this "harmful" content when it appears in personal recommendation feeds and results. This opens the door to government censorship.

But it gets even worse. The easiest and cheapest way to make sure a personal recommendation system doesn't return "harmful" content is to simply exclude any content that resembles the "harmful" content. This means adding an additional content moderation layer that deranks or delists content that has certain keywords or tags, what is called "shadowbanning" in popular online culture.
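
To make the mechanics concrete, here is a minimal sketch of the kind of extra moderation layer Lane describes: an engagement-ranked feed with a keyword-based derank pass bolted on top. Every name, keyword, and threshold here is a hypothetical stand-in for illustration, not any platform's actual implementation.

from dataclasses import dataclass

# Categories a regulator or safety council might signal as "harmful."
# These keywords are illustrative assumptions, not a real platform's list.
FLAGGED_KEYWORDS = {"suicide", "self-harm", "eating disorder"}

DERANK_FACTOR = 0.1  # a flagged post keeps only 10% of its score

@dataclass
class Post:
    text: str
    predicted_engagement: float  # output of an engagement model, 0 to 1

def is_flagged(post: Post) -> bool:
    """Crude keyword match; it cannot tell a cry for help from harmful content."""
    text = post.text.lower()
    return any(keyword in text for keyword in FLAGGED_KEYWORDS)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Rank by predicted engagement, then derank anything matching the flag list."""
    def score(post: Post) -> float:
        s = post.predicted_engagement
        if is_flagged(post):
            s *= DERANK_FACTOR  # effectively shadowbanned: shown last, if at all
        return s
    return sorted(posts, key=score, reverse=True)

Note that the derank pass never looks at what a post means, only at whether it matches a pattern. That is the entanglement problem in miniature: the "design feature" being regulated is inseparable from which speech gets seen.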

Lane points out how this may even wind up hurting young people who turn to the internet for help, since "these sorting systems will not be perfect and will lead to mistakes on both sides":

For example, imagine the scenario in which a young user posts a cry for help. This content could easily get flagged as suicide or other harmful content, and therefore get deranked across the feeds of all those that follow the person and care for them. No one may see it.
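
Run Lane's scenario through the hypothetical sketch above and the failure mode falls out immediately: the plea for help matches the same keyword as the content the filter is meant to suppress, so it sinks below everything else in the feed.

feed = rank_feed([
    Post("cute dog video", predicted_engagement=0.6),
    Post("I've been having thoughts of suicide, I need someone to talk to",
         predicted_engagement=0.8),
])
# The cry for help ranks below the dog video despite higher predicted
# engagement: the keyword match cut its score from 0.8 to 0.08.
print([post.text[:14] for post in feed])
# ['cute dog video', "I've been havi"]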

With senators set to pass KOSA tomorrow, "the real fight now moves to the House," notes Masnick. "It's unclear if there's consensus on moving on the bill there, and if so, in what form. The current House bill is different than the Senate one, so the two sides would have to agree on what version moves forward. The real answer should be neither, but it seems like the ship has sailed on the Senate version."

On Olympians and OnlyFans.

Maybe don't outsource domestic violence policing to AI?

A Texas woman who was charged with murder for taking abortion pills can move forward with a lawsuit against the sheriff and prosecutors who brought the case.

The Nebraska Supreme Court is allowing the state's 12-week abortion ban and its restrictions on transgender medical care to go forward, after ruling that lawmakers did not violate the constitution by combining these restrictions into a single bill.

An Arizona judge says the state's official pamphlet describing an abortion rights ballot measure should not refer to fetuses as "unborn human beings."
