The one big problem Washington faces on AI – POLITICO

Sam Altman and Sen. Richard Blumenthal (D-Conn.). | Getty Images

AI took its star turn through Congress this week, with lawmakers doing their best to demonstrate their awareness of how the tech is already disrupting society.

The highlight was a hearing featuring OpenAI's celebrity CEO Sam Altman, who welcomed the idea of flagging AI-generated content by default, and even standing up a new regulatory agency. Sen. Gary Peters (D-Mich.), chair of the Senate Committee on Homeland Security and Governmental Affairs, went even wonkier, holding a hearing on how the technology can or should be promulgated throughout the federal bureaucracy.

But even Congress' best, bipartisan foot forward might still be a step behind. Because after all, those are today's problems.

Or even yesterday's, as lawmakers largely apply a lexicon that was developed for the social media era's data privacy and safety issues to an entirely new technology. When Congress turned its eye to the World Wide Web in the early 1990s, there was no way of knowing it was laying the first rails of a track that would lead to our currently raging debate around TikTok, for example. What could they be missing now?

"People are borrowing mental models from data privacy debates from five years ago," said Samuel Hammond, a senior economist at the Foundation for American Innovation who recently wrote in POLITICO Magazine about an entirely different, existential policy issue AI might pose.

"It's completely unwieldy: How do you even define the scope of the scene, when you're taking the hype around AI and conflating general-purpose systems that have uncanny levels of understanding and reasoning with stuff that was around 10 years ago?"

Hammond wrote in his op-ed about the need to place guardrails around the development of a potential artificial general intelligence that would supersede even humanity's capabilities. But you can turn the science fiction knob a little bit further to the left and find more concrete examples where the pace of development might outstrip our regulatory capacity: Hammond noted today that in this week's hearings even Altman, generally a supporter of the current open-source AI development ecosystem, called for federal licensing only in the case of hyper-sophisticated autonomous agents, or AI that could design a novel pathogen.

In today's Morning Tech newsletter, POLITICO's Mallory Culhane nodded to the general understanding in the tech industry that AI regulation will require specific, industry-level expertise and judgment, especially given how rapidly the technology is developing.

"If the United States wants to have a regulatory environment for AI that is flexible, responsive, and adaptable to emerging risks, it should lean into the sector-specific approach it has taken to regulation," Hodan Omaar, a senior policy analyst at the nonpartisan Center for Data Innovation, told Mallory. "Federal regulators are the best placed to regulate issues in different domains because they have industry-specific knowledge."

Veteran regulator and former Federal Communications Commission Chairman Tom Wheeler praised the Digital Platform Commission Act, re-introduced today by Sens. Michael Bennet (D-Colo.) and Peter Welch (D-Vt.), which would stand up a new agency specifically to oversee digital platforms and address algorithmic harm. He said that while existing agencies have plenty of tools for tackling AI (the Equal Employment Opportunity Commission punishing AI discrimination in hiring, for example, or the CFPB policing AI-driven financial fraud), a new commission, built on new regulatory principles, could have the agility to meet the unforeseen threats AI might pose.

"What you need to have is a structure that is based on the English common law concept of 'duty of care' that says a company has a responsibility to identify and mitigate potential harms that come from its product or service," Wheeler said. "Technology is changing, the marketplace is changing, and we have to be agile as regulators."

Of course, navigating the foreseeability on which the duty-of-care concept is based is a little tricky when it comes to something like AI tools, where sometimes even the developers of a machine learning system aren't quite sure how to explain what's going on inside it. Some more libertarian-minded thinkers believe that in that case it's better to leave well enough alone until a tangible risk emerges.

"When I see legislation dropping, or proposals for a whole new regulatory agency for AI, I'm puzzled by what problem people are trying to solve," said Neil Chilson, senior research fellow at the Center for Growth and Opportunity. "I worry that if we get it wrong, we throw out a bunch of benefits to consumers and cede ground to China, where what they will do with this technology is not in our best interest."

That leaves Washington doing what it can with the issues that are in front of it. And as POLITICO's Mohar Chatterjee and Rebecca Kern reported yesterday, those might be far closer to home than any threat of AGI or runaway software, as a House subcommittee debated issues of rights and provenance around the images, essays, and even songs produced by generative AI. Yes, the development and spread of powerful AI is a Promethean technological moment on par with the printing press or the internet. But for now (and, it's worth keeping in mind, as with both of those technologies) the average person is just here for the memes.

A message from CTIA The Wireless Association:

America does not have enough full-power, licensed spectrum to meet exploding demand and fuel 5G-driven innovation. Congress must act now to restore FCC auction authority and allocate 1500MHz of new 5G mid-band spectrum to secure reliable wireless for all, and America's leadership of the industries and innovations of the future. We can lead the world if we act now. Learn more at More5GSpectrum.com.

The Olympic rings are set up at a plaza that overlooks the Eiffel Tower in Paris on Sept. 14, 2017. | AP Photo/Michel Eule

The AI-powered crowd-control train keeps rolling: A French court has ruled that AI-powered cameras can be used to surveil crowds at the 2024 Paris Olympics.

As POLITICO's Laura Kayali reported yesterday evening, France's Constitutional Council said that the law allowing for the use of experimental camera systems was valid because humans would ultimately be driving the development, implementation and possible evolution of algorithmic processing.

Human accountability has been a major sticking point for activists concerned about AI abuses or mishaps, like the numerous wrongful arrests or convictions that have marred the technology's rollout across the globe. But in this case it's not enough for activists who warn that any implementation of algorithmic surveillance is too dangerous, and who call for it to be banned outright: As Matt Berg reported in yesterday's DFD, a leading activist and researcher argued that the overall deployment of facial recognition here should be alarming to everyone, "because it's even more of this incremental creep of surveillance theater that seems poorly designed to actually keep people safe, but it's really problematic from a privacy perspective."

In France, at least, they'll be parsing that dilemma through March 2025, when the system's approval is set to expire.

Outlier Ventures, a London-based Web3 investing firm, has updated its outlook for the open-source, nigh-utopian vision for the metaverse that it and other decentralization boosters have been developing over the past several years.

Their new report, portentously titled "The Open Metaverse Under Attack," warns of an ecosystem beset on all sides. The main roadblocks right now to their vision of a blockchain-based, user-owned 3D world: the rise of crypto scams and scandals, which tarnish the reputation of basic blockchain technology; the restriction and regulation of stablecoins; the potential classification of all crypto activity as securities trading; and the threat of monopoly as actors like FTX leave the crypto landscape.

The solution they propose is a stark, appropriately market-oriented one, saying that to drive adoption despite these risks, developers "must continue to walk the fine line between making Web3 accessible and usable. Building products that don't just compete purely on the philosophy of decentralization and user sovereignty, but that are 10x better than incumbents and/or allow entirely new functionality and benefits."

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); Steve Heuser ([emailprotected]); and Benton Ives ([emailprotected]). Follow us @DigitalFuture on Twitter.

If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

CORRECTION: This newsletter has been updated to reflect that Neil Chilson is a senior research fellow at the Center for Growth and Opportunity.
