Warner Calls on Biden Administration to Remain Engaged in AI …
WASHINGTON – U.S. Sen. Mark R. Warner (D-VA), Chairman of the Senate Select Committee on Intelligence, today urged the Biden administration to build on its recently announced voluntary commitments from several prominent artificial intelligence (AI) leaders in order to promote greater security, safety, and trust in the rapidly developing AI field.
As AI is rolled out more broadly, researchers have repeatedly demonstrated a number of concerning, exploitable weaknesses in prominent products, including abilities to generate credible-seeming misinformation, develop malware, and craft sophisticated phishing techniques. On Friday, the Biden administration announced that several AI companies had agreed to a series of measures that would promote greater security and transparency. Sen. Warner wrote to the administration to applaud these efforts and laid out a series of next steps to bolster this progress, including extending commitments to less capable models, seeking consumer-facing commitments, and developing an engagement strategy to better address security risks.
"These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks," Sen. Warner wrote. "As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models."
The letter builds on Sen. Warner's continued advocacy for the responsible development and deployment of AI. In April, Sen. Warner directly expressed concerns to several AI CEOs about the potential risks posed by AI, and called on companies to ensure that their products and systems are secure.
The letter also affirms Congress' role in regulating AI, and expands on the annual Intelligence Authorization Act, legislation that recently passed unanimously through the Senate Select Committee on Intelligence. Sen. Warner urges the administration to adopt the strategy outlined in this pending bill as well as work with the FBI, CISA, ODNI, and other federal agencies to fully address the potential risks of AI technology.
Sen. Warner, a former tech entrepreneur, has been a vocal advocate for Big Tech accountability and a stronger national posture against cyberattacks and misinformation online. In addition to his April letters, he has introduced several pieces of legislation aimed at addressing these issues, including the RESTRICT Act, which would comprehensively address the ongoing threat posed by technology from foreign adversaries; the SAFE TECH Act, which would reform Section 230 and allow social media companies to be held accountable for enabling cyber-stalking, online harassment, and discrimination on social media platforms; and the Honest Ads Act, which would require online political advertisements to adhere to the same disclaimer requirements as TV, radio, and print ads.
A copy of the letter can be found here and below.
Dear President Biden,
I write to applaud the Administration's significant efforts to secure voluntary commitments from leading AI vendors related to promoting greater security, safety, and trust through improved development practices. These commitments, largely applicable to these vendors' most advanced products, can materially reduce a range of security and safety risks identified by researchers and developers in recent years. In April, I wrote to a number of these same companies, urging them to prioritize security and safety in their development, product release, and post-deployment practices. Among other things, I asked them to fully map dependencies and downstream implications of compromise of their systems; focus greater financial, technical, and personnel resources on internal security; and improve their transparency practices through greater documentation of system capabilities, system limitations, and training data.
These commitments have the potential to shape developer norms and best practices associated with leading-edge AI models. At the same time, even less capable models are susceptible to misuse, security compromise, and proliferation risks. Moreover, a growing roster of highly-capable open source models has been released to the public and would benefit from similar pre-deployment commitments contained in a number of the July 21st obligations. As the current commitments stand, leading vendors do not appear inclined to extend these vital development commitments to the wider range of AI products they have released that fall below this threshold or have been released as open source models.
To be sure, responsibility ultimately lies with Congress to develop laws that advance consumer and patient safety, address national security and cyber-crime risks, and promote secure development practices in this burgeoning and highly consequential industry and in the downstream industries integrating their products. In the interim, the commitments your Administration has secured can be bolstered in a number of important ways.
First, I strongly encourage your Administration to continue engagement with this industry to extend all of these commitments more broadly to less capable models that, in part through their wider adoption, can produce the most frequent examples of misuse and compromise.
Second, it is vital to build on these developer- and researcher-facing commitments with a suite of lightweight consumer-facing commitments to prevent the most serious forms of abuse. Most prominent among these should be commitments from leading vendors to adopt development practices, licensing terms, and post-deployment monitoring practices that prevent non-consensual intimate image generation, social-scoring, real-time facial recognition (in contexts not governed by existing legal protections or due process safeguards), and proliferation activity in the context of malicious cyber activity or the production of biological or chemical agents.
Lastly, the Administration's successful high-level engagement with the leadership of these companies must be complemented by a deeper engagement strategy to track national security risks associated with these technologies. In June, the Senate Select Committee on Intelligence on a bipartisan basis advanced our annual Intelligence Authorization Act, a provision of which directed the President to establish a strategy to better engage vendors, downstream commercial users, and independent researchers on the security risks posed by, or directed at, AI systems.
This provision was spurred by conversations with leading vendors, who confided that they would not know how best to report malicious activity such as suspected intrusions of their internal networks, observed efforts by foreign actors to generate or refine malware using their tools, or identified activity by foreign malign actors to generate content to mislead or intimidate voters. To be sure, a highly-capable and well-established set of resources, processes, and organizations, including the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Office of the Director of National Intelligence's Foreign Malign Influence Center, exists to engage these communities, including through counter-intelligence education and defensive briefings. Nonetheless, it appears that these entities have not been fully activated to engage the range of key stakeholders in this space. For this reason, I would encourage you to pursue the contours of the strategy outlined in our pending bill.
Thank you for your Administration's important leadership in this area. I look forward to working with you to develop bipartisan legislation in this area.
###