Chamber Response to the UK Consultation on AI Regulation
June 20, 2023
Response to the UK Consultation - AI regulation: a pro-innovation approach policy proposals
The U.S. Chamber of Commerce (Chamber) is the world's largest business federation, representing the interests of more than three million enterprises of all sizes and sectors. The Chamber is a longtime advocate for strong commercial ties between the United States and the United Kingdom. Indeed, the Chamber established the U.S.-UK Business Council in 2016 to help U.S. firms navigate the challenges and opportunities arising from the UK's departure from the European Union. With over 40 U.S. and UK firms as active members, the U.S.-UK Business Council is the premier Washington-based advocacy organization dedicated to strengthening the commercial relationship between the U.S. and the UK.
U.S. and UK companies have together invested over $1.5 trillion in each other's economies, directly creating over 2.75 million British and American jobs. We are each other's strongest allies and single largest foreign investors, and the U.S. is the UK's largest trading partner.
The Chamber is also a leading business voice on digital economy policy, including on issues of data privacy, cross-border data flows, cybersecurity, digital trade, artificial intelligence, and e-commerce. In the U.S. and globally, we support sound policy frameworks that promote data protection, support economic growth, and foster innovation.
The Chamber welcomes the opportunity to provide His Majesty's Government (HMG) with comments on its White Paper on implementing a pro-innovation approach to AI regulation. The Chamber commends the UK government's commitment to advancing a sound AI policy framework that supports economic growth, promotes consumer protection, and fosters innovation. We welcome further opportunities to discuss this input with colleagues from the Department for Science, Innovation and Technology, the Office for Artificial Intelligence, and other UK government agencies, including the British Embassy in Washington, as this strategy is implemented.
Additionally, we commend the Prime Minister's plan to host the inaugural Global Summit on AI Safety in the United Kingdom this year. We believe the Summit will serve as a platform to bring together key government representatives, academics, and leading technology companies to facilitate targeted and swift international action, focused on safety, security, and the vast opportunity at the forefront of AI technology.
AI is an innovative and transformational technology. The Chamber has long advocated for AI as a positive force, capable of addressing major societal challenges and spurring economic expansion for the benefit of consumers, businesses, and society. We promote rules-based and competitive trade, and alignment around emerging technologies, including through standards promoting the responsible use of AI.
Our member companies already provide many examples of how AI technologies have positively impacted various industries. For instance, AI-powered predictive maintenance systems have revolutionized manufacturing by reducing downtime, optimizing equipment performance, and improving productivity, leading to tangible economic results. AI algorithms in healthcare have enhanced diagnostic accuracy, enabling faster and more effective treatments that improve patient outcomes and save lives.
The Chamber has encouraged policymakers in multiple jurisdictions to refrain from instituting overly prescriptive regulations or regulations that do not account for the novel qualities of AI technologies. One potential harm is stifled innovation: if regulations are too restrictive or prescriptive, they may impede the development and deployment of new AI technologies, hindering the ability of businesses to explore novel use cases, create disruptive solutions, and drive technological advancements.
Overly prescriptive regulations can also reduce flexibility. AI technologies are rapidly evolving, and regulatory frameworks need to be adaptable to keep pace with these advancements. If regulations are rigid and fail to account for the dynamic nature of AI, they can limit the ability of businesses to adapt and iterate their AI systems as new technologies and methodologies emerge. Further, if regulations in the UK are inconsistent, fragmented, or more burdensome than those in the EU, businesses could face a competitive disadvantage when operating in the UK. This can divert AI investment and talent to more favorable regulatory environments, harming the UK's competitiveness.
Aligned and globally recognized regulatory frameworks can help promote competition and foster global cooperation. Additionally, regulations that fail to consider the unique qualities of AI technologies may not effectively address the risks associated with AI systems. One-size-fits-all regulations might not adequately account for the diverse range of AI applications, their varying levels of risk, or the roles of different actors in the AI lifecycle. This can result in either overregulation that stifles low-risk applications or under-regulation that fails to adequately mitigate risks in high-risk areas.
Excessive regulatory requirements can also impose substantial compliance costs on businesses, especially smaller enterprises that may lack the resources to navigate complex regulatory frameworks. If compliance becomes too burdensome in the UK, it could reduce the adoption of AI technologies, particularly for UK SMEs, hindering their ability to compete in the global market and reap the potential benefits of AI.
The better alternative is to develop targeted rules that can effectively address the tradeoffs associated with various AI use cases and the roles of different actors in the AI development lifecycle. These rules should be proportionate and based on risk assessment, technology-neutral, and technically feasible. These approaches not only increase safety and build trust, but also allow for the flexibility and innovation necessary for a rapidly evolving technology such as AI. Controls to reduce the risk of AI harm should focus on areas such as unintended bias mitigation, model monitoring, fairness, and transparency. As the UK proceeds with establishing an AI governance regime, we ask that you keep in mind the following broad principles:
Develop Risk-Based Approaches to Governing AI
Governments should incorporate risk-based approaches rather than prescriptive requirements into frameworks governing the development, deployment, and use of AI. It is simply not feasible to establish a uniform set of rules that can adequately address the distinctive features of each industry utilizing AI and its effect on individuals. Indeed, we recognize that AI use cases that involve a high risk should face a higher degree of scrutiny than use cases where the risk of concrete harm to individuals is low. New regulations should be risk-based and proportionate, with a focus on high-risk use cases rather than on entire sectors or technologies. Additionally, any risk assessment should account for the significant social, safety, and economic benefits that may accrue when an AI application replaces a human action.
It is crucial to remember that high-risk sectors such as autonomous vehicles and healthcare diagnostics are already subject to extensive regulation by established bodies such as the UK Department for Transport (DfT) and the Medicines and Healthcare products Regulatory Agency (MHRA). While the integration of AI technologies within these sectors can introduce new dimensions of complexity and potential risk, it is equally important to recognize that any AI-specific regulations that are needed should complement and align with existing sector-specific regulations, rather than duplicate efforts or create conflicting requirements, which can itself increase risk.
Coordination between regulatory bodies is vital to ensuring that AI technologies are adequately governed to consider the unique challenges they present while avoiding unnecessary regulatory burdens. By leveraging the expertise and insights of established regulatory agencies like the DfT and MHRA, UK AI-specific regulations can build upon existing frameworks and address the novel aspects and risks associated with AI applications within highly regulated sectors.
Support Private and Public Investment in AI Research & Development (R&D)
Investment in R&D is essential to AI innovation. Governments should encourage and incentivize this investment by partnering with businesses at the forefront of AI, promoting flexible governance frameworks such as regulatory sandboxes, utilizing testbeds, and funding both basic R&D and that which spurs innovation in trustworthy AI. Policymakers should recognize that advancements in AI R&D happen within a global ecosystem where government, the private sector, universities, and other institutions collaborate across borders.
Abide by Internationally Recognized Standards
Industry-led, consensus-based standards are essential to digital innovation. Policymakers should support their development in recognized international standards bodies and consortia. Governments should also leverage industry-led standards, certification, and validation regimes on a voluntary basis whenever possible to facilitate the adoption of AI technologies. Global standards developed in collaboration with the business community that are voluntary, open, transparent, globally recognized, consensus-based, and technology-neutral are the best way to promote common approaches that are technically sound and aligned with policy objectives.
Embrace International Regulatory Cooperation
Regulators can advance multilateral cooperation on AI governance by strengthening mechanisms for global coordination on AI transparency. This includes promoting interoperable approaches to AI governance to enable best practices and minimize the risk of unnecessary regulatory divergences and trade restrictive practices emerging in the digital economy. Additionally, endorsing transparent, multi-stakeholder approaches to AI governance is essential, including in the development of voluntary standards, frameworks, and codes of practice that can bridge the gap between AI principles and its implementation. Multi-stakeholder initiatives have the greatest potential to identify gaps in AI outcomes and capabilities, and to mobilize AI actors to address them.
There are examples that the UK can turn to in this context. The approaches being taken in the United States via the National Institute of Standards and Technology (NIST) and its Artificial Intelligence Risk Management Framework (AI RMF), as well as in Singapore and Japan, incorporate many of these characteristics. NIST and the AI RMF emphasize a risk-based approach to AI governance, recognizing the importance of proportionate regulations that account for different use cases and actors in the AI lifecycle. NIST's framework promotes safety, transparency, and accountability while fostering innovation, making it a suitable model for the UK's AI governance approach.
Singapore's Model AI Governance Framework and Japan's AI governance model offer valuable insights into effective AI governance practices. These frameworks also share common characteristics with the Chamber's proposed principles, such as stakeholder engagement, collaboration among government, industry, and academia, and the promotion of responsible and trustworthy AI. They demonstrate a commitment to balancing the benefits of AI innovation while ensuring safety and the well-being of individuals and society. The UK can draw inspiration from these models to develop a robust AI governance regime that aligns with international best practices and addresses the unique challenges posed by AI technologies.
To further enhance international regulatory cooperation, HMG could consider several measures to promote collaboration. One is the establishment of global frameworks that facilitate the harmonization of AI policies across borders. Governments could also create platforms for information sharing and best-practice exchange, enabling regulators to learn from one another's experiences and leverage collective knowledge. Additionally, joint research initiatives, for example between the U.S. and the UK, could foster collaboration among countries, academia, and industry to address common challenges and advance the understanding of AI's impacts. These collaborative efforts would promote consistent and effective regulation, prevent unnecessary regulatory divergences, and create a global ecosystem that encourages responsible AI development and deployment.
Accelerated Cooperation on AI
The Chamber and our members recognize that AI has the power to significantly transform societies and economies. To that end, we share a commitment to government action that unlocks the vast opportunities and addresses the potential risks arising from the rapid advancement of AI technologies. We emphasize the importance of engaging with companies, research institutions, civil society, and our allies and partners to ensure a well-rounded perspective. Our collective aim is to accelerate collaboration on AI, prioritizing the safe and responsible development of this technology.
Ethical Principles
In light of the increasing significance of ethical considerations in AI development and deployment, the Chamber believes it is imperative to address the importance of ethical principles in the context of AI governance. This should encompass essential aspects such as fairness, transparency, accountability, and the responsible use of AI. By incorporating these principles into regulatory frameworks, governments like the UK can promote public trust, minimize the potential for biases or discriminatory outcomes, and ensure that AI technologies are developed and deployed in a manner that aligns with societal values and norms. Emphasizing ethics in AI governance will help foster responsible innovation, mitigate risks, and ensure that the benefits of AI are distributed equitably across the UK population.
Non-Market Economies
Collaboration between the UK and U.S. on AI frameworks is paramount to counter the efforts of non-market economies, particularly China, to dominate the AI landscape. By aligning our approaches and sharing best practices, the UK and the U.S. can leverage each other's expertise, innovation ecosystems, and regulatory frameworks to ensure a competitive and ethical AI environment. Strengthening transatlantic cooperation not only enhances the global influence of market-based economies, but also establishes a unified front in advocating for responsible AI governance that upholds democratic values, safeguards privacy and data protection, and promotes fair competition. Together, the UK and the U.S. can shape a global AI landscape that prioritizes innovation, transparency, and the well-being of individuals and societies, countering the influence of non-market economies and fostering an ecosystem that drives global AI advancement.
In conclusion, as the UK strives to be a policy leader in AI governance, it possesses a unique opportunity to inspire and encourage other nations to adopt these broad-based approaches. By championing risk-based frameworks, promoting private and public investment in AI research and development, embracing internationally recognized standards, fostering international regulatory cooperation, and accelerating collaboration on AI, the UK can set a powerful example for responsible and innovative AI governance. Through its leadership, particularly with the global AI summit in London this fall, the UK can help shape a global landscape that fosters trust, supports economic growth, and harnesses the transformative potential of AI for the betterment of societies worldwide.
Contact
Abel Torres
Executive Director, Center for Global Regulatory Cooperation
Zach Helzer
Senior Director, Europe & U.S.-UK Business Council