June 20, 2023
Response to the UK Consultation: "AI regulation: a pro-innovation approach" policy proposals
The U.S. Chamber of Commerce (Chamber) is the world's largest business federation, representing the interests of more than three million enterprises of all sizes and sectors. The Chamber is a longtime advocate for strong commercial ties between the United States and the United Kingdom. Indeed, the Chamber established the U.S.-UK Business Council in 2016 to help U.S. firms navigate the challenges and opportunities from the UK's departure from the European Union. With over 40 U.S. and UK firms as active members, the U.S.-UK Business Council is the premier Washington-based advocacy organization dedicated to strengthening the commercial relationship between the U.S. and the UK.
U.S. and UK companies have together invested over $1.5 trillion in each other's economies, directly creating over 2.75 million British and American jobs. We are each other's strongest allies and single largest foreign investors, and the U.S. is the UK's largest trading partner.
The Chamber is also a leading business voice on digital economy policy, including on issues of data privacy, cross-border data flows, cybersecurity, digital trade, artificial intelligence, and e-commerce. In the U.S. and globally, we support sound policy frameworks that promote data protection, support economic growth, and foster innovation.
The Chamber welcomes the opportunity to provide His Majesty's Government (HMG) with comments on its White Paper on implementing a pro-innovation approach to AI regulation. The Chamber commends the UK government's commitment to advancing a sound AI policy framework that supports economic growth, promotes consumer protection, and fosters innovation. We welcome further opportunities to discuss this input with colleagues from the Department for Science, Innovation and Technology, the Office for Artificial Intelligence, and other UK government agencies, including the British Embassy in Washington, as this strategy is implemented.
Additionally, we commend the Prime Minister's plan to host the inaugural Global Summit on AI Safety in the United Kingdom this year. We believe the Summit will serve as a platform to bring together key government representatives, academics, and leading technology companies to facilitate targeted and swift international action, focused on safety, security, and the vast opportunity at the forefront of AI technology.
AI is an innovative and transformational technology. The Chamber has long advocated for AI as a positive force, capable of addressing major societal challenges and spurring economic expansion for the benefit of consumers, businesses, and society. We promote rules-based and competitive trade and alignment around emerging technologies, including through standards promoting the responsible use of AI.
Our member companies already provide many examples of how AI technologies have positively impacted various industries. For instance, AI-powered predictive maintenance systems have revolutionized manufacturing by reducing downtime, optimizing equipment performance, and improving productivity, leading to tangible economic results. In healthcare, AI algorithms have enhanced diagnostic accuracy, leading to faster and more effective treatment that improves patient outcomes and saves lives.
The Chamber has encouraged policymakers in multiple jurisdictions to refrain from instituting overly prescriptive regulations or regulations that do not account for the novel qualities of AI technologies. One potential consequence of such regulation is stifled innovation: if rules are too restrictive or prescriptive, they may impede the development and deployment of new AI technologies, hindering the ability of businesses to explore novel use cases, create disruptive solutions, and drive technological advancement.
Overly prescriptive regulations can also reduce flexibility. AI technologies are rapidly evolving, and regulatory frameworks need to be adaptable to keep pace with these advancements. If regulations are rigid and fail to account for the dynamic nature of AI, they can limit the ability of businesses to adapt and iterate their AI systems as new technologies and methodologies emerge. Further, rules that are inconsistent, fragmented, or more burdensome in the UK than in the EU could place businesses operating in the UK at a competitive disadvantage, diverting AI investment and talent to more favorable regulatory environments and undermining the UK's competitiveness.
Aligned and globally recognized regulatory frameworks can help promote competition and foster global cooperation. Additionally, regulations that fail to consider the unique qualities of AI technologies may not effectively address the risks associated with AI systems. One-size-fits-all regulations might not adequately account for the diverse range of AI applications, their varying levels of risk, or the roles of different actors in the AI lifecycle. This can result in either overregulation that stifles low-risk applications or under-regulation that fails to adequately mitigate risks in high-risk areas.
Excessive regulatory requirements can also impose substantial compliance costs on businesses, especially smaller enterprises that may lack the resources to navigate complex regulatory frameworks. If compliance becomes too burdensome in the UK, it could reduce the adoption of AI technologies, particularly for UK SMEs, hindering their ability to compete in the global market and reap the potential benefits of AI.
The better alternative is to develop targeted rules that can effectively address the tradeoffs associated with various AI use cases and the roles of different actors in the AI development lifecycle. These rules should be proportionate, risk-based, technology-neutral, and technically feasible. These approaches not only increase safety and build trust but also allow for the flexibility and innovation that a rapidly evolving technology like AI requires. Controls to reduce the risk of AI harm should focus on areas such as unintended bias mitigation, model monitoring, fairness, and transparency. As the UK proceeds with establishing an AI governance regime, we ask that you keep in mind the following broad principles:
Develop Risk-Based Approaches to Governing AI
Governments should incorporate risk-based approaches rather than prescriptive requirements into frameworks governing the development, deployment, and use of AI. It is simply not feasible to establish a uniform set of rules that can adequately address the distinctive features of each industry utilizing AI and its effect on individuals. Indeed, we recognize that AI use cases that involve high risk should face a higher degree of scrutiny than use cases where the risk of concrete harm to individuals is low. New regulations should be risk-based and proportionate, with a focus on high-risk use cases rather than on entire sectors or technologies. Additionally, any risk assessment should account for the significant social, safety, and economic benefits that may accrue when an AI application replaces a human action.
It is important to remember that high-risk sectors such as autonomous vehicles and healthcare diagnostics are already subject to extensive regulation by established bodies such as the UK Department for Transport (DfT) and the Medicines and Healthcare Products Regulatory Agency (MHRA). While the integration of AI technologies within these sectors can introduce new dimensions of complexity and potential risk, any AI-specific regulations that are needed should complement and align with existing sector-specific regulations rather than duplicating efforts or creating conflicting requirements, which can itself increase risk.
Coordination between regulatory bodies is vital to ensuring that AI technologies are governed in a way that accounts for the unique challenges they present while avoiding unnecessary regulatory burdens. By leveraging the expertise and insights of established regulatory agencies like the DfT and MHRA, UK AI-specific regulations can build upon existing frameworks and address the novel aspects and risks associated with AI applications within highly regulated sectors.
Support Private and Public Investment in AI Research & Development (R&D)
Investment in R&D is essential to AI innovation. Governments should encourage and incentivize this investment by partnering with businesses at the forefront of AI, promoting flexible governance frameworks such as regulatory sandboxes, utilizing testbeds, and funding both basic R&D and that which spurs innovation in trustworthy AI. Policymakers should recognize that advancements in AI R&D happen within a global ecosystem where government, the private sector, universities, and other institutions collaborate across borders.
Abide by Internationally Recognized Standards
Industry-led, consensus-based standards are essential to digital innovation. Policymakers should support their development in recognized international standards bodies and consortia. Governments should also leverage industry-led standards, certification, and validation regimes on a voluntary basis whenever possible to facilitate the adoption of AI technologies. Global standards developed in collaboration with the business community that are voluntary, open, transparent, globally recognized, consensus-based, and technology-neutral are the best way to promote common approaches that are technically sound and aligned with policy objectives.
Embrace International Regulatory Cooperation
Regulators can advance multilateral cooperation on AI governance by strengthening mechanisms for global coordination on AI transparency. This includes promoting interoperable approaches to AI governance to enable best practices and minimize the risk of unnecessary regulatory divergences and trade-restrictive practices emerging in the digital economy. Additionally, endorsing transparent, multi-stakeholder approaches to AI governance is essential, including in the development of voluntary standards, frameworks, and codes of practice that can bridge the gap between AI principles and their implementation. Multi-stakeholder initiatives have the greatest potential to identify gaps in AI outcomes and capabilities, and to mobilize AI actors to address them.
There are examples that the UK can turn to in this context. The approaches being taken in the United States via the National Institute of Standards and Technology (NIST) and its Artificial Intelligence Risk Management Framework (AI RMF), as well as in Singapore and Japan, incorporate many of these characteristics. NIST and the AI RMF emphasize a risk-based approach to AI governance, recognizing the importance of proportionate regulations that account for different use cases and actors in the AI lifecycle. NIST's framework promotes safety, transparency, and accountability while fostering innovation, making it a suitable model for the UK's AI governance approach.
Singapore's Model AI Governance Framework and Japan's AI governance model offer valuable insights into effective AI governance practices. These frameworks also share common characteristics with the Chamber's proposed principles, such as stakeholder engagement, collaboration among government, industry, and academia, and the promotion of responsible and trustworthy AI. They demonstrate a commitment to balancing the benefits of AI innovation with safety and the well-being of individuals and society. The UK can draw inspiration from these models to develop a robust AI governance regime that aligns with international best practices and addresses the unique challenges posed by AI technologies.
To further enhance international regulatory cooperation, HMG could consider several measures. One is the establishment of global frameworks that facilitate the harmonization of AI policies across borders. Governments could also create platforms for information sharing and the exchange of best practices, enabling regulators to learn from one another's experiences and leverage collective knowledge. Additionally, joint research initiatives, for example between the U.S. and the UK, could foster collaboration among countries, academia, and industry to address common challenges and advance the understanding of AI's impacts. These collaborative efforts would promote consistent and effective regulation, prevent unnecessary regulatory divergences, and create a global ecosystem that encourages responsible AI development and deployment.
Accelerated Cooperation on AI
The Chamber and our members recognize that AI has the power to significantly transform societies and economies. To that end, we share a commitment to government action that unlocks the vast opportunities and addresses the potential risks arising from the rapid advancement of AI technologies. We emphasize the importance of engaging with companies, research institutions, civil society, and our allies and partners to ensure a well-rounded perspective. Our collective aim is to accelerate collaboration on AI, prioritizing the safe and responsible development of this technology.
Ethical Principles
In light of the increasing significance of ethical considerations in AI development and deployment, the Chamber believes it is imperative to address the importance of ethical principles in the context of AI governance. This should encompass essential aspects such as fairness, transparency, accountability, and the responsible use of AI. By incorporating these principles into regulatory frameworks, governments like the UK can promote public trust, minimize the potential for biases or discriminatory outcomes, and ensure that AI technologies are developed and deployed in a manner that aligns with societal values and norms. Emphasizing ethics in AI governance will help foster responsible innovation, mitigate risks, and ensure that the benefits of AI are distributed equitably across the UK population.
Non-Market Economies
Collaboration between the UK and U.S. on AI frameworks is paramount to counter the efforts of non-market economies, particularly China, to dominate the AI landscape. By aligning our approaches and sharing best practices, the UK and the U.S. can leverage each other's expertise, innovation ecosystems, and regulatory frameworks to ensure a competitive and ethical AI environment. Strengthening transatlantic cooperation not only enhances the global influence of market-based economies, but also establishes a unified front in advocating for responsible AI governance that upholds democratic values, safeguards privacy and data protection, and promotes fair competition. Together, the UK and the U.S. can shape a global AI landscape that prioritizes innovation, transparency, and the well-being of individuals and societies, countering the influence of non-market economies and fostering an ecosystem that drives global AI advancement.
In conclusion, as the UK strives to be a policy leader in AI governance, it possesses a unique opportunity to inspire and encourage other nations to adopt these broad-based approaches. By championing risk-based frameworks, promoting private and public investment in AI research and development, embracing internationally recognized standards, fostering international regulatory cooperation, and accelerating collaboration on AI, the UK can set a powerful example for responsible and innovative AI governance. Through its leadership, particularly with the global AI summit in London this fall, the UK can help shape a global landscape that fosters trust, supports economic growth, and harnesses the transformative potential of AI for the betterment of societies worldwide.
Contact
Abel Torres
Executive Director, Center for Global Regulatory Cooperation
ATorres@uschamber.com
Zach Helzer
Senior Director, Europe & U.S.-UK Business Council
ZHelzer@uschamber.com