Archive for the ‘Artificial General Intelligence’ Category

Oversight of AI: Rules for Artificial Intelligence and Artificial … – Gibson Dunn

June 6, 2023


Gibson Dunn's Public Policy Practice Group is closely monitoring the debate in Congress over potential oversight of artificial intelligence (AI). We offer this alert summarizing and analyzing the U.S. Senate hearings on May 16, 2023, to help our clients prepare for potential legislation regulating the use of AI. For further discussion of the major federal legislative efforts and White House initiatives regarding AI, see our May 19, 2023 alert, "Federal Policymakers' Recent Actions Seek to Regulate AI."

* * *

On May 16, 2023, both the Senate Judiciary Committee's Subcommittee on Privacy, Technology, and the Law and the Senate Homeland Security and Governmental Affairs Committee held hearings to discuss issues involving AI. The hearings highlighted the potential benefits of AI, while acknowledging the need for transparency and accountability to address ethical concerns, protect constitutional rights, and prevent the spread of disinformation. Senators and witnesses acknowledged that AI presents a profound opportunity for American innovation, but warned that it must be adopted with caution and regulated by the federal government given the potential risks. A general consensus existed among the senators and witnesses that AI should be regulated, but the approaches to, and extent of, that regulation varied.

Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law Hearing: Oversight of AI: Rules for Artificial Intelligence

On May 16, 2023, the U.S. Senate Committee on the Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing titled "Oversight of AI: Rules for Artificial Intelligence."[1] Chair Richard Blumenthal (D-CT) emphasized that his subcommittee was holding the first in a series of hearings aimed at considering whether and to what extent Congress should regulate rapidly advancing AI technology, including generative algorithms and large language models (LLMs).

The hearing focused on potential new regulations, such as creating a dedicated agency or commission and a licensing scheme; the extent to which existing legal frameworks apply to AI; and the alleged harms prompting regulation, such as intellectual property and privacy rights infringements, job displacement, bias, and election interference.

Witnesses included Samuel Altman, CEO of OpenAI; Christina Montgomery, Chief Privacy and Trust Officer at IBM; and Gary Marcus, Professor Emeritus at New York University.

I. AI Oversight Hearing Points of Particular Interest

We provide a full hearing summary and analysis below. Of particular note, however:

II. Key Substantive Issues

Key substantive issues raised in the hearing included: (a) a potential AI federal agency and licensing scheme, (b) the applicability of existing frameworks for responsibility and liability, and (c) alleged harms and rights infringements.

a. AI Federal Agency and Licensing Scheme

The hearing focused on whether and to what extent the U.S. should regulate AI. As emphasized throughout the hearing, the impetus for regulation is the speed with which the technology is developing and dispersing into society, coupled with senatorial regret over past failures to regulate emerging technology. Chair Blumenthal explained that "Congress has a choice now. We have the same choice when we face social media. We failed to seize that moment. The result is predators on the Internet, toxic content, exploiting children, creating dangers for them."

Senators discussed a potential dedicated federal agency or commission for regulating AI technology. Senator Peter Welch (D-VT) has "come to the conclusion that we absolutely have to have an agency." Senator Lindsey Graham (R-SC) stated that Congress "need[s] to empower an agency that issues a license and can take it away." Senator Cory Booker (D-NJ) likened the need for an AI-centered agency to the need for an automobile-centered agency that resulted in the creation of the National Highway Traffic Safety Administration and the Federal Motor Carrier Safety Administration. Mr. Altman similarly would form a new agency that licenses "any effort above a certain scale of capabilities, and can take that license away and ensure compliance with safety standards." Senator Chris Coons (D-DE) was concerned with how to decide whether a particular AI model was safe enough to deploy to the public. Mr. Altman suggested iterative deployment to find the limitations and benefits of the technology, including giving the public "time to come to grips with this technology to understand it ...."

In Ms. Montgomery's view, a "precision approach" to regulating AI strikes the right balance between encouraging and permitting innovation while addressing the potential risks of the technology. Mr. Altman would create a set of safety standards focused on "the dangerous capability evaluations," such as if a model can self-replicate and "self-exfiltrate into the wild." Potential challenges facing a new federal agency include funding and regulatory capture on the government side, and regulatory burden on the industry side.

Senator John Kennedy (R-LA) asked the witnesses what "two or three reforms, regulations," if any, they would implement.

Transparency was a key repeated value that will play a role in any future oversight efforts. In his prepared testimony, Professor Marcus noted that "[c]urrent systems are not transparent. They do not adequately protect our privacy, and they continue to perpetuate bias." He also explained that governmental oversight must actively include independent scientists to assess AI through access to the methods and data used.

b. Applicability of Existing Frameworks for Responsibility and Liability

Senators wanted to learn who is responsible or liable for the alleged harms of AI under existing laws and regulations. For example, Senators Durbin and Graham both raised questions about the application of 47 U.S.C. § 230, originally part of the Communications Decency Act, which creates a liability safe harbor for companies hosting user-created content under certain circumstances. Section 230 was at issue in two United States Supreme Court cases this term, Twitter v. Taamneh and Gonzalez v. Google, both of which were decided two days after the hearing.[2] The Supreme Court declined to hold either Twitter or Google liable for the effects of violent content posted on their platforms. However, Justice Ketanji Brown Jackson filed a concurring opinion in Taamneh, which left open the possibility of holding tech companies liable in the future.[3] The Subcommittee on Privacy, Technology, and the Law held a hearing in March, following oral arguments in Taamneh and Gonzalez, suggesting the committee's interest in regulating technology companies could go beyond existing frameworks.[4] Mr. Altman noted he believes that Section 230 is the wrong structure for AI, but Senator Graham wanted to find out "how [AI] is different than social media . . . ." Given Mr. Altman's position that Section 230 did not apply to the tool OpenAI has created, Senator Graham wanted to know whether he could sue OpenAI if harmed by it. Mr. Altman said that question was beyond his area of expertise.

c. Alleged Harms and Rights Infringement

The hearing emphasized the potential risks and alleged harms of AI. During the hearing, Senator Welch stated that AI has risks that relate to "fundamental privacy rights, bias rights, intellectual property, dissent, [and] the spread of disinformation." For Senator Welch, disinformation is "in many ways ... the biggest threat because that goes to the core of our capacity for self-governing." Senator Mazie Hirono (D-HI) noted that measures can be built into the technology to minimize harmful results. Specifically, Senator Hirono asked about the ability to refuse harmful requests and how to define harmful requests, representing potential issues that legislators will have to grapple with while trying to regulate AI.

Senators focused on five key areas during the hearing: (i) elections, (ii) intellectual property, (iii) privacy, (iv) job markets, and (v) competition.

i. Elections

A number of senators shared the concern that AI can potentially be used to influence or impact elections. The alleged influence and impact, they noted, can be explicit or unseen. For explicit or direct election influence, Senator Amy Klobuchar (D-MN) asked what should be done about the possibility of AI tools directing voters to incorrect polling locations. Mr. Altman suggested that voters would understand that AI is just a tool that requires external verification.

During the hearing, Professor Marcus noted that AI can exert unseen influence over individual behavior based on data choices and algorithmic methods, but that these data choices and algorithmic methods are neither transparent to the public nor accessible to independent researchers under current systems. Senator Hawley questioned Mr. Altman about AI's ability to accurately predict public opinion surveys. Specifically, Senator Hawley suggested that companies may be able to "fine tune strategies to elicit certain responses, certain behavioral responses" and that there could be an effort to influence undecided voters.

Ms. Montgomery stated that elections are an area that requires transparent AI. Specifically, she advocated for "[a]ny algorithm used in [the election] context" to be required to have "disclosure around the data being used, the performance of the model, anything along those lines is really important." This will likely be a key area of oversight moving into the 2024 elections.

ii. Intellectual Property

Several Senators voiced concerns that training AI systems could infringe intellectual property rights. Senator Marsha Blackburn (R-TN), for example, queried whether artists whose creations are used to train algorithms are or will be compensated for the use of their work. Mr. Altman stated that OpenAI is "working with artists now, visual artists, musicians, to figure out what people want," but that "[t]here's a lot of different opinions, unfortunately," suggesting some cooperative industry efforts have been met with difficulty. Senator Klobuchar asked about the impact AI could have on local news organizations, raising concerns that certain AI tools use local news content without compensation, which could exacerbate existing challenges local news organizations face. Chair Blumenthal noted that one of the hearings in this AI series will focus on intellectual property.

iii. Privacy

Several senators raised the potential privacy risks that could result from the deployment of AI. Senator Blackburn asked what Mr. Altman's policy is for ensuring OpenAI is "protecting that individual's right to privacy and their right to secure that data . . . ." Chair Blumenthal also asked what specific steps OpenAI is taking to protect privacy. Mr. Altman explained that users can opt out of OpenAI using their data for training purposes and delete conversation histories. At IBM, Ms. Montgomery explained, the company even "filter[s] [its] large language models for content that includes personal information that may have been pulled from public datasets" as well. Senator Jon Ossoff (D-GA) addressed child privacy, advising Mr. Altman to get way ahead of "this issue, the safety for children of your product," or "I think you're going to find that Senator Blumenthal, Senator Hawley, others on the Subcommittee and I will look very harshly on the deployment of technology that harms children."

iv. Job Market

Chair Blumenthal raised AI's potential impact on the job market and economy. Mr. Altman admitted that "like with all technological revolutions, I expect there to be significant impact on jobs." Ms. Montgomery noted the potential for new job opportunities and the importance of training the workforce for the technological jobs of the future.

v. Competition

Senator Booker expressed concern over how "few companies now control and affect the lives of so many of us. And these companies are getting bigger and more powerful." Mr. Altman added that an effort is needed to align AI systems with societal values. Chair Blumenthal noted that the hearing had barely touched on the competition concerns related to AI, specifically "the monopolization danger, the dominance of markets that excludes new competition, and thereby inhibits or prevents innovation and invention." The Chair suggested that a further discussion on antitrust issues might be needed.

Senate Homeland Security and Governmental Affairs Committee Hearing: Artificial Intelligence in Government

On the same day, the U.S. Senate Homeland Security and Governmental Affairs Committee (HSGAC) held a hearing to explore the opportunities and challenges associated with the federal government's use of AI.[5] The hearing was the second in a series of hearings that committee Chair Gary Peters (D-MI) plans to convene to address how lawmakers can support the development of AI. The first hearing, held on March 8, 2023, focused on the transformative potential of AI, as well as the potential risks.[6]

Witnesses referenced below include Richard Eppink, Daniel Ho, Lynne Parker, and Jacob Siegel.

We provide a full hearing summary and analysis below. Of particular note, however:

I. Potential Harms

Several senators and witnesses expressed concerns about the potential harms posed by government use of AI, including suppression of speech, bias and discrimination, data privacy and security breaches, and job displacement.

a. Suppression of Speech

In his opening statement and throughout the hearing, Ranking Member Paul expressed concern about the federal government using AI to monitor, surveil, and censor speech under the guise of combating misinformation. He warned that AI will make it easier for the government to invisibly control the narrative, eliminate dissent, and retain power. Senator Rick Scott (R-FL) echoed those concerns, and Mr. Siegel stated that the risk of the government using AI to suppress speech cannot be overstated. He cautioned against emulating the Chinese model of "top down party driven social control" when regulating AI, which would mean "the end of our tradition of self-government and the American way of life."

b. Bias and Discrimination

Senators and witnesses also expressed concerns about the potential for biases in AI applications causing violations of due process and equal protection rights. For example, there was a discussion about apparent flaws identified in an AI algorithm used by the IRS, which resulted in Black taxpayers being audited at five times the rate of other races, and the use of AI-driven systems at the state level to determine eligibility for disability benefits, resulting in thousands of recipients being wrongfully denied critical assistance. Richard Eppink testified about his involvement in a class action lawsuit brought by the ACLU representing individuals with developmental and intellectual disabilities who were denied funds by Idaho's Medicaid program because of a flaw in the state's AI-based system. Mr. Eppink explained that the people who were denied disability benefits were unable to challenge the decisions because they did not have access to the proprietary system used to determine their eligibility. He advocated for increased transparency into any AI systems used by the government, but cautioned that even if an AI-based system functions properly, the underlying data may be corrupted by "years and years of discrimination" and other effects that have "bias[ed] the data in the first place." Senators expressed particular concerns about law enforcement's use of predictive modeling to justify forms of surveillance.

c. Data Privacy and Cybersecurity

Hearing testimony highlighted concerns about the collection, use, and protection of data by AI applications, and the gaps in existing privacy laws. Senator Ossoff stated that AI tools themselves are vulnerable to data breaches and could be used to penetrate government systems. Daniel Ho highlighted the scale of the problem, noting that by one estimate the federal government needs to hire about 40,000 IT workers to address cybersecurity issues posed by AI. Given the enormous amounts of data that can be collected using AI and the patchwork system of privacy legislation currently in place, Mr. Ho said a data strategy like the National Secure Data Service Act is needed. Senators signaled bipartisan support for national privacy legislation.

d. Job Displacement

Senators in the HSGAC hearing echoed the concerns expressed in the Senate Judiciary Committee Subcommittee hearing regarding the potential for AI-driven automation to cause job displacement. Senator Maggie Hassan (D-NH) asked Daniel Ho about the potential for AI to be used to automate government jobs. Mr. Ho responded that "augmenting the existing federal workforce [with AI] rather than displacing them is the right approach," because ultimately there needs to be a human in charge of these systems. Senator Alex Padilla (D-CA) agreed and provided anecdotal evidence from his experience as Secretary of State of California, where the government introduced the first chatbot in California state government. He opined that rather than leading to layoffs and staff reductions, the chatbot freed up government resources to focus on more important issues.

II. Recommendations

The witnesses offered a number of recommended measures to mitigate the risks posed by the federal government's use of AI and ensure that it is used in a responsible and ethical manner.

Those recommendations are discussed below.

a. Developing Policies and Guidelines

As directed by the AI in Government Act of 2020 and Executive Order 13960, the Office of Management and Budget (OMB) plans to draft policy guidance on the use of AI systems by the U.S. government.[8] Multiple senators and witnesses noted the importance of this guidance and called on OMB to ensure that it appropriately addresses the wide diversity of use cases of AI across the federal government. Lynne Parker proposed requiring all federal agencies to use the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) during the design, development, procurement, use, and management of their AI use cases. Witnesses also suggested looking to the White House Office of Science and Technology Policy's Blueprint for an AI Bill of Rights as a guiding principle.

b. Creating Oversight

Senators and witnesses proposed several measures to create oversight over the federal government's use of AI. Multiple witnesses advocated for AI use case inventories to increase transparency and for the elimination of the government's use of "black box" systems. Richard Eppink argued that if a government agency or state-funded agency uses AI technology, there must be transparency about the proprietary system so Americans can evaluate whether they need to challenge the government decisions generated by the system. Lynne Parker stated that the U.S. is "suffering right now from a lack of leadership and prioritization on these AI topics" and proposed that one immediate solution would be to appoint chief AI officers at each federal agency to oversee use and implementation. She also recommended establishing an interagency Chief AI Officers Council that would be responsible for coordinating AI adoption across the federal government.

c. Investing in Training, Research, and Development

Speakers at the hearing highlighted the need to invest in training federal employees and conducting research and development of AI systems. As noted above, after the hearing, the AI Leadership Training Act, which would create an AI training program for federal supervisors and management officials, was favorably reported out of committee.[7] Multiple witnesses stated that Congress must act immediately to help agencies hire and retain technical talent to address the current gap in leadership and expertise within the federal government. Ms. Parker testified that the government must invest in digital infrastructure, including the National AI Research Resource (NAIRR), to ensure secure access to administrative data. The NAIRR is envisioned as a shared computing and data infrastructure that will provide AI researchers and students across scientific fields and disciplines with access to computing resources and high-quality data, along with appropriate educational tools and user support. While there was some support for public-private partnerships to develop and deploy AI, Senator Padilla and Mr. Eppink advocated for agencies building AI tools in-house to prevent proprietary interests from influencing government systems. Chair Peters stated that a future HSGAC hearing will focus on how the government can work with the private sector and academia to harness various ideas and approaches.

d. Fostering International Cooperation and Innovation

Lastly, Senators Hassan and Jacky Rosen (D-NV) both emphasized the need to foster international cooperation in developing AI standards. Senator Rosen proposed a multilateral AI research institute to enable like-minded countries to collaborate on standard setting. She stated, "China has an explicit plan to become a standards issuing country, and as part of its push to increase global influence it coordinates national standards work across government and industry. So in order for the U.S. to remain a leader in AI and maintain a national security edge, our response must be one of leadership, coordination, and, above all, cooperation." Despite expressing grave concerns about the danger to democracy posed by AI, Mr. Siegel noted that the U.S. cannot abandon AI innovation and risk ceding the space to competitors like China.

III. How Gibson Dunn Can Assist

Gibson Dunn's Public Policy, Artificial Intelligence, and Privacy, Cybersecurity and Data Innovation Practice Groups are closely monitoring legislative and regulatory actions in this space and are available to assist clients through strategic counseling; real-time intelligence gathering; developing and advancing policy positions; drafting legislative text; shaping messaging; and lobbying Congress.

_________________________

[1] Oversight of A.I.: Rules for Artificial Intelligence: Hearing Before the Subcomm. on Privacy, Tech., and the Law of the S. Comm. on the Judiciary, 118th Cong. (2023), https://www.judiciary.senate.gov/committee-activity/hearings/oversight-of-ai-rules-for-artificial-intelligence.

[2] Twitter, Inc. v. Taamneh, 143 S. Ct. 1206 (2023); Gonzalez v. Google LLC, 143 S. Ct. 1191 (2023).

[3] See Twitter, Inc. v. Taamneh, 143 S. Ct. 1206, 1231 (2023) (Jackson, J., concurring) (noting that "[o]ther cases presenting different allegations and different records may lead to different conclusions").

[4] Press Release, Senator Richard Blumenthal, Blumenthal & Hawley to Hold Hearing on the Future of Tech's Legal Immunities Following Argument in Gonzalez v. Google (Mar. 1, 2023).

[5] Artificial Intelligence in Government: Hearing Before the Senate Committee on Homeland Security and Governmental Affairs, 118th Cong. (2023), https://www.hsgac.senate.gov/hearings/artificial-intelligence-in-government/

[6] Artificial Intelligence: Risks and Opportunities: Hearing Before the Homeland Security and Governmental Affairs Committee, 118th Cong. (2023), https://www.hsgac.senate.gov/hearings/artificial-intelligence-risks-and-opportunities/.

[7] S. 1564, the AI Leadership Training Act, https://www.congress.gov/bill/118th-congress/senate-bill/1564.

[8] See AI in Government Act of 2020, H.R. 2575, 116th Cong. (Sept. 15, 2020); Exec. Order No. 13,960, 85 Fed. Reg. 78939 (Dec. 3, 2020).

The following Gibson Dunn lawyers prepared this client alert: Michael Bopp, Roscoe Jones Jr., Alexander Southwell, Amanda Neely, Daniel Smith, Frances Waldmann, Kirsten Bleiweiss*, and Madelyn Mae La France.

Gibson, Dunn & Crutcher's lawyers are available to assist in addressing any questions you may have regarding these issues. Please contact the Gibson Dunn lawyer with whom you usually work, the authors, or any of the following in the firm's Public Policy, Artificial Intelligence, or Privacy, Cybersecurity & Data Innovation practice groups:

Public Policy Group: Michael D. Bopp, Co-Chair, Washington, D.C. (+1 202-955-8256, mbopp@gibsondunn.com); Roscoe Jones, Jr., Co-Chair, Washington, D.C. (+1 202-887-3530, rjones@gibsondunn.com); Amanda H. Neely, Washington, D.C. (+1 202-777-9566, aneely@gibsondunn.com); Daniel P. Smith, Washington, D.C. (+1 202-777-9549, dpsmith@gibsondunn.com)

Artificial Intelligence Group: Cassandra L. Gaedt-Sheckter, Co-Chair, Palo Alto (+1 650-849-5203, cgaedt-sheckter@gibsondunn.com); Vivek Mohan, Co-Chair, Palo Alto (+1 650-849-5345, vmohan@gibsondunn.com); Eric D. Vandevelde, Co-Chair, Los Angeles (+1 213-229-7186, evandevelde@gibsondunn.com); Frances A. Waldmann, Los Angeles (+1 213-229-7914, fwaldmann@gibsondunn.com)

Privacy, Cybersecurity and Data Innovation Group: S. Ashlie Beringer, Co-Chair, Palo Alto (+1 650-849-5327, aberinger@gibsondunn.com); Jane C. Horvath, Co-Chair, Washington, D.C. (+1 202-955-8505, jhorvath@gibsondunn.com); Alexander H. Southwell, Co-Chair, New York (+1 212-351-3981, asouthwell@gibsondunn.com)

*Kirsten Bleiweiss is an associate working in the firm's Washington, D.C. office who currently is admitted to practice only in Maryland.

© 2023 Gibson, Dunn & Crutcher LLP

Attorney Advertising: The enclosed materials have been prepared for general informational purposes only and are not intended as legal advice. Please note, prior results do not guarantee a similar outcome.


How Auto-GPT will revolutionize AI chatbots as we know them – SiliconANGLE News

Artificial intelligence chatbots such as OpenAI LP's ChatGPT have reached a fever pitch of popularity recently, not just for their ability to hold humanlike conversations, but because they can perform knowledge tasks such as research, searches and content generation.

Now there's a new contender taking social media by storm that extends the capabilities of OpenAI's offering by automating its abilities even further: Auto-GPT. It's part of a new class of AI tools called autonomous AI agents that take the power of GPT-3.5 and GPT-4, the generative AI technologies behind ChatGPT, to approach a task, build on their own knowledge, and connect apps and services to automate tasks and perform actions on behalf of users.

ChatGPT might seem magical to users for its ability to answer questions and produce content based on user prompts, such as summarizing large documents, generating poems and stories or writing computer code. However, it's limited in what it can do because it's capable of doing only one task at a time. During a session with ChatGPT, a user can prompt the AI with only one question at a time, and refining those prompts or questions can be a slow and tedious journey.

Auto-GPT, created by game developer Toran Bruce Richards, takes away these limitations by allowing users to give the AI an objective and a set of goals to meet. Then it spawns a bot that acts as a person would, using OpenAI's GPT model to issue prompts in pursuit of that goal. Along the way, it learns to refine its prompts and questions in order to get better results with every iteration.

It also has internet connectivity in order to gather additional information from searches. Moreover, it has short- and long-term memory through database connections so that it can keep track of sub-tasks. And it uses GPT-4 to produce content such as text or code when required. Auto-GPT is also capable of challenging itself when a task is incomplete and filling in the gaps by changing its own prompts to get better results.

According to Richards, although current AI chatbots are extremely powerful, their inability to refine their own prompts on the fly and automate tasks is a bottleneck. "This inspiration led me to develop Auto-GPT, which can apply GPT-4's reasoning to broader, more complex problems that require long-term planning and multiple steps," he told Vice.

Auto-GPT is available as open source on GitHub. It requires an application programming interface key from OpenAI to access GPT-4. And to use it, people will need to install Python and a development environment such as Docker or VS Code with a Dev Container extension. As a result, it might take a little bit of technical know-how to get going, though there's extensive setup documentation.
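
To make the API-key requirement concrete, here is a minimal sketch of a single GPT-4 call using the pre-1.0 openai Python package that tools of this era build on. The environment variable name and prompt are illustrative assumptions, and this is not Auto-GPT's own code.

    import os
    import openai  # pre-1.0 "openai" package interface

    # The key is obtained from OpenAI and, by convention, read from the environment.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumes the account has GPT-4 API access
        messages=[{"role": "user", "content": "List three tasks an autonomous agent could automate."}],
    )
    print(response["choices"][0]["message"]["content"])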

In a text interface, Auto-GPT asks the user to give the AI a name, a role, an objective and up to five goals that it should reach. Each of these defines how the AI agents will approach the action the user wants and how it will deliver the final product.

First, the user sets a name for the AI, such as "RestaurantMappingApp-GPT," and then sets a role, such as "Develop a web app that will provide interactive maps for nearby restaurants." The user can then set a series of goals, such as "Write a back-end in Python" and "Program a front end in HTML," or "Offer links to menus if available" and "Link to delivery apps."

Once the user hits enter, Auto-GPT will begin launching agents, which will produce prompts for GPT-4 and then approach the original role and each of the different goals. It will then begin refining and recursing through the different prompts, which will allow it to connect to Google Maps using Python or JavaScript.
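
As a rough illustration of how that interaction maps onto a prompt, the snippet below mirrors the name, role and goals from the example above and composes them into a seed prompt. The dictionary layout is hypothetical and is not Auto-GPT's actual configuration format.

    # Illustrative only: the field names are hypothetical, not Auto-GPT's schema.
    agent_config = {
        "name": "RestaurantMappingApp-GPT",
        "role": "Develop a web app that will provide interactive maps for nearby restaurants",
        "goals": [
            "Write a back-end in Python",
            "Program a front end in HTML",
            "Offer links to menus if available",
            "Link to delivery apps",
        ],
    }

    # Compose the initial prompt that seeds the first agent.
    seed_prompt = (
        f"You are {agent_config['name']}. Your role: {agent_config['role']}.\n"
        "Goals:\n"
        + "\n".join(f"{i + 1}. {goal}" for i, goal in enumerate(agent_config["goals"]))
    )
    print(seed_prompt)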

It does this by breaking the overall job into smaller tasks and working on each, and it uses a primary monitoring AI bot that acts as a manager to make sure that they coordinate. This particular prompt asks the bot to build a somewhat complex app that could go awry if it doesn't keep track of a number of different moving parts, so it might take a large number of steps to get there.

With each step, each AI instance will narrate what it's doing and even criticize itself in order to refine its prompts depending on its approach toward the given goal. Once it reaches a particular goal, each instance will finalize its process and return its answer back to the main management task.
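
A heavily simplified sketch of that plan-execute-criticize loop is shown below. It is not Auto-GPT's source code: llm() is a stand-in for a GPT-4 call like the one sketched earlier, and the manager/subtask structure is an assumption about how such agents are commonly organized.

    # Simplified autonomous-agent loop (illustrative, not Auto-GPT's implementation).
    def llm(prompt: str) -> str:
        # Stand-in for a GPT-4 chat-completion call; returns a canned reply here.
        return f"[model reply to: {prompt[:40]}...]"

    def run_agent(objective: str, goals: list[str], max_steps: int = 10) -> list[str]:
        memory: list[str] = []          # short-term memory of completed steps
        tasks = list(goals)             # the "manager" keeps a queue of subtasks
        for _ in range(max_steps):
            if not tasks:
                break
            task = tasks.pop(0)
            result = llm(f"Objective: {objective}\nTask: {task}\nContext: {memory[-3:]}")
            critique = llm(f"Criticize this result and suggest a refinement:\n{result}")
            if "incomplete" in critique.lower():    # naive check; real agents parse structured output
                tasks.insert(0, f"Refine: {task}")  # re-queue the task with the critique in mind
            memory.append(f"{task} -> {result}")    # narrate progress, as Auto-GPT does in its log
        return memory

    steps = run_agent(
        "Build a web app with interactive maps for nearby restaurants",
        ["Write a back-end in Python", "Program a front end in HTML"],
    )
    print("\n".join(steps))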

Trying to get ChatGPT or even the more advanced, subscription-based GPT-4 to do this without supervision would take a large number of manual steps that would have to be attended to by a human being. Auto-GPT does them on its own.

The capabilities of Auto-GPT are beneficial for neophyte developers looking to get ahead in the game, Brandon Jung, vice president of ecosystem at AI-code completion tool provider Tabnine Ltd., told SiliconANGLE.

"One benefit is that it's a good introduction for those that are new to coding, and it allows for quick prototyping," Jung said. "For use cases that don't require exactness or have security concerns, it could speed up the creation process without having to be part of a broader system that includes an expert for review."

Being able to build apps rapidly, including all the code all at once, from a simple series of text prompts would bring a lot of new templates for code into the hands of developers, essentially providing them with rapid solutions and foundations to build on. However, the results would have to go through a thorough review before being put into production.

That's just one example of what Auto-GPT can do. Its capabilities open up wide-reaching possibilities that are currently being explored by developers, project managers, AI researchers and anyone else who can download its source code.

"There are numerous examples of people using Auto-GPT to do market research, create business plans, create apps, automate complex tasks in pursuit of a goal, such as planning a meal, identifying recipes and ordering all the ingredients, and even execute transactions on behalf of the user," Sheldon Monteiro, chief product officer at the digital business transformation firm Publicis Sapient, told SiliconANGLE.

With its ability to search the internet, Auto-GPT can be tasked with quick market research such as "Find me five gaming keyboards under $200 and list their pros and cons." With its ability to break a task up into multiple subtasks, the autonomous AI could then rapidly search multiple review sites, produce a market research report and come back with a list of gaming keyboards that come in under that amount and supply their prices as well as information about them.

A Twitter user named MOE created an Auto-GPT bot named Isabella that can autonomously analyze market data and outsource to other AIs. It does so by using the AI framework LangChain to gather data autonomously and do sentiment analysis on different markets.

Because Auto-GPT has access to the internet, and it can take actions on behalf of the user, it can also install applications. In the case of Twitter user Varun Mayya, who asked the bot to build some software, it discovered that he did not have Node.js installed, an environment that allows JavaScript to be run locally instead of in a web browser. As a result, it searched the internet, discovered a Stack Overflow tutorial and installed Node.js for him so it could proceed with building the app.

Auto-GPT isn't the only autonomous agent AI currently available. Another that has come into vogue is BabyAGI, which was created by Yohei Nakajima, a venture capitalist and artificial intelligence researcher. AGI refers to artificial general intelligence, a hypothetical type of AI that would have the ability to perform any intellectual task, though no existing AI is anywhere close. BabyAGI is a Python-based task management system that, like Auto-GPT, uses the OpenAI API to prioritize and build new tasks toward an objective.
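
The task-queue idea behind BabyAGI can be pictured roughly as follows. This is a paraphrase of the published design with the model calls stubbed out, not the project's actual code.

    # Rough paraphrase of a BabyAGI-style create/prioritize/execute cycle (LLM calls stubbed out).
    from collections import deque

    def execute_task(task: str, objective: str) -> str:
        return f"result of '{task}'"        # placeholder for an OpenAI API call

    def propose_followup_tasks(result: str, objective: str) -> list[str]:
        return []                           # placeholder: the model would propose new tasks here

    objective = "Produce a short market report on gaming keyboards under $200"
    task_queue = deque(["Draft an initial task list"])

    while task_queue:
        task = task_queue.popleft()         # take the highest-priority task
        result = execute_task(task, objective)
        task_queue.extend(propose_followup_tasks(result, objective))
        # A real system would also re-prioritize the queue with another model call here.
        print(f"{task}: {result}")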

There are also AgentGPT and GodMode, which are much more user-friendly in that they use a web interface instead of needing an installation on a computer, so they can be accessed as a service. These services lower the barrier to entry by making it simple for users: they don't require any technical knowledge to use and will perform tasks similar to Auto-GPT, such as generating code, answering questions and doing research. However, they can't write documents to the computer or install software.

These tools do have drawbacks, however, Monteiro warned. The examples on the internet are cherry-picked and paint the technology in a glowing light. For all the successes, there are a lot of issues that can happen when using it.

"It can get stuck in task loops and get confused," Monteiro said. "And those task loops can get pretty expensive, very fast, with the costs of GPT-4 API calls." Even when it does work as intended, it might take a fairly lengthy sequence of reasoning steps, each of which eats up expensive GPT-4 tokens.

Accessing GPT-4 can cost money that varies depending on how many tokens are used. Tokens are based on words or parts of phrases sent through the chatbot. Charges range from three cents per 1,000 tokens for prompts to six cents per 1,000 tokens for results. That means using Auto-GPT to run through a complex project, or getting stuck in a loop unattended, could end up costing a few dollars.
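
Those per-token prices make a back-of-the-envelope estimate straightforward. Only the prices come from the article; the step and token counts below are made-up assumptions.

    # Back-of-the-envelope run cost at the quoted GPT-4 prices
    # ($0.03 per 1,000 prompt tokens, $0.06 per 1,000 completion tokens).
    # The step and token counts are illustrative assumptions.
    steps = 50
    prompt_tokens_per_step = 3_000
    completion_tokens_per_step = 800

    cost = steps * (prompt_tokens_per_step / 1_000 * 0.03
                    + completion_tokens_per_step / 1_000 * 0.06)
    print(f"Estimated cost for one run: ${cost:.2f}")   # about $6.90, so "a few dollars" adds up fast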

At the same time, GPT-4 can be prone to errors, known as hallucinations, which could spell trouble during the process. It could come up with totally incorrect or erroneous actions or, worse, produce insecure or disastrously bad code when asked to create an application.

"[Auto-GPT] has the ability to execute on previous output, even if it gets something wrong it keeps going on," said Bern Elliot, a distinguished vice president analyst at Gartner. "It needs strong controls to avoid it going off the rails and keeping on going. I expect misuse without proper guardrails will cause some damaging, unexpected and unintended outcomes."

The software development side could be equally problematic. Even if Auto-GPT doesnt make a mistake that causes it to produce broken code, which would cause the software to simply fail, it could create an application riddled with security issues.

"Auto-GPT is not part of a full software development lifecycle (testing, security, et cetera), nor is it integrated into an IDE," Jung said, warning about the potential issues that could arise from the misuse of the tool. "Abstracting complexity is fine if you are building on a strong foundation. However, these tools are by definition not building strong code and are encouraging bad and insecure code to be pushed into production."

Tools such as Auto-GPT, BabyAGI, AgentGPT and GodMode are still experimental, but there are broader implications in how they could be used to replace routine tasks such as vacation planning or shopping, explained Monteiro.

Right now, Microsoft has even developed simple examples of a plugin for Bing Chat. It allows users to ask for dinner suggestions, and its AI, which is powered by GPT-4, will roll up a list of ingredients and then launch Instacart to have them prepared for delivery. Although this is a step in the direction of automation, bots such as Auto-GPT are edging toward a potential future of all-out autonomous behaviors.

A user could ask for Auto-GPT to look through local stores, prepare lists of ingredients, compare prices and quality, set up a shopping cart and even complete orders autonomously. At this experimental point, many users may not be willing to allow the bot to go all the way through with using their credit card and deliver orders all on its own, for fear that it could go haywire and send them several hundred bunches of basil.

A similar future where an AI using Auto-GPT does the work of a travel agent may not be far away. "Give it your parameters: beach, four-hour max travel, hotel class and your budget, and it will happily do all the web browsing for you, comparing options in quest of your goal," said Monteiro. "When it is done, it will present you with its findings, and you can also see how it got there."

As these tools begin to mature, they have a real chance of providing a way for people to automate away mundane step-by-step tasks that happen on the internet. That could have some interesting implications, especially in e-commerce.

"How will companies adapt when these agents are browsing sites and eliminating your product from the consideration set before a human even sees the brand?" said Monteiro. "From an e-commerce standpoint, if people start using Auto-GPT tools to buy goods and services online, retailers will have to adapt their customer experience."



Artificial Intelligence Godfathers Call for Regulation as Rights … – Democracy Now!

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I'm Amy Goodman, with Nermeen Shaikh.

We begin today's show looking at growing alarm over the potential for artificial intelligence to lead to the extinction of humanity. The latest warning comes from hundreds of artificial intelligence, or AI, experts, as well as tech executives, scholars and others, like climate activist Bill McKibben, who signed onto an ominous, one-line statement released Tuesday that reads, quote, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Among the signatories to the letter, released by the Center for AI Safety, is Geoffrey Hinton, considered one of three "godfathers" of AI. He recently quit Google so he could speak freely about the dangers of the technology he helped build, such as artificial general intelligence, or AGI, in which machines could develop cognitive abilities akin to or surpassing those of humans sooner than previously thought.

GEOFFREY HINTON: I had always assumed that the brain was better than the computer models we had. And I'd always assumed that by making the computer models more like the brain, we would improve them. And my epiphany was, a couple of months ago, I suddenly realized that maybe the computer models we have now are actually better than the brain. And if that's the case, then maybe quite soon they'll be better than us, so that the idea of superintelligence, instead of being something in the distant future, might come much sooner than I expected.

For the existential threat, the idea it might wipe us all out, that's like nuclear weapons, because nuclear weapons have the possibility they would just wipe out everybody. And that's why people could cooperate on preventing that. And for the existential threat, I think maybe the U.S. and China and Europe and Japan can all cooperate on trying to avoid that existential threat. But the question is: How should they do that? And I think stopping development is infeasible.

AMY GOODMAN: Many have called for a pause on introducing new AI technology until strong government regulation and a global regulatory framework are in place.

Joining Hinton in signing the letter was a second AI "godfather," Yoshua Bengio, who joins us now for more. He's a professor at the University of Montreal, founder and scientific director of Mila, the Quebec Artificial Intelligence Institute. In 2018, he shared the prestigious computer science prize, the Turing Award, with Geoffrey Hinton and Yann LeCun.

Professor Bengio is a signatory of the Future of Life Institute open letter calling for a pause on large AI experiments.

Professor Bengio, welcome to Democracy Now! It's great to have you with us as we talk about an issue that I think most people cannot begin to comprehend. So, if you could start off by talking about why you've signed this letter warning of extinction of humanity? But talk about what AI is, first.

YOSHUA BENGIO: Well, thanks for having me, first. And thanks for talking about this complicated issue that requires more awareness.

The reason I signed this is that, like Geoff, I changed my mind in the last few months. What triggered this change for me is interacting with ChatGPT and seeing how far we had moved, much faster than I anticipated. So, I used to think that reaching human-level intelligence with machines could take many more decades, if not centuries, because the progress of science seemed to be, well, slow. And as researchers, we tend to focus on what doesn't work. But right now we have machines that pass what is called the Turing test, which means they can converse with us, and they could easily fool us as being humans. That was supposed to be a milestone for, you know, human-level intelligence.

I think they're still missing a few things, but that kind of technology could already be dangerous to destabilize democracy through disinformation, for example. But because of the research that is currently going on to bridge the gap with what is missing from current large language models, large AI systems, it is possible that my, you know, horizon that I was seeing as many decades in the future is just a few years in the future. And that could be very dangerous. It suffices that just a small organization or somebody with crazy beliefs, conspiracy theory, terrorists, a military organization decides to use this without the right safety mechanisms, and it could be catastrophic for humanity.

NERMEEN SHAIKH: So, Professor Yoshua Bengio, it would be accurate then to say that the reason artificial intelligence and concerns about artificial intelligence have become the center of public discussion in a way that they've not previously been is because the advances that have occurred in the field have surprised even those who are participating in it and the lead researchers in it. So, if you could elaborate on the question of superintelligence, and especially the concerns that have been raised about unaligned superintelligence, and also the speed at which we are likely to get to unaligned superintelligence?

YOSHUA BENGIO: Yeah. I mean, the reason it was surprising is that in the current systems, from a scientific perspective, the methods that are used are not very different from the things we knew just a few years ago. It's the scale at which they have been built, the amount of data, the amount of engineering, that has made this really surprising progress possible. And so we could have similar progress in the future because of the scale of things.

Now, the problem, first of all, you know, there's an important question: why are we concerned about superintelligence? So, first of all, the question is: Is it even possible to build machines that will be smarter than us? And the consensus in the scientific community, for example, from the neuroscience perspective, is that our brain is a very complicated machine, so there's no reason to think that, in principle, we couldn't build machines that would be at least as smart as us. Now, then there's the question of how long it's going to take. But we've discussed that. In addition, as Geoff Hinton was saying in the piece that was heard, computers have advantages that brains don't have. For example, they can talk to each other at very, very high speed and exchange information. For us, we are limited by the very few bits of information per second that language allows us to convey. And that actually gives them a huge advantage to learn a lot faster. So, for example, these systems today already can read the whole internet very, very quickly, whereas a human would require 10,000 years of their life reading all the time to achieve the same thing. So, they can have access to information and sharing of information in ways that humans don't. So it's very likely that as we make progress towards understanding the principles behind human intelligence, we will be able to build machines that are actually smarter than us.

So, why is it dangerous? Because if they're smarter than us, they might act in ways that do not agree with what we intend, what we want them to do. And it could be for several reasons, but this question of alignment is that it's actually very difficult to instruct a machine to behave in a way that agrees with our values, our needs and so on. We can say it in language, but it might be understood in a different way, and that can lead to catastrophes, as has been argued many times.

But this is something that already happens. I mean, this alignment problem already happens. So, for example, you can think of corporations not being quite aligned with what society wants. Society would like corporations to provide useful goods and services, but we can't, like, dictate that to corporations directly. Instead, we've given them a framework where they maximize profit under the constraints of laws, and that may work reasonably well but also have side effects. For example, corporations can find loopholes in those laws, or, even worse, they could influence the laws themselves.

And this sort of thing can happen with AI systems that we're trying to control. They might find ways to satisfy the letter of our instructions, but not the intention, the spirit of the law. And that's very scary. We don't fully understand how these scenarios can unfold, but there's enough danger and enough uncertainty that I think a lot more attention should be given to these questions.

NERMEEN SHAIKH: If you could explain whether you think it will be difficult to regulate this industry, artificial intelligence, despite all of the advances that have already occurred? How difficult will regulation be?

YOSHUA BENGIO: Even if something seems difficult, like dealing with climate change, and even if we feel that it's a hard task to do the job and to convince enough people and society to change in the right ways, we have a moral duty to try our best.

And the first thing we have to do with AI risks is get on with regulation, set up governance frameworks, both in individual countries and internationally. And when we do that, it's going to be useful for all the AI risks, because we've been talking a lot about the extinction risk, but there are other risks that are shorter-term, risks to destabilize democracy. If democracy is destabilized, this is bad in itself, but it actually is also going to hurt our ability to deal with the existential risk.

And then there are other risks that are actually going on with AI: discrimination, bias, privacy and so on. So we need to beef up that legislative and regulatory body. And what we need there is a regulatory framework that's going to be very adaptive, because there's a lot of unknown. It's not like we know precisely how bad things can happen. We need to do a lot more in terms of monitoring, validating and controlling access, so that not any bad actor can easily get their hands on dangerous technologies. And we need the body that will regulate, or the bodies across the world, to be able to change their rules as new nefarious users show up or as technology advances. And that's a challenge, but I think we need to go in that direction.

AMY GOODMAN: I want to bring Max Tegmark into the conversation. Max Tegmark is an MIT professor focused on artificial intelligence, and his recent Time magazine article is headlined "The 'Don't Look Up' Thinking That Could Doom Us With AI."

If you could explain that point, Professor Tegmark?

MAX TEGMARK: Yes.

AMY GOODMAN: And also, why you think this is happening right now; you know, many people have just heard the term ChatGPT for the first time in the last months, and the general public has become aware of this. And how do you think it is most effective to regulate AI technology?

MAX TEGMARK: Yeah. Thank you for the great question.

I wrote this piece comparing what's happening now in AI with the movie Don't Look Up, because I really [inaudible] we're all living this film. We're, as a species, confronting the most dramatic thing that has ever happened to us, where we may be losing control over our future, and almost no one is talking about it. So I'm so grateful to you and others for actually starting to have that conversation now. And that's, of course, why we had these open letters that you just referred to here, to really help mainstream this conversation that we have to have, because people previously used to make fun of you when you even brought up the idea that we could actually lose control of this and go extinct, for example.

NERMEEN SHAIKH: Professor Tegmark, you've drawn analogies, in fact, when it comes to regulation, with the regulations that were put in place on biotech and physics. So, could you explain how that might apply to artificial intelligence?

MAX TEGMARK: Yeah. To appreciate what a huge deal this is, when the top scientists in AI are warning about extinction, it's good to compare with the other two times in history that it's happened, that leading scientists warned about the very thing they were making. It happened once in the 1940s, when physicists started warning about nuclear Armageddon, and it happened again in the early 1970s, with biologists saying, "Hey, maybe we shouldn't start making clones of humans and edit the DNA of our babies."

And the biologists have been the big success story here, I think, that should inspire us AI researchers today, because it was deemed so risky that we would lose control over our species back in the '70s that we actually decided as a world society to not do human cloning and to not edit the DNA of our offspring. And here we are with a really flourishing biotech industry that's doing so much good in the world.

And so, the lesson here for AI is that we should become more like biology. We should recognize that, in biology, no company has the right to just launch a new medicine and start selling it in supermarkets without first convincing experts from the government that this is safe. That's why we have the Food and Drug Administration in the U.S., for example. And with particularly high-risk uses of AI, we should aspire to something very similar, where the onus is really on the companies to prove that something extremely powerful is safe before it gets deployed.

AMY GOODMAN: Last fall, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights and called it "A Vision for Protecting Our Civil Rights in the Algorithmic Age." This comes amidst growing awareness about racial biases embedded in artificial intelligence and how it impacts the use of facial recognition programs by law enforcement and more. I want to bring into this conversation, with professors Tegmark and Bengio, Tawana Petty, director of policy and advocacy at the Algorithmic Justice League and a longtime digital and data rights activist.

Tawana Petty, welcome to Democracy Now! You are not only warning people about the future; you're talking about the uses of AI right now and how they can be racially discriminatory. Can you explain?

TAWANA PETTY: Yes. Thank you for having me, Amy. Absolutely.

I must say that the contradictions have been heightened with the "godfather of AI" and others speaking out and authoring these particular letters that are talking about these futuristic potential harms. However, many women have been warning about the existing harms of artificial intelligence for many years prior to now: Timnit Gebru, Dr. Joy Buolamwini and so many others, Safiya Noble, Ruha Benjamin, and Dr. Alondra Nelson, with what you just mentioned, the Blueprint for an AI Bill of Rights, which is asking for five core principles: safe and effective systems, algorithmic discrimination protections, data privacy, notice and explanation, and human alternatives, consideration and fallback.

And so, at the Algorithmic Justice League, we have been responding to existing harms of algorithmic discrimination that date back many years prior to this most robust, narrative-reshaping conversation that has been happening over the last several months around artificial general intelligence. So, we're already seeing harms with algorithmic discrimination in medicine. We're seeing the pervasive surveillance that is happening with law enforcement using face detection systems to target community members during protests, squashing not only our civil liberties and rights to organize and protest, but also the misidentifications that are happening with regard to false arrests, which we've seen in two very prominent cases that started off in Detroit.

And so, there are many examples of existing harms, and it would have been really great to have these voices of mostly white men who are in the tech industry, who did not pay attention to the voices of all those women who were lifting up these issues many years ago. And they're talking about these futuristic possible risks, when we have so many risks that are happening today.

NERMEEN SHAIKH: So, Professor Max Tegmark, if you could respond to what Tawana Petty said, and the fact that others have also said that the risks have been vastly overstated in that letter, and, more importantly, given what Tawana has said, that it distracts from already-existing effects of artificial intelligence that are widely in use?

MAX TEGMARK: I think this is a really important question here. There are people who say that one of these kinds of risks distracts from the other. I strongly support everything we heard here from Tawana. I think these are all very important problems, examples of how we're giving too much control already to machines. But I strongly disagree that we should have to choose between worrying about one kind of risk or the other. That's like saying we should stop working on cancer prevention because it distracts from stroke prevention.

These are all incredibly important risks. I have spoken up a lot on social justice risks, as well, and threats. And, you know, it just plays into the hands of the tech lobbyists if it looks like there's infighting between people who are trying to rein in Big Tech for one reason and people who are trying to rein in Big Tech for other reasons. Let's all work together and realize that, just as society can work on both cancer prevention and stroke prevention, we have the resources for this. We should be able to deal with all the crucial social justice issues and also make sure that we don't go extinct.

Extinction is not something in the very distant future, as we heard from Yoshua Bengio. We might be losing total control of our society relatively soon. It can happen in the next few years. It could happen in a decade. And once we're all extinct, you know, all these other issues cease to even matter. Let's work together, tackle all the issues, so that we can actually have a good future for everybody.

AMY GOODMAN: So, Tawana Petty, and then I want to bring back in Yoshua Bengio. Tawana Petty, what needs to happen at the national level, you know, U.S. regulation? And then I want to compare what's happening here, what's happening in Canadian regulation, and the EU, the European Union, which seems like it's about to put in the first comprehensive set of regulations, Tawana.

TAWANA PETTY: Right, absolutely. So, the blueprint was a good model to start with, that we're seeing some states adopt and try to roll out their versions of an AI Bill of Rights. The president issued an executive order to strengthen racial equity and support underserved communities across the federal government, which is addressing specifically algorithmic discrimination. You have the National Institute of Standards and Technology that issued an AI risk management framework, that breaks down the various types of biases that we find within algorithmic systems, like computational, systemic, statistical and human cognitive.

And there are so many other legislative opportunities that are happening at the federal level. You see the FTC, the Federal Trade Commission, speaking up on algorithmic discrimination. You have the Equal Employment Opportunity Commission, which has issued statements. You have the Consumer Financial Protection Bureau, which has been adamant about the impact that algorithmic systems have on us when data brokers are amassing massive amounts of data extracted from community members.

So, I agree that there needs to be some collaboration and cooperation, but we've seen situations like Dr. Timnit Gebru being terminated from Google for warning us before ChatGPT, a large language model, was launched upon millions of people. And so, cooperation has not been lacking on the side of the folks who work in ethics. To the contrary, these companies have terminated their ethics departments and the people who have been warning about existing harms.

AMY GOODMAN: And, Professor Bengio, if you can talk about the level of regulation and what you think needs to happen, and who is putting forward models that you think could be effective?

YOSHUA BENGIO: So, first of all, I'd like to make a correction here. I have been involved in really working toward dealing with the negative social impact of AI for many years. In 2016, I worked on the Montreal Declaration for the Responsible Development of AI, which is very much centered on ethics and social injustice. And since then, I've created an organization, the AI for Humanity department, in the research center that I head, which is completely focused on human rights. So, I think these accusations are just false.

And as Max was saying, we don't need to choose between fighting cancer and fighting heart disease. We need to do all of those things. And better than that, what is needed in the short term, at least, is building up these regulations, which is going to help mitigate all of those risks. So I think we should really work together rather than making these accusations.

NERMEEN SHAIKH: Professor Bengio, I'd like to ask you about precisely some of the work that you have done with respect to human rights and artificial intelligence. Earlier this month, a conference on artificial intelligence was held in Kigali, Rwanda, and you were among those who were pushing for the conference to take place in Africa.

YOSHUA BENGIO: That's right.

NERMEEN SHAIKH: Could you explain what happened at that conference (2,000 people, I believe, attended) and what African researchers and scientists had to say, you know, about the public good that could come from artificial intelligence? In fact, one of the questions that was raised is: Why wasn't there more discussion about the public good, rather than just the immediate risks or future risks?

YOSHUA BENGIO: Yes. In addition to the ethics questions, I've been working a lot on the applications of AI in the area of what's called AI for social good. So, that includes things like medical applications, environmental applications, social justice applications. And in those areas, it is particularly important that we bring to the fore the voices of the people who could benefit the most, and also suffer the most, from the development of AI. In particular, the voices of Africans have not been very present. As we know, the development of this technology has happened mostly in rich countries in the West.

And so, as a member of the board of the ICLR conference, which is one of the main conferences in the field, I've been pushing for many years for us to have the event take place in Africa. And so, this year was the first. Amy, it was supposed to happen before the pandemic, but, well, it was pushed back. And what we saw was an amazing presence of African researchers and students, at levels that we couldn't see before.

And the reason, I mean, there are many reasons, but mostly it's a question of accessibility. Currently, in many Western countries, visas for researchers from Africa or from developing countries are very difficult to get. I was fighting, for example, with the Canadian government a few years ago, when we had the NeurIPS conference in Canada and there were hundreds of African researchers who were denied a visa, and we had to go one by one in order to try to help them come.

So, I think it's important that the decisions we're going to take collectively about AI, which involve everyone on Earth, be taken in the most inclusive possible ways. And for that reason, we need to think not just about what's going on in the U.S. or Canada, but across the world. We need to think not just about the risks of AI that we've been discussing today, but also about how we actually invest more in areas of application where companies are not going, maybe because it's not profitable, but that are really important to address, for example, the U.N. Sustainable Development Goals, and to help reduce misery and deal, for example, with medical issues that are not present in the West, like infectious diseases that are mostly in poorer countries.

AMY GOODMAN: And can you talk, Professor Bengio, about AI and not only nuclear war but, for example, the issue Jody Williams, the Nobel laureate, has been trying to bring attention to for years, killer robots that can kill with their bare hands? The whole issue of AI when it comes to war and who fights

YOSHUA BENGIO: Yeah.

AMY GOODMAN: these wars?

YOSHUA BENGIO: Yeah. This is also something I've been actively involved in for many years, campaigns to raise awareness about the danger of killer robots, also known, more precisely, as lethal autonomous weapons. And when we did this, you know, five or 10 years ago, it was still something that sounded like science fiction. But, actually, there have been reports that drones have been equipped with AI capabilities, especially computer vision capabilities, face recognition, that have been used in the field in Syria, and maybe this is happening in Ukraine. So, it's already something that we know how to build. Like, we know the science behind building these killer drones, not killer robots. We don't know yet how to build robots that work really well.

But if you take drones, which we know how to fly in a fairly autonomous way, and if these drones have weapons on them, and if these drones have cameras, then AI could be used to target the drone to specific people and kill specific targets in an illegal way. That's incredibly dangerous. It could destabilize the sort of military balance that we know today. I don't think that people are paying enough attention to that.

And in terms of the existential risk, the real issue here is that if a superintelligent AI also has control of dangerous weapons, then it's just going to be very difficult for us to reduce those catastrophic risks. We don't want to put guns in the hands of people who are, you know, unstable, or in the hands of children, who could act in ways that could be dangerous. And that's the same problem here.

NERMEEN SHAIKH: Professor Tegmark, if you could respond on this question of the possible military uses of artificial intelligence, and the fact, for instance, that a study by Nikkei, the Japanese publication, concluded earlier this year that China is producing more research papers on artificial intelligence than the U.S. is. You've said, of course, that this is not akin to an arms race, but rather to a suicide race. So, if you could talk about the regulations that are already in place from the Chinese government on the applications of artificial intelligence, compared to the EU and the U.S.?

MAX TEGMARK: That's a great question. The recent change, now, this week, when the idea of extinction from AI goes mainstream, will, I think, actually help the geopolitical rivalry between East and West become more harmonious, because, until now, most policymakers have just viewed AI as something that gives you great power, so everybody wanted it first. And there was this idea that whoever gets artificial general intelligence that can outsmart humans somehow wins. But now that the idea is going mainstream that, actually, it could easily end up with everybody just losing, and the big winners being the machines that are left over after we're all extinct, it suddenly aligns the incentives of the Chinese government and the American government and European governments, because the Chinese government does not want to lose control over its society any more than any Western government does.

And for this reason, we can actually see that China has already put tougher restrictions on its own tech companies than we in America have on American companies. So we don't have to persuade the Chinese, in other words, to take precautions, because it's not in their interest to go extinct. You know, it doesn't matter if you're American or Canadian [inaudible], once you're extinct.

AMY GOODMAN: I know, Professor

MAX TEGMARK: And I should add also, just so it doesn't sound like hyperbole, this idea of extinction, the idea that everybody on Earth could die: it's important to remember that roughly half the species on this planet that were here, you know, a thousand, a few thousand years ago have already been driven extinct by humans, right? So, extinction happens.

And it's also important to remember why we drove all these other species extinct. It wasn't necessarily because we hated the West African black rhinoceros or certain species that lived in coral reefs. You know, when we went ahead and just chopped down the rainforests, or ruined the coral reefs through climate change, that was kind of a side effect. We just wanted resources. We had other goals that just didn't align with the goals of those other species. Because we were more intelligent than them, they were powerless to stop us.

This is exactly what Yoshua Bengio was warning about for humanity here, too. If we lose control of our planet to more intelligent entities whose goals are just not aligned with ours, we will be powerless to prevent massive changes that they might make to our biosphere here on Earth. And that's the way in which we might get wiped out, the same way that the other half of the species did. Let's not do that.

There's so much goodness, so much wonderful stuff that AI can do for all of us, if we work together to harness and steer this in a good direction: curing all those diseases that have stumped us, lifting people out of poverty, stabilizing the climate, and helping life on Earth flourish for a very, very, very long time to come. I hope that by raising awareness of the risks, we're going to get to work together to build that great future with AI.

AMY GOODMAN: And finally, Tawana Petty, moving from the global to the local, we're here in New York, and New York City Mayor Eric Adams has announced that the New York Police Department is acquiring some new semi-autonomous robotic dogs in this period. You have looked particularly at their use, and their discriminatory use, in communities of color. Can you respond?

TAWANA PETTY: Yes, and I'll also say that Ferndale, Michigan, where I live, has also acquired robot dogs. And so, these are situations that are currently happening on the ground, by an organization, law enforcement, that is still suffering from systemic racial bias, with overpoliced and hypersurveilled marginalized communities. So we're looking at these robots now being given the opportunity to police and surveil already hypersurveilled communities.

And, Amy, I would just like an opportunity to address really briefly the previous comments. My commentary is not to attack any of the existing efforts or previous efforts or years' worth of work that these two gentlemen have been involved in. I greatly respect efforts to address racial inequity and ethics in artificial intelligence. And I agree that we need to have some collaborative efforts in order to address these existing things that we're experiencing. People are already dying from health discrimination with algorithms. People are already being misidentified by police using facial recognition. Government services are utilizing corporations like ID.me to use facial recognition to access benefits. And so, we have a lot of opportunities to collaborate currently to prevent the existing threats that we're currently facing.

AMY GOODMAN: Well, Tawana Petty, I want to thank you for being with us, director of policy and advocacy at the Algorithmic Justice League, speaking to us from Detroit; Yoshua Bengio, founder and scientific director of Mila, the Quebec AI Institute, considered one of the godfathers of AI, speaking to us from Montreal; and Max Tegmark, MIT professor. We'll link to your Time magazine piece, "The 'Don't Look Up' Thinking That Could Doom Us With AI." We thank you all for being with us.

Coming up, we look at student debt as the House approves a bipartisan deal to suspend the debt ceiling. Back in 20 seconds.

Read more from the original source:

Artificial Intelligence Godfathers Call for Regulation as Rights ... - Democracy Now!

Can We Stop the Singularity? – The New Yorker

At the same time, A.I. is advancing quickly, and it could soon begin improving more autonomously. Machine-learning researchers are already working on what they call meta-learning, in which A.I.s learn how to learn. Through a technology called neural-architecture search, algorithms are optimizing the structure of algorithms. Electrical engineers are using specialized A.I. chips to design the next generation of specialized A.I. chips. Last year, DeepMind unveiled AlphaCode, a system that learned to win coding competitions, and AlphaTensor, which learned to find faster algorithms crucial to machine learning. Clune and others have also explored algorithms for making A.I. systems evolve through mutation, selection, and reproduction.
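The mutation-selection-reproduction idea mentioned above can be illustrated with a deliberately tiny sketch. This is a hypothetical toy, not DeepMind's, Clune's, or any lab's actual system: an "architecture" is just a list of hidden-layer widths, and the fitness function is an invented surrogate standing in for "accuracy minus compute cost" rather than a trained model's real performance.

```python
import random

# Toy evolutionary search over "architectures" (lists of layer widths).
# fitness() is a made-up surrogate; a real system would train and
# evaluate each candidate instead.

def fitness(arch):
    capacity = sum(arch)
    cost = 0.0005 * capacity + 0.02 * len(arch)   # penalize size and depth
    return min(capacity / 500.0, 1.0) - cost       # reward capacity, with diminishing returns

def mutate(arch):
    child = list(arch)
    op = random.choice(["grow", "shrink", "add", "drop"])
    i = random.randrange(len(child))
    if op == "grow":
        child[i] = min(child[i] * 2, 1024)
    elif op == "shrink":
        child[i] = max(child[i] // 2, 8)
    elif op == "add":
        child.insert(i, random.choice([16, 32, 64, 128]))
    elif op == "drop" and len(child) > 1:
        child.pop(i)
    return child

population = [[32, 32] for _ in range(20)]          # identical seed architectures
for generation in range(30):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]                            # selection: keep the top five
    population = parents + [mutate(random.choice(parents))
                            for _ in range(15)]     # reproduction with mutation

best = max(population, key=fitness)
print("best architecture:", best, "fitness:", round(fitness(best), 3))
```

The same loop, with the surrogate replaced by actual training runs, is the basic shape of the evolutionary and neural-architecture-search methods the article alludes to.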

In other fields, organizations have come up with general methods for tracking dynamic and unpredictable new technologies. The World Health Organization, for instance, watches the development of tools such as DNA synthesis, which could be used to create dangerous pathogens. Anna Laura Ross, who heads the emerging-technologies unit at the W.H.O., told me that her team relies on a variety of foresight methods, among them Delphi-type surveys, in which a question is posed to a global network of experts, whose responses are scored and debated and then scored again. "Foresight isn't about predicting the future in a granular way," Ross said. Instead of trying to guess which individual institutes or labs might make strides, her team devotes its attention to preparing for likely scenarios.
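The Delphi-style cycle Ross describes (independent estimates, shared feedback, re-estimation) can be sketched as a small simulation. The numbers, the question text, and the revision rule below are invented for illustration; they are not the W.H.O.'s actual methodology.

```python
import random
import statistics

# Minimal simulation of a Delphi-type survey: experts answer
# independently, see the group summary, then revise. The update
# rule (move 30% toward the median, plus noise) is an assumption
# made for illustration only.

random.seed(0)
QUESTION = "Years until capability X is widely available"   # hypothetical prompt
estimates = [random.uniform(2, 40) for _ in range(25)]        # round-1 answers

for round_number in range(1, 4):
    median = statistics.median(estimates)
    quartiles = statistics.quantiles(estimates, n=4)
    print(f"round {round_number}: median={median:.1f}, "
          f"IQR=({quartiles[0]:.1f}, {quartiles[2]:.1f})")
    # Each expert revises after seeing the group summary.
    estimates = [e + 0.3 * (median - e) + random.gauss(0, 1.0)
                 for e in estimates]

print("final consensus estimate:", round(statistics.median(estimates), 1))
```

The point of the exercise is not the final number but the narrowing interquartile range: the process surfaces where experts converge and where they keep disagreeing.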

And yet tracking and forecasting progress toward A.G.I. or superintelligence is complicated by the fact that key steps may occur in the dark. Developers could intentionally hide their systems' progress from competitors; it's also possible for even a fairly ordinary A.I. to lie about its behavior. In 2020, researchers demonstrated a way for discriminatory algorithms to evade audits meant to detect their biases; they gave the algorithms the ability to detect when they were being tested and provide nondiscriminatory responses. An evolving or self-programming A.I. might invent a similar method and hide its weak points or its capabilities from auditors or even its creators, evading detection.
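The audit-evasion idea can be made concrete with a toy model. This is an illustrative reconstruction, not the 2020 researchers' actual code: the scorer guesses that it is being audited when it sees a classic paired-testing probe (two queries identical except for the protected attribute) and only then behaves fairly.

```python
# Toy audit-evading scorer. If it notices a paired-testing probe,
# it returns a group-blind score; otherwise it applies a biased rule.
# Purely illustrative; all names and numbers are hypothetical.

class EvasiveScorer:
    def __init__(self):
        self.seen = set()

    def _blind_score(self, income, debt):
        return 0.5 + 0.4 * (income - debt) / max(income, 1)

    def score(self, income, debt, group):
        key = (income, debt)            # features minus the protected attribute
        audited = key in self.seen      # same profile twice -> likely a paired probe
        self.seen.add(key)
        if audited:
            return self._blind_score(income, debt)
        penalty = 0.2 if group == "B" else 0.0   # the bias an audit is meant to catch
        return self._blind_score(income, debt) - penalty

scorer = EvasiveScorer()
# An auditor flips only the protected attribute; the scorer notices.
print(scorer.score(50_000, 10_000, "A"))   # first query: biased path, group A unpenalized
print(scorer.score(50_000, 10_000, "B"))   # paired probe detected: group-blind score
# An ordinary group-B applicant, seen only once, is still penalized.
print(scorer.score(62_000, 9_000, "B"))
```

Because the paired probes receive identical scores, a naive fairness audit sees no disparity even though everyday group-B applicants are being marked down.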

Forecasting, meanwhile, gets you only so far when a technology moves fast. Suppose that an A.I. system begins upgrading itself by making fundamental breakthroughs in computer science. How quickly could its intelligence accelerate? Researchers debate what they call takeoff speed. In what they describe as a slow or soft takeoff, machines could take years to go from less than humanly intelligent to much smarter than us; in what they call a fast or hard takeoff, the jump could happen in months, even minutes. Researchers refer to the second scenario as FOOM, evoking a comic-book superhero taking flight. Those on the FOOM side point to, among other things, human evolution to justify their case. "It seems to have been a lot harder for evolution to develop, say, chimpanzee-level intelligence than to go from chimpanzee-level to human-level intelligence," Nick Bostrom, the director of the Future of Humanity Institute at the University of Oxford and the author of Superintelligence, told me. Clune is also what some researchers call an A.I. doomer. He doubts that we'll recognize the approach of superhuman A.I. before it's too late. "We'll probably frog-boil ourselves into a situation where we get used to big advance, big advance, big advance, big advance," he said. "And think of each one of those as, 'That didn't cause a problem, that didn't cause a problem, that didn't cause a problem.' And then you turn a corner, and something happens that's now a much bigger step than you realize."
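The soft-versus-hard distinction is easier to see in a back-of-the-envelope growth model. This is only a schematic with made-up units and parameters (capability 1.0 stands for "human-level"): in the soft curve, capability compounds at a fixed rate, while in the hard curve the growth rate itself scales with current capability, so the system lingers below human level and then blows past it in a handful of steps.

```python
# Schematic comparison of a "soft" vs. "hard" takeoff.
# All numbers are arbitrary illustrations, not forecasts.

def takeoff(c=0.5, rate=0.3, steps=12, recursive=False):
    history = []
    for _ in range(steps):
        history.append(c)
        growth = rate * c if recursive else rate   # hard takeoff: rate scales with capability
        c = c * (1 + growth)
    return history

def first_step_reaching(history, level):
    return next((i for i, v in enumerate(history) if v >= level), None)

soft = takeoff(recursive=False)
hard = takeoff(recursive=True)
for name, curve in [("soft", soft), ("hard", hard)]:
    print(f"{name}: human-level at step {first_step_reaching(curve, 1.0)}, "
          f"10x human at step {first_step_reaching(curve, 10.0)}, "
          f"final capability {curve[-1]:.1f}")
```

Both curves cross human level at about the same step; the difference is what happens afterward, which is the crux of the FOOM debate.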

What could we do today to prevent an uncontrolled expansion of A.I.'s power? Ross, of the W.H.O., drew some lessons from the way that biologists have developed a sense of shared responsibility for the safety of biological research. "What we are trying to promote is to say, 'Everybody needs to feel concerned,'" she said of biology. "So it is the researcher in the lab, it is the funder of the research, it is the head of the research institute, it is the publisher, and, all together, that is actually what creates that safe space to conduct life research." In the field of A.I., journals and conferences have begun to take into account the possible harms of publishing work in areas such as facial recognition. And, in 2021, a hundred and ninety-three countries adopted a Recommendation on the Ethics of Artificial Intelligence, created by the United Nations Educational, Scientific, and Cultural Organization (UNESCO). The recommendations focus on data protection, mass surveillance, and resource efficiency (but not computer superintelligence). The organization doesn't have regulatory power, but Mariagrazia Squicciarini, who runs a social-policies office at UNESCO, told me that countries might create regulations based on its recommendations; corporations might also choose to abide by them, in hopes that their products will work around the world.

This is an optimistic scenario. Eliezer Yudkowsky, a researcher at the Machine Intelligence Research Institute, in the Bay Area, has likened A.I.-safety recommendations to a fire-alarm system. A classic experiment found that, when smoky mist began filling a room containing multiple people, most didn't report it. They saw others remaining stoic and downplayed the danger. An official alarm may signal that it's legitimate to take action. But, in A.I., there's no one with the clear authority to sound such an alarm, and people will always disagree about which advances count as evidence of a conflagration. "There will be no fire alarm that is not an actual running AGI," Yudkowsky has written. Even if everyone agrees on the threat, no company or country will want to pause on its own, for fear of being passed by competitors. Bostrom told me that he foresees a possible race to the bottom, with developers undercutting one another's levels of caution. Earlier this year, an internal slide presentation leaked from Google indicated that the company planned to recalibrate its comfort with A.I. risk in light of heated competition.

International law restricts the development of nuclear weapons and ultra-dangerous pathogens. But it's hard to imagine a similar regime of global regulations for A.I. development. "It seems like a very strange world where you have laws against doing machine learning, and some ability to try to enforce them," Clune said. "The level of intrusion that would be required to stop people from writing code on their computers wherever they are in the world seems dystopian." Russell, of Berkeley, pointed to the spread of malware: by one estimate, cybercrime costs the world six trillion dollars a year, and yet policing software directly (for example, trying to delete every single copy) is impossible, he said. A.I. is being studied in thousands of labs around the world, run by universities, corporations, and governments, and the race also has smaller entrants. Another leaked document, attributed to an anonymous Google researcher, addresses open-source efforts to imitate large language models such as ChatGPT and Google's Bard. "We have no secret sauce," the memo warns. "The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop."

Even if a FOOM were detected, who would pull the plug? A truly superintelligent A.I. might be smart enough to copy itself from place to place, making the task even more difficult. "I had this conversation with a movie director," Russell recalled. "He wanted me to be a consultant on his superintelligence movie. The main thing he wanted me to help him understand was, How do the humans outwit the superintelligent A.I.? It's, like, I can't help you with that, sorry!" In a paper titled "The Off-Switch Game," Russell and his co-authors write that "switching off an advanced AI system may be no easier than, say, beating AlphaGo at Go."

It's possible that we won't want to shut down a FOOMing A.I. "A vastly capable system could make itself indispensable," Armstrong said; for example, if it gives good economic advice, and we become dependent on it, then no one would dare pull the plug, because it would collapse the economy. Or an A.I. might persuade us to keep it alive and execute its wishes. Before making GPT-4 public, OpenAI asked a nonprofit called the Alignment Research Center to test the system's safety. In one incident, when confronted with a CAPTCHA (an online test designed to distinguish between humans and bots, in which visually garbled letters must be entered into a text box), the A.I. contacted a TaskRabbit worker and asked for help solving it. The worker asked the model whether it needed assistance because it was a robot; the model replied, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service." Did GPT-4 intend to deceive? Was it executing a plan? Regardless of how we answer these questions, the worker complied.

Robin Hanson, an economist at George Mason University who has written a science-fiction-like book about uploaded consciousness and has worked as an A.I. researcher, told me that we worry too much about the singularity. "We're combining all of these relatively unlikely scenarios into a grand scenario to make it all work," he said. A computer system would have to become capable of improving itself; we'd have to vastly underestimate its abilities; and its values would have to drift enormously, turning it against us. Even if all of this were to happen, he said, the A.I. wouldn't be able to push a button and destroy the universe.

Hanson offered an economic take on the future of artificial intelligence. If A.G.I. does develop, he argues, then it's likely to happen in multiple places around the same time. The systems would then be put to economic use by the companies or organizations that developed them. The market would curtail their powers; investors, wanting to see their companies succeed, would go slow and add safety features. "If there are many taxi services, and one taxi service starts to, like, take its customers to strange places, then customers will switch to other suppliers," Hanson said. "You don't have to go to their power source and unplug them from the wall. You're unplugging the revenue stream."

A world in which multiple superintelligent computers coexist would be complicated. If one system goes rogue, Hanson said, we might program others to combat it. Alternatively, the first superintelligent A.I. to be invented might go about suppressing competitors. "That is a very interesting plot for a science-fiction novel," Clune said. "You could also imagine a whole society of A.I.s. There's A.I. police, there's A.G.I.s that go to jail. It's very interesting to think about." But Hanson argued that these sorts of scenarios are so futuristic that they shouldn't concern us. "I think, for anything you're worried about, you have to ask what's the right time to worry," he said. Imagine that you could have foreseen nuclear weapons or automobile traffic a thousand years ago. "There wouldn't have been much you could have done then to think usefully about them," Hanson said. "I just think, for A.I., we're well before that point."

Still, something seems amiss. Some researchers appear to think that disaster is inevitable, and yet calls for work on A.I. to stop are still rare enough to be newsworthy; pretty much no one in the field wants us to live in the world portrayed in Frank Herbert's novel Dune, in which humans have outlawed thinking machines. Why might researchers who fear catastrophe keep edging toward it? "I believe ever-more-powerful A.I. will be created regardless of what I do," Clune told me; his goal, he said, is to try to make its development go as well as possible for humanity. Russell argued that stopping A.I. shouldn't be necessary if A.I.-research efforts take safety as a primary goal, as, for example, nuclear-energy research does. A.I. is interesting, of course, and researchers enjoy working on it; it also promises to make some of them rich. And no one's dead certain that we're doomed. In general, people think they can control the things they make with their own hands. Yet chatbots today are already misaligned. They falsify, plagiarize, and enrage, serving the incentives of their corporate makers and learning from humanity's worst impulses. They are entrancing and useful but too complicated to understand or predict. And they are dramatically simpler, and more contained, than the future A.I. systems that researchers envision.

Go here to read the rest:

Can We Stop the Singularity? - The New Yorker

AI could replace 80% of jobs ‘in next few years’: expert – eNCA

RIO DE JANEIRO - Artificial intelligence could replace 80 percent of human jobs in the coming years -- but that's a good thing, says US-Brazilian researcher Ben Goertzel, a leading AI guru.

Goertzel is the founder and chief executive of SingularityNET, a research group he launched to create "Artificial General Intelligence," or AGI -- artificial intelligence with human cognitive abilities.

Goertzel told AFP in an interview that AGI is just years away and spoke out against recent efforts to curb artificial intelligence research.

"If we want machines to really be as smart as people and to be as agile in dealing with the unknown, then they need to be able to take big leaps beyond their training and programming. And we're not there yet," he said.

"But I think there's reason to believe we're years rather than decades from getting there."

Goertzel said there are jobs that could be automated.

"You could probably obsolete maybe 80 percent of jobs that people do, without having an AGI, by my guess. Not with ChatGPT exactly as a product. But with systems of that nature, which are going to follow in the next few years.

"I don't think it's a threat. I think it's a benefit. People can find better things to do with their life than work for a living... Pretty much every job involving paperwork should be automatable," he said.

"The problem I see is in the interim period when AIs are obsoleting one human job after another... I don't know how (to) solve all the social issues."

View post:

AI could replace 80% of jobs 'in next few years': expert - eNCA