The Politics of Artificial Intelligence (AI) – National and New Jersey … – InsiderNJ
On May 27, former Secretary of State Henry Kissinger will attain the age of 100. Over the last few months, I have been involved in authoring an historical essay, "Kissinger at 100: His Complex Historical Legacy."
The essay is scheduled to be published around the time of Kissinger's birthday by the Jandoli Institute, the public policy center for the Jandoli School of Communication at St. Bonaventure University. The institute's executive director is Rich Lee, a former State House reporter who also served as Deputy Communication Director for former Governor Jim McGreevey. I will also be developing a podcast regarding my essay.
For me, this project is truly a career capstone, utilizing all my analytic skills developed over a lifetime. This includes, inter alia, my studies as a political science honors scholar as a Northwestern University undergraduate, my service as a Navy officer, my years as a corporate and private practice attorney, my career as a public official, including my leadership of two major federal and state agencies, my accomplishments as a college professor, and my most recent post-retirement career as an opinion journalist.
Whether one is an admirer or critic of Dr. Henry Kissinger, there is no question that he has been a transformative figure, with a greater impact on American history than any 20th century American other than our presidents. Researching his life and career is truly a Sisyphean endeavor.
Kissinger has authored thirteen books and a plethora of articles, and has made numerous media appearances. In jocular fashion, I have told friends and family members that researching Henry Kissinger is like studying the Torah: you never finish it!
So about a month ago, I thought that I had finished all my Kissinger research, until I had the good fortune to meet with a friend of mine who, unbeknownst to me, was also a friend of Henry Kissinger. When I informed him of my Kissinger project, he proceeded to display for me on his iPhone numerous photos of him and the legendary Dr. K!
Then, he asked me what my research sources were. I proudly told him the list of my readings, videotape viewings, and interviews. He responded by saying, "Very good, but you have a critical omission. You did not read the book The Age of AI (artificial intelligence) and Our Human Future."
The book was co-authored by Henry Kissinger, Eric Schmidt, former CEO of Google, and Daniel Huttenlocher, the Inaugural Dean of the MIT Schwarzman College of Computing. For ease of reference, and with all due respect to his co-authors, I will refer to this work as the Kissinger AI book.
I told my friend that I was aware of the book, but I had chosen not to include it in my essay because of my focus on Kissinger as a foreign policy maker and diplomat. My friend, however, admonished me: "You do not understand. For Henry, his involvement with AI is a legacy item."
So I immediately ordered the book. My friend was correct. The Kissinger AI book should be a must-read for high governmental officials, New Jersey and federal. Every New Jersey cabinet member and authority executive director should have this book on his or her desk.
Within the last month, AI has become a growing arena of national focus, sparked in large part by the resignation of Dr. Geoffrey Hinton from his job at Google. Dr. Hinton is known as the Godfather of AI. He resigned so that he can speak freely about the risks of AI. A part of him, he said, now regrets his life's work.
In New Jersey, late last year, a bill was introduced in the Assembly, A4909, which would mandate that employers use only hiring software that has been subjected to a bias audit, which looks for any patterns of discrimination. It would also require annual reviews of whether the programs comply with state law.
The bill was generated because of increasing concern that a growing number of AI systems had a gender, racial, or disability bias. As an example, Reuters reported in 2018 that Amazon had stopped using an AI recruiting tool because it penalized applicants whose resumes referred to women's activities or degrees from two all-women's colleges.
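To make concrete what a bias audit of hiring software might examine, here is a minimal, hypothetical sketch in Python: it compares selection rates across applicant groups using the familiar four-fifths rule of thumb. The data, group labels, and threshold are illustrative assumptions only; neither A4909 nor the Kissinger AI book prescribes this particular method.

```python
# Illustrative sketch of one common bias-audit check: comparing selection
# rates across applicant groups (the "four-fifths rule"). The records below
# are hypothetical; real audits would use an employer's actual decision data.
from collections import defaultdict

# Each record: (applicant group, 1 if the software recommended hiring, else 0)
decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 0), ("group_b", 1), ("group_b", 0), ("group_b", 0),
]

totals, selected = defaultdict(int), defaultdict(int)
for group, hired in decisions:
    totals[group] += 1
    selected[group] += hired

# Selection rate per group, and each group's ratio to the highest rate.
rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best if best else 0.0
    flag = "FLAG" if ratio < 0.8 else "ok"  # four-fifths rule threshold (illustrative)
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```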
In February, NorthJersey.com journalist Daniel Munoz authored a comprehensive column dealing with AI and its potential dangers and biases in the hiring process. Included in the column was an interview with Assemblywoman Sadaf Jaffer (D-Mercer), a prime sponsor of this legislation.
It should be noted that the Kissinger AI book strongly recommends the auditing of AI systems by humans, rather than self-auditing by the machines themselves. Human auditing can increase the effectiveness of AI while mitigating its dangers.
And today, on Twitter, Assembly Majority Leader Lou Greenwald (D-Camden) stated as follows: "The power that Artificial Intelligence possesses makes it a potentially dangerous tool for people looking to spread misinformation. This is why I will be introducing legislation that looks to limit the harmful uses it has on election campaigns."
The beneficial effects of AI are real, as are the dangers. The politics of AI is the subject of increasing focus at both the national and New Jersey level.
The Kissinger AI book is highly relevant to all AI issues, both federal and state. The three-fold focus of the book makes it an indispensable basic guide to AI politics.
First, it gives a concise, contextual definition of AI. Second, it describes in depth the potential benefits and dangers of AI. Third, it proposes some initial solutions to deal with the emerging negative impacts of AI.
In terms of contextual definition, the Kissinger AI book describes two empirical tests of what constitutes AI.
The first is the Alan Turing test, stating that if a software process enabled a machine to operate so proficiently that observers could not distinguish its behavior from a human's, the machine should be labeled intelligent.
The second is the John McCarthy test, which defines AI as machines that can perform tasks characteristic of human intelligence.
The Kissinger AI book also describes the impact of AI on the reasoning process, so integral to decision making. The three components of reason are information, knowledge, and wisdom. When information becomes contextualized, it leads to knowledge. When knowledge leads to conviction, it becomes wisdom. Yet AI is without the reflection and self-awareness qualities that are essential to wisdom.
This lack of wisdom, combined with three essential features of AI, magnifies its enormous danger in certain situations: 1) its use for both warlike and peaceful purposes; 2) its massive destructive force; and 3) its capacity to be deployed and spread easily, quickly, and widely.
The most alarming feature of AI is on the horizon: the arrival of artificial general intelligence (AGI). This means AI capable of completing any intellectual task humans are capable of, in contrast to today's narrow AI, which is developed to complete a specific task.
It is the growing capacity of AI systems for unsupervised self-learning that is facilitating the potential arrival of AGI. With AGI comes autonomy, and autonomy in weapons systems increases the potential for accidental war.
The potential of AI to lead to accidental war, along with the two dangers already publicized in New Jersey, AI-generated job discrimination and political disinformation, represents the negative aspects of AI that will receive the most focus in the forthcoming debate.
Yet AI is not without its extremely beneficial uses, most notably in the development of new prescription drugs. So the obvious task of government, federal and state, is to filter out the dangers and facilitate the beneficial uses.
As a first step, the Kissinger AI book recommends that new national governmental authorities be created with two objectives: 1) keeping America intellectually and strategically competitive in AI; and 2) undertaking studies to assess the cultural implications of AI.
In New Jersey, the best way to governmentally meet this challenge would be to create a new cabinet-level Department of Science, Innovation, and Technology.
We currently have in New Jersey the Commission on Science, Innovation, and Technology, which, with limited funding, does a most commendable job in fulfilling its mission, namely: "Responsibility for strengthening the innovation economy within the State, encouraging collaboration and connectivity between industry and academia, and the translation of innovations into successful high growth businesses."
A Department of Science, Innovation, and Technology would have three additional powers: 1) regulatory powers regarding auditing, self-learning, and AGI; 2) the ability to commission more in-depth studies regarding AI's cultural impact; and 3) the ability to coordinate scientific policy throughout the executive branch. Obviously, an increased level of funding would be necessary to execute these three functions.
I also have a recommendation for the first New Jersey Commissioner of Science, Innovation, and Technology: State Senator Andrew Zwicker (D-Middlesex). His brilliance and competence as a scientist, as demonstrated by his service at the Princeton Plasma Physics Laboratory, and his proven integrity and ethics in state government make him an ideal candidate for this role.
And to Henry Kissinger, my fellow Jew, I say to you: Mazal Tov on your 100th birthday! And like Moses in the Torah, may you live at least 120 years!
Alan J. Steinberg served as regional administrator of EPA Region 2 during the administration of former President George W. Bush and as executive director of the New Jersey Meadowlands Commission.