On Language And Intelligence
A revolution is taking place, but we do not seem to realize it yet. Paradigm-shifting technologies often produce an abrupt transition when they get adopted. However, that transition is not easy to recognize early on: the effects of an exponential trend appear linear at the beginning, so the explosive force of the transition that occurs a little later takes many by surprise.
Let us look at the status of development of large language models. This is a relatively new technology powered by recent advances in machine learning - in particular, by our newly acquired capability to train very large neural networks tasked with producing meaningful text in answer to arbitrarily complex questions. The networks that perform this task today have billions or trillions of parameters, whose values are learned by processing huge datasets of text mined from the internet. Training these models takes enormous amounts of computing power, and correspondingly large amounts of money (on the order of ten million dollars per training run, and up).
The scaling up of these large language models - in particular GPT-3 and GPT-4, which power ChatGPT - has brought about what looks like a phase transition in their performance. Yet we have grown accustomed to treating artificial intelligence developments with contempt: every time something new arrives that used to be considered a distant, hard-to-achieve target, we react with a shrug. Self-driving cars? Just a dumb neural network trained with lots of images. Speech recognition? Nothing but mathematical transformations of sound time series. Computers beating humans at chess and go? Only an effect of CPU scaling. We keep raising the bar, claiming that artificial intelligence is "something else", which is yet to come. But is it?
I am not a true expert in artificial intelligence - I am a physicist, for goodness' sake! But I do work with complex machine learning systems, and I have been an observer of the field for several decades now. So I feel entitled to tell you what I think about the matter. What I see is that the sensation produced by the recently released ChatGPT models mostly lies in the wealth of applications that these tools have, and in their game-changing effect on our society; but we should look deeper than that.
The potential dangers of unrestrained, uncontrolled use of the new technology are a real concern, one that prompted the open letter by the Future of Life Institute arguing for a six-month pause on the development of these models. It seems indeed a reasonable course of action to wait before developing still more powerful language models, and to use the time to assess the situation and create a system of checks and balances to prevent damage to crucial elements of our civilization: in particular, the exploitation of these AI technologies might result in the manipulation and reshaping of public opinion for the purpose of gaining political control. But there are also other potential threats.
If you have never had a conversation with ChatGPT, I suggest that you try it out for yourself. The system is capable not only of correctly interpreting quite complex questions, but also of producing text and answers of very high quality. After a while, it feels like you are really talking to a sentient being. Now, we must be careful here - of course, we cannot call "sentient" a computer program that puts together words according to mathematical recipes, can we? And by the way, it is not difficult to get ChatGPT to produce false statements, or completely made-up references. But the same can happen when we talk with other humans!
I have started to use ChatGPT as a companion in my studies: a better, smarter, faster, more powerful version of Google. Yesterday I tested it by formalizing in seven lines of text a problem that would probably have taken twenty minutes to explain precisely to a colleague - those seven lines were quite thick with math, written as you would write math in an email ("Consider a likelihood ratio of Poisson measurements, R = L_1(Poisson(N_i|mu_i,1)) / L_0(Poisson(N_i|mu_i,0)), where i runs on a set of observed counts ...."). Well, ChatGPT not only provided me with a correct answer to my question, but it also used the same kind of language in its answer; and when I asked it to produce code that performed the operations leading to the solution of my problem, it did so flawlessly. Of course you have to be careful when using these outputs: there is absolutely no guarantee that the programs or the answers will be correct. But neither can you say that about the answers of a colleague!
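To give an idea of the kind of calculation involved, here is a minimal sketch in Python of a likelihood ratio of Poisson measurements like the one above, with made-up counts and expected rates standing in for real data (this is my own illustration, not the code ChatGPT produced):

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical observed counts N_i and expected rates under the two hypotheses
N   = np.array([12, 7, 30])          # observed counts (made-up numbers)
mu1 = np.array([10.0, 8.0, 28.0])    # rates mu_{i,1} under hypothesis H1
mu0 = np.array([14.0, 5.0, 33.0])    # rates mu_{i,0} under hypothesis H0

# Log-likelihood of each hypothesis: sum over counts of log Poisson(N_i | mu_i)
logL1 = poisson.logpmf(N, mu1).sum()
logL0 = poisson.logpmf(N, mu0).sum()

# Log of the likelihood ratio R = L_1 / L_0 (working in logs avoids underflow)
logR = logL1 - logL0
print(f"log R = {logR:.3f}  (R = {np.exp(logR):.3f})")
```

A value of R much larger than one favors the first hypothesis; in practice one works with log R, as above, since products of many small Poisson probabilities underflow quickly.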
Intelligence is a concept that is very hard to define: there is a huge literature on what it is, what its components are, and how we can quantify or recognize it. I won't get into that matter, but I want to observe that one of the ways we typically assess an individual as intelligent is by hearing him or her talk. The capability to produce complex language and elaborate abstract concepts is undoubtedly a mark of intelligent beings. And when we are hit by a stroke, maybe a small hemorrhage in our brain, we may temporarily lose our ability to speak or to put together meaningful sentences.
Further, consider Alzheimer's disease: people who are hit progressively harder by that impairing condition gradually lose their ability to put together correct sentences. I lost my mother that way six years ago, and I remember observing that, in very close connection with her declining capability to speak, came a gradual deterioration of her intelligence. The two things are inextricably linked: we appear to put together intelligent thought by processing text in our brain, even when we do not speak.
Because of the above, I believe we must acknowledge that these large language models possess distinct traits of intelligence. It does not matter much to me that they put together their flawless answers through mathematical operations between large matrices of weights and biases: what matters is the result of those operations, and the fact that it is hard to distinguish from - if not superior to - what a human mind can produce.
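Stripped to its bare bones, that machinery is nothing more exotic than the toy sketch below: matrix products, a nonlinearity, and a softmax turning one token into a probability distribution over the next. Here random, untrained parameters and made-up dimensions stand in for the billions of learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a vocabulary of 50 "words", hidden states of size 16
vocab, hidden = 50, 16

# Stand-ins for learned parameters: embeddings, one dense layer, an output projection
E = rng.normal(size=(vocab, hidden))   # token embeddings
W = rng.normal(size=(hidden, hidden))  # weights...
b = rng.normal(size=hidden)            # ...and biases
U = rng.normal(size=(hidden, vocab))   # projection back onto the vocabulary

def next_token_distribution(token_id):
    """One step of 'producing text': embed, transform, score every word."""
    h = np.tanh(E[token_id] @ W + b)   # hidden state from the current token
    logits = h @ U                     # a score for each word in the vocabulary
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

p = next_token_distribution(3)
print("most likely next token:", p.argmax(), "with probability", round(p.max(), 3))
```

Real models stack hundreds of such layers, with attention mechanisms mixing information across the whole prompt, but the arithmetic nature of the operations is exactly this.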
Of course, large language models are static systems: once they are trained - as I said, at considerable effort and expense, not to mention CO2 impact - they do not keep "learning" by interacting with their users. Nor do they have any means of acquiring and processing information through sensory inputs. These limitations make these systems quite different from what we have always imagined the capabilities of a true "artificial general intelligence" would be. Indeed, a world-class expert on the matter like Yann LeCun insists, on Twitter and in other venues, that large language models are an off-ramp on the road to AGI, and he is of course right: these instruments will never "come alive" and become independent. They will be limited to one task: producing text in response to a prompt. Not real intelligence, not really. And yet...
Yet I cannot help thinking that we have to rethink what we call "intelligence" in light of the capabilities of these systems. If they match our speech and writing skills, they have to be credited with reasoning. The reasoning they perform differs to some extent from the reasoning that takes place in our brains, but not overly so, after all: we too reason by means of weights and biases encoded in our neurons. So we are not that different from large language models, at least in how we produce language.
The jury is still out on whether humanity will benefit from the empowerment provided by ChatGPT and its successors, exploiting it for good causes - and I am convinced that still more powerful models lie in our near future - or whether it will succumb to this new technology. But for sure these are interesting times!