Archive for the ‘Artificial General Intelligence’ Category

The Simple Reason Why AGI (Artificial General Intelligence) Is Not … – Medium

Photo by Jonathan Kemper on Unsplash

We're living in an era where the line between science fiction and reality is blurring faster than ever. Everywhere you look, there's talk about Artificial General Intelligence (AGI), a form of AI that can understand, learn, and apply knowledge across a broad range of tasks, much like a human. It's a hot topic, a cool conversation piece, and a tantalizing technological dream.

But here's the kicker: it's not going to happen. And the reason is simple yet profound.

First off, let's get one thing straight: I'm not a cynic. I'm not the guy who says, "That's impossible!" just for kicks. But when it comes to AGI, there's a fundamental issue that most tech prophets conveniently overlook. It's about understanding human intelligence itself.

Think about it. We, as a species, are still grappling with the complexities of our own minds. Neuroscience, psychology, philosophy: they've all been chipping away at the enigma of human consciousness for centuries, yet we're nowhere close to fully understanding it. How, then, can we expect to create a generalized form of intelligence that mimics our own?

The advocates of AGI often talk about the exponential growth of technology, Moore's Law, and all that jazz. Sure, we've made leaps and bounds in computational power and machine learning. But AGI isn't just a fancier algorithm or a more powerful processor. It's about replicating the nuanced, often irrational, and deeply complex nature of human thought and reasoning. And that's where the overzealous optimism falls flat.

Let's dive deeper. Human intelligence isn't just about processing information. It's about emotion, intuition, morality, creativity, and a myriad of other intangibles that machines, as of now, can't even begin to comprehend. You can't code empathy. You can't quantify the soul-stirring depth of a poem. How do you program a machine to understand the nuanced ethics of a complicated situation, or to appreciate the beauty of a sunset?

But wait, there's more. There's an inherent arrogance in assuming we can create an AGI. It's like saying, "We can play God." But can we? We're part of nature, not above it. Our attempts to …

View original post here:

The Simple Reason Why AGI (Artificial General Intelligence) Is Not ... - Medium

What does the future hold for generative AI? – MIT News

Speaking at the Generative AI: Shaping the Future symposium on Nov. 28, the kickoff event of MIT's Generative AI Week, keynote speaker and iRobot co-founder Rodney Brooks warned attendees against uncritically overestimating the capabilities of this emerging technology, which underpins increasingly powerful tools like OpenAI's ChatGPT and Google's Bard.

"Hype leads to hubris, and hubris leads to conceit, and conceit leads to failure," cautioned Brooks, who is also a professor emeritus at MIT, a former director of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and founder of Robust.AI.

"No one technology has ever surpassed everything else," he added.

The symposium, which drew hundreds of attendees from academia and industry to the Institute's Kresge Auditorium, was laced with messages of hope about the opportunities generative AI offers for making the world a better place, including through art and creativity, interspersed with cautionary tales about what could go wrong if these AI tools are not developed responsibly.

Generative AI is a term to describe machine-learning models that learn to generate new material that looks like the data they were trained on. These models have exhibited some incredible capabilities, such as the ability to produce human-like creative writing, translate languages, generate functional computer code, or craft realistic images from text prompts.
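
To make that definition concrete, here is a minimal sketch of text generation using the open-source Hugging Face transformers library; the model choice ("gpt2") and the sampling settings are illustrative assumptions, not anything named in the article.

```python
# A minimal sketch of "generative AI" in the text domain: a pretrained
# language model produces new text that resembles its training data.
# Assumes the `transformers` package is installed; "gpt2" is an
# illustrative stand-in for far larger production models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Generative AI is",
    max_new_tokens=30,   # how much new text to produce
    do_sample=True,      # sample rather than always taking the top word
    temperature=0.8,     # higher values give more varied output
)
print(result[0]["generated_text"])
```

The same learn-the-training-distribution idea underlies image and code generators; only the data and the model architecture change.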

In her opening remarks to launch the symposium, MIT President Sally Kornbluth highlighted several projects faculty and students have undertaken to use generative AI to make a positive impact in the world. For example, the work of the Axim Collaborative, an online education initiative launched by MIT and Harvard, includes exploring the educational aspects of generative AI to help underserved students.

The Institute also recently announced seed grants for 27 interdisciplinary faculty research projects centered on how AI will transform people's lives across society.

In hosting Generative AI Week, MIT hopes to not only showcase this type of innovation, but also generate "collaborative collisions" among attendees, Kornbluth said.

"Collaboration involving academics, policymakers, and industry will be critical if we are to safely integrate a rapidly evolving technology like generative AI in ways that are humane and help humans solve problems," she told the audience.

"I honestly cannot think of a challenge more closely aligned with MIT's mission. It is a profound responsibility, but I have every confidence that we can face it, if we face it head on and if we face it as a community," she said.

While generative AI holds the potential to help solve some of the planet's most pressing problems, the emergence of these powerful machine learning models has blurred the distinction between science fiction and reality, said CSAIL Director Daniela Rus in her opening remarks. "It is no longer a question of whether we can make machines that produce new content," she said, "but how we can use these tools to enhance businesses and ensure sustainability."

"Today, we will discuss the possibility of a future where generative AI does not just exist as a technological marvel, but stands as a source of hope and a force for good," said Rus, who is also the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science.

But before the discussion dove deeply into the capabilities of generative AI, attendees were first asked to ponder their humanity, as MIT Professor Joshua Bennett read an original poem.

Bennett, a professor in the MIT Literature Section and Distinguished Chair of the Humanities, was asked to write a poem about what it means to be human, and drew inspiration from his daughter, who was born three weeks ago.

The poem told of his experiences as a boy watching Star Trek with his father and touched on the importance of passing traditions down to the next generation.

In his keynote remarks, Brooks set out to unpack some of the deep, scientific questions surrounding generative AI, as well as explore what the technology can tell us about ourselves.

To begin, he sought to dispel some of the mystery swirling around generative AI tools like ChatGPT by explaining the basics of how this large language model works. ChatGPT, for instance, generates text one word at a time by determining what the next word should be in the context of what it has already written. While a human might write a story by thinking about entire phrases, ChatGPT only focuses on the next word, Brooks explained.

ChatGPT 3.5 is built on a machine-learning model (GPT-3.5) that has 175 billion parameters and was exposed to billions of pages of text on the web during training. (The newest iteration, built on GPT-4, is even larger.) It learns correlations between words in this massive corpus of text and uses this knowledge to propose what word might come next when given a prompt.
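
As a minimal sketch of the next-word loop Brooks describes (not his code; it assumes the `transformers` and `torch` packages, with the small open GPT-2 model standing in for the vastly larger GPT-3.5):

```python
# Toy illustration of next-token prediction: the model scores every token
# in its vocabulary given the text so far, one token is chosen and
# appended, and the loop repeats. Model and prompt are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The robot wrote a sonnet about", return_tensors="pt").input_ids
for _ in range(20):                        # generate 20 tokens, one at a time
    logits = model(ids).logits[0, -1]      # scores for every possible next token
    probs = torch.softmax(logits, dim=-1)  # turn scores into probabilities
    next_id = torch.multinomial(probs, num_samples=1)  # sample the next token
    ids = torch.cat([ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(ids[0]))
```

Note that nothing in the loop plans ahead: each step conditions only on what has already been written, which is exactly the contrast with whole-phrase human drafting that Brooks draws.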

The model has demonstrated some incredible capabilities, such as the ability to write a sonnet about robots in the style of Shakespeare's famous Sonnet 18. During his talk, Brooks showcased the sonnet he asked ChatGPT to write side-by-side with his own sonnet.

But while researchers still don't fully understand exactly how these models work, Brooks assured the audience that generative AI's seemingly incredible capabilities are not magic, and it doesn't mean these models can do anything.

His biggest fears about generative AI don't revolve around models that could someday surpass human intelligence. Rather, he is most worried about researchers who may throw away decades of excellent work that was nearing a breakthrough, just to jump on shiny new advancements in generative AI; venture capital firms that blindly swarm toward technologies that can yield the highest margins; or the possibility that a whole generation of engineers will forget about other forms of software and AI.

At the end of the day, those who believe generative AI can solve the world's problems and those who believe it will only generate new problems have at least one thing in common: Both groups tend to overestimate the technology, he said.

"What is the conceit with generative AI? The conceit is that it is somehow going to lead to artificial general intelligence. By itself, it is not," Brooks said.

Following Brooks' presentation, a group of MIT faculty spoke about their work using generative AI and participated in a panel discussion about future advances, important but underexplored research topics, and the challenges of AI regulation and policy.

The panel consisted of Jacob Andreas, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL; Antonio Torralba, the Delta Electronics Professor of EECS and a member of CSAIL; Ev Fedorenko, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research at MIT; and Armando Solar-Lezama, a Distinguished Professor of Computing and associate director of CSAIL. It was moderated by William T. Freeman, the Thomas and Gerd Perkins Professor of EECS and a member of CSAIL.

The panelists discussed several potential future research directions around generative AI, including the possibility of integrating perceptual systems, drawing on human senses like touch and smell, rather than focusing primarily on language and images. The researchers also spoke about the importance of engaging with policymakers and the public to ensure generative AI tools are produced and deployed responsibly.

"One of the big risks with generative AI today is the risk of digital snake oil. There is a big risk of a lot of products going out that claim to do miraculous things but in the long run could be very harmful," Solar-Lezama said.

The morning session concluded with an excerpt from the 1925 science fiction novel Metropolis, read by senior Joy Ma, a physics and theater arts major, followed by a roundtable discussion on the future of generative AI. The discussion included Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences and a member of CSAIL; Dina Katabi, the Thuan and Nicole Pham Professor in EECS and a principal investigator in CSAIL and the MIT Jameel Clinic; and Max Tegmark, professor of physics. It was moderated by Daniela Rus.

One focus of the discussion was the possibility of developing generative AI models that can go beyond what we can do as humans, such as tools that can sense someone's emotions by using electromagnetic signals to understand how a person's breathing and heart rate are changing.
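
The article doesn't explain how such sensing works, but as a rough sketch of the underlying signal-processing idea: given a displacement-like signal recovered from reflected signals, breathing and heart rate show up as dominant frequencies in distinct bands. Everything below, the sample rate, the synthetic signal, and the frequency bands, is an assumption for illustration.

```python
# Toy illustration (not the researchers' method): estimate breathing and
# heart rate from a chest-displacement-like signal by finding the
# strongest frequency in each physiologically plausible band.
import numpy as np

fs = 50.0                                        # sample rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)                     # 30 seconds of data
signal = (1.0 * np.sin(2 * np.pi * 0.25 * t)     # breathing ~0.25 Hz (15/min)
          + 0.1 * np.sin(2 * np.pi * 1.2 * t)    # heartbeat ~1.2 Hz (72/min)
          + 0.05 * np.random.randn(t.size))      # sensor noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

def dominant(lo, hi):
    """Strongest frequency in [lo, hi] Hz, converted to per-minute."""
    band = (freqs >= lo) & (freqs <= hi)
    return 60 * freqs[band][np.argmax(spectrum[band])]

print(f"breathing ~{dominant(0.1, 0.5):.0f} breaths per minute")
print(f"heart rate ~{dominant(0.8, 2.0):.0f} beats per minute")
```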

But one key to integrating AI like this into the real world safely is to ensure that we can trust it, Tegmark said. "If we know an AI tool will meet the specifications we insist on, then we no longer have to be afraid of building really powerful systems that go out and do things for us in the world," he said.

See the original post here:

What does the future hold for generative AI? - MIT News

One year after its public launch, ChatGPT has succeeded in igniting … – Morningstar

By Therese Poletti

It was called an "iPhone moment" for artificial intelligence, likened to the development of the World Wide Web for the internet, dubbed a Cambrian explosion, equated to the invention of the lightbulb or the printing press.

Whatever the superlatives, there is no doubt that the debut one year ago of a very fast and fluid chatbot called ChatGPT, even with its inaccuracies, has become the most important moment for Silicon Valley since the debut of Apple Inc.'s (AAPL) iPhone in 2007 - or even the dot-com boom years earlier.

"It might be the most important human invention ever," said Jerry Kaplan, an entrepreneur, author, AI expert and Stanford University adjunct lecturer, at a discussion sponsored by Reinvent Futures this summer. "We have created a tool that can use tools. That is a very fundamental difference, and we have crossed that boundary."

ChatGPT, using a technology called generative AI, and its latest iterations and copycats have upended the hierarchies of nearly every technology sector, including semiconductors, consumer hardware, software and the cloud, while prompting companies in farther-out sectors, like dining and energy, to tout their AI efforts as well. It also has had a huge effect on Wall Street and the stocks of companies deemed to be winners and losers in AI.

The technology has also rekindled the startup community, especially in San Francisco, with new ideas and funding, bucking an overall downturn in venture-capital financing, and bringing hopes for future IPOs.

ChatGPT wasn't the first chatbot to go viral. Some of you might remember Microsoft Corp.'s (MSFT) ill-fated Tay, a chatbot launched on Twitter in 2016 and pulled by the software giant 16 hours later, after it began spewing offensive tweets in response to the offensive comments tweeted at it.

Last year's release of ChatGPT, though, revolutionized the AI field by making it accessible to many to use for mundane, or even creative, writing tasks. Ethan Mollick, an associate professor at the Wharton School, wrote in the Harvard Business Review a year ago that ChatGPT was a tipping point for AI, because it crossed a threshold where "it is generally useful for a wide range of tasks, from creating software to generating business ideas to writing a wedding toast."

The business world jumped on its potential and has not looked back since. Companies are experimenting and using AI to eliminate jobs and become more efficient. Morgan Stanley said last month that it believes 44% of the labor force will be affected over the next few years, with an economic impact of $2.1 trillion, eventually soaring to $4.1 trillion.

"Every industry, not just IT, every industry is talking about it or talking about how to integrate into their business," said Haibing Lu, a professor and department chair of information systems and analytics at the Leavey School of Business at Santa Clara University. "AI is on the radar of every business. I even talked to people in the winery biz. They are very traditional...but even those small business think about how AI will impact their business. Everyone realizes the impact of AI."

Companies began looking at bringing AI into their systems, leading to a massive IT investment in both their onsite data centers and in cloud computing systems, in a rush to add the massive computing power needed to run AI. Earlier this year, IDC predicted that spending by companies on AI-centric systems would reach $153 billion this year, with banking and retail leading the way, a jump of almost 27% from 2022. Companies in the thick of it, like chip makers Nvidia Corp. (NVDA), Broadcom Inc. (AVGO) and software giant Microsoft, which invested in OpenAI in 2019, have seen their stocks soar 229%, 70% and 60%, respectively, this year.
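
A quick sanity check on IDC's figures (my arithmetic, not IDC's): $153 billion this year after an almost-27% jump implies a 2022 baseline of roughly $120 billion.

```python
# Implied 2022 baseline from IDC's reported figures.
spend_2023 = 153e9   # projected AI-centric systems spend this year
growth = 0.27        # "almost 27%" year-over-year jump
print(f"implied 2022 spend: ${spend_2023 / (1 + growth) / 1e9:.0f}B")  # ~$120B
```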

Nvidia is the true standout, providing graphics processors designed to run large language models and training applications for AI-focused data centers that its chief executive, Jensen Huang, calls "AI factories." It has seen its revenue double, and then triple in the past two quarters compared with a year ago, as it is swamped with demand for its chips and software.

"Companies are now creating the 'chief AI officer,'" said David Borish, an AI strategist. "I get pinged about that all the time." The role of a chief AI strategist or an AI officer is a new role, he said.

But the rush to embrace the vast improvements in a technology that has been over 50 years in development is fueling fears about how powerful AI is becoming, with debates raging about the march toward so-called artificial general intelligence, the point at which a machine becomes as smart as humans, and the need for guardrails on its development.

That debate may also have been the crux for Silicon Valley's version of a soap opera at OpenAI, when this month's sudden firing of Chief Executive Sam Altman by the board - and his quick return - captured the attention of techies and civilians alike. Altman was even hired by Microsoft CEO Satya Nadella, only to negotiate a return to OpenAI days later, after nearly all the company's 800 employees threatened to quit and go to Microsoft.

The company also added Bret Taylor, the former co-CEO of Salesforce Inc. (CRM), and Larry Summers, the former Treasury secretary, to its small board of directors, joining Quora CEO Adam D'Angelo. Two other directors stepped down in the brouhaha before Thanksgiving.

A few days after Altman's return, Reuters reported what could have been the reason for his initial ouster. Some staff researchers reportedly wrote to the board, warning of a project called Q* (Q-Star), saying that it was a breakthrough toward artificial general intelligence but that it could also threaten humanity. Reuters reported that the board was concerned about commercializing AI's advances before understanding the consequences.

This has led to more hand-wringing about whether the machines will eventually take over. But some savvy technologists believe it is time to calm down. Kaplan pointed out over the summer that artificial general intelligence, also called AGI, has a lunatic fringe all riled up and scared.

"They are not coming for us, because there is no 'they,'" he said, adding that he had a perfect answer to that problem: "Let's not do that."

"I think what it is, is the craftsman was bewitched by his craft," said Paul Saffo, a Silicon Valley forecaster and a consulting professor at the School of Engineering at Stanford University. Saffo said that over the long history of AI, researchers have long over-promised and under-delivered. "It is the opposite of biotech. Biotech mumbles and understate their innovations and it turns out to be a big deal. But AI in particular has a long history of wildly overstating what they are doing."

So far, though, the capabilities of generative AI have captivated the world. And it's too late to put the genie back in the bottle.

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

Read the rest here:

One year after its public launch, ChatGPT has succeeded in igniting ... - Morningstar

Macy’s Could See Over $7.5 Billion in Additional Business Gains … – CMSWire

[Images: IHL AI Readiness Profile for Macy's, pages 1-3]

New research projects increased sales opportunities, improvements in gross margins, and lower expenses due to AI Readiness

According to the research, Macy's could see as much as $3.8 billion in increased sales and $2.1 billion in improved gross margins through lower product costs, more optimized pricing, and supply chain improvements, and could reduce selling, general and administrative (SG&A) costs by $1.7 billion through 2029.
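
Summing those three components (my arithmetic; IHL's report may decompose the figure differently) shows how they support the headline number of over $7.5 billion.

```python
# Components of the projected gain through 2029, per the article.
increased_sales = 3.8e9
improved_gross_margins = 2.1e9
lower_sga_costs = 1.7e9

total = increased_sales + improved_gross_margins + lower_sga_costs
print(f"total projected gain: ${total / 1e9:.1f}B")  # ~$7.6B, i.e. "over $7.5B"
```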

"Our research approach was to start by looking at opportunities at an industry level, then at the segment and specific-retailer level, leveraging our public and private data," said Greg Buzek, President of IHL Group. "We then applied a 9-point algorithm to each company that measured items like data maturity, analytics maturity, alignment with key vendors, as well as free cash flow."

The research includes gains that can be made through traditional AI/ML technologies, Generative AI, and the potential for Artificial General Intelligence. These figures do not include any savings from reducing headcount; rather, they focus on creating greater efficiency to support growth and lower expenses.

In total, each of the retailer profiles includes the following data:

- Total AI Impact from 2022-2029: Combined impact from traditional AI/ML, Generative AI, and Artificial General Intelligence.
- Annual Impact by Income Statement Category: Gains in sales, gross margins, or lower operating costs.
- Total AI Readiness Score and Rankings vs. Competitors: Shows competitiveness in the segment and the overall retail market.
- AI Impact by Line of Business: Explore the AI potential in Merchandising/Supply Chain, Sales & Marketing, Commerce, Infrastructure, BI/Analytics, Store Systems, and other areas such as Collaboration, ERP, and Legal.
- Benefits by Specific Solutions: For instance, under Merchandising/Supply Chain, gain insights on benefits gained via Order Management, Assortment and Allocation Planning, Distribution Systems, Warehouse Management, etc.

For a glimpse into the rich data and insights provided by these profiles, you can access the Macy's profile here.

The Retail AI Readiness Profiles are available for individual companies, or enterprises can access the entire directory of profiles with ongoing access to updated data as systems evolve.

About IHL Group:

IHL Group is a global research and advisory firm headquartered in Franklin, Tennessee, that provides market analysis and business consulting services for retailers and information technology companies that focus on the retail, hospitality, and consumer goods industries. For more information, see http://www.ihlservices.com, call 615-591-2955, or e-mail [emailprotected]. For press inquiries, please use [emailprotected] or the phone number above.

Note: This report is intended for informational purposes and does not constitute financial or investment advice. Please refer to the complete report and methodology for a detailed understanding of the data and analysis.

Gregory Buzek, IHL Group, +1 615-591-2955

Read more here:

Macy's Could See Over $7.5 Billion in Additional Business Gains ... - CMSWire

Securing the cloud and AI: Insights from Lacework's CISO – SiliconANGLE News

Artificial intelligence is more than just a buzzword. It's the result of many technologies coming together, starting at the hardware layers.

AI is being used to generate code and protect algorithms while also being used for security in analyzing cloud and code usage, explained Merritt Baer (pictured), field chief information security officer of Lacework Inc. Regulating the acceleration of artificial general intelligence is a current cultural tension, with some advocating for acceleration and others for slowing it down.

"We've talked about AWS Nitro before and some of the confidential computing benefits that folks get from the fact that AWS built it to not be human accessible," Baer said. "So, you don't have to pay extra for that factor. This is part of a longer tale about the chip industry and other things. It's important in that, I think right now, of course, AI is a buzzword, but what we're really seeing is the culmination of a lot of technologies coming to bear."

Baer spoke with theCUBE industry analyst John Furrier at the Supercloud 5: The Battle for AI Supremacy event, during an exclusive broadcast on theCUBE, SiliconANGLE Media's livestreaming studio. They discussed why securing the cloud and addressing potential security threats in the new generation of AI is crucial.

Security around AI will be important, with the ability to do more and know more, and the most likely source of an attack being a misused valid credential, according to Baer. Companies and industries are having a reckoning and need to define the values they want to live by, she said: tech is human-constructed, underserved communities have the potential to gain more accessibility, and progress toward more, better, and faster should be pursued in a deliberate and conscious way.

"I think that security around AI and also the security of your AI will be areas that we care about for the foreseeable future. But we're going to be doing stuff at an accelerated pace with that high-power compute, with the ability to do more and know when you're hitting a wire," Baer said. "A proverbial wire, like a threshold. So, being able to get real-time alerting and really low-latency alerting around things that look anomalous."
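
Lacework's actual detection logic isn't described in the interview; the following is only a generic sketch of the rolling-baseline anomaly alerting Baer alludes to, with the window size, warm-up length, and z-score threshold all assumed for illustration.

```python
# Generic sketch of low-latency anomaly alerting on an activity stream
# (e.g., API calls per minute by one credential). Not Lacework's product
# logic; all tuning values below are assumptions.
from collections import deque
from statistics import mean, stdev

class AnomalyAlerter:
    def __init__(self, window: int = 60, z_threshold: float = 4.0):
        self.history = deque(maxlen=window)  # rolling baseline of recent counts
        self.z_threshold = z_threshold

    def observe(self, count: float) -> bool:
        """Return True if `count` is anomalous vs. the rolling baseline."""
        anomalous = False
        if len(self.history) >= 10:              # require a warm-up baseline
            mu = mean(self.history)
            sigma = stdev(self.history) or 1.0   # guard against zero variance
            anomalous = (count - mu) / sigma > self.z_threshold
        self.history.append(count)
        return anomalous

alerter = AnomalyAlerter()
for minute, calls in enumerate([12, 9, 11, 10, 13, 11, 10, 12, 9, 11, 10, 250]):
    if alerter.observe(calls):
        print(f"minute {minute}: anomalous activity ({calls} calls)")
```

Production systems layer far more context (identity, geography, resource sensitivity) on top, but the baseline-and-deviation shape of the check is the same idea as the "proverbial wire" above.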

Security should not be seen as a cost center, but as part of the business proposition, Baer explained. Lacework is delivering effective capabilities to help customers take action and improve over time. Improving security team response time, incident response, threat detection and identity monitoring is crucial for CISOs, who often feel isolated in their roles and can benefit from automation.

"As a CISO, you want to reduce the likelihood of a bad day. You want to notice when your bad day starts and have it have little to no impact. Then you also want to get that for the ROI that you are aware of, for something that you have already bargained for," Baer said. "Is it known? Is it an unknown? These are business decisions, and security executives are increasingly realizing that they need to present this in a sense that makes investments relevant."

Here's the complete video interview, part of SiliconANGLE's and theCUBE's coverage of the Supercloud 5: The Battle for AI Supremacy event:

See the rest here:

Securing the cloud and AI: Insights from Lacework's CISO - SiliconANGLE News