Archive for the ‘Artificial Intelligence’ Category

Hugh Linehan: Even with Spielberg-style cuddliness, there’s a cold, dark void at the heart of artificial intelligence – The Irish Times

I didn't much care for AI Artificial Intelligence when it came out, in 2001. The film's origin story (a decades-long, endlessly reworked Stanley Kubrick project picked up by Steven Spielberg and put into production within months of Kubrick's death) was, it seemed at the time, probably responsible for its many flaws. I agreed with the San Francisco Chronicle when it wrote that "we end up with the structureless, meandering, slow-motion endlessness of Kubrick combined with the fuzzy, cuddly mindlessness of Spielberg. It's a coupling from hell."

But a couple of decades and several technological leaps forward later, the film looks like a more convincing version of where we are heading than it did at the start of this century. Most of that is a question of pure form; the phenomenon known in English as the uncanny valley was described in 1970 by the robotics professor Masahiro Mori to capture the sense of unease generated by machines that look, sound or behave almost but not quite like humans.

In AI the same queasiness is generated not by a robot (although that is one of the film's supposed themes) but by the very contrary world views and obsessions of its two creators. The movie is itself a sort of uncanny valley. As Tim Greiving pointed out in a 20th-anniversary appreciation for the Ringer, when you cut AI open, you find cold Kubrick machinery underneath warm Spielberg skin.

Kubrick spent almost 30 years trying to develop Brian Aldiss's short story Supertoys Last All Summer Long. By the early 1980s it had been reconfigured as a Pinocchio allegory, with David, an artificial boy, rejected by his human mother and going on a quest with his Jiminy Cricket-like friend Teddy in search of a Blue Fairy who will explain the mystery of his existence. Having hired and fired several screenwriters, as was his wont, Kubrick showed it to his friend Spielberg, who described it as "the best story you've ever had to tell."

Kubrick was an obsessive genius with a bleak view of the human condition, expressed through a canon of unique films that he managed to finance by pretending they were in mainstream genres such as historical drama or horror. Spielberg is a populist master of commercial cinema with a humanist sensibility that seeks a transcendent redemption in every narrative arc. Kubrick, who never had a blockbuster hit on the scale of Jaws or ET: The Extra-Terrestrial, thought AI could be his shot at topping the box office. But as the years wore on, the gaps between his films grew longer and longer. In the final 20 years of his life he made only three, and he died of a heart attack while completing post-production on the last of those, Eyes Wide Shut. With Minority Report delayed by Tom Cruise's unavailability, Spielberg jumped in.

Kubrick's long-time confidant and collaborator Jan Harlan insists the director "truly believed Steven would be the better director for this film and I think he was right."

He wasn't. The film has the ho-hum competence we associate with middling Spielberg. An 11-year-old Haley Joel Osment, fresh from his Oscar nomination for The Sixth Sense, is at the core of everything as the lost robot boy. The set pieces in a 22nd-century dystopia scarred by climate change are unmemorable. There is no sense of the internet, much less of the intelligence explosion that IJ Good posited in 1965, four years before Aldiss wrote Supertoys Last All Summer Long and 35 years before the film AI was made. Good predicted a tipping point at which technology achieves sentience and autonomy from humans. In that sense, The Terminator is a more accurate vision of the future.

But, with all its flaws (or maybe because of them), AI still feels a more plausible future than Arnold Schwarzenegger chasing us with a big gun. A decaying capitalist society. A climate disaster. The end of humanity. It just doesn't sound like a Spielberg movie. Spielberg was faithful to Kubrick's preparatory notes and adjusted his shooting style to match the older man's visual sensibility. But that warm fuzziness is still there, encasing Kubrick's far chillier vision. And despite what the San Francisco Chronicle said, there's none of the deadpan monotony of classic Kubrickian sequences in 2001: A Space Odyssey, Barry Lyndon or The Shining.

Viewed in 2024, though, AI Artificial Intelligence bears many of the qualities that are becoming familiar from the chatbots and generative products that are beginning to infiltrate our day-to-day lives courtesy of Google, Microsoft and soon, apparently, Apple. The humanlike touches. The ingratiating tone. And, beneath it all, the cold, dark void.

View post:
Hugh Linehan: Even with Spielberg-style cuddliness, there's a cold, dark void at the heart of artificial intelligence - The Irish Times

Bill Gates on his nuclear energy investment, AI’s challenges – NPR

Bill Gates poses for a portrait at NPR headquarters in Washington, D.C., June 13, 2024. Ben de la Cruz/NPR

Artificial intelligence may come for our jobs one day, but before that happens, the data centers it relies on are going to need a lot of electricity.

So how do we power them and millions of U.S. homes and businesses without generating more climate-warming gases?

Microsoft founder, billionaire philanthropist and investor Bill Gates is betting that nuclear power is key to meeting that need, and he's digging into his own pockets to try and make it happen.

Gates has invested $1 billion into a nuclear power plant that broke ground in Kemmerer, Wyo., this week. The new facility, designed by the Gates-founded TerraPower, will be smaller than traditional fission nuclear power plants and, in theory, safer because it will use sodium instead of water to cool the reactor's core.

TerraPower estimates the plant could be built for up to $4 billion, which would be a bargain when compared to other nuclear projects recently completed in the U.S. Two nuclear reactors built from scratch in Georgia cost nearly $35 billion, the Associated Press reports.

Construction on the TerraPower plant is expected to be completed by 2030.

Gates sat for an interview at NPR headquarters with Morning Edition host Steve Inskeep to discuss his multibillion-dollar nuclear power investment and how he views the benefits and challenges of artificial intelligence, which the plant he's backing may someday power.

This interview has been edited for length and clarity.

Steve Inskeep: Let me ask about a couple of groups that you need to persuade, and one of them is long-time skeptics of the safety of nuclear power, including environmental groups, people who will put pressure on some of the political leaders that you've been meeting here in Washington. Are you convinced you can make a case that will persuade them?

Bill Gates: Well, absolutely. The safety case for this design is incredibly strong just because of the passive mechanisms involved. People have been talking about it for 60 years, that this is the way these things should work.

Meaning if it breaks down, it just cools off.

Exactly.

Something doesn't have to actively happen to cool it.

There's no high pressure on the reactor. Nothing that's pushing to get out. Water, as it's heated up, creates high pressure. And we have no high pressure and no complex systems needed to guarantee the safety. The Nuclear Regulatory Commission is the best in the world, and they'll question us and challenge us. And, you know, that's fantastic. That's a lot of what the next six years are all about.

Taillights trace the path of a motor vehicle at the Naughton Power Plant, Jan. 13, 2022, in Kemmerer, Wyo. Bill Gates and his energy company are starting construction at their Wyoming site, adjacent to the coal plant, for a next-generation nuclear power plant he believes will revolutionize how power is generated. Natalie Behring/AP

Let me ask about somebody else you need to persuade, and that is markets showing them that this makes financial sense. Sam Altman, CEO of OpenAI, is promoting and investing in nuclear power and is connected with a company that put its stock on the market and it immediately fell. Other projects that started to seem too expensive have been canceled in recent years. Can you persuade the markets?

Well, the current reactors are too expensive. There are companies working on fission and companies working on fusion. Fusion is further out. I hope that succeeds. I hope that in the long run it is a huge competitor to this TerraPower nuclear fission. Unlike previous reactors, we're not asking the ratepayers in a particular geography to guarantee the costs. So this reactor, all of the costs of building this are with the private company, TerraPower, in which I'm the biggest investor. And for strategic reasons, the U.S. government is helping with the first-of-a-kind costs.

The U.S. Department of Energy is funding half the costs of TerraPower's project, which includes the cost of designing and licensing the reactor, the AP reports.

I wonder if you can approach an ordinary investor and say, "This is a good risk. It's going to pay off in a reasonable time frame"?

You know, we're not choosing to take this company public, because understanding all of these issues is very complex. Many of our investors will be strategic investors who want to supply components, or they come from countries like Japan and Korea, where renewables are not as easy because of the geography. And so they want to go completely green. They, even more than the U.S., will need nuclear to do that.

What is the connection between AI and nuclear power?

Well, I suppose people want innovation to give us even cheaper electricity while making it clean. People who are optimistic about innovation in software and AI bring that optimism to the other things they do. There is a more direct connection, though, which is that the additional data centers that we'll be building look like they'll be as much as a 10% additional load for electricity. The U.S. hasn't needed much new electricity, but with the rise of a variety of things, from electric cars and buses to electric heat pumps for heating homes, demand for electricity is going to go up a lot. And now these data centers are adding to that. So the big tech companies are out looking at how they can help facilitate more power, so that these data centers can serve the exploding AI demand.

I'm interested in whether you see artificial intelligence as something that potentially could exacerbate income inequality, something that you as a philanthropist would think about.

Well, I think the two domains that I'm most involved in seeing how AI can help are health and education. I was in Newark, New Jersey, recently seeing the Khan Academy AI called Khanmigo being used in math classes, and I was very impressed by how the teachers were using it to look at the data and divide students up to have personalized tutoring at the level of a kid who's behind or a kid who's ahead.

Whenever I get, like, a medical bill or a medical diagnosis, I put it in the AI and get it to explain it to me. You know, it's incredible at that. And if we look at countries in Africa, where the shortage of doctors is even more dramatic than in the United States, the idea that we can get more medical advice to pregnant women or anybody suffering from malaria, I'm very excited. And so driving it forward appropriately in those two domains I see as completely beneficial.

Did you understand what I was asking, about the concentration of power?

Absolutely. This is a very, very competitive field. I mean, Google is doing great work. Meta. Amazon. And it's not like there's a limited amount of money for new startups in this area. I mean, Elon Musk just raised $6 billion. It's kind of like the internet was in the year 2000. The barriers to entry are very, very low, which means we're moving quickly.

And the other thing about a concentration of power ... Do you worry about, you know, more money for investors and fewer jobs for ordinary people? Like, they can get this wonderful AI technology, but they don't have a job?

I do worry about that. Basically, if you increase productivity, that should give you more options. We don't let robots play baseball. We're just never going to be interested in that. If robots get really good, and AIs get really good, are we in some ways going to want, in terms of job creation, to put limits on that, or tax those things? I've raised that in the past. They're not good enough yet to be raising those issues. But, you know, say in three to five years, they could be good enough.

But for now, your hope is the AI doesn't replace my job. It makes me more productive in the job that I already have.

Well, there are a few jobs where it will replace you, just like computers did. In most things today, AI is a co-pilot; it raises your productivity. But if you're a support person, taking support calls, and you're twice as productive, some companies will take that productivity and answer more calls with higher-quality answers. Some companies will need fewer people, freeing up labor to do other things. Do they go and help reduce class size or help the handicapped or help with the elderly? If we're able to produce more, then the pie is bigger. But are we smart in terms of tax policies or how we distribute that, so we actually take freed-up labor and put it into things we'd like to have?

The Bill & Melinda Gates Foundation is an NPR funder.

The audio version of this story was produced by Kaity Kline and edited by Reena Advani. The digital version was edited by Amina Khan.

Continue reading here:
Bill Gates on his nuclear energy investment, AI's challenges - NPR

World leaders discussing global issues at three-day G7 summit, including Pope Francis on artificial intelligence – The Dialog

VATICAN CITY -- Political leaders have a responsibility to create the conditions necessary for artificial intelligence to be at the service of humanity and to help mitigate its risks, Pope Francis told world leaders.

"We cannot allow a tool as powerful and indispensable as artificial intelligence to reinforce such a (technocratic) paradigm, but rather, we must make artificial intelligence a bulwark against the threat," he said in his address June 14 at the Group of Seven summit being held in southern Italy.

"This is precisely where political action is urgently needed," he said.

"Many people believe politics is a distasteful word, often due to the mistakes, corruption and inefficiency of some politicians, not all of them, some. There are also attempts to discredit politics, to replace it with economics or to twist it to one ideology or another," he said.

But the world cannot function without healthy politics, the pope said, and effective progress toward universal fraternity and social peace requires a sound political life.

The pope addressed leaders at the G7's special outreach session dedicated to artificial intelligence. In addition to the G7 members (the United States, Japan, Canada, Germany, France, Italy and Great Britain), the forum included specially invited heads of state, including the leaders of Argentina, India and Brazil.

The G7 summit was being held in Borgo Egnazia in Puglia June 13-15 to discuss a series of global issues, such as migration, climate change and development in Africa, and the situation in the Middle East and Ukraine. The pope was scheduled to meet privately with 10 heads of state and global leaders in bilateral meetings before and after his talk, including U.S. President Joe Biden and Ukrainian President Volodymyr Zelenskyy.

Because of time limits set for speakers during the outreach session, the pope read only a portion of his five-page speech, although the full text was made part of the official record. The Vatican provided a copy of the full text.

In his speech, the pope called artificial intelligence "an exciting and fearsome tool." It could be used to expand access to knowledge for everyone, to advance scientific research rapidly and to give demanding and arduous work to machines.

Yet at the same time, it could bring with it "a greater injustice between advanced and developing nations or between dominant and oppressed social classes," raising the dangerous possibility that a "throwaway culture" be preferred to a "culture of encounter," he said.

Like every tool and technology, he said, the benefits or harm it will bring will depend on its use.

While he called for the global community to find shared principles for a more ethical use of AI, Pope Francis also called for an outright ban of certain applications.

For example, he repeated his insistence that so-called lethal autonomous weapons be banned, saying "no machine should ever choose to take the life of a human being."

"Decision-making must always be left to the human person," he said. "Human dignity itself depends on there being proper human control over the choices made by artificial intelligence programs."

"Humanity would be condemned to a future without hope if we took away people's ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines," he said. In his text, he specifically criticized judges using AI with prisoners' personal data, such as their ethnicity, background, education, psychological assessments and credit rating, to determine whether the prisoner is likely to re-offend upon release and therefore require home confinement.

The pope also cautioned people, students especially, about generative artificial intelligence, "magnificent tools" easily available in online applications for composing a text or producing an image on any theme or subject.

However, he said, these tools are not truly "generative," in that they do not develop new analyses or concepts; they are merely reinforcing, as they can only repeat what they find, giving it an appealing form without checking whether it contains errors or preconceptions.

Generative AI not only runs the risk of legitimizing fake news and strengthening a dominant culture's advantage, but, in short, it also undermines the educational process itself, his text said.

"It is precisely the ethos concerning the understanding of the value and dignity of the human person that is most at risk in the implementation and development of these systems," he told the leaders. "Indeed, we must remember that no innovation is neutral."

"Technology impacts social relations in some way and represents some kind of arrangement of power, thus enabling certain people to perform specific actions while preventing others from performing different ones," he said. "In a more or less explicit way, this constitutive power dimension of technology always includes the worldview of those who invented and developed it."

In order for artificial intelligence programs to be tools that build up the good and create a better tomorrow, he said, they must always be aimed at the good of every human being, and they must have an ethical inspiration, underlining his support of the Rome Call for AI Ethics launched in 2020.

"It is up to everyone to make good use of artificial intelligence," he said, "but the onus is on politics to create the conditions for such good use to be possible and fruitful."

Read the original:
World leaders discussing global issues at three-day G7 summit, including Pope Francis on artificial intelligence - The Dialog

Nanox Launches Artificial Intelligence Functionality in Second Opinions Platform – Imaging Technology News

June 5, 2024 -- Nano-X Imaging, an innovative medical imaging technology company, today announced that its deep-learning medical imaging analytics subsidiary, Nanox AI Ltd., has launched an artificial intelligence (AI) functionality in the Second Opinions online medical consultation service. Second Opinions is a platform provided by USARAD Holdings Inc., a subsidiary of Nano-X Imaging Ltd., that provides teleradiology services. The platform connects patients with radiologists and other subspecialty physicians for additional consultation on their medical diagnoses. Second Opinions has integrated three of Nanox.AI's FDA 510(k)-cleared AI solutions, enabling patients to conveniently get second opinions from experts in various medical and surgical subspecialties including radiology, neurology, oncology and orthopedic surgery. The integration of Nanox.AI's tools is intended to promote the early detection of chronic conditions on chest and abdominal CT scans.

These AI-driven insights are reviewed and approved by Second Opinions physicians and incorporated into reports for patients who submit eligible chest and abdominal CT scans.

"We are excited to bring AI-powered early detection through the Second Opinions platform to patients seeking peace of mind concerning their health and diagnoses," said Erez Meltzer, Nanox Chief Executive Officer. "The integration of Nanox.AI's solutions into the Second Opinions service will help empower radiologists and other healthcare providers by providing them with advanced AI tools that aim to improve patient outcomes. We will continue exploring opportunities to leverage our AI technology to promote accessible early diagnosis and preventative management."

Learn more about Second Opinions and its new AI capabilities at Artificial Intelligence (AI) Service - Second Opinions.

For more information: www.nanox.vision

Read more from the original source:
Nanox Launches Artificial Intelligence Functionality in Second Opinions Platform - Imaging Technology News

Is AI ready for takeoff? Analysis finds only 11% of firms have gone beyond – SCMR

Technology investment is having a nearly immediate impact on the bottom line, but when it comes to the most transformative technology today, artificial intelligence, the real investment isn't happening.

That is the conclusion of research firm Zero100, which found that while most businesses are interested in AI and are rapidly investing in it, most have not moved past the pilot stage at this point.

"AI is fundamentally changing the landscape of supply chain management, and it's happening at a faster rate than we've seen before," said Kevin O'Marah, chief research officer and co-founder of Zero100. "It's the biggest tech inflection point since the internet and, while AI experiments have been ongoing, the rise of generative AI is pushing digitization to the forefront. Boards recognize that the ability to digitize and embrace AI will be the difference between prosperity and decline over the next decade. They now need a clear path forward to capitalize on this opportunity."

Despite this, Zero100's analysis of public earnings calls has found few successes to tout. According to the firm's research, only 11% of companies have deployed AI beyond the pilot stage, and while 88% of CEOs spoke of their company's AI vision, only one in four was able to cite the results of an AI project.

When it comes to supply chain technology investment generally, the cloud-based integration platform Cleo found that an overwhelming majority of companies saw benefits from the deployment within 24 months.

The report, Cleo's 2024 Ecosystem Integration Global Market Report, found that 97% of companies surveyed had invested in supply chain technologies in 2023, and 35% stated that the investment led to increased benefits. A full 81% said that supply chain technology investments generally delivered business improvements within 24 months, and 80% indicated they saw increased revenue in the same year the investment was made.

"A company's supply chain is simply a series of commitments that tether across an ecosystem and must be delivered upon," Tushar Patel, CMO at Cleo, said in a release. "And for companies to uphold those critical business commitments, they need to consistently invest in their supply chain technology; otherwise they stand to take a hit to their relationships, impacting their bottom line."

But investment in AI seems to be taking a bit longer. Zero100 recommends companies employ a 90-day AI fast-track plan, which consists of three separate 90-day attack plans to accelerate digital adoption.

According to research from Gartner, top-performing supply chains are investing in artificial intelligence and machine learning at twice the rate of their lower-performing peers. Those same firms are also able to leverage their size and make productivity the focal point for sustaining business momentum over the next three years. Conversely, lower-performing companies are more likely to focus on efficiency or cost savings.

"Top-performing supply chain organizations make investment decisions with a different lens than their lower-performing peers," saidKen Chadwick, VP analyst inGartner's Supply Chain Practice. "Enhancing productivity is the key factor that will drive future success and the key to unlocking that productivity lies in leveraging intangible assets. We see this divide especially in the digital domain where the best organizations are far ahead in optimizing their supply chain data with AI/ML applications to unlock value."

Gartner surveyed 818 supply chain practitioners across geography and industry from August through October 2023. Organizations were scored across five key metrics measuring business and people outcomes to determine their performance level. High performers were defined as those organizations that exceeded expectations over the past 12 months across the five measurements.

When it comes to specific processes, 40% of high performers are using AI/ML in demand forecasting, versus just 19% of low performers.

See the article here:
Is AI ready for takeoff? Analysis finds only 11% of firms have gone beyond - SCMR