Archive for the ‘Ai’ Category

Father of AI says tech fears misplaced: You cannot stop it – Fox News

A German computer scientist known as the "father of AI" said fears over the technology are misplaced and there is no stopping artificial intelligence's progress.

"You cannot stop it," Jrgen Schmidhuber said of artificial intelligence and the current international race to build more powerful systems, according to The Guardian. "Surely not on an international level because one country might may have really different goals from another country. So, of course, they are not going to participate in some sort of moratorium."

Schmidhuber worked on artificial neural networks in the 1990s, with his research later spawning language-processing models for technologies such as Google Translate, The Guardian reported.

He currently serves as the director of the King Abdullah University of Science and Technology's AI initiative in Saudi Arabia, and he states in his bio that he has been working on building "a self-improving Artificial Intelligence (AI) smarter than himself" since he was roughly 15 years old.


Jürgen Schmidhuber (Getty Images)

Schmidhuber said that he doesn't believe anyone should try to halt progress on developing powerful artificial intelligence systems, arguing that "in 95% of all cases, AI research is really about our old motto, which is make human lives longer and healthier and easier."

Schmidhuber also said that concerns over AI are misplaced and that developing AI-powered tools for good purposes will counter bad actors using the technology.


"Its just that the same tools that are now being used to improve lives can be used by bad actors, but they can also be used against the bad actors," he said, according to The Guardian.

Schmidhuber said concerns over AI are misplaced and that developing AI-powered tools for good purposes will counter bad actors using the technology. (Bloomberg via Getty Images)

"And I would be much more worried about the old dangers of nuclear bombs than about the new little dangers of AI that we see now."


His comments come as other tech leaders and experts have sounded the alarm that the powerful technology poses risks to humanity. Tesla founder Elon Musk and Apple co-founder Steve Wozniak joined thousands of other tech experts in signing a letter in March calling for AI labs to pause their research until safety measures are put in place.

Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto on Dec. 4, 2017. (Reuters/Mark Blinch/File)


Geoffrey Hinton, known as the "godfather of AI," announced this month that he quit his job at Google to speak out on his tech fears. On Friday, Hinton said AI could pose "more urgent" risks to humanity than climate change but even though he shares similar concerns to tech leaders such as Musk, he said pausing AI research at labs is "utterly unrealistic."

"I'm in the camp that thinks this is an existential risk, and its close enough that we ought to be working very hard right now and putting a lot of resources into figuring out what we can do about it," he told Reuters.

Schmidhuber, who has openly criticized Hinton for allegedly failing to cite fellow researchers in his studies, told The Guardian that AI will exceed human intelligence and ultimately benefit people as they use the AI systems, which follows comments he's made in the past.


"Ive been working on [AI] for several decades, since the '80s basically, and I still believe it will be possible to witness that AIs are going to be much smarter than myself, such that I can retire," Schmidhuber said in 2018.


FACT SHEET: Biden-Harris Administration Announces New Actions … – The White House

Today, the Biden-Harris Administration is announcing new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people's rights and safety. These steps build on the Administration's strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government's ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities.

AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.

Vice President Harris and senior Administration officials will meet today with CEOs of four American companies at the forefront of AI innovation (Alphabet, Anthropic, Microsoft, and OpenAI) to underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.

This effort builds on the considerable steps the Administration has taken to date to promote responsible innovation. These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.

The Administration has also taken important actions to protect Americans in the AI age. In February, President Biden signed an Executive Order that directs federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice's Civil Rights Division issued a joint statement underscoring their collective commitment to leverage their existing legal authorities to protect the American people from AI-related harms.

The Administration is also actively working to address the national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity, and safety. This includes enlisting the support of government cybersecurity experts from across the national security community to ensure leading AI companies have access to best practices, including protection of AI models and networks.

Today's announcements include:

###


The future of AI: How tech could transform our lives in the Dayton … – Dayton Daily News

The model was then asked to expand on how this would affect Dayton in particular, followed by how it would affect those with bachelor's degrees.

Since its release in November, ChatGPT has garnered millions of users, and has already disrupted many areas of life and work. The generative AI chatbot functions conversationally, able to respond to questions and synthesize those answers.

At the same time, the explosion of ChatGPT usage has raised significant questions about the future of work and the ethics of artificial intelligence and machine learning as a whole.

Machine learning models, or artificial intelligence, are files that have been trained to recognize types of patterns and to predict outcomes from those patterns, often patterns that humans can't see.
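
As a rough illustration of that definition (not taken from the article; the library, toy data, and file name are hypothetical), a trained model really can be saved to a file and later reloaded to predict outcomes for inputs it has never seen:

```python
# Minimal sketch: train a model on example patterns, save it as a file,
# then reload the file and ask it to predict an outcome for a new case.
import pickle
from sklearn.linear_model import LogisticRegression

# Toy pattern: [hours studied, hours slept] -> passed the exam (1) or not (0)
X = [[1, 4], [2, 5], [8, 7], [9, 8], [3, 6], [10, 7]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                      # learn the pattern from the examples

with open("exam_model.pkl", "wb") as f:
    pickle.dump(model, f)            # the trained model is now literally a file

with open("exam_model.pkl", "rb") as f:
    loaded = pickle.load(f)

print(loaded.predict([[7, 6]]))      # predicts an outcome for an unseen input
```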

"Humans working to create machines to think like we do is nothing new," said Pablo Iannello, professor of law and technology at the University of Dayton. "But for the first time in history, machines are able to communicate with each other and learn from each other without any kind of human input."

"Artificial intelligence becomes really important when you combine different things: one is machine learning, another is the internet of things, and the third one is blockchain," Iannello said.

"If you combine those three things at the very high speed of programming and learning, then you have the situation in which we are today: You have computers that can learn by themselves."

The internet of things is the idea that any object can collect and transmit data to the internet, like smart refrigerators or car sensors. Blockchain is technology that decentralizes the record of digital transactions across computational nodes, famously associated with cryptocurrency.
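
For readers unfamiliar with the term, the decentralized-record idea can be illustrated with a toy hash chain. This is a hypothetical sketch, not anything from the article: each block stores the hash of the block before it, so tampering with an old entry breaks every later link; real blockchains add consensus across many independent nodes.

```python
# Toy hash chain: each block records the hash of the previous block,
# so altering any past record invalidates every block that follows.
import hashlib
import json
import time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"from": "A", "to": "B", "amount": 5}, chain[-1]["hash"]))
chain.append(make_block({"from": "B", "to": "C", "amount": 2}, chain[-1]["hash"]))

# Verify integrity: every block must reference the hash of the block before it
for prev, curr in zip(chain, chain[1:]):
    assert curr["prev_hash"] == prev["hash"]
print("chain of", len(chain), "blocks verified")
```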

Large language models like ChatGPT, as well as image generators like Midjourney and Dall-E, draw their data from the billions of words and images that exist on the internet.

ChatGPT has already been used to write everything from children's books to code. It can also be manipulated into producing incorrect answers for basic math problems, and will fabricate facts and evidence with confidence, said Wright State computer science professor Krishnaprasad Thirunarayan.

"That leaves me with mixed feelings," he said. "These tools promise a fertile area of research on trustworthy information processing but, on the other hand, they are not yet ready for prime-time deployment as a personal assistant."

Like any tool, artificial intelligence can be used for good, or it can be used for malicious purposes. Facial recognition software that can help apprehend criminals can also be misused by governments to track and harass citizens, either deliberately or through mistaken identities, Thirunarayan said.

"Premature overreliance on these not-yet-foolproof technologies without sufficient safeguards can have dire consequences," Thirunarayan said.

Artificial intelligence tools are poised to disrupt the practice of law in multiple ways. Paralegals and other legal professionals are among those at risk of having their jobs automated by large language models.

But the legal world also faces a major challenge: Developing laws and regulations that protect the humans that interact with AI tools.

"Laws tend to lag behind the technological world, and the societal values that come along with those developments," Iannello said.

"Artificial intelligence is changing the way we see life. Law is going to change because the world is changing," Iannello said.

Current law for gathering data is based around the concept of consent, Iannello said. Anytime you go to a website or create an account on Facebook or Google, you accept the terms and conditions, which include data collection.

"You have your cookie policy, and you will track things from my browser so that you can send me ads," he said. "With AI, this is going to change, because they may predict how your tastes are going to change in the next five years. You will have to click 'Accept' about tastes that you have not even developed. So can you legally do that?"

According to the most recent AI Impacts Survey, nearly half of 731 leading AI researchers think there is at least a 10% chance that an AI capable of learning at the same level as a human being would lead to an extremely negative outcome.

"The worst thing is that it looks nice," Iannello said. "We don't have to worry about politicians. We don't have to worry about corrupt people. We don't have to worry about corruption because machines will solve the problems."

"But if that happens, who's going to control the machines?"

In March, OpenAI released a report that found about 80% of the U.S. workforce could have at least 10% of their tasks affected by AI, while nearly 20% of workers may see at least 50% of their tasks impacted.

A March report by investment banking giant Goldman Sachs found that generative AI as a whole could expose the equivalent of 300 million full-time jobs to automation worldwide.

"If it is trained on an extensive code base, (AI) can lead to mundane programming tasks being templatized and eliminated. This can mean more time to do non-trivial and potentially more interesting tasks, but can also simultaneously mean loss of routine jobs," Thirunarayan said.

The influence spans all wage levels, with higher-income jobs potentially facing greater exposure, according to OpenAI researchers. Among the most affected are office and administrative support systems, finance and accounting, healthcare, customer service, and creative industries like public relations and art.

"A lot of people were aware that AI is trending towards maybe supplementing or impacting many jobs, perhaps in areas like truck driving, for example, and I think a lot of folks thought white collar workers were more immune," said David Wright, Director of Academic Technology & Curriculum Innovation at the University of Dayton.

"But almost everyone who's had any sense of what AI is today and what it can look like tomorrow, we knew that this is going to affect everyone."

The Goldman Sachs report posited that while many jobs would be exposed to automation, others would be created to offset them in areas of supporting machine learning and information technology.

However, other studies show that the wage declines that affected blue collar workers in the last 40 years are now headed for white collar workers as well. In 2021, the National Bureau of Economic Research claimed automation technology has been the primary driver of U.S. income inequality, and that 50% to 70% of wage declines since 1980 come from blue-collar workers replaced by automation.

"All these issues can have far-reaching consequences: They can increase the social divide between the haves and the have-nots, and between the technologically savvy and those without comparable skills. On the other hand, these changes can relieve us of mundane chores and make time for the pursuit of higher goals," Thirunarayan said.

In March, ChatGPT passed the bar exam with flying colors, approaching the 90th percentile of aspiring lawyers who take the test, researchers say. However, as yet, ChatGPT's most recent iteration, GPT-4, has not been able to pass the exam to become a Certified Public Accountant.

That's because, in part, ChatGPT struggles with computations and critical thinking, said David Rich, a senior manager and CPA with Clark Schaefer Hackett.

Rich said he uses GPT-4 two to three times a week on everything from accounting research to writing memos, though he said the output text takes a decent bit of editing.

"I'm a pretty picky writer, but it's always nice to have a good starting place, even if it's just ideas. It's probably saved me about 80% of the time I would have spent getting that initial first draft," Rich said.
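
The workflow Rich describes (ask a chat model for a rough first draft, then edit it by hand) can be sketched roughly as follows. This is not from the article: the endpoint and response fields follow OpenAI's public chat-completions HTTP API, but the model name, prompt, and memo topic are illustrative assumptions, and an OPENAI_API_KEY environment variable is assumed to be set.

```python
# Hedged sketch: request a first-pass memo draft from a chat model,
# then hand the text to a human editor. Prompt and topic are made up.
import os
import requests

resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "You draft concise accounting memos."},
            {"role": "user", "content": "Draft a memo summarizing the revenue "
             "recognition treatment for a 12-month software subscription."},
        ],
    },
    timeout=60,
)
draft = resp.json()["choices"][0]["message"]["content"]
print(draft)  # a starting point only; the output still needs human editing
```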

ChatGPT isn't the only artificial intelligence disrupting the accounting world. The American Institute of CPAs is one of several organizations developing what's called the Dynamic Audit Solution, to improve how auditors perform their audits.

The reasons businesses value CPAs include personal relationships, critical thinking, and the accountant's ability to be intimately familiar with the ins and outs of their business, something a machine can't replicate, Rich said.

"If it's a large manufacturing company, I'm familiar with how the CEO interacts with the CFO, how they interact with the board. That's just something that AI is never going to be able to do. I won't say never, but it would have a hard time really capturing the value proposition that we're bringing," Rich said.

ChatGPT has thrown a wrench into higher education. If used correctly, the software can easily write essays virtually indistinguishable from those of a human college student. Students at the University of Dayton are among many now doing their homework with ChatGPT, forcing the university to reckon with how it teaches classes across all disciplines.

"AI is something that looms very large for us, both in terms of how it impacts learning, and how it affects students and how they're learning today," Wright said.

The phenomenon has been met with mixed reception by educators nationwide. While some have called for better anti-cheating software, others have said this is indicative of a broader shift in work.

"Another challenge is how to incorporate AI so that when the students graduate, they have the skills needed to succeed in the workplace, wherever and whatever they do," Wright said.

While AI may be sufficient for college essays, it falls short at producing practical, professional written work, said Gery Deer, who owns and operates GLD Communications in Jamestown and the newspaper the Jamestown Comet.

"I think where I can really smell it is that it's a little too formulaic," he said.

Despite this, ChatGPT is poised to take a sizeable chunk of public relations work. Deer says he has already lost work to ChatGPT, but that's not the biggest worry.

"There's enough work to go around, so I'm less worried about that. The downside is there's nobody proofing it. There's no regard for the audience in this material," he said.

Quality work costs money, but creative work is seen as one of the easiest places to cut costs, Deer said.

"I'm not so much worried about losing my job," Deer said. "I am more concerned with the level of junk that I'm going to have to now compete with."

A group of artists filed a class-action lawsuit against image generators Stable Diffusion and Midjourney in January. AI image generators train on millions of images created by thousands of artists who post their work on the internet. As the model learns from the art contributed to the dataset, users are able to generate images in those artists' styles in seconds, but as it stands, the artist whose style is referenced will never see a cent.

"Style is all an artist has," Deer said. "As a writer, all I can do is rearrange the words, but it's my style that creates that."

Top 10 occupations most exposed to large language models (such as ChatGPT), according to human assessors:

Mathematicians

Tax Preparers

Financial Quantitative Analysts

Writers and Authors

Web and Digital Interface Designers

Survey Researchers

Interpreters and Translators

Public Relations Specialists

Animal Scientists

Poets, Lyricists and Creative Writers

Top 10 occupations most exposed to large language models, according to ChatGPT:

Mathematicians

Accountants and Auditors

News Analysts, Reporters, and Journalists

Legal Secretaries and Administrative Assistants

Clinical Data Managers

Climate Change Policy Analysts

Blockchain Engineers

Court Reporters and Simultaneous Captioners

Proofreaders and Copy Markers

Correspondence Clerks

Source: OpenAI


This Week In XR: After AI Sucks The Air Out Of The Metaverse, It Will Remake XR – Forbes

This was the slowest, least dramatic news week in XR since I started this column in October of 2017. AI is sucking all the oxygen out of the room. I posted five Forbes stories this week, including this one, about AI. Not because I'm not interested in XR. It's just that right now, AI feels more urgent.

On the This Week In XR podcast Friday morning, co-host and Magic Leap founder Rony Abovitz said AI is what XR has been waiting for. Co-host Ted Schilowitz, Futurist at Paramount Global, says the Apple Mixed Reality headset will change everyone's thinking.

It's possible that after AI sucks the air out of the metaverse, it will remake it. We will literally talk worlds into existence.

This new Snapchat Lens is a virtual try-on of an artist's concept of Apple's new Reality One XR headset.

When a restaurant or other establishment sends you a "we haven't seen you in a while!" email message, you know they must be hunting for their customers. In this case the product is a free social VR platform that offers a multitude of experiences that vary in quality. Social VR is tricky, and many platforms have failed. This particular VR and PC platform has no creator economy incentivizing builders and no obvious scalable enterprise application. They're reportedly working on a mobile app, which has helped others. Their only revenue comes from a community of power users who pay a membership fee for enhanced features. This company raised a lot of money when they were hot, but I wonder how things are really going.

Shots from AWE Expo 2021. (AWE)

Fighting Climate Change With XR Tech And $100,000. AWE announced a contest which will award $100K to the best XR concept that fights climate change. The winner will be announced at the AWE Conference and Expo in Santa Clara, CA, May 30-June 2. Over 150 teams have submitted projects. AWE is the XR event of the year, with over 5,000 people attending. The conference will certainly be focused on AI's impact, and I hope to see demos and hear ideas about new capabilities AI is bringing to XR applications. How will this influence the developing metaverse? The big boys like Apple, Meta, Google, and Microsoft have their own conferences and don't exhibit, but you'll find a few of their execs on panels. As a result, sponsors Qualcomm, Unity, and Niantic have more visibility. Apple's presumed unveiling of their XR device will be at their WWDC conference a week later, June 5th. That's going to create an interesting dynamic at AWE.

Sandbox Location-based VR Launches Shard: Dragonfire. In a free roam VR experience, users are physically present together in a large black box wearing VR headsets and body trackers. This is the only true full body VR experience. You walk around freely. It's warehouse scale. This can't be done at home. There is nothing like it. Fellow players are perfectly mapped avatars. In this multiplayer game, players use weapons and magic to succeed. The game is different every time to enhance repeat play. Sandbox also features Star Trek and several other experiences at their 35 locations.

This Week in XR is also a podcast hosted by the author of this column, Ted Schilowitz, Futurist, Paramount Global, and Rony Abovitz, founder of Magic Leap. This week our hosts are their own guests, focused on AI news, and how it will have a positive impact on XR. We can be found on Spotify, iTunes, and YouTube.

AI Weekly

AI Weekly: AI Leaders At White House, OpenAI Adds $300 Million, Empathetic Pi Chatbot Launches

Metaphysic Deep Fakes TED: My conversation with Tom Graham, whose company Metaphysic created the fake video "Deep Tom Cruise."

Is AI The History Eraser Button? My interview with Tom got me thinking about where we're going with all this, which makes you question what it even means to be human.

AI-Powered Characters Changing The Game: This is not unrelated to the AI stories above. We may create AI characters to change our memories.

Charlie Fink is the author of the AR-enabled books "Metaverse" (2017) and "Convergence" (2019). In the early '90s, Fink was EVP & COO of VR pioneer Virtual World Entertainment. He teaches at Chapman University in Orange, CA.


Astrophysicist Neil deGrasse Tyson offers optimistic view of AI, ‘long awaited force’ of ‘reform’ – Fox News

Astrophysicist Neil deGrasse Tyson sees artificial intelligence as a much-needed stress-test for modern society, with a view that it will lead humanity to refine some of its more outdated ideas and systems now that the "genie is out of the bottle."

"Of course AI will replace jobs," Tyson said in comments to Fox News Digital. "Entire sectors of our economy have gone obsolete in the presence of technology ever since the dawn of the industrial era.

"The historical flaw in the reasoning is to presume that when jobs disappear, there will be no other jobs for people to do," he argued. "More people are employed in the world than ever before, yet none of them are making buggy whips. Just because you cant see a new job sector on the horizon, does not mean its not there."

AI has proven a catalyst for societal fears and hopes since OpenAI released ChatGPT-4 to the public for testing and interaction. AI relies on data to improve, and as a large language model system, that data comes from conversations, prompts and interactions with actual human beings.


Neil deGrasse Tyson attends the 23rd Annual Webby Awards on May 13, 2019, in New York City. (Michael Loccisano/Getty Images for Webby Awards)

Some tech leaders raised concerns about what would come next from such a powerful AI model, calling for a six-month pause on development. Others discussed the AI as potentially the most transformative technology since the industrial revolution and the printing press.

Tyson has more consistently discussed the positive potential of AI as a "long-needed, long-awaited force" of "reform."

"When computing power rapidly exceeded the humanmental ability to calculate, scientists and engineers did not go running for the hills: We embraced it," he said. "We absorbed it. The ongoing advances allowed us to think about and solve ever deeper, ever more complex problems on Earth and in the universe."


Gayle King and Neil deGrasse Tyson at The 92nd Street Y on Oct. 19, 2022, in New York City. (Gary Gershoff/Getty Images)

"Now that computers have mastered language and culture, feeding off everything weve put on the internet, my first thought is cool, let it do thankless language stuff that nobody really wants to do anyway, and for which people hardly ever get visible credit, like write manuals or brochures or figure captions or wiki pages," Tyson added.

He argued that teachers worrying about students using ChatGPT or other AI to cheat on essays and term papers could instead see this as an opportunity to reshape education.

"If students cheat on a term paper by getting ChatGPT to write it for them, should we blame the student? Or is it the fault of an education system that weve honed over the past century to value grades more than students value learning?" Tyson asked.


The ChatGPT artificial intelligence software, which generates human-like conversation. (Getty Images)

"ChatGPT may be the long-needed, long-awaited force to reform how and why we value what we learn in school.

"The urge to declare this time is different is strong, as AI also begins to replace our creativity," he explained. "If thats inevitable, then bring it on.

"If AI can compose a better opera than a human can, then let it do so," he continued. "That opera will be performed by people, viewed by a human audience that holds jobs we do not yet foresee. And even if robots did perform the opera, that itself could be an interesting sight."


While some worry about the lack of oversight and legislation currently in place to handle AI and its development, Tyson noted that the number of countries with AI ministers or czars "is growing."

"At times like this, one can futilely try to ban the progress of AI. Or instead, push for the rapid development of tools to tame it."
