Archive for the ‘AI’ Category

Carl’s Jr. and Hardee’s to roll out AI drive-thru ordering – USA TODAY

Screenwriters take aim at artificial intelligence, ChatGPT

Not six months since the release of ChatGPT, generative artificial intelligence is already prompting widespread unease throughout Hollywood. Concern over chatbots writing or rewriting scripts is one of the reasons TV and film screenwriters took to picket lines earlier this week. (May 5)

AP

CKE Restaurants Holdings, the parent company of fast food chains Carl's Jr. and Hardee's, is rolling out artificial intelligence at its drive-thrus.

The company is partnering with AI companies Presto Automation, OpenCity, and Valyant AI to automate voice ordering at participating drive-thru locations across the country, according to news releases. Carl's Jr. and Hardee's operate roughly 2,800 restaurants across 44 states.

The partnerships are meant to boost accuracy, speed, and revenue and help fast-food chains manage staffing shortages.

CKE chief technology officer Phil Crawford noted that a pilot program with Presto yielded positive results, with deployed stores recording a "significant" uptick in revenue thanks to the technology's ability to upsell customers, according to a news release.

In a February earnings call, Presto CEO Rajat Suri said the company's AI "never forgets to upsell, and upsells better than a human." The company also lists Del Taco and Checkers as clients.

CKE is also using OpenCity's voice ordering platform, Tori, and Valyant AI's conversational AI platform, Holly, at select restaurants, according to news releases.

"The AI technology has transformed our drive-thru experience, providing us with a competitive edge in the market and helping us to better serve our guests," Crawford said in a Thursday news release from OpenCity.

See original here:

Carl's Jr. and Hardee's to roll out AI drive-thru ordering - USA TODAY

ChatGPT and the new AI are wreaking havoc on cybersecurity in … – ZDNet

Generative artificial intelligence is transforming cybersecurity, aiding both attackers and defenders. Cybercriminals are harnessing AI to launch sophisticated and novel attacks at large scale. And defenders are using the same technology to protect critical infrastructure, government organizations, and corporate networks, said Christopher Ahlberg, CEO of threat intelligence platform Recorded Future.

Generative AI has helped bad actors innovate and develop new attack strategies, enabling them to stay one step ahead of cybersecurity defenses. AI helps cybercriminals automate attacks, scan attack surfaces, and generate content that resonates with various geographic regions and demographics, allowing them to target a broader range of potential victims across different countries. Cybercriminals adopted the technology to create convincing phishing emails. AI-generated text helps attackers produce highly personalized emails and text messages more likely to deceive targets.

"I think you don't have to think very creatively to realize that, man, this can actually help [cybercriminals] be authors, which is a problem," Ahlberg said.

Defenders are using AI to fend off attacks. Organizations are using the tech to prevent leaks and find network vulnerabilities proactively. It also dynamically automates tasks such as setting up alerts for specific keywords and detecting sensitive information online. Threat hunters are using AI to identify unusual patterns and summarize large amounts of data, connecting the dots across multiple sources of information and hidden patterns.
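The routine defensive automation described above, such as alerting on specific keywords, is simple enough to sketch. The watchlist terms, function name, and sample post below are illustrative assumptions, not details of Recorded Future's platform:

```python
# Minimal sketch of an automated keyword alert, the kind of routine
# monitoring task the article says defenders now automate.
# Watchlist terms and the sample post are made up for illustration.
import re

WATCHLIST = ["acme-corp", "vpn credentials", "internal-db"]

def scan_for_alerts(text, watchlist=WATCHLIST):
    """Return the watchlist terms found in a piece of monitored text."""
    found = []
    for term in watchlist:
        # Case-insensitive literal match; re.escape keeps terms literal.
        if re.search(re.escape(term), text, flags=re.IGNORECASE):
            found.append(term)
    return found

post = "Selling VPN credentials for Acme-Corp employees, DM me."
print(scan_for_alerts(post))  # ['acme-corp', 'vpn credentials']
```

A production system would layer ranking, deduplication, and analyst review on top of matching like this; the point is only that the matching step itself is trivially automatable.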

The work still requires human experts, but Ahlberg says the generative AI technology we're seeing in projects like ChatGPT can help.

"We want to speed up the analysis cycle [to] help us analyze at the speed of thought," he said. "That's a very hard thing to do and I think we're seeing a breakthrough here, which is pretty exciting."

Ahlberg also discussed the potential threats that highly intelligent machines might bring. As the world becomes increasingly digital and interconnected, the ability to bend reality and shape perceptions could be exploited by malicious actors. These threats are not limited to nation-states, making the landscape even more complex and asymmetric.

AI has the potential to help protect against these emerging threats, but it also presents its own set of risks. For example, machines with high processing capabilities could hack systems faster and more effectively than humans. To counter these threats, we need to ensure that AI is used defensively and with a clear understanding of who is in control.

As AI becomes more integrated into society, it's important for lawmakers, judges, and other decision-makers to understand the technology and its implications. Building strong alliances between technical experts and policymakers will be crucial in navigating the future of AI in threat hunting and beyond.

AI's opportunities, challenges, and ethical considerations in cybersecurity are complex and evolving. Ensuring unbiased AI models and maintaining human involvement in decision-making will help manage ethical challenges. Vigilance, collaboration, and a clear understanding of the technology will be crucial in addressing the potential long-term threats of highly intelligent machines.

Ahlberg also raised concerns about China, Russia, and economic adversaries deploying autonomous machines. These countries likely won't slow down AI development or share ethical considerations. While having the ability to "pull the plug" on such machines is a smart safeguard, he suggests that the integration of technology into society and the global economy will likely make it hard to detach. Ahlberg emphasizes the need to design products and machines with clarity about who controls them.

"The big thing that the internet did in all of this is that the internet sort of became the place where all the world's information migrated," said Ahlberg. "These large language models are doing pretty magical things to speed up that thinking cycle."

He added, "In the next 25 years, the world becomes a reflection of the internet."

Go here to read the rest:

ChatGPT and the new AI are wreaking havoc on cybersecurity in ... - ZDNet

Father of AI says tech fears misplaced: You cannot stop it – Fox News

A German computer scientist known as the "father of AI" said fears over the technology are misplaced and there is no stopping artificial intelligence's progress.

"You cannot stop it," Jürgen Schmidhuber said of artificial intelligence and the current international race to build more powerful systems, according to The Guardian. "Surely not on an international level because one country may have really different goals from another country. So, of course, they are not going to participate in some sort of moratorium."

Schmidhuber worked on artificial neural networks in the 1990s, with his research later spawning language-processing models for technologies such as Google Translate, The Guardian reported.

He currently serves as the director of the King Abdullah University of Science and Technology's AI initiative in Saudi Arabia, and he states in his bio that he has been working on building "a self-improving Artificial Intelligence (AI) smarter than himself" since he was roughly 15 years old.

Jürgen Schmidhuber (Getty Images)

Schmidhuber said that he doesn't believe anyone should try to halt progress on developing powerful artificial intelligence systems, arguing that "in 95% of all cases, AI research is really about our old motto, which is make human lives longer and healthier and easier."

Schmidhuber also said that concerns over AI are misplaced and that developing AI-powered tools for good purposes will counter bad actors using the technology.

"It's just that the same tools that are now being used to improve lives can be used by bad actors, but they can also be used against the bad actors," he said, according to The Guardian.

"And I would be much more worried about the old dangers of nuclear bombs than about the new little dangers of AI that we see now."

His comments come as other tech leaders and experts have sounded the alarm that the powerful technology poses risks to humanity. Tesla founder Elon Musk and Apple co-founder Steve Wozniak joined thousands of other tech experts in signing a letter in March calling for AI labs to pause their research until safety measures are put in place.

Artificial intelligence pioneer Geoffrey Hinton speaks at the Thomson Reuters Financial and Risk Summit in Toronto on Dec. 4, 2017. (Reuters/Mark Blinch/File)

Geoffrey Hinton, known as the "godfather of AI," announced this month that he quit his job at Google to speak out on his tech fears. On Friday, Hinton said AI could pose "more urgent" risks to humanity than climate change but even though he shares similar concerns to tech leaders such as Musk, he said pausing AI research at labs is "utterly unrealistic."

"I'm in the camp that thinks this is an existential risk, and it's close enough that we ought to be working very hard right now and putting a lot of resources into figuring out what we can do about it," he told Reuters.

Schmidhuber, who has openly criticized Hinton for allegedly failing to cite fellow researchers in his studies, told The Guardian that AI will exceed human intelligence and ultimately benefit people as they use the AI systems, which follows comments he's made in the past.

"I've been working on [AI] for several decades, since the '80s basically, and I still believe it will be possible to witness that AIs are going to be much smarter than myself, such that I can retire," Schmidhuber said in 2018.

Continue reading here:

Father of AI says tech fears misplaced: You cannot stop it - Fox News

FACT SHEET: Biden-Harris Administration Announces New Actions … – The White House

Today, the Biden-Harris Administration is announcing new actions that will further promote responsible American innovation in artificial intelligence (AI) and protect people's rights and safety. These steps build on the Administration's strong record of leadership to ensure technology improves the lives of the American people, and break new ground in the federal government's ongoing effort to advance a cohesive and comprehensive approach to AI-related risks and opportunities.

AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks. President Biden has been clear that when it comes to AI, we must place people and communities at the center by supporting responsible innovation that serves the public good, while protecting our society, security, and economy. Importantly, this means that companies have a fundamental responsibility to make sure their products are safe before they are deployed or made public.

Vice President Harris and senior Administration officials will meet today with CEOs of four American companies at the forefront of AI innovation (Alphabet, Anthropic, Microsoft, and OpenAI) to underscore this responsibility and emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and our society. The meeting is part of a broader, ongoing effort to engage with advocates, companies, researchers, civil rights organizations, not-for-profit organizations, communities, international partners, and others on critical AI issues.

This effort builds on the considerable steps the Administration has taken to date to promote responsible innovation. These include the landmark Blueprint for an AI Bill of Rights and related executive actions announced last fall, as well as the AI Risk Management Framework and a roadmap for standing up a National AI Research Resource released earlier this year.

The Administration has also taken important actions to protect Americans in the AI age. In February, President Biden signed an Executive Order that directs federal agencies to root out bias in their design and use of new technologies, including AI, and to protect the public from algorithmic discrimination. Last week, the Federal Trade Commission, Consumer Financial Protection Bureau, Equal Employment Opportunity Commission, and Department of Justice's Civil Rights Division issued a joint statement underscoring their collective commitment to leverage their existing legal authorities to protect the American people from AI-related harms.

The Administration is also actively working to address the national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity, and safety. This includes enlisting the support of government cybersecurity experts from across the national security community to ensure leading AI companies have access to best practices, including protection of AI models and networks.

Today's announcements include:

###

See the rest here:

FACT SHEET: Biden-Harris Administration Announces New Actions ... - The White House

The future of AI: How tech could transform our lives in the Dayton … – Dayton Daily News

The model was then asked to expand on how this would affect Dayton in particular, followed by how it would affect those with bachelor's degrees.

Since its release in November, ChatGPT has garnered millions of users, and has already disrupted many areas of life and work. The generative AI chatbot functions conversationally, able to respond to questions and synthesize those answers.

At the same time, the explosion of ChatGPT usage has raised significant questions about the future of work and the ethics of artificial intelligence and machine learning as a whole.

Machine learning models, or artificial intelligence, are files that have been trained to recognize types of patterns and to predict outcomes from those patterns, often patterns that humans can't see.
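That definition can be made concrete with a minimal sketch. The perceptron below is a deliberately tiny, illustrative model (the data points and names are made up): it adjusts its weights on labeled examples until it encodes the pattern separating two groups, then predicts a label for points it has never seen:

```python
# A perceptron "learns" a pattern by nudging its weights whenever it
# misclassifies a training example; the final weights are the trained model.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred            # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(model, point):
    (w, b), (x1, x2) = model, point
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Two toy clusters: label 0 near the origin, label 1 further out.
data = [((0.1, 0.2), 0), ((0.3, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 1.0), 1)]
model = train_perceptron(data)

print(predict(model, (0.2, 0.1)))  # 0 — resembles the label-0 cluster
print(predict(model, (0.9, 0.9)))  # 1 — resembles the label-1 cluster
```

Real models such as ChatGPT work at vastly larger scale, with billions of parameters instead of three, but the principle is the same: parameters adjusted on examples, then saved to a file and used for prediction.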

"Humans working to create machines to think like we do is nothing new," said Pablo Iannello, professor of law and technology at the University of Dayton. "But for the first time in history, machines are able to communicate with each other and learn from each other without any kind of human input."

"Artificial intelligence becomes really important when you combine different things: one is machine learning, another is the internet of things, and the third one is blockchain," Iannello said.

"If you combine those three things at the very high speed of programming and learning, then you have the situation in which we are today: You have computers that can learn by themselves."

The internet of things is the idea that any object can collect and transmit data to the internet, like smart refrigerators or car sensors. Blockchain is technology that decentralizes the record of digital transactions across computational nodes, famously associated with cryptocurrency.

Large language models like ChatGPT, as well as image generators like Midjourney and Dall-E, draw their data from the billions of words and images that exist on the internet.

ChatGPT has already been used to write everything from children's books to code. It can also be manipulated into producing incorrect answers for basic math problems, and will fabricate facts and evidence with confidence, said Wright State computer science professor Krishnaprasad Thirunarayan.

"That leaves me with mixed feelings," he said. "These tools promise a fertile area of research on trustworthy information processing but, on the other hand, they are not yet ready for prime-time deployment as a personal assistant."

Like any tool, artificial intelligence can be used for good, or it can be used for malicious purposes. Facial recognition software that can help apprehend criminals can also be misused by governments to track and harass citizens, either deliberately or through mistaken identities, Thirunarayan said.

"Premature overreliance on these not-yet-foolproof technologies without sufficient safeguards can have dire consequences," Thirunarayan said.

Artificial intelligence tools propose to disrupt the practice of law in multiple ways. Paralegals and other legal professionals are among those at risk of having their jobs automated by large language models.

But the legal world also faces a major challenge: Developing laws and regulations that protect the humans that interact with AI tools.

Laws tend to lag behind the technological world, and the societal values that come along with those developments, Iannello said.

"Artificial intelligence is changing the way we see life. Law is going to change because the world is changing," Iannello said.

Current law for gathering data is based around the concept of consent, Iannello said. Anytime you go to a website or create an account on Facebook or Google, you accept the terms and conditions, which includes data collection.

"You have your cookie policy, and you will track things from my browser so that you can send me ads," he said. "With AI, this is going to change, because they may predict how your tastes are going to change in the next five years. You will have to click 'Accept' about tastes that you have not even developed. So can you legally do that?"

According to the most recent AI Impacts Survey, nearly half of 731 leading AI researchers think there is at least a 10% chance that an AI capable of learning at the same level as a human being would lead to an extremely negative outcome.

"The worst thing is that it looks nice," Iannello said. "We don't have to worry about politicians. We don't have to worry about corrupt people. We don't have to worry about corruption because machines will solve the problems."

"But if that happens, who's going to control the machines?"

In March, OpenAI released a report that found about 80% of the U.S. workforce could have at least 10% of their tasks affected by AI, while nearly 20% of workers may see at least 50% of their tasks impacted.

A March report by investment banking giant Goldman Sachs found that generative AI as a whole could expose the equivalent of 300 million full-time jobs to automation worldwide.

"If it is trained on an extensive code base, (AI) can lead to mundane programming tasks being templatized and eliminated. This can mean more time to do non-trivial and potentially more interesting tasks, but can also simultaneously mean loss of routine jobs," Thirunarayan said.

The influence spans all wage levels, with higher-income jobs potentially facing greater exposure, according to OpenAI researchers. Among the most affected are office and administrative support systems, finance and accounting, healthcare, customer service, and creative industries like public relations and art.

"A lot of people were aware that AI is trending towards maybe supplementing or impacting many jobs, perhaps in areas like truck driving, for example, and I think a lot of folks thought white collar workers were more immune," said David Wright, Director of Academic Technology & Curriculum Innovation at the University of Dayton.

"But almost everyone who's had any sense of what AI is today and what it can look like tomorrow, we knew that this is going to affect everyone."

The Goldman Sachs report posited that while many jobs would be exposed to automation, others would be created to offset them in areas of supporting machine learning and information technology.

However, other studies show that the wage declines that affected blue collar workers in the last 40 years are now headed for white collar workers as well. In 2021, the National Bureau of Economic Research claimed automation technology has been the primary driver of U.S. income inequality, and that 50% to 70% of wage declines since 1980 come from blue-collar workers replaced by automation.

"All these issues can have far-reaching consequences: They can increase the social divide between the haves and the have-nots, and between the technologically savvy and those without comparable skills. On the other hand, these changes can relieve us of mundane chores and make time for the pursuit of higher goals," Thirunarayan said.

In March, ChatGPT passed the bar exam with flying colors, approaching the 90th percentile of aspiring lawyers who take the test, researchers say. However, as yet, ChatGPT's most recent iteration, GPT-4, has not been able to pass the exam to become a Certified Public Accountant.

Thats because, in part, ChatGPT struggles with computations and critical thinking, said David Rich, a senior manager and CPA with Clark Schaefer Hackett.

Rich said he uses GPT-4 two to three times a week, on everything from doing accounting research to writing memos, though the output text "does take a decent bit of editing," he said.

"I'm a pretty picky writer, but it's always nice to have a good starting place, even if it's just ideas. It's probably saved me about 80% of the time I would have spent getting that initial first draft," Rich said.

ChatGPT isn't the only artificial intelligence disrupting the accounting world. The American Institute of CPAs is one of several organizations developing what's called Dynamic Audit Solutions, to improve how auditors perform their audits.

The reasons businesses value CPAs include personal relationships, critical thinking, and the accountant's ability to be intimately familiar with the ins and outs of their business, something a machine can't replicate, Rich said.

"If it's a large manufacturing company, I'm familiar with how the CEO interacts with the CFO, how they interact with the board. That's just something that AI is never going to be able to do. I won't say never, but it would have a hard time really capturing the value proposition that we're bringing," Rich said.

ChatGPT has thrown a wrench into higher education. If used skillfully, the software can easily write essays virtually indistinguishable from those of a human college student. Students at the University of Dayton are among many now doing their homework with ChatGPT, forcing the university to reckon with how it teaches classes across all disciplines.

"AI is something that looms very large for us, both in terms of how it impacts learning, and how it affects students and how they're learning today," Wright said.

The phenomenon has been met with mixed reception by educators nationwide. While some have called for better anti-cheating software, others have said this is indicative of a broader shift in work.

"Another challenge is how to incorporate AI so that when the students graduate, they have the skills needed to succeed in the workplace, wherever and whatever they do," Wright said.

While AI may be sufficient for college essays, it falls short at producing practical, professional written work, said Gery Deer, who owns and operates GLD Communications in Jamestown and the newspaper the Jamestown Comet.

"I think where I can really smell it is that it's a little too formulaic," he said.

Despite this, ChatGPT is poised to take a sizeable chunk of public relations work. Deer says he has already lost work to ChatGPT, but that's not the biggest worry.

"There's enough work to go around, so I'm less worried about that. The downside is there's nobody proofing it. There's no regard for the audience in this material," he said.

Quality work costs money, but creative work is seen as one of the easiest places to cut costs, Deer said.

"I'm not so much worried about losing my job," Deer said. "I am more concerned with the level of junk that I'm going to have to now compete with."

A group of artists filed a class-action lawsuit against image generators Stable Diffusion and Midjourney in January. AI image generators train on millions of images created by thousands of artists who post their work on the internet. As the model learns from the art contributed to the dataset, users are able to generate images in those artists' styles in seconds, but as it stands, the artist whose style is referenced will never see a cent.

"Style is all an artist has," Deer said. "As a writer, all I can do is rearrange the words, but it's my style that creates that."

Top 10 occupations most exposed to large language models (ChatGPT), according to humans:

Mathematicians

Tax Preparers

Financial Quantitative Analysts

Writers and Authors

Web and Digital Interface Designers

Survey Researchers

Interpreters and Translators

Public Relations Specialists

Animal Scientists

Poets, Lyricists and Creative Writers

Top 10 occupations most exposed to large language models, according to ChatGPT:

Mathematicians

Accountants and Auditors

News Analysts, Reporters, and Journalists

Legal Secretaries and Administrative Assistants

Clinical Data Managers

Climate Change Policy Analysts

Blockchain Engineers

Court Reporters and Simultaneous Captioners

Proofreaders and Copy Markers

Correspondence Clerks

Source: OpenAI

Read the original post:

The future of AI: How tech could transform our lives in the Dayton ... - Dayton Daily News