Archive for the ‘Ai’ Category

Astrophysicist Neil deGrasse Tyson offers optimistic view of AI, ‘long awaited force’ of ‘reform’ – Yahoo News

Astrophysicist Neil deGrasse Tyson sees artificial intelligence as a much-needed stress-test for modern society, with a view that it will lead humanity to refine some of its more outdated ideas and systems now that the "genie is out of the bottle."

"Of course AI will replace jobs," Tyson said in comments to Fox News Digital. "Entire sectors of our economy have gone obsolete in the presence of technology ever since the dawn of the industrial era.

"The historical flaw in the reasoning is to presume that when jobs disappear, there will be no other jobs for people to do," he argued. "More people are employed in the world than ever before, yet none of them are making buggy whips. Just because you cant see a new job sector on the horizon, does not mean its not there."

AI has proven a catalyst for societal fears and hopes since OpenAI released ChatGPT to the public for testing and interaction. AI relies on data to improve, and for a large language model, that data comes from conversations, prompts and interactions with actual human beings.

Some tech leaders raised concerns about what could come next from such a powerful AI model, calling for a six-month pause on development. Others described AI as potentially the most transformative technology since the industrial revolution and the printing press.

Tyson has more consistently discussed the positive potential of AI as a "long-needed, long-awaited force" of "reform."

"When computing power rapidly exceeded the humanmental ability to calculate, scientists and engineers did not go running for the hills: We embraced it," he said. "We absorbed it. The ongoing advances allowed us to think about and solve ever deeper, ever more complex problems on Earth and in the universe."

"Now that computers have mastered language and culture, feeding off everything weve put on the internet, my first thought is cool, let it do thankless language stuff that nobody really wants to do anyway, and for which people hardly ever get visible credit, like write manuals or brochures or figure captions or wiki pages," Tyson added.

He argued that teachers worrying about students using ChatGPT or other AI to cheat on essays and term papers could instead see this as an opportunity to reshape education.

"If students cheat on a term paper by getting ChatGPT to write it for them, should we blame the student? Or is it the fault of an education system that weve honed over the past century to value grades more than students value learning?" Tyson asked.

"ChatGPT may be the long-needed, long-awaited force to reform how and why we value what we learn in school.

"The urge to declare this time is different is strong, as AI also begins to replace our creativity," he explained. "If thats inevitable, then bring it on.

"If AI can compose a better opera than a human can, then let it do so," he continued. "That opera will be performed by people, viewed by a human audience that holds jobs we do not yet foresee. And even if robots did perform the opera, that itself could be an interesting sight."

While some worry about the lack of oversight and legislation currently in place to handle AI and its development, Tyson noted that the number of countries with AI ministers or czars "is growing."

"At times like this, one can futilely try to ban the progress of AI. Or instead, push for the rapid development of tools to tame it."

Excerpt from:

Astrophysicist Neil deGrasse Tyson offers optimistic view of AI, 'long awaited force' of 'reform' - Yahoo News

Gary Marcus Used to Call AI Stupid. Now He Calls It Dangerous – WIRED

Back then, only months ago, Marcus's quibbling was technical. But now that large language models have become a global phenomenon, his focus has shifted. The crux of Marcus's new message is that the chatbots from OpenAI, Google, and others are dangerous entities whose powers will lead to a tsunami of misinformation, security bugs, and defamatory hallucinations that will automate slander. This seems to court a contradiction. For years Marcus had charged that the claims of AI's builders are overhyped. Why is AI now so formidable that society must restrain it?

Marcus, always loquacious, has an answer: "Yes, I've said for years that [LLMs] are actually pretty dumb, and I still believe that. But there's a difference between power and intelligence. And we are suddenly giving them a lot of power." In February he realized that the situation was sufficiently alarming that he should spend the bulk of his energy addressing the problem. Eventually, he says, he'd like to head a nonprofit organization devoted to making the most, and avoiding the worst, of AI.

Marcus argues that in order to counter all the potential harms and destruction, policymakers, governments, and regulators have to hit the brakes on AI development. Along with Elon Musk and dozens of other scientists, policy nerds, and just plain freaked-out observers, he signed the now-famous petition demanding a six-month pause in training new LLMs. But he admits that he doesn't really think such a pause would make a difference and that he signed mostly to align himself with the community of AI critics. Instead of a training time-out, he'd prefer a pause in deploying new models or iterating current ones. This would presumably have to be forced on companies, since there's fierce, almost existential, competition between Microsoft and Google, with Apple, Meta, Amazon, and uncounted startups wanting to get into the game.

Marcus has an idea for who might do the enforcing. He has lately been insistent that the world needs, immediately, a global, neutral, nonprofit International Agency for AI, which would be referred to with an acronym that sounds like a scream (Iaai!).

As he outlined in an op-ed he coauthored in The Economist, such a body might work like the International Atomic Energy Agency, which conducts audits and inspections to identify nascent nuclear programs. Presumably this agency would monitor algorithms to make sure they don't include bias or promote misinformation or take over power grids while we aren't looking. While it seems a stretch to imagine the United States, Europe, and China all working together on this, maybe the threat of an alien, if homegrown, intelligence overthrowing our species might lead them to act in the interests of Team Human. Hey, it worked with that other global threat, climate change! Uh ...

In any case, the discussion about controlling AI will gain even more steam as the technology weaves itself deeper and deeper into our lives. So expect to see a lot more of Marcus and a host of other talking heads. And that's not a bad thing. Discussion about what to do with AI is healthy and necessary, even if the fast-moving technology may well develop regardless of any measures that we painstakingly and belatedly adopt. The rapid ascension of ChatGPT into an all-purpose business tool, entertainment device, and confidant indicates that, scary or not, we want this stuff. Like every other huge technological advance, superintelligence seems destined to bring us irresistible benefits, even as it changes the workplace, our cultural consumption, and inevitably, us.

Go here to see the original:

Gary Marcus Used to Call AI Stupid. Now He Calls It Dangerous - WIRED

AI Is Coming for Your Web Browser. Here’s How to Use It – WIRED

There's now an Image Creator built right into Edge.

After a few seconds, you'll be met with four suggested images. Click on any of them for a closer look and to find the options for sharing them, downloading them, or saving them to a collection inside Edge. Your recently generated images are shown further down the sidebar, so you can get back to them if you need to, and there's also the Explore ideas tab if you need more inspiration.

This is all free to use, though you only get a certain number of boosts per month, which speed up the AI art generation process. If you run out of boosts, you can get more through the Microsoft Rewards scheme; otherwise you'll need to be more patient while waiting for your pictures to come back.

Other Browsers

It's fair to say that Microsoft Edge is leading the way at the moment when it comes to AI tools inside the browser, but other developers are getting involved too. Opera is completely redesigning its browser to build in generative AI features. The result is called Opera One, and it's now available in the form of an early-access developer version.

Right now there's not much to see in the way of AI, except for integrations for ChatGPT and ChatGPT alternative ChatSonic in the sidebar on the left. However, the whole interface is being revamped to be more fluid and modular, so expect to see plenty more features added over time. A full launch is scheduled for later this year.

The brand new Opera One comes with ChatGPT built in.

Meanwhile, the Brave browser just launched a new feature called the Summarizer. It leverages the power of AI to give you short and informative direct answers to your questions, based on text that's been pulled from web search results. The thinking is that you get the responses you need faster and in fewer clicks.

For example, you might want to know the difference between two different types of drinks, or need the details of what happened at a particular historical event. The Summarizer should be able to give you a brief overview without you having to actually open any web pages, and the sources for the summary are listed underneath.
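
To make that pipeline concrete, here is a minimal, purely illustrative sketch of the general pattern behind a summarizer of this kind: gather text snippets from search results, rank sentences by word frequency, and return a short summary along with the sources it drew from. It is an assumption-laden toy, not Brave's actual implementation, and every function name, URL, and snippet in it is invented.

```python
import re
from collections import Counter

def summarize_snippets(snippets, max_sentences=2):
    """Build a short extractive summary from search-result snippets.

    snippets: list of (source_url, text) pairs, e.g. scraped result text.
    Returns (summary, sources) where sources lists the contributing URLs.
    """
    # Split every snippet into sentences, remembering where each came from.
    sentences = []
    for url, text in snippets:
        for sent in re.split(r"(?<=[.!?])\s+", text.strip()):
            if sent:
                sentences.append((url, sent))

    # Score words by overall frequency, ignoring very short words.
    words = [w.lower()
             for _, s in sentences
             for w in re.findall(r"[A-Za-z']+", s)
             if len(w) > 3]
    freq = Counter(words)

    # A sentence's score is the average frequency of its longer words.
    def score(sentence):
        ws = [w.lower() for w in re.findall(r"[A-Za-z']+", sentence) if len(w) > 3]
        return sum(freq[w] for w in ws) / len(ws) if ws else 0.0

    ranked = sorted(sentences, key=lambda item: score(item[1]), reverse=True)
    top = ranked[:max_sentences]
    summary = " ".join(sent for _, sent in top)
    sources = sorted({url for url, _ in top})
    return summary, sources

# Example with made-up snippets about two kinds of coffee.
results = [
    ("https://example.com/espresso",
     "Espresso is brewed by forcing hot water through finely ground coffee. "
     "Espresso shots are small and concentrated."),
    ("https://example.com/drip",
     "Drip coffee is brewed by letting hot water filter through coarser grounds, "
     "producing a larger and milder cup."),
]
summary, sources = summarize_snippets(results)
print(summary)
print("Sources:", sources)
```

A production system would replace the frequency heuristic with a language model and pull the snippets live from a search index, but the overall shape, summarize then cite the sources underneath, is the same.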

View original post here:

AI Is Coming for Your Web Browser. Here's How to Use It - WIRED

Carl’s Jr. and Hardee’s to roll out AI drive-thru ordering – USA TODAY

CKE Restaurants Holdings, the parent company of fast food chains Carl's Jr. and Hardee's, is rolling out artificial intelligence at its drive-thrus.

The company is partnering with AI companies Presto Automation, OpenCity, and Valyant AI to automate voice ordering at participating drive-thru locations across the country, according to news releases. Carl's Jr. and Hardee's operate roughly 2,800 restaurants across 44 states.

The partnerships are meant to boost accuracy, speed, and revenue and help fast-food chains manage staffing shortages.

CKE chief technology officer Phil Crawford noted that a pilot program with Presto yielded positive results, with deployed stores recording a "significant" uptick in revenue thanks to the technology's ability to upsell customers, according to a news release.

In a February earnings call, Presto CEO Rajat Suri said the company's AI "never forgets to upsell, and upsells better than a human." The company also lists Del Taco and Checkers as clients.
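
For readers curious what "never forgets to upsell" means mechanically, here is a small, hypothetical sketch of just that one idea: after each item is added to an order, a rules-based step looks up a complementary item to suggest. It is not based on Presto's, OpenCity's, or Valyant AI's actual systems, and the menu, prices, and pairings are invented.

```python
# Toy drive-thru order flow with a rule-based upsell step.
# Items, prices, and pairings are invented for illustration only.
MENU = {"burger": 5.49, "fries": 2.29, "shake": 3.19, "soda": 1.89}

# Complementary item to suggest when a given item is ordered.
UPSELL_PAIRS = {"burger": "fries", "fries": "soda", "shake": None, "soda": None}

def take_order(requested_items):
    """Add each requested item and suggest a pairing after every add."""
    order, suggested = [], []
    for item in requested_items:
        if item not in MENU:
            print(f"Sorry, we don't have {item}.")
            continue
        order.append(item)
        pairing = UPSELL_PAIRS.get(item)
        # Only suggest things the guest hasn't already asked for.
        if pairing and pairing not in requested_items and pairing not in suggested:
            suggested.append(pairing)
            print(f"Would you like to add {pairing} for ${MENU[pairing]:.2f}?")
    total = sum(MENU[i] for i in order)
    print(f"Your total is ${total:.2f}.")
    return order, total

take_order(["burger", "shake"])
```

The real products wrap logic like this in speech recognition and a conversational model; the point of the sketch is only that the upsell prompt is systematic rather than left to a busy employee's memory.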

CKE is also using OpenCity's voice ordering platform, Tori, and Valyant AI's conversational AI platform, Holly, at select restaurants, according to news releases.

"The AI technology has transformed our drive-thru experience, providing us with a competitive edge in the market and helping us to better serve our guests," Crawford said in a Thursday news release from OpenCity.

See original here:

Carl's Jr. and Hardee's to roll out AI drive-thru ordering - USA TODAY

ChatGPT and the new AI are wreaking havoc on cybersecurity in … – ZDNet

Generative artificial intelligence is transforming cybersecurity, aiding both attackers and defenders. Cybercriminals are harnessing AI to launch sophisticated and novel attacks at large scale. And defenders are using the same technology to protect critical infrastructure, government organizations, and corporate networks, said Christopher Ahlberg, CEO of threat intelligence platform Recorded Future.

Generative AI has helped bad actors innovate and develop new attack strategies, enabling them to stay one step ahead of cybersecurity defenses. AI helps cybercriminals automate attacks, scan attack surfaces, and generate content that resonates with specific geographic regions and demographics, allowing them to target a broader range of potential victims across different countries. Cybercriminals have adopted the technology to create convincing phishing emails: AI-generated text helps attackers produce highly personalized emails and text messages that are more likely to deceive targets.

"I think you don't have to think very creatively to realize that, man, this can actually help [cybercriminals] be authors, which is a problem," Ahlberg said.

Defenders are using AI to fend off attacks. Organizations are using the tech to prevent leaks and find network vulnerabilities proactively. It also dynamically automates tasks such as setting up alerts for specific keywords and detecting sensitive information online. Threat hunters are using AI to identify unusual patterns and summarize large amounts of data, connecting the dots across multiple sources of information and hidden patterns.
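
As a concrete, if simplified, illustration of two of those defensive tasks, keyword alerts and sensitive-information detection, here is a minimal rule-based sketch. The watchlist terms and patterns are invented, and real platforms layer trained models on top of rules like these rather than relying on them alone.

```python
import re

# Invented watchlist and patterns for illustration only.
ALERT_KEYWORDS = {"acme-corp", "vpn credentials", "internal-build"}
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ipv4 address": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "aws-style key id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
}

def scan_document(text):
    """Return watchlist hits and sensitive-data matches found in a document."""
    lowered = text.lower()
    keyword_hits = [kw for kw in ALERT_KEYWORDS if kw in lowered]
    sensitive_hits = {
        label: pattern.findall(text)
        for label, pattern in SENSITIVE_PATTERNS.items()
        if pattern.search(text)
    }
    return keyword_hits, sensitive_hits

# Example: a scraped forum post that should raise alerts.
post = "Selling acme-corp VPN credentials, contact admin@example.com via 10.0.0.12"
keywords, sensitive = scan_document(post)
print("Keyword alerts:", keywords)
print("Sensitive data:", sensitive)
```

In practice the interesting part is what feeds a scanner like this, crawled forums, paste sites, and internal logs, and how the alerts are triaged, which is where the AI-assisted pattern-spotting and summarization Ahlberg describes comes in.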

The work still requires human experts, but Ahlberg says the generative AI technology we're seeing in projects like ChatGPT can help.

"We want to speed up the analysis cycle [to] help us analyze at the speed of thought," he said. "That's a very hard thing to do and I think we're seeing a breakthrough here, which is pretty exciting."

Ahlberg also discussed the potential threats that highly intelligent machines might bring. As the world becomes increasingly digital and interconnected, the ability to bend reality and shape perceptions could be exploited by malicious actors. These threats are not limited to nation-states, making the landscape even more complex and asymmetric.

AI has the potential to help protect against these emerging threats, but it also presents its own set of risks. For example, machines with high processing capabilities could hack systems faster and more effectively than humans. To counter these threats, we need to ensure that AI is used defensively and with a clear understanding of who is in control.

As AI becomes more integrated into society, it's important for lawmakers, judges, and other decision-makers to understand the technology and its implications. Building strong alliances between technical experts and policymakers will be crucial in navigating the future of AI in threat hunting and beyond.

AI's opportunities, challenges, and ethical considerations in cybersecurity are complex and evolving. Ensuring unbiased AI models and maintaining human involvement in decision-making will help manage ethical challenges. Vigilance, collaboration, and a clear understanding of the technology will be crucial in addressing the potential long-term threats of highly intelligent machines.

Ahlberg also raised concerns about China, Russia, and economic adversaries deploying autonomous machines. These countries likely won't slow down AI development or share ethical considerations. While having the ability to "pull the plug" on such machines is a smart safeguard, he suggests that the integration of technology into society and the global economy will likely make it hard to detach. Ahlberg emphasizes the need to design products and machines with clarity about who controls them.

"The big thing that the internet did in all of this is that the internet sort of became the place where all the world's information migrated," said Ahlberg. "These large language models are doing pretty magical things to speed up that thinking cycle."

He added, "In the next 25 years, the world becomes a reflection of the internet."

Go here to read the rest:

ChatGPT and the new AI are wreaking havoc on cybersecurity in ... - ZDNet