Archive for the ‘Ai’ Category

How Google's AI Overviews Work, and How to Turn Them Off (You Can't) – WIRED

When can you expect your query to trigger an AI-generated summary of the results? AI Overviews appear for complex queries, says Mallory De Leon, a Google spokesperson. "You'll find AI Overviews in your Google Search results when our systems determine that generative AI can be especially helpful, for example, when you want to quickly understand information from a range of sources." During my initial tests, it felt like the AI Overviews popped up almost at random, and the summaries appeared for simple questions as well as more complicated asks.

According to De Leon, the AI Overview is powered by a customized version of Google's Gemini model that's supplemented with aspects of the company's Search system, like the Knowledge Graph, which contains billions of general facts.

For some AI Overview answers, the webpage links are immediately visible. For others, you have to click "Show more" to see where the information is coming from.

One of my core hesitations about this feature as it rolls out is the continued potential for AI hallucinations, more commonly known as lies. When you interact with Google's Gemini chatbot, a disclaimer at the bottom reads: "Gemini may display inaccurate info, including about people, so double-check its responses." There's no such disclaimer at the bottom of the AI Overview, which often simply reads, "Generative AI is experimental."

When asked why there's no mention of potential hallucinations for AI Overviews, De Leon emphasizes that Google still wants to offer high-quality search results and mentions that the company ran adversarial red-teaming tests to uncover potential weak points in the feature.

"This implementation of generative AI is rooted in Search's core quality and safety systems, with built-in guardrails to prevent low-quality or harmful information from surfacing," she says. "AI Overviews are designed to highlight information that can be easily verified by the supporting information that we surface."

Knowing this, you might still want to click through the webpage links to double-check that the information is actually correct. It's hard to imagine, though, that many users, who are often looking for quick answers, will spend extra time reading over the source material for Google's AI-generated answer.

Liz Reid, Google's head of Search, recently told my colleague Lauren Goode that AI Overviews are expected to arrive in countries outside of the United States before the end of 2024, so over a billion people will likely soon encounter this new feature. As someone whose job relies on readers actually clicking links and spending time reading articles, I'm of course apprehensive about this change, and I'm not alone.

Beyond concerns from publishers, it also remains unclear what additional impacts from Google's AI Overviews might trickle down to users. Yes, OpenAI's ChatGPT and other AI tools are quite popular in Silicon Valley tech circles, but this feature will likely expose billions of people who have never used a chatbot before to AI-generated text. Even though AI Overviews are designed to save you time, they might lead to less trustworthy results.

Read the original:

How Google's AI Overviews Work, and How to Turn Them Off (You Can't) - WIRED

We have to stop ignoring AI’s hallucination problem – The Verge

Google I/O introduced an AI assistant that can see and hear the world, while OpenAI put its version of a Her-like chatbot into an iPhone. Next week, Microsoft will be hosting Build, where it's sure to have some version of Copilot or Cortana that understands pivot tables. Then, a few weeks after that, Apple will host its own developer conference, and if the buzz is anything to go by, it'll be talking about artificial intelligence, too. (Unclear if Siri will be mentioned.)

AI is here! It's no longer conceptual. It's taking jobs, making a few new ones, and helping millions of students avoid doing their homework. According to most of the major tech companies investing in AI, we appear to be at the start of one of those rare monumental shifts in technology. Think the Industrial Revolution, or the creation of the internet or the personal computer. All of Silicon Valley, all of Big Tech, is focused on taking large language models and other forms of artificial intelligence and moving them from the laptops of researchers into the phones and computers of average people. Ideally, they will make a lot of money in the process.

But I can't really care about that, because Meta AI thinks I have a beard.

I want to be very clear: I am a cis woman and do not have a beard. But if I type "show me a picture of Alex Cranz" into the prompt window, Meta AI inevitably returns images of very pretty dark-haired men with beards. I am only some of those things!

Meta AI isn't the only one to struggle with the minutiae of The Verge's masthead. ChatGPT told me yesterday that I don't work at The Verge. Google's Gemini didn't know who I was (fair), but after telling me Nilay Patel was a founder of The Verge, it then apologized and corrected itself, saying he was not. (I assure you he was.)

The AI keeps screwing up because these computers are stupid. Extraordinary in their abilities and astonishing in their dimwittedness. I cannot get excited about the next turn in the AI revolution because that turn is into a place where computers cannot consistently maintain accuracy about even minor things.

I mean, they even screwed up during Google's big AI keynote at I/O. In a commercial for Google's new AI-ified search engine, someone asked how to fix a jammed film camera, and it suggested they open the back door and gently remove the film. That is the easiest way to destroy any photos you've already taken.

An AI's difficult relationship with the truth is called "hallucinating." In extremely simple terms: these machines are great at discovering patterns of information, but in their attempt to extrapolate and create, they occasionally get it wrong. They effectively hallucinate a new reality, and that new reality is often wrong. It's a tricky problem, and every single person working on AI right now is aware of it.

One Google ex-researcher claimed it could be fixed within the next year (though he lamented that outcome), and Microsoft has a tool for some of its users that's supposed to help detect them. Google's head of Search, Liz Reid, told The Verge it's aware of the challenge, too. "There's a balance between creativity and factuality with any language model," she told my colleague David Pierce. "We're really going to skew it toward the factuality side."

But notice how Reid said there was a balance? That's because a lot of AI researchers don't actually think hallucinations can be solved. A study out of the National University of Singapore suggested that hallucinations are an inevitable outcome of all large language models. Just as no person is 100 percent right all the time, neither are these computers.

And that's probably why most of the major players in this field, the ones with real resources and a financial incentive to make us all embrace AI, think you shouldn't worry about it. During Google's I/O keynote, the company added, in tiny gray font, the phrase "check responses for accuracy" to the screen below nearly every new AI tool it showed off: a helpful reminder that its tools can't be trusted, but also a sign that it doesn't think that's a problem. ChatGPT operates similarly. In tiny font just below the prompt window, it says, "ChatGPT can make mistakes. Check important info."

That's not a disclaimer you want to see from tools that are supposed to change our whole lives in the very near future! And the people making these tools do not seem to care too much about fixing the problem beyond adding a small warning.

Sam Altman, the CEO of OpenAI, who was briefly ousted for prioritizing profit over safety, went a step further and said anyone who had an issue with AI's accuracy was naive. "If you just do the naive thing and say, 'Never say anything that you're not 100 percent sure about,' you can get them all to do that. But it won't have the magic that people like so much," he told a crowd at Salesforce's Dreamforce conference last year.

This idea that there's a kind of unquantifiable magic sauce in AI that will allow us to forgive its tenuous relationship with reality is brought up a lot by people eager to hand-wave away accuracy concerns. Google, OpenAI, Microsoft, and plenty of other AI developers and researchers have dismissed hallucination as a small annoyance that should be forgiven because they're on the path to making digital beings that might make our own lives easier.

But apologies to Sam and everyone else financially incentivized to get me excited about AI: I don't come to computers for the inaccurate magic of human consciousness. I come to them because they are very accurate when humans are not. I don't need my computer to be my friend; I need it to get my gender right when I ask and to help me not accidentally expose film when fixing a busted camera. Lawyers, I assume, would like it to get the case law right.

I understand where Sam Altman and other AI evangelists are coming from. There is a possibility that, in some far future, we will create a real digital consciousness from ones and zeroes. Right now, the development of artificial intelligence is moving at an astounding speed that puts many previous technological revolutions to shame. There is genuine magic at work in Silicon Valley right now.

But the AI thinks I have a beard. It can't consistently figure out the simplest tasks, and yet it's being foisted upon us with the expectation that we celebrate the incredible mediocrity of the services these AIs provide. While I can certainly marvel at the technological innovations happening, I would like my computers not to sacrifice accuracy just so I have a digital avatar to talk to. That is not a fair exchange; it's only an interesting one.

Follow this link:

We have to stop ignoring AI's hallucination problem - The Verge

Bye Bye, AI: How to turn off Google’s annoying AI overviews and just get search results – Tom’s Hardware

Google's "AI Overviews" feature, also known as SGE (Search Generative Experience), is a raging trash fire that threatens to choke the open web with its stench. Instead of directing you to expert insights from reputable sources, Google is now putting plagiarized and often incorrect AI summaries above its search results. So when you search for medical advice, for example, the AI may tell you to drink urine to get rid of kidney stones, and you'll have to scroll past that "advice" to find links to articles from human doctors.

Unfortunately, Google does not provide a way to turn off AI Overviews in its settings, but there are a few ways to avoid these atrocities and go straight to search results. In perhaps a tacit admission that its default results page is now a junk yard, the search giant has added a "web" tab to the site so, just like you can narrow your search to "images" or "videos" or "news," you can now get a plain old list of web pages without AI, answer boxes or other cruft.

Below, I'll show you how to filter AI overviews out of the results page using a Chrome extension that I wrote. Or you can send your searches directly to the web tab from Chrome's address bar, avoiding the need to turn anything off. Unfortunately, at the moment, neither of these methods works for Chrome on Android or iOS. However, you can use a different mobile browser, such as Firefox.

The Google AI Overview, like all parts of an HTML page, can be altered using JavaScript. There are a few extensions in the Chrome web store that are programmed to locate the AI Overview block and set its CSS display value to "none."
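To make that mechanism concrete, here is a minimal, hypothetical content-script sketch of the general approach. It is not the actual code of any of these extensions, and both the "AI Overview" label match and the container lookup are assumptions; Google's real markup differs and changes often.

    // content.js - hypothetical sketch of how an extension can hide the AI Overview.
    // The label text and the container lookup below are assumptions, not Google's real markup.
    function hideAiOverview() {
      const headings = document.querySelectorAll('h1, h2, div[role="heading"]');
      for (const heading of headings) {
        if (heading.textContent.trim() === 'AI Overview') {
          // Walk up to an enclosing block (assumed to wrap the whole overview)
          // and set its CSS display value to "none".
          const container = heading.closest('div');
          if (container) container.style.display = 'none';
        }
      }
    }

    // Google builds result pages dynamically, so re-run whenever the page changes.
    new MutationObserver(hideAiOverview).observe(document.documentElement, { childList: true, subtree: true });
    hideAiOverview();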

After seeing some of the other extensions on the market, including the appropriately named Hide Google AI Overview, I decided to write my own AI Overview blocking extension, called Bye Bye, Google AI. Like all Chrome extensions, it works in both the Chrome and Microsoft Edge browsers.

Bye Bye, Google AI also has the option to hide (effectively turn off) discussions blocks, shopping blocks, featured snippets, video blocks, and sponsored links on the Google results page. You can choose which ones you want to filter out by going to the options menu (right-click the toolbar icon and select Options).

The problem with my extension or any of the others is that Google can easily block them or break them. If Google makes small changes in the code on its results pages, the JavaScript in the extension may no longer be able to locate the AI Overview blocks (or other block types) to turn them off.


A potentially more reliable long-term solution for turning off AI Overviews is to configure your browser so that, when you search from the address bar, it sends the queries straight to the web tab. The Bye Bye, Google AI extension will search the web tab if you type w, then a space, and then your query.

However, below we'll see how to configure the Chrome browser so that it sends all queries from the address bar directly to the web tab, with no extension or w + spacebar required. The disadvantage of sending traffic to the web tab is that it doesn't show other kinds of results, such as videos, discussions, featured snippets, images, or shopping blocks, and you might want to see some or all of those.

If, like me, you initiate most of your web searches from the Chrome browser's address bar, you can make a simple change that will direct all of your queries to Google's web search tab, no extension required.

1. Navigate to chrome://settings/searchEngines in Chrome, or click Settings > Search engine > Manage search engines and site search.

2. Click the Add button next to Site search.

A dialog box appears, allowing you to create a new "site search" entry.

3. Fill in the fields in the dialog box as follows, then click Add.

4. Select "Make default" from the three-dot menu next to your new entry.

The Google (Web) engine will now appear on the Search engines list. When you enter a query in the address bar, it will direct you straight to the Web tab on Google. The real secret is that the search engine we created adds the parameter ?udm=14 to the search query.
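For reference, a site-search entry of this kind typically uses a URL template along the lines of the one below, where %s stands in for your query. Treat everything other than the udm=14 parameter mentioned above as an assumption about the exact format.

    https://www.google.com/search?q=%s&udm=14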

While Google Chrome for the desktop makes it easy to change your address bar search or install extensions, Chrome for the phone is a different story. On Chrome for Android and iOS, you can't use extensions at all, and you can only choose from a limited group of search engines. Yes, you can select a custom search engine, but it has to be an existing engine on the Internet you've visited; you can't manually type in a search URL and, therefore, can't add the all-important ?udm=14 to the query string.

Unfortunately, neither mobile Safari nor mobile Edge allows you to manually add a search engine. However, mobile Firefox, available for iOS and Android, does have this capability. Here's how to use it.

1. Install Firefox on your phone if you don't have it already.

2. Navigate to Settings.

3. Tap Search.

4. Tap Default Search Engine.

5. Tap Add search engine.

6. Fill out the fields as follows and then click Save.

7. Select Google (Web) from the menu.

Now, when you search from Firefox's address bar, you'll get the Google web tab.

Read more here:

Bye Bye, AI: How to turn off Google's annoying AI overviews and just get search results - Tom's Hardware

Hollywood agency CAA aims to help stars manage their own AI likenesses – TechCrunch

Creative Artists Agency (CAA), one of the top entertainment and sports talent agencies, is hoping to be at the forefront of AI protection services for celebrities in Hollywood.

With many stars having their digital likenesses used without permission, CAA has built a virtual media storage system for A-list talent (actors, athletes, comedians, directors, musicians, and more) to store their digital assets, such as their names, images, digital scans, voice recordings, and so on. The new development is a part of the CAAvault, the company's studio where actors record their bodies, faces, movements, and voices using scanning technology to create AI clones.

CAA teamed up with AI tech company Veritone to provide its digital asset management solution, the company announced earlier this week.

The announcement arrives amid a wave of AI deepfakes of celebrities, which are often created without their consent. Tom Hanks, a famous actor and client on CAA's roster, fell victim to an AI scam seven months ago. He claimed that a company used an AI-generated video of him to promote a dental plan without permission.

"Over the last couple of years or so, there has been a vast misuse of our clients' names, images, likenesses, and voices without consent, without credit, without proper compensation. It's very clear that the law is not currently set up to be able to protect them, and so we see many open lawsuits out there right now," Shannon said.

A significant amount of personal data is necessary to create digital clones, which raises numerous privacy concerns due to the risk of compromising or misusing sensitive information. CAA clients can now store their AI digital doubles and other assets within a secure personal hub in the CAAvault, which can only be accessed by authorized users, allowing them to share and monetize their content as they see fit.

"This is giving the ability to start setting precedents for what consent-based use of AI looks like," CAA's head of strategic development, Alexandra Shannon, told TechCrunch. "Frankly, our view has been that the law is going to take time to catch up, and so by the talent creating and owning their digital likeness with [the CAAvault], there is now a legitimate way for companies to work with one of our clients. If a third party chooses not to work with them in the right way, it's much easier for legal cases to show there was an infringement of their rights and help protect clients over time."

Notably, the vault also ensures actors and other talent are rightfully compensated when companies use their digital likenesses.

"All these assets are owned by the individual client, so it is largely up to them if they want to grant access to anybody else … It is also completely up to the talents to decide the right business model for opportunities. This is a new space, and it is very much forming. We believe these assets will increase in value and opportunity over time. This shouldn't be a cheaper way to work with somebody … We view [AI clones] as an enhancement rather than being for cost savings," Shannon added.

CAA also represents Ariana Grande, Beyoncé, Reese Witherspoon, Steven Spielberg, and Zendaya, among others.

The use of AI cloning has sparked many debates in Hollywood, with some believing it could lead to fewer job opportunities, as studios might choose digital clones over real actors. This was a major point of contention during the 2023 SAG-AFTRA strikes, which ended in November after members approved a new agreement with AMPTP (Alliance of Motion Picture and Television Producers) that recognized the importance of human performers and included guidelines on how digital replicas should be used.

There are also concerns surrounding the unauthorized use of AI clones of deceased celebrities, which can be disturbing to family members. For instance, Robin Williams' daughter expressed her disdain for an AI-generated voice recording of the star. However, some argue that, when done ethically, it can be a sentimental way to preserve an iconic actor and recreate their performances in future projects for all generations to enjoy.

"AI clones are an effective tool that enables legacies to live on into future generations. CAA takes a consent- and permission-based approach to all AI applications and would only work with estates that own and have permissions for the use of these likeness assets. It is up to the artists as to whom they wish to grant ownership of and permission for use after their passing," Shannon noted.

Shannon declined to share which of CAA's clients are currently storing their AI clones in the vault; however, she said it was only a select few at the moment. CAA also charges a fee for clients to participate in the vault, but didn't say exactly how much it costs.

"The ultimate goal will be to make this available to all our clients and anyone in the industry. It is not inexpensive, but over time, the costs will continue to come down," she added.

Read this article:

Hollywood agency CAA aims to help stars manage their own AI likenesses - TechCrunch

‘Copper is the new oil,’ and prices could soar 50% as AI, green energy, and military spending boost demand, top … – Fortune

Copper is emerging as the next indispensable industrial commodity, mirroring oil's rise in earlier decades, a top commodities analyst said.

This time around, new forces in the economy, namely the advent of artificial intelligence, the explosion of data centers, and the green energy revolution, are boosting demand for copper, while the development of new weapons is adding to it as well, according to Jeff Currie, chief strategy officer of Energy Pathways at Carlyle.

"Copper is the new oil," he told Bloomberg TV on Tuesday, noting that his conversations with traders also reinforce his bullishness. "It is the highest-conviction trade I've ever seen."

Copper has long been a key industrial bellwether as its uses range widely from manufacturing and construction to electronics and other high-tech products.

But billions of dollars pouring into artificial intelligence and renewable energy are a relatively new part of copper's outlook, Currie noted, acknowledging that he made a similar prediction in 2021 when he was an analyst at Goldman Sachs.

"I'm confident that this time is lift-off, and I think we're going to see more momentum behind it," he said. What's different this time is that there are now three sources of demand (AI, green energy, and the military) instead of just green energy three years ago.

And while demand is high, supply remains tight as bringing new copper mines online can take 12 to 26 years, Currie pointed out.

That should eventually send prices soaring to $15,000 per ton, he predicted. Copper prices are already at record highs, with benchmark prices in London at about $10,000 per ton, more than double their pandemic-era lows of early 2020.

At some point, the price will get so high that it will create demand destruction, meaning buyers balk at paying so much. But Currie doesn't know what that level is.

"But I go back to the 2000s; I was bullish on oil then as I am on copper today," he added, recalling that crude shot up from $20 to $140 per barrel at the time. "So the upside on copper here is very significant."

Copper was also a key catalyst in BHP's proposed takeover of Anglo American, a $40 billion deal that would create the world's top copper producer. But Anglo has rejected the offer and recently announced plans to restructure the group, including selling its diamond business, De Beers.

Go here to see the original:

'Copper is the new oil,' and prices could soar 50% as AI, green energy, and military spending boost demand, top ... - Fortune