Archive for the ‘Ai’ Category

NSA is creating a hub for AI security, Nakasone says – The Record from Recorded Future News

The National Security Agency is consolidating its various artificial intelligence efforts into a new hub, its director announced Thursday.

The Artificial Intelligence Security Center will become the spy agency's focal point for AI activities such as leveraging foreign intelligence insights, helping to develop best-practices guidelines for the fast-developing technology and creating risk frameworks for AI security, Army Gen. Paul Nakasone said during an event at the National Press Club in Washington.

The new entity will be housed within the agency's Cybersecurity Collaboration Center and help industry understand the threats against their intellectual property and collaborate to help prevent and eradicate threats, Nakasone told the audience, adding it would team with organizations throughout the Defense Department, intelligence community, academia and foreign partners.

The announcement comes after the NSA and U.S. Cyber Command, which Nakasone also helms, recently finished separate reviews of how they would use artificial intelligence in the future. The Central Intelligence Agency also said it plans to launch its own artificial intelligence-based chatbot.

One of the findings of the study was a clear need to focus on AI security, according to Nakasone, who noted NSA has particular responsibilities for such work because the agency is the designated federal manager for national security systems and already has extensive ties to the sprawling defense industrial base.

"While U.S. firms are increasingly acquiring and developing generative AI technology, foreign adversaries are also moving quickly to develop and apply their own AI, and we anticipate they will begin to explore and exploit vulnerabilities of U.S. and allied AI systems," the four-star warned.

He described AI security as protecting systems from learning, doing and revealing the wrong thing, as well as safeguarding them from digital attacks and ensuring malicious foreign actors can't steal America's innovative AI capabilities.

Nakasone did not specify who would lead the center or how large it might grow.

"Today, the U.S. leads in this critical area, but this lead should not be taken for granted," he said.


Martin Matishak is a senior cybersecurity reporter for The Record. He spent the last five years at Politico, where he covered Congress, the Pentagon and the U.S. intelligence community and was a driving force behind the publication's cybersecurity newsletter.

View original post here:

NSA is creating a hub for AI security, Nakasone says - The Record from Recorded Future News

Google adds a switch for publishers to opt out of becoming AI training data – The Verge

Google just announced it's giving website publishers a way to opt out of having their data used to train the company's AI models while remaining accessible through Google Search. The new tool, called Google-Extended, allows sites to continue to get scraped and indexed by crawlers like the Googlebot while avoiding having their data used to train AI models as they develop over time.

The company says Google-Extended will let publishers manage whether their sites help improve Bard and Vertex AI generative APIs, adding that web publishers can use the toggle to control access to content on a site. Google confirmed in July that it's training its AI chatbot, Bard, on publicly available data scraped from the web.

Google-Extended is available through robots.txt, the text file that tells web crawlers which parts of a site they may access. Google notes that as AI applications expand, it will continue to explore additional machine-readable approaches to choice and control for web publishers, and that it will have more to share soon.
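To make the mechanism concrete, here is a minimal robots.txt sketch, assuming Google's documented Google-Extended token; the exact rules a site needs will depend on what is already in its robots.txt:

    # Keep the site crawlable and indexed for Search
    User-agent: Googlebot
    Allow: /

    # Opt the whole site out of use for Bard and Vertex AI training
    User-agent: Google-Extended
    Disallow: /

With rules like these, Googlebot continues to crawl the site for Search, while the Google-Extended token signals that the content should not be used to improve Google's generative AI models.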

Already, many sites have moved to block the web crawler that OpenAI uses to scrape data and train ChatGPT, including The New York Times, CNN, Reuters, and Medium. However, there have been concerns over how to block out Google. After all, websites can't close off Google's crawlers completely, or else they won't get indexed in search. This has led some sites, such as The New York Times, to block Google through legal means instead, by updating their terms of service to ban companies from using their content to train AI.

Continued here:

Google adds a switch for publishers to opt out of becoming AI training data - The Verge

Google will let publishers hide their content from its insatiable AI – Engadget

Google has announced a new control in its robots.txt indexing file that would let publishers decide whether their content will "help improve Bard and Vertex AI generative APIs, including future generations of models that power those products." The control is a new user agent token called Google-Extended; publishers can add rules for it to their site's robots.txt file to tell Google not to use their content for those two APIs. In its announcement, the company's vice president of "Trust" Danielle Romain said it's "heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases."

Romain added that Google-Extended "is an important step in providing transparency and control that we believe all providers of AI models should make available." As generative AI chatbots grow in prevalence and become more deeply integrated into search results, the way content is digested by things like Bard and Bing AI has been of concern to publishers.

While those systems may cite their sources, they do aggregate information that originates from different websites and present it to the users within the conversation. This might drastically reduce the amount of traffic going to individual outlets, which would then significantly impact things like ad revenue and entire business models.

Google said that when it comes to training AI models, the opt-outs will apply to the next generation of models for Bard and Vertex AI. Publishers looking to keep their content out of experiences like Search Generative Experience (SGE) should continue to use existing controls, such as rules targeting the Googlebot user agent in robots.txt and the noindex meta tag on individual pages.
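As a rough check on how such rules resolve, a publisher could parse their own robots.txt programmatically. The following is a minimal sketch using Python's standard-library urllib.robotparser; the example.com URL and the crawler tokens listed are illustrative placeholders, not anything prescribed by Google:

    # Minimal sketch: check which crawler tokens a robots.txt file blocks.
    # "https://example.com" is a placeholder for the publisher's own site.
    from urllib.robotparser import RobotFileParser

    robots = RobotFileParser()
    robots.set_url("https://example.com/robots.txt")
    robots.read()  # fetch and parse the live robots.txt

    for agent in ("Googlebot", "Google-Extended", "GPTBot"):
        allowed = robots.can_fetch(agent, "https://example.com/")
        print(f"{agent}: {'allowed' if allowed else 'blocked'}")

Running something like this against a live site shows at a glance whether Search crawling stays open while the AI-training tokens are disallowed.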

Romain points out that "as AI applications expand, web publishers will face the increasing complexity of managing different uses at scale." This year has seen an explosion in the development of tools based on generative AI, and with search being such a huge way people discover content, the state of the internet looks set to undergo a huge shift. Google's addition of this control is not only timely, but indicates it's thinking about the way its products will impact the web.

Update, September 28 at 5:36pm ET: This article was updated to add more information about how publishers can keep their content out of Google's search and AI results and training.

Originally posted here:

Google will let publishers hide their content from its insatiable AI - Engadget

‘The Creator’ review: This drama about AI fails to take on a life of its … – NPR

Madeleine Yuna Voyles plays Alphie, a pensive young robot child in The Creator. (Image: 20th Century Studios)

The use of AI in Hollywood has been one of the most contentious issues in the writers and actors strikes, and the industry's anxiety about the subject isn't going away anytime soon. Some of that anxiety has already started to register on-screen. A mysterious robotic entity was the big villain in the most recent Mission: Impossible film, and AI is also central to the ambitious but muddled new science-fiction drama The Creator.

Set decades into the future, the movie begins with a prologue charting the rise of artificial intelligence. Here it's represented as a race of humanoid robots that in time become powerful enough to detonate a nuclear weapon and wipe out the entire city of Los Angeles.

As a longtime LA resident who's seen his city destroyed in countless films before this one, I couldn't help but watch this latest cataclysm with a chuckle and a shrug. It's just part of the setup in a story that patches together numerous ideas from earlier, better movies. After the destruction of LA, we learn, the U.S. declared war on AI and hunted the robots to near-extinction; the few that still remain are hiding out in what is now known as New Asia.

The director Gareth Edwards, who wrote the script with Chris Weitz, has cited Blade Runner and Apocalypse Now as major influences. And indeed, there's something queasy and heavy-handed about the way Edwards evokes the Vietnam War with images of American soldiers terrorizing the poor Asian villagers whom they suspect of sheltering robots.

John David Washington plays Joshua Taylor, a world-weary ex-special-forces operative. (Image: 20th Century Studios)

The protagonist is a world-weary ex-special-forces operative named Joshua Taylor, played by John David Washington. He's reluctantly joined the mission to help destroy an AI superweapon said to be capable of wiping out humanity for good. Amid the battle that ensues, Joshua manages to track down the weapon, which, in a twist that echoes earlier sci-fi classics like Akira and A.I., turns out to be a pensive young robot child, played by the excellent newcomer Madeleine Yuna Voyles.

Joshua's superior, played by Allison Janney, tells him to kill the robot child, but he doesn't. Instead, he goes rogue and on the run with the child, whom he calls Alpha, or Alphie. Washington doesn't have much range or screen presence, but he and Voyles do generate enough chemistry to make you forget you're watching yet another man tag-teaming with a young girl, a trope familiar from movies as different as Paper Moon and Léon: The Professional.

Joshua's betrayal is partly motivated by his grief over his long-lost love, a human woman named Maya who allied herself with the robots; she's played by an underused Gemma Chan. One of the more bothersome aspects of The Creator is the way it reflexively equates Asians with advanced technology; it's the latest troubling example of "techno-orientalism," a cultural concept that has spurred a million Blade Runner term papers.

In recycling so many spare parts, Edwards, best known for directing the Star Wars prequel Rogue One, is clearly trying to tap into our memories of great Hollywood spectacles past. To his credit, he wants to give us the kind of philosophically weighty, visually immersive science-fiction blockbuster that the studios rarely attempt anymore. The most impressive aspect of The Creator is its world building; much of the movie was shot on location in different Asian countries, and its mix of real places and futuristic design elements feels more plausible and grounded than it would have if it had been rendered exclusively in CGI.

But even the most strikingly beautiful images, like the one of high-tech laser beams shimmering over a beach at sunset, are tethered to a story and characters that never take on a life of their own. Not even the great Ken Watanabe can breathe much life into his role as a stern robo-warrior who does his part to help Joshua and Alphie on their journey.

In the end, Edwards mounts a sincere but soggy plea for human-robot harmony, arguing that AI isn't quite the malicious threat it might seem. That's a sweet enough sentiment, though it's also one of many reasons I left The Creator asking myself: Did an AI write this?

Here is the original post:

'The Creator' review: This drama about AI fails to take on a life of its ... - NPR

Hollywood Writers Reached an AI Deal That Will Rewrite History – WIRED

The deal is not without its quandaries. Enforcement is an overriding one, says Daniel Gervais, a professor of intellectual property and AI law at Vanderbilt University in Nashville, Tennessee. Figuring that out will likely set another precedent. Gervais agrees that this deal gives writers some leverage with studios, but it might not be able to stop an AI company, which may or may not be based in the US, from scraping their work. August concurs, saying the WGA needs to be honest about the limitations of the contract. "We made a deal with our employers, the studios," he says. "We have no contractual relationship with the major AI companies. So this is not the end of the fight."

There are also questions around who carries the burden to reveal when AI has contributed some part of a script. Studios could argue that they took a script from one writer and gave it to another for rewrites without knowledge that the text had AI-generated components. As a lawyer, I'm thinking, OK, so what does that mean? How do you prove that? What's the burden? And how realistic is that?

The future implicitly hinted at by the terms of the WGA deal is one in which machines and humans work together. From an artist's perspective, the agreement does not villainize AI, instead leaving the door open for continued experimentation, whether that be generating amusing names for a Tolkienesque satire or serious collaboration with more sophisticated versions of the tools in the future. This open-minded approach contrasts with some of the more hysterical reactions to these technologies, hysteria that's now starting to see some pushback.

Outside Hollywood, the agreement sets a precedent for workers in many fields, namely that they can and should fight to control the introduction of disruptive technologies. What, if any, precedents are set may become obvious as soon as talks resume between the AMPTP and the actors' union, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA). It's unclear just how soon those negotiations will pick back up, but it's highly likely that the guild will look to the WGA's contract as a lodestar.

Still, the contract is only a determined start, says actor and director Alex Winter. He fears it won't offer expansive enough protection. Studios are putting a lot of resources into new uses for AI, he says, and they don't show signs of easing up. The writers' guild deal puts a lot of trust in the studios to do the right thing, and his hope is that the SAG contract, once it's complete, will offer more protections. "Similar to how our government has been allowing Big Tech to police itself with AI," Winter says, "I don't see that working with Big Tech, and I don't see this working in the entertainment industry either, unfortunately."

Actors have stronger protections in the form of the right of publicity, also known as name, image, and likeness rights, yet intense concerns remain about synthetic actors being built from the material of actors' past performances. (As of this writing, SAG-AFTRA had not responded to a request for comment.) It will also be interesting to see if any of the issues that came up during the WGA's negotiations will trickle into ongoing unionization efforts at video game studios or other tech firms. On Monday, SAG-AFTRA members authorized a strike for actors who work on video games; once again, AI was one of the issues raised.

When it comes to AI, argues Simon Johnson, an economist at MIT, the WGA has burst out in front of other unions, and everyone should take note. As he and several coauthors laid out in a recent policy memo on pro-worker AI, the history of automation teaches that workers cannot wait until management deploys these technologies; if they do, they will be replaced. (See also: the Luddites.)

"We think this is exactly the right way to think about it, which is that you don't want to say no to AI," he says. "You want to say the AI can be controlled and used as much as possible by workers, by the people being employed. In order to make that feasible, you're going to have to put some constraints on what employers can do with it. I think the writers are actually, in this regard, in a pretty strong position compared to other workers in the American economy."

Excerpt from:

Hollywood Writers Reached an AI Deal That Will Rewrite History - WIRED