Archive for the ‘Ai’ Category

Report Uncovers Thousands of AI-Generated Child Abuse Images … – PetaPixel

A new report has revealed how child safety investigators are struggling to stop thousands of disturbing artificial intelligence (AI)-generated child sex images being created and shared across the web.

According to a report published by The Washington Post on Monday, the rise of AI technology has sparked a dangerous explosion of lifelike images depicting child sexual exploitation, causing concern among child safety experts.

The report notes that thousands of AI-generated child-sex images have been found on forums across the dark web. Users are also sharing detailed instructions for how other pedophiles can make their own realistic AI images of children performing sex acts, commonly known as child pornography.

"Children's images, including the content of known victims, are being repurposed for this really evil output," Rebecca Portnoff, the director of data science at Thorn, a nonprofit child-safety group, tells The Washington Post.

Since last fall, Thorn has seen month-over-month growth in the prevalence of AI-generated images on the dark web.

The explosion of such images has the worrying potential to undermine efforts to find victims and combat real abuse, as law enforcement will have to go to extra lengths to determine whether a photograph is real or fake.

According to the publication, AI-generated child sex images could confound the central tracking system built to block such material from the web, because that system is designed only to catch known images of abuse rather than to detect newly generated ones.
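
To see why, it helps to look at how such tracking systems work at a conceptual level: they compare a fingerprint (hash) of each uploaded image against a database of fingerprints from previously identified material. The sketch below is a minimal illustration of that idea only; the hash values are placeholders, and a plain cryptographic hash stands in for the perceptual hashes (such as PhotoDNA) that real deployments use.

```python
import hashlib

# Hypothetical database of fingerprints for previously identified abuse images.
# Real clearinghouse databases hold millions of entries; these are placeholders.
known_image_hashes = {
    "3f8a1c9d0e",  # placeholder fingerprint, not a real value
}

def fingerprint(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; plain SHA-256 is used here only for illustration."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_abuse_image(image_bytes: bytes) -> bool:
    # A match is possible only if this exact (or already catalogued) image is in the database.
    return fingerprint(image_bytes) in known_image_hashes

# A newly AI-generated image yields a fingerprint that is absent from the database,
# so a purely hash-based filter lets it through - the gap the report describes.
```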

Law enforcement officials, who work to identify victimized children, may now be forced to spend time determining whether the images are real or AI-generated.

AI tools can also re-victimize individuals whose images from past child sex abuse are used to train models to generate fake images.

"Victim identification is already a needle-in-a-haystack problem, where law enforcement is trying to find a child in harm's way," Portnoff explains.

"The ease of using these tools is a significant shift, as well as the realism. It just makes everything more of a challenge."

The images have also fueled a debate on whether they even violate federal child-protection laws, as the pictures often depict children who do not actually exist.

According to The Washington Post, Justice Department officials who combat child exploitation say such images are still illegal even if the child depicted is AI-generated.

However, there is no previous case in the U.S. in which a suspect has been charged with creating deepfake child pornography.

In April, a man in Quebec, Canada, was sentenced to three years in prison for using AI to generate images of child pornography, the first ruling of its kind in the country.

Image credits: Header photo licensed via Depositphotos.

Marvel Secret Invasion’s AI Credits Are A Soulless Insult To Artists – TheGamer

There's something a little on the nose about making the credits to a TV show called Secret Invasion with AI. For a while now, AI has been creeping into our artistic endeavours, invading them. This path has been walked by NFTs before it, a thing that looks like art and walks like art, but is in fact a zebra. Where the cash scam of NFTs could be spotted from a mile off and was difficult to understand, AI is a much easier sell to the general public. That makes it far more dangerous.

The way AI has been pitched to the public is that you type in some words, and a picture comes out. It's how many people wish art worked - you tell an artist exactly what to draw, and they soullessly recreate it for free. It's an instant gratification machine - you type in some prompts, and you get a funny picture. But there's more to it than that, as Marvel's latest disrespect for artists demonstrates.

Most people who use AI were never going to pay an artist anyway. They aren't using it for professional needs, or for artwork they otherwise would have commissioned. It's just a way to while away the time. Generally I'd advise against it, as feeding the machine only helps it grow, and tells businesses that we prefer AI art rendered in seconds with zero thought process over something deliberately created. But even if we say casual use is harmless, Marvel is not using it for casual use.

On the face of it, Marvel is using it for the opening credits, which it otherwise would have used real art for. But that's not quite the whole story. It's the opening credits of a show it expects to do only middling numbers, which has had limited marketing, and which only serves to obligate viewers to watch yet another TV show ahead of The Marvels in November. This is a show designed to go slightly under the radar, and so with it Marvel can normalise AI usage in a show with limited stakes. If it gets away with it here, we will see it used in bigger projects too.

We've seen CGI get significantly worse recently, and that's because of increased workload. With only Warner Bros., Disney, and Sony making the big superhero movies that require VFX work, they can apply pressure to contractors. Don't hit the deadline? That's one third of your potential work gone, forever. VFX studios push themselves harder and harder so the big studios can pay them less and make more money. The only alternative is to make no money at all. Enter AI.

While you can save money by churning out shows and movies that rely entirely on IP, cutting corners by cheaping out on VFX, even cheaping out can be expensive. More expensive than free, certainly. Using AI for the credits doesn't mean cutting a corner fine, it means driving right on through it. All of the artists, past and present, who have been crucial to Marvel and Disney's rise (two corporations more than any other built on the backs of artists) are having their legacy thrown out. If the Walt Disney Corporation doesn't value people who can draw, what business on Earth will?

I've always been reluctant to use the attack line that AI art looks bad, because eventually it'll look good. Or at least, something that will pass for good. The core problem is that it's completely unoriginal, and there is zero thought behind what it does. Think of something like Across the Spider-Verse, where every single frame includes a deliberate choice. Then look at Marvel's Secret Invasion, where each second of the opening credits has an ugly fluttering of colours. It lingers on some images as if to imbue them with importance, but the joy of speculation is robbed from us when we know it was created by robots who know nothing.

Director and executive producer Ali Selim says it's deliberate, and meant to mimic Skrull artwork: "When we reached out to the AI vendors, that was part of it, it just came right out of the shape-shifting, Skrull world identity, you know? Who did this? Who is this?" First off, "AI vendors" has replaced "content" as my least favourite phrase used to bastardise art. Taking data and prompts from other people, repackaging it as cheap sludge, and selling it on.

Secondly, what are the chances that Skrull artwork resembles the moronic ugly shit tech bros try to claim is the next big art movement? I'd say pretty damn low, unless I were making a Marvel show while being asked to save money and road-test AI, in which case I'd come up with a similar bullshit lie. The irony is that a real artist could have come up with a fresh art style that captured the look of the Skrull people, rather than the same slop we've seen a million times before.

I'm not an absolutist when it comes to AI. In video games, AI is needed so the enemies know when to shoot at you, and the Spider-Verse artists used it as a tool to handle the underlying complications of their robust animation style. But the line in the sand has always been using AI for creative purposes, and the MCU has just flagrantly crossed it. I doubt it will be the last time.

It wouldn't surprise me if the whole thing turns out to be stolen anyway. That's literally all AI art does, and if an artist spots something that resembles their own work in the credits, Marvel could be in trouble. Given the bridges Marvel has repeatedly burned with artists, there will be a lot of solidarity around. It's a cold, lonely future for Marvel, and before long that will go for movie theatres too. The general public may not care as much about issues like this, but it cares about shit. And this is shit.

The team that worked on the original Iron Man movie created a far more appealing visual palette in a cave with a box of scraps. Every project since Endgame has increasingly proven that these days, Marvel is not Tony Stark.

G7 data protection authorities point to key concerns on generative AI – EURACTIV

The privacy watchdogs of the G7 countries are set to detail a common vision of the data protection challenges of generative AI models like ChatGPT, according to a draft statement seen by EURACTIV.

The data protection and privacy authorities of the United States, France, Germany, Italy, the United Kingdom, Canada and Japan met in Tokyo on Tuesday and Wednesday (20-21 June) for a G7 roundtable to discuss data free flows, enforcement cooperation and emerging technologies.

The risks of generative AI models from the privacy watchdogs' perspective, related to their rapid proliferation in various contexts and domains, have taken centre stage, the draft statement indicates.

"We recognize that there are growing concerns that generative AI may present risks and potential harms to privacy, data protection, and other fundamental human rights if not properly developed and regulated," the statement reads.

Generative AI is a sophisticated technology capable of producing human-like text, image or audiovisual content based on a user's input. Since the meteoric rise of ChatGPT, the emerging technology has brought great excitement but also massive anxiety over its possible misuse.

In April, the G7 digital ministers gathered and set out the so-called Hiroshima Process to align on some of these topics, such as governance, safeguarding intellectual property rights, promoting transparency, preventing disinformation and promoting responsible use of the technology.

The Hiroshima Process is due to drive a voluntary Code of Conduct on generative AI that the European Commission is developing with the United States and other G7 partners.

Meanwhile, the EU is close to adopting the world's first comprehensive legislation on Artificial Intelligence, which is set to include some provisions specific to generative AI.

Still, the privacy regulators point to a series of risks that generative AI tools entail from a data protection standpoint.

The starting point is the legal authority AI developers have for processing personal information, particularly that of minors, in the datasets used to train the AI models, how users' interactions are fed into the tools, and what information is then produced as output.

The statement also calls for security safeguards to prevent generative AI models from being used to extract or reproduce personal information, and to prevent their privacy safeguards from being circumvented with carefully crafted prompts.

The authorities also call on AI developers to ensure that personal information used by generative AI tools is kept accurate, complete and up to date, and free from discriminatory, unlawful or otherwise unjustifiable effects.

In addition, the G7 regulators point to transparency measures to promote openness and explainability in the operation of generative AI tools, especially in cases where such tools are used to make or assist in decision-making about individuals.

The provision of technical documentation across the development lifecycle, measures to ensure an appropriate level of responsibility among actors in the AI supply chain, and the principle of limiting the collection of personal data to what is strictly necessary are also referenced.

Finally, the statement urges generative AI providers to put in place technical and organisational measures to ensure individuals affected by and interacting with these systems can still exercise their rights, such as access, rectification, and erasure of personal information, as well as the possibility to refuse to be subject solely to automated decisions that have significant effects.

The declaration stressed the case of Italy, where the data protection authority temporarily suspended ChatGPT due to possible privacy violations, but the service was eventually reinstated following improvements from OpenAI.

The authorities mention several ongoing actions, including investigating generative AI models under their respective legislation, providing guidance to AI developers on privacy compliance, and supporting innovative projects such as regulatory sandboxes.

Fostering cooperation, particularly by establishing a dedicated task force, is also referenced; EU authorities have already set one up to streamline enforcement on ChatGPT following the Italian regulator's decision addressed to the world's most famous chatbot.

However, according to a source informed on the matter, the work of this task force has been progressing very slowly, mostly due to the administrative process and coordination, and the European regulators are now expecting OpenAI to provide clarifications by the end of the summer.

"Developers and providers should embed privacy in the design, conception, operation, and management of new products and services that use generative AI technologies, based on the concept of Privacy by Design, and document their choices and analyses in a privacy impact assessment," the statement continues.

Moreover, AI developers are urged to enable downstream economic operators that deploy or adapt the model to comply with data protection obligations.

Further discussions on how to address the privacy challenges of generative AI will take place in an emerging technology working group of the G7 data protection authorities.

[Edited by Nathalie Weatherald]

Microsoft – AI Will Help Drive $100 Billion In Revenue By 2027 … – Seeking Alpha

Given the run-up in AI-related valuations, separating the real deal from companies that are merely AI wannabes is critical. The first things to consider are whether a company will see revenue from AI and, if so, how soon.

Pictured above: Some AI-related cloud stocks have surged in valuation by 4x in a matter of a few months (YCharts)

Although many AI stocks will not report enough AI revenue to survive the fierce, competitive battle the tech industry faces due to AI/ML, Wall Street investors can reasonably assume that Microsoft will be a leader in this space. Microsoft's AI platform is rather insulated from widespread competition outside of Google Cloud and AWS, and the company's software assets, such as Office 365, are particularly well suited for AI advancements.

In April 2022, our firm re-entered Microsoft with a note to our premium research members about the company's dominance in AI, before ChatGPT was released. We repeated this in October 2022 when we called Microsoft a "sleeping AI giant":

"Microsoft is a sleeping AI/ML giant. Google gets a lot of attention here, yet I think they are equally prepared to serve this market [...] To help Microsoft rival Google, the company has been investing in OpenAI, which is a large R&D operation that is breaking ground with AI algorithms that help computers to create images from text, reduce the amount of code that developers need to write, and to also help robotics think and act like humans, among other things [...] DALL-E is a '12-billion parameter' version of GPT-3 that creates images from text. The partnership with Microsoft will bring DALL-E to apps and services, including the Designer app and Image Creator tool in Bing and Microsoft Edge - this was announced earlier this month at Ignite."

Analysts have been raising their price targets to the high $300s, with an Evercore analyst raising his price target to $400, stating:

The infusion of AI across Microsoft's product portfolio represents a potential $100 billion incremental revenue uplift in 2027.

To provide some context, Azure and Office 365 helped Microsoft add almost $100 billion in revenue over the past four years, growing from $110 billion to $198 billion. The stock appreciated 180% over that time frame. At the time, the market did not comprehend the revenue potential of these two businesses. We believe history will repeat itself and that the market is underestimating the impact AI will have on MSFT's future sales growth across its business lines.

However, valuation poses a risk to Microsoft's current stock price, and as outlined below, our firm prefers to wait before we add again to our position.

The OpenAI opportunity extends beyond Microsoft's installed base, which is an important change to Microsoft's market position. This is because OpenAI APIs run on Azure even if the customer isn't directly an Azure customer. Management commented on this in the earnings call:

"Second, even Azure OpenAI API customers are all new, and the workload conversations, whether it's B2C conversations in financial services or drug discovery on another side, these are all new workloads that we really were not in the game in the past, whereas we now are."

One market that gets overlooked in terms of its AI impact is the federal government, which is currently undergoing a major shift into the cloud. In a blog post, company CTO Bill Chappell wrote:

Microsoft continues to develop and advance cloud services to meet the full spectrum of government needs while complying with United States regulatory standards for classification and security. The latest of these tools, generative AI capabilities through Microsoft Azure OpenAI Service, can help government agencies improve efficiency, enhance productivity, and unlock new insights from their data. Many agencies require a higher level of security given the sensitivity of government data. Microsoft Azure Government provides the stringent security and compliance standards they need to meet government requirements for sensitive data.

Many years ago, I wrote about the Pentagon contract and why Microsoft would be a front-runner at a time when it was widely reported that AWS was the sole Big 3 contender for the contract. That analysis pointed to Microsoft's long-standing history of being favored by government entities.

The company introduced Microsoft 365 Copilot last month. It is a productivity tool that combines large language models (LLMs) with the data in Microsoft Graph and Microsoft 365 apps. In Word, Copilot can give users a first draft, saving time on sourcing, writing, and editing content. Similarly, Copilot in PowerPoint helps create presentations based on previous content, while Copilot in Excel can analyze trends in data, create charts, and help inform decisions.
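
For a sense of the mechanics, the sketch below shows the general "LLM plus organizational data" pattern described above: pull a little context from Microsoft Graph, then ask a language model for a grounded first draft. It is a minimal illustration only; the Graph endpoint follows Microsoft's public documentation, while the llm_client object, the draft_with_context helper, and the model name are assumptions of this sketch, not the actual Copilot implementation.

```python
import requests

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def fetch_recent_documents(access_token: str) -> list[str]:
    """Pull a few recent file names from Microsoft Graph to use as grounding context."""
    resp = requests.get(
        f"{GRAPH_BASE}/me/drive/recent",
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [item.get("name", "") for item in resp.json().get("value", [])][:5]

def draft_with_context(llm_client, user_request: str, context_docs: list[str]) -> str:
    """Ask an LLM for a first draft, grounded in titles of the user's own documents."""
    prompt = (
        "You are a drafting assistant. Use the following document titles as context:\n"
        + "\n".join(f"- {doc}" for doc in context_docs)
        + f"\n\nTask: {user_request}"
    )
    # llm_client is assumed to expose an OpenAI-style chat-completions interface.
    response = llm_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```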

Having a suite of productivity products that can see an immediate impact from AI-related R&D is a large part of the $100 billion that Microsoft can potentially add to the top line by 2027.

Another important driver is Microsoft's close partnerships with many of the telecom operators and data centers around the world, which will further cement its strong position in edge computing.

In February, Microsoft announced it was previewing two AI-powered services designed to manage telecom networks. Jason Zander, executive vice president of strategic missions and technologies at Microsoft, said:

What we're doing is taking our native cloud work and making it specific to this telecom operator network space. I think a really great example of that is all the AI ops work that we are introducing into the system.

In the most recent quarter, Microsoft announced that the new AI-powered Bing and Edge have seen a positive response, and the company crossed 100 million daily active users of Bing. This is how Microsoft described the early impact of ChatGPT:

Of the millions of active users of the new Bing preview, it's great to see that roughly one third are new to Bing. We see this appeal of the new Bing as a validation of our view that search is due for a reinvention and of the unique value proposition of combining Search + Answers + Chat + Creation in one experience.

Notably, Microsoft Bing has roughly 3% of the search market, and for every additional 1% of share, Microsoft stands to make an additional $2 billion in revenue.
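
As a back-of-the-envelope illustration of that claim, the short sketch below restates the article's own figures; the target shares are hypothetical scenarios, not company guidance.

```python
# Back-of-the-envelope math on the search share claim above.
REVENUE_PER_SHARE_POINT_B = 2.0  # USD billions per additional point of share, per the article
current_share = 3.0              # percent, per the article

for target_share in (5.0, 10.0, 15.0):  # hypothetical scenarios
    incremental = (target_share - current_share) * REVENUE_PER_SHARE_POINT_B
    print(f"{current_share:.0f}% -> {target_share:.0f}% share: ~${incremental:.0f}B incremental revenue")
```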

Microsoft's cybersecurity segment reports more than $15 billion in revenue. The company was also the only Big 3 cloud vendor to build not only a multi-cloud product but also multi-cloud security. Today, Microsoft's cybersecurity sales dwarf the combined revenue of many best-of-breed cybersecurity products.

In the spring of 2022, I wrote about how reducing cloud costs was going to be a key trend in 2022 and beyond. We believed that Microsoft was uniquely positioned to benefit from this trend because it aggregates cloud services to help drive down costs. This is especially attractive for the Fortune 500, whereas startups, SMBs and mid-sized enterprises are likely to seek out and manage a larger portfolio of cloud services from various vendors.

Among the Big 3, Microsoft dominates the Fortune 500, with 95% of those companies running on Azure. Retaining the Fortune 500 through the migration to the cloud was accomplished with hybrid computing, where Microsoft was first to market in serving a mix of on-premises, private and public clouds for its large enterprise customers. As the leader in on-premises systems, Microsoft was perfectly positioned to win with hybrid architectures. The company took this a step further and undercut other services on price across its suite of software and platforms to win aggregate, long-term contracts.

Microsoft's business model is low risk compared to many other AI stocks. However, there is certainly risk in the company's valuation. The risk is compounded when market exuberance front-runs a trend and overshoots the mark of what a company can realistically report in the coming years. Microsoft's valuation is high relative to its 5-year median: prior to the current run-up, the stock's 5-year median price-to-sales ratio was 9x, and it currently trades at 12x. Similarly, the 5-year median price-to-earnings ratio at the start of the year was 25, and the stock currently trades at 36.

AI will be a constantly evolving space, and while many investors are rushing in at overstretched valuations, we prefer to be patient. Over time, we agree with the analyst that Microsoft's competitive moat has positioned it to monetize the AI opportunity across its business lines, much as it did with Azure and Office 365, so that its revenue will increase by $100B in the medium term.

Microsoft is a real-deal AI stock, and the increase in valuation has clearly factored in some of this. However, our updated sum-of-the-parts analysis indicates there is still upside; our current bull-case price target is $440, and as the story unfolds over the next few quarters, we see additional upside. In light of the strong rally from the January 2023 lows, we believe incorporating technical analysis to identify lower entry levels is important; in other words, the risk that the stock sells off is much higher than usual right now. Sure, the stock price could continue to climb, but the world's best investors favor being patient and buying when the market is in a state of fear rather than a state of greed. When we do add to our key positions, we issue real-time trade alerts.

Christopher Nolan Explains How AI Could Actually Improve … – MovieWeb

The arrival of AI in the film industry has been a topic of debate in recent months, and Christopher Nolan has joined the discussion. From ChatGPT being used to write an episode of South Park, to artificial intelligence being applied to improve dialogue on Amazon Prime Video, to studios considering finalizing scripts with these new tools amid the writers' strike, AI is here to stay, and it is only a matter of time before it becomes an everyday tool.

After popular directors like Joe Russo and acclaimed actors like Tom Hanks expressed their opinions on how AI will shape the future of filmmaking, the man behind Oppenheimer shares his view on the matter. He does not seem especially concerned about the arrival of new technologies in the industry; rather, he is ready to use them to improve his work (via Wired):

The whole machine learning as applied to deepfake technology, that's an extraordinary step forward in visual effects and in what you could do with audio. There will be wonderful things that will come out, longer term, in terms of environments, in terms of building a doorway or a window, in terms of pooling the massive data of what things look like, and how light reacts to materials. Those things are going to be enormously powerful tools. I'm, you know, very much the old analog fusty filmmaker. I shoot on film. And I try to give the actors a complete reality around it. My position on technology as far as it relates to my work is that I want to use technology for what it's best for. Like if we do a stunt, a hazardous stunt. You could do it with much more visible wires, and then you just paint out the wires. Things like that.

As Nolan implies, it's not about the AI itself, but about how people use it. These technologies can definitely be a helping hand when it comes to visual effects, but like any tool, they can get out of hand too. Oppenheimer's story is in fact a clear example of how technology can be either an improvement or a lethal weapon.

An AI-dominated future may not be devastating for the director, but it looks like his new movie is. During his interview with Wired, Nolan confessed that those who have already seen Oppenheimer have had a shocking reaction to the film:

Some people leave the movie absolutely devastated. They can't speak. I mean, there's an element of fear that's there in the history and there in the underpinnings. But the love of the characters, the love of the relationships, is as strong as I've ever done. Oppenheimer's story is all impossible questions. Impossible ethical dilemmas, paradox. There are no easy answers in his story. There are just difficult questions, and that's what makes the story so compelling. I think we were able to find a lot of things to be optimistic about in the film, genuinely, but there's this sort of overriding bigger question that hangs over it. It felt essential that there be questions at the end that you leave rattling in people's brains, and prompting discussion.

Oppenheimer follows the story of one of the most complex and controversial personalities of the 20th century, J. Robert Oppenheimer, the father of the atomic bomb. The movie stars Cillian Murphy, Emily Blunt, Robert Downey Jr., Matt Damon, Rami Malek, Tom Conti, and many other big names.
