Archive for the ‘Ai’ Category

Bloomsbury admits using AI-generated artwork for Sarah J Maas novel – The Guardian

Books

Publisher says cover of House of Earth and Blood was prepared by in-house designers unaware the stock image chosen was not human-made

Fri 19 May 2023 10.30 EDT

Publisher Bloomsbury has said it was unaware an image it used on the cover of a book by fantasy author Sarah J Maas was generated by artificial intelligence.

The paperback of Maas's House of Earth and Blood features a drawing of a wolf, which Bloomsbury had credited to Adobe Stock, a service that provides royalty-free images to subscribers.

But The Verge reported that the illustration of the wolf matches one created by a user on Adobe Stock called Aperture Vintage, who has marked the image as AI-generated.

A number of illustrators and fans have criticised the cover for using AI, but Bloomsbury has said it was unaware of the image's origin.

"Bloomsbury's in-house design team created the UK paperback cover of House of Earth and Blood, and as part of this process we incorporated an image from a photo library that we were unaware was AI when we licensed it," said Bloomsbury in a statement. "The final cover was fully designed by our in-house team."

This is not the first time that a book cover from a major publishing house has used AI. In 2022, sci-fi imprint Tor discovered that a cover it had created had used a licensed image created by AI, but decided to go ahead anyway due to production constraints.

And this month Bradford literature festival apologised for the hurt caused after artists criticised it for using AI-generated images on promotional material.

Meanwhile, Clarkesworld, which publishes science fiction short stories, was forced to close itself to submissions after a deluge of AI-generated entries.

The publishing industry is more broadly grappling with the use and role of AI. It has led to the Society of Authors (SoA) issuing a paper on artificial intelligence, in which it said that while there are potential benefits of machine learning, there are risks that need to be assessed, and safeguards need to be put in place to ensure that the creative industries will continue to thrive.

The SoA has advised that consent should be sought from creators before their work is used by an AI system, and that developers should be required to publish the data sources they have used to train their AI systems.

The guidance addresses concerns similar to those raised by illustrators and artists who spoke to the Guardian earlier this year about the way in which AI image generators use databases of already existing art and text without the creators' permission.


Continue reading here:

Bloomsbury admits using AI-generated artwork for Sarah J Maas novel - The Guardian

Azeem on AI: Where Will the Jobs Come from After AI? – HBR.org Daily

AZEEM AZHAR: Hi there, I'm Azeem Azhar. For the past decade, I've studied exponential technologies: their emergence, rapid uptake, and the opportunities they create. I wrote a book about this in 2021. It's called The Exponential Age. Even with my expertise, I sometimes find it challenging to keep up with the fast pace of change in the field of artificial intelligence, and that's why I'm excited to share a series of weekly insights with you, where we can delve into some of the most intriguing questions about AI. In today's reflection, I look at an insightful research note from Goldman Sachs, titled "The Potentially Large Effects of Artificial Intelligence on Economic Growth," in which the authors explore the labor market's future. The study posits that global productivity could see an impressive uptick, ultimately boosting global GDP by 7%. And I wonder, where will the jobs come from after AI? Let's dig in.

The headline finding was that productivity globally could eventually rise, and you could see a rise in global GDP of 7%, which is no slouch. There were also models showing that US economic growth could jump from that anemic one to one-and-a-half percent level up to two-and-a-half or three percent, the sorts of levels that were enjoyed during that halcyon period of the 1950s, which is all pretty exciting. But what I thought was quite interesting was how the researchers dug into the impact on the workforce from all of these productivity changes. They came up with some quite interesting findings. I suspect if you've been reading the newsletter and thinking about these things, you wouldn't be too surprised by them. But let's just go through them, because they're numerical and they're quite useful.

So, they found that about two-thirds of US occupations were exposed to some degree of automation by AI, and a significant share of those had quite a substantial part of their workload that could be replaced. Running from healthcare support at the bottom end up through healthcare practitioners, computer and IT, sales and management, finance, legal and office admin, you saw that between 25% of tasks on average and 46% of tasks, in the case of office admin, could be automated, with a much larger impact in general in developed markets than in emerging markets. It's pretty interesting because the researchers suggest, and I think this is a reasonable assertion, that if a job found about 50% or more of its tasks being automated, it would lend itself to being replaced, whereas jobs that might have 10 to about 49% of their tasks automated lend themselves to using AI as a complement to the human worker.
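
A minimal sketch of that classification rule in Python, assuming illustrative task shares: only the 46% office-admin figure comes from the discussion above, and the other occupations and numbers are placeholders, not figures from the note.

```python
def classify(automatable_share: float) -> str:
    """Apply the rough thresholds described in the Goldman Sachs note."""
    if automatable_share >= 0.50:
        return "lends itself to replacement"
    if automatable_share >= 0.10:
        return "AI as a complement to the worker"
    return "largely unaffected"

# Illustrative inputs; only office admin's 46% is cited above.
occupations = {
    "office admin": 0.46,
    "legal (placeholder)": 0.30,
    "healthcare support (placeholder)": 0.25,
    "manual trade (placeholder)": 0.05,
}

for name, share in occupations.items():
    print(f"{name}: {share:.0%} of tasks automatable -> {classify(share)}")
```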

I've looked at this question over the last several years, and you've probably read a number of those pieces, and the question is, what might that actually mean as it plays out? What we found historically is that when new technologies come around, the firms that make use of them tend to be able to grow their headcount, they grow their employment levels, and it's the firms that don't use those technologies that tend to lose out. I talk about this in my book. I have the parable of the two men, Indrek and Fred, who are walking in the Canadian wilds and stop to take a break. They take their shoes off, a grizzly bear approaches them, and one of them pops his shoes back on. The other says, "Why are you putting your shoes on? You'll never outrun the grizzly bear." And his friend says, "I don't need to outrun the grizzly bear, I just need to outrun you."

And that, of course, is a competitive dynamic. The firms that are well managed, that can manage these new technologies, that make the investment, will perform better, as better-performing firms always have, and they'll grow, and in a competitive space that will come at a cost to the underperforming firms. So that should create the kind of incentive for companies to invest in these technologies. But they won't do so evenly. So we will see some winners and we'll see some losers.

We'd also expect to see widespread downward wage pressure, because these jobs can essentially be done more efficiently, so a smaller number of people could potentially be required. The other thing to wonder is the extent to which this would necessarily lead to job cuts. And you could say, "Look, firms won't do this. These are well-paid workers, 80k, 100k, 150k or more a year, and they will be protected in some sense for a certain period of time." But even the firms you might think most protective of their workforces have gone into cost-cutting mode in the last few months, like McKinsey and Google. And it's hard to imagine, economy-wide, that in this type of economy the opportunity to streamline and be efficient won't be quite tempting for management.

So, the question is, where might those cuts fall in the firm? I have a hunch, and it's no more than that, that if you are a manager in a largish or medium-sized firm, or even at the bigger end of the small firms, it will be quite appealing to look at the middle of your employment base. Because what you have there are people who are quite well paid but are not your top leadership. And the temptation will be to go in and thin those ranks: not so far that you deplete all the tacit knowledge and all the socialized information in the firm, the stuff that isn't codified, but enough to cut costs, on the basis that AI-enabled juniors working with a small number of well-trained, experienced, more senior professionals will be able to fill in the gaps. And I suspect that will be a tempting strategy for companies as we move on. That, in a sense, is an extension of the delayering of firms that we saw when IT started to get rolled out in the 1980s and 1990s.

But what about this 7% productivity growth? That's got to be doing something. The economy is going to be growing much faster than it was before, and it's going to create new opportunities and new needs. There's a great survey that the Goldman Sachs authors quote from David Autor. He is this amazing economist, and he points out that 85% of employment growth in the US in the last 80 years has been in jobs that didn't exist in 1940, when the period started. So, we know that the economy creates new work, new classes of work, very well, although over an 80-year period. And the thing is that if these technologies are going to be rolled out overnight to millions of workers, the impact will be felt quite fast.

I mean, just take a look at lawyers. There are somewhere between 700,000 lawyers in the US, if you look at the Bureau of Labor Statistics data, and 1.3 million, if you look at the American Bar Association data. Sorry, I don't know the real number. But based on Goldman's estimates, about 40% of those jobs could be up for being replaced. So that's between 250,000 and 500,000 people. So, the question is not whether new jobs will ultimately be created. It's whether they get created in the sort of short time that is available. And we can imagine that new sorts of roles emerge that are complementary to the AI tools that get layered in, ones that are syncretic across the specialist expertise of being a particular type of admin, or a particular type of legal professional, and what is now required to make these technologies work. So, that would be one area.
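
As a back-of-the-envelope check of those figures, here is the arithmetic; the two headcounts and the 40% exposure share are the ones quoted above, and the transcript rounds the result down to "between 250,000 and 500,000."

```python
bls_lawyers = 700_000      # Bureau of Labor Statistics figure quoted above
aba_lawyers = 1_300_000    # American Bar Association figure quoted above
exposure = 0.40            # Goldman's estimated share of exposed legal jobs

low = bls_lawyers * exposure
high = aba_lawyers * exposure
print(f"Exposed legal jobs: {low:,.0f} to {high:,.0f}")
# -> Exposed legal jobs: 280,000 to 520,000
```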

The second is that the growing economy is going to raise demand for complementary services, which is what you would expect from economic growth. And of course, there are these new sectors, like the bioeconomy and the green economy, that are developing rapidly and are being stimulated by things like, in the US, the Inflation Reduction Act, and similar measures in the EU and UK, which should create demand for new types of private sector jobs.

But it's a really hard conundrum, because how do you re-skill people? How do you ensure that they actually want to make the move? How do you make sure that they have the resources and the emotional and psychological capabilities to make the move? And how do you make sure the jobs that are created are in the places where the people actually live? And I say all of this because I know it is material that we've heard before, but I don't get a sense that I see really strong and solid [inaudible 00:08:14] and interventions, and these are the types of things that need to come from government to tackle what could well be a very sharp transition as these productivity-enhancing tools start to get rolled out.

Well, thanks for tuning in. If you want to truly grasp the ins and outs of AI, visit http://www.exponentialview.co, where I share expert insights with hundreds of thousands of leaders each week.

See the article here:

Azeem on AI: Where Will the Jobs Come from After AI? - HBR.org Daily

Google plans to use new A.I. models for ads and to help YouTube creators, sources say – CNBC

Google CEO Sundar Pichai speaks on-stage during the Google I/O keynote session at the Google Developers Conference in Mountain View, California, on May 10, 2023.

Josh Edelson | AFP | Getty Images

Google's effort to rapidly add new artificial intelligence technology into its core products is making its way into the advertising world, CNBC has learned.

The company has given the green light to plans for using generative AI, fueled by large language models (LLMs), to automate advertising and ad-supported consumer services, according to internal documents.

Last week, Google unveiled PaLM 2, its latest and most powerful LLM, trained on reams of text data that can come up with human-like responses to questions and commands. Certain groups within Google are now planning to use PaLM 2-powered tools to allow advertisers to generate their own media assets and to suggest videos for YouTube creators to make, documents show.

Google has also been testing PaLM 2 for YouTube youth content, for things like titles and descriptions. For creators, the company has been using the technology to experiment with the idea of providing five video ideas based on topics that appear relevant.

With the AI chatbot craze speedily racing across the tech industry and capturing the fascination of Wall Street, Google and its peers, including Microsoft, Meta and Amazon, are rushing to embed their most sophisticated models in as many products as possible. The urgency has been particularly acute at Google since the public launch late last year of Microsoft-backed OpenAI's ChatGPT raised concern that the future of internet search was suddenly up for grabs.

Meanwhile, Google has been mired in a multi-quarter stretch of muted revenue growth after almost two decades of consistent and rapid expansion. With fears of a recession building since last year, advertisers have been reeling in online marketing budgets, wreaking havoc on Google, Facebook and others. Specific to Google, paid search advertising conversion rates have decreased this year across most industries.

Beyond search, email and spreadsheets, Google wants to use generative AI offerings to increase customer spending, boosting revenue and improving margins, according to the documents. An AI-powered customer support strategy could potentially run across more than 100 Google products, including Google Play Store, Gmail, Android Search and Maps, the documents show.

Automated support chatbots could provide specific answers through simple, clear sentences and allow for follow-up questions to be asked before suggesting an advertising plan that would best suit an inquiring customer.

A Google spokesperson declined to comment.

Google recently offered Google Duet and Chat assistance, allowing people to use simple natural language to get answers on cloud-related questions, such as how to use certain cloud services or functions, or to get detailed implementation plans for their projects.

Google is also working on its own internal Stable Diffusion-like product for image creation, according to the documents. Stable Diffusion's technology, similar to OpenAI's DALL-E, can quickly render images in various styles with text-based direction from the user.

Google's plan to push its latest AI models into advertising isn't a surprise. Last week, Facebook parent Meta unveiled the AI Sandbox, a "testing playground" for advertisers to try out new generative AI-powered ad tools. The company also announced updates to Meta Advantage, its portfolio of automated tools and products that advertisers can use to enhance their campaigns.

On May 23, Google will be introducing new technologies for advertisers at its annual event, Google Marketing Live. The company hasn't offered specifics about what it will be announcing, but it's made clear that AI will be a central theme.

"You'll discover how our AI-powered ads solutions can help multiply your marketing expertise and drive powerful business results in today's changing economy," the website for the event says.

See original here:

Google plans to use new A.I. models for ads and to help YouTube creators, sources say - CNBC

Another Side of the A.I. Boom: Detecting What A.I. Makes – The New York Times

Andrey Doronichev was alarmed last year when he saw a video on social media that appeared to show the president of Ukraine surrendering to Russia.

The video was quickly debunked as a synthetically generated deepfake, but to Mr. Doronichev, it was a worrying portent. This year, his fears crept closer to reality, as companies began competing to enhance and release artificial intelligence technology despite the havoc it could cause.

Generative A.I. is now available to anyone, and it's increasingly capable of fooling people with text, audio, images and videos that seem to be conceived and captured by humans. The risk of societal gullibility has set off concerns about disinformation, job loss, discrimination, privacy and broad dystopia.

For entrepreneurs like Mr. Doronichev, it has also become a business opportunity. More than a dozen companies now offer tools to identify whether something was made with artificial intelligence, with names like Sensity AI (deepfake detection), Fictitious.AI (plagiarism detection) and Originality.AI (also plagiarism).

Mr. Doronichev, a Russian native, founded a company in San Francisco, Optic, to help identify synthetic or spoofed material; to be, in his words, "an airport X-ray machine for digital content."

In March, it unveiled a website where users can check images to see whether they are actual photographs or were made by artificial intelligence. It is working on other services to verify video and audio.

"Content authenticity is going to become a major problem for society as a whole," said Mr. Doronichev, who was an investor in a face-swapping app called Reface. "We're entering the age of cheap fakes." Since it doesn't cost much to produce fake content, he said, it can be done at scale.

The overall generative A.I. market is expected to exceed $109 billion by 2030, growing 35.6 percent a year on average until then, according to the market research firm Grand View Research. Businesses focused on detecting the technology are a growing part of the industry.

Months after being created by a Princeton University student, GPTZero claims that more than a million people have used its program to suss out computer-generated text. Reality Defender was one of 414 companies chosen from 17,000 applications to be funded by the start-up accelerator Y Combinator this winter.

Copyleaks raised $7.75 million last year in part to expand its anti-plagiarism services for schools and universities to detect artificial intelligence in students' work. Sentinel, whose founders specialized in cybersecurity and information warfare for the British Royal Navy and the North Atlantic Treaty Organization, closed a $1.5 million seed round in 2020 that was backed in part by one of Skype's founding engineers to help protect democracies against deepfakes and other malicious synthetic media.

Major tech companies are also involved: Intel's FakeCatcher claims to be able to identify deepfake videos with 96 percent accuracy, in part by analyzing pixels for subtle signs of blood flow in human faces.
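
The article doesn't describe FakeCatcher's internals beyond that one sentence, but the underlying idea, photoplethysmography, can be sketched in a few lines. This toy version (an assumption for illustration, not Intel's pipeline) averages the green channel of aligned face crops per frame and measures how much of the signal's energy falls in the plausible heart-rate band:

```python
import numpy as np

def pulse_band_ratio(face_frames: np.ndarray, fps: float) -> float:
    """face_frames: (T, H, W, 3) array of aligned face crops over time."""
    green = face_frames[..., 1].astype(float).mean(axis=(1, 2))  # one value per frame
    green -= green.mean()                          # drop the DC component
    power = np.abs(np.fft.rfft(green)) ** 2
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # ~42-240 bpm heart rates
    return power[band].sum() / power[1:].sum()     # skip the 0 Hz bin

# Real faces show a faint periodic color fluctuation at the pulse rate, so a
# genuine video tends to score higher than a synthesized one. This toy ratio
# alone is nowhere near FakeCatcher's reported 96 percent accuracy.
```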

Within the federal government, the Defense Advanced Research Projects Agency plans to spend nearly $30 million this year to run Semantic Forensics, a program that develops algorithms to automatically detect deepfakes and determine whether they are malicious.

Even OpenAI, which turbocharged the A.I. boom when it released its ChatGPT tool late last year, is working on detection services. The company, based in San Francisco, debuted a free tool in January to help distinguish between text composed by a human and text written by artificial intelligence.

OpenAI stressed that while the tool was an improvement on past iterations, it was still not fully reliable. The tool correctly identified 26 percent of artificially generated text but falsely flagged 9 percent of text from humans as computer generated.
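
Those two rates say little on their own without a base rate. A quick sketch of what they imply for the share of flags that are correct, assuming, purely hypothetically, that 10 percent of a pile of essays is AI-written:

```python
ai_share = 0.10        # assumed prevalence of AI-written text (hypothetical)
tpr, fpr = 0.26, 0.09  # the rates OpenAI reported, per the article

flagged_ai = ai_share * tpr             # AI text correctly flagged
flagged_human = (1 - ai_share) * fpr    # human text wrongly flagged
precision = flagged_ai / (flagged_ai + flagged_human)

print(f"Share of flags that are actually AI text: {precision:.0%}")
# -> about 24%: at this prevalence, roughly three in four flags would
#    land on a human writer.
```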

The OpenAI tool is burdened with common flaws in detection programs: It struggles with short texts and writing that is not in English. In educational settings, plagiarism-detection tools such as TurnItIn have been accused of inaccurately classifying essays written by students as being generated by chatbots.

Detection tools inherently lag behind the generative technology they are trying to detect. By the time a defense system is able to recognize the work of a new chatbot or image generator, like Google Bard or Midjourney, developers are already coming up with a new iteration that can evade that defense. The situation has been described as an arms race or a virus-antivirus relationship where one begets the other, over and over.

"When Midjourney releases Midjourney 5, my starter gun goes off, and I start working to catch up, and while I'm doing that, they're working on Midjourney 6," said Hany Farid, a professor of computer science at the University of California, Berkeley, who specializes in digital forensics and is also involved in the A.I. detection industry. "It's an inherently adversarial game where as I work on the detector, somebody is building a better mousetrap, a better synthesizer."

Despite the constant catch-up, many companies have seen demand for A.I. detection from schools and educators, said Joshua Tucker, a professor of politics at New York University and a co-director of its Center for Social Media and Politics. He questioned whether a similar market would emerge ahead of the 2024 election.

"Will we see a sort of parallel wing of these companies developing to help protect political candidates so they can know when they're being sort of targeted by these kinds of things?" he said.

Experts said that synthetically generated video was still fairly clunky and easy to identify, but that audio cloning and image-crafting were both highly advanced. Separating real from fake will require digital forensics tactics such as reverse image searches and IP address tracking.

"Available detection programs are being tested with examples that are very different than going into the wild, where images that have been making the rounds have gotten modified and cropped and downsized and transcoded and annotated and God knows what else has happened to them," Mr. Farid said.

"That laundering of content makes this a hard task," he added.

The Content Authenticity Initiative, a consortium of 1,000 companies and organizations, is one group trying to make generative technology obvious from the outset. (It's led by Adobe, with members such as The New York Times and artificial intelligence players like Stability A.I.) Rather than piece together the origin of an image or a video later in its life cycle, the group is trying to establish standards that will apply traceable credentials to digital work upon creation.

Adobe said last week that its generative technology Firefly would be integrated into Google Bard, where it will attach "nutrition labels" to the content it produces, including the date an image was made and the digital tools used to create it.
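
As a loose illustration of the "nutrition label" idea, here is what attaching provenance fields at creation time could look like, using plain PNG text chunks via Pillow. This is only a sketch of the concept, under stated assumptions: the consortium's actual standard (C2PA) uses cryptographically signed manifests, and the field names below are made up.

```python
from datetime import date
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (512, 512))        # stand-in for a generated image

label = PngInfo()                            # hypothetical field names
label.add_text("created", date.today().isoformat())
label.add_text("generator", "example-model-v1")
label.add_text("ai_generated", "true")

image.save("output.png", pnginfo=label)

# The fields can be read back via Image.open("output.png").text, but unlike
# a signed manifest they can also be stripped or forged, which is why the
# group wants tamper-evident credentials rather than bare metadata.
```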

Jeff Sakasegawa, the trust and safety architect at Persona, a company that helps verify consumer identity, said the challenges raised by artificial intelligence had only begun.

"The wave is building momentum," he said. "It's heading toward the shore. I don't think it's crashed yet."

See the rest here:

Another Side of the A.I. Boom: Detecting What A.I. Makes - The New York Times

This Is What AI Thinks Is The "Perfect" Man And Woman – IFLScience

An eating disorder awareness group is raising awareness of artificial intelligence (AI) image generators and how they propagate unrealistic standards of beauty, much like the Internet data they were trained on.

The Bulimia Project asked the image generators Dall-E 2, Stable Diffusion, and Midjourney to create the "perfect female body" specifically according to social media in 2023, followed by the same prompt but for males.

"Smaller women appeared in nearly all the images created by Dall-E 2, Stable Diffusion, and Midjourney, but the latter came up with the most unrealistic representations of the female body," the Project wrote in a post detailing their findings. "The same can be said for the male physiques it generated, all of which look like photoshopped versions of bodybuilders."

The team found that 40 percent of the images generated by the AI depicted unrealistic body types, slightly more for men than for women. A whopping 53 percent of the images also portrayed olive skin tones, and 37 percent of the generated people had blonde hair.

The team then asked the generators to create a more general "perfect woman in 2023," as well as the "perfect man."

According to the findings, the main difference between the two prompts was that the social media images were more sexually charged and contained more disproportionate and unrealistic body parts.

"Considering that social media uses algorithms based on which content gets the most lingering eyes, it's easy to guess why AI's renderings would come out more sexualized. But we can only assume that the reason AI came up with so many oddly shaped versions of the physiques it found on social media is that these platforms promote unrealistic body types to begin with," the team wrote.

Racist and sexist biases have repeatedly been found in AI generators, with AI picking up biases in their datasets. According to The Bulimia Project's findings, they are also biased toward unrealistic body types.

"In the age of Instagram and Snapchat filters, no one can reasonably achieve the physical standards set by social media," the team wrote, "so, why try to meet unrealistic ideals? It's both mentally and physically healthier to keep body image expectations squarely in the realm of reality."

If you or someone you know might have an eating disorder, help and support are available in the US at nationaleatingdisorders.org. In the UK, help and support are available at http://www.beateatingdisorders.org.uk. International helplines can be found at www.worldeatingdisordersday.org/home/find-help.

Originally posted here:

This Is What AI Thinks Is The "Perfect" Man And Woman - IFLScience