Archive for the 'AI' Category

Qualcomm’s ‘Holy Grail’: Generative AI Is Coming to Phones Soon – CNET

Generative AI tools like ChatGPT and Midjourney have dazzled imaginations and disrupted industries, but their debut has mostly been limited to browser windows on desktop computers. Next year, you'll be able to use generative AI on the go once premium phones launch with Qualcomm's top-tier chips inside.

Phones have used AI for years to touch up photos and improve autocorrect, but generative AI tools could bring the next level of enhancements to the mobile experience. Qualcomm is building generative AI into its next generation of premium chips, which are set to debut at its annual Qualcomm Summit in Hawaii in late October.

Summit attendees will get to experience firsthand what generative AI will bring to phones, but Qualcomm senior vice president of product management Ziad Asghar described to CNET why users should be excited about on-device AI. For one, having access to a user's data -- driving patterns, restaurant searches, photos and more -- all in one place will make the responses generated by AI on your phone much more customized and helpful than the generic answers of cloud-based generative AI.

"I think that's going to be the holy grail," Asghar said. "That's the true promise that makes us really excited about where this technology can go."

There are other advantages to having generative AI on-device. Most importantly, queries and personal data stay private on the device rather than being relayed through a distant server. Local AI is also faster than waiting for cloud computation, and it works on airplanes or in other areas that lack cell service.

But an on-device solution also makes business and efficiency sense. As machine learning models have grown more complex (from hundreds of thousands of parameters to billions, Asghar said), running the servers that answer queries has become more expensive, as Qualcomm explained in a white paper published last month. Back in April, OpenAI was estimated to be spending around $700,000 per day getting ChatGPT to answer prompts, and that estimate was based on the older GPT-3 model, not the newer GPT-4, which is more complex and likely costlier to run at scale. Instead of needing an entire server farm, Qualcomm's solution is to have a device's existing silicon brain do all the thinking needed -- at no extra cost.

"Running AI on your phone is effectively free -- you paid for the computing power up front," Techsponential analyst Avi Greengart told CNET over email.

Greengart saw Qualcomm's on-device generative AI in action when the chipmaker showed it off at Mobile World Congress in February, using a Snapdragon 8 Gen 2-powered Android phone to run the image-generating software Stable Diffusion. Though it was an early demo, he found it "tremendously exciting."

A Snapdragon 8 Gen 2 chipset.

Qualcomm has ideas for what people could do with phone-based generative AI, improving everything from productivity to entertainment to content creation.

As the Stable Diffusion demo showcased, on-device generative AI could let people tweak images on command, like asking it to change the background to put you in front of the Venice canals, Asghar said. Or they could have it generate a completely new image -- but that's just the beginning, as text and visual large language models could work in succession to take an idea all the way to a finished output.

Using multiple models, Asghar said, a user could have their speech translated by automatic speech recognition into text that is then fed into an image generator. Take that a step further and have your phone render a person's face, which uses generative AI to make realistic mouth movements and text-to-speech to speak back to you, and boom, you've got a generative AI-powered virtual assistant you can have full conversations with.
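As a rough illustration of how such model hand-offs chain together, here is a minimal Python sketch. The functions are stand-ins rather than real on-device models or any Qualcomm API; a production pipeline would run quantized models through the chipset's AI runtime.

```python
# A minimal sketch of chaining on-device models the way Asghar describes:
# speech -> text (ASR) -> image generation, with a reply spoken back via
# text-to-speech. The functions below are stand-ins, not real Qualcomm APIs;
# a real pipeline would run quantized models through the chipset's AI runtime.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for an on-device automatic speech recognition model."""
    return "put me in front of the Venice canals"

def generate_image(prompt: str) -> bytes:
    """Stand-in for an on-device image generator such as Stable Diffusion."""
    return b"<image bytes for: " + prompt.encode()

def generate_reply(prompt: str) -> str:
    """Stand-in for an on-device large language model."""
    return "Done. Here you are in front of the Venice canals."

def text_to_speech(text: str) -> bytes:
    """Stand-in for an on-device text-to-speech model."""
    return b"<synthesized audio for: " + text.encode()

# Each model's output feeds the next model's input.
mic_capture = b"<microphone audio>"
prompt = speech_to_text(mic_capture)
image = generate_image(prompt)
spoken_reply = text_to_speech(generate_reply(prompt))
```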

This specific example could be powered in part by third-party AI, like Llama 2, the large language model that Facebook parent company Meta recently launched in partnership with Microsoft and Qualcomm.

"[Llama 2] will allow customers, partners and developers to build use cases, such as intelligent virtual assistants, productivity applications, content creation tools, entertainment and more," Qualcomm said in a press release at the time. "These new on-device AI experiences, powered by Snapdragon, can work in areas with no connectivity or even in airplane mode."

Qualcomm won't limit these features to phones. At its upcoming summit, the company plans to announce generative AI solutions for PCs and cars, too. That personal assistant could manage your to-do lists, schedule meetings and shoot off emails. If you're stuck outside the office and need to give a presentation, Asghar said, the AI could generate a new background so it doesn't look like you're sitting in your car, and bring up a slide deck (or even help present it).

"For those of us who grew up watching Knight Rider, well, KITT is now going to be real," Asghar said, referring to the TV show's iconic smart car.

Regardless of the platform, the core generative AI solution will exist on-device. It could help with office busywork, like automatically generating notes from a call and creating a five-slide deck summarizing its key points ("This is like Clippy, but on steroids, right?" Asghar said). Or it could fabricate digital worlds from scratch in AR and VR.

Beyond fantasy worlds, generative AI could help blind people navigate the real world. Asghar described a scenario in which hand-offs between image, 3D-scene, text and speech models could use the phone's camera to recognize when a user is at an intersection, tell them when to stop, and report how many cars are coming from which directions.

On the education front -- perhaps using a webcam or a phone's camera -- generative AI could gauge how well students are absorbing a lesson by tracking their expressions and body language. The generative AI could then tailor the material to each student's strengths and weaknesses, Asghar theorized.

These are all Qualcomm's predictions, but third parties will have to decide how best to harness the technology to improve their own products and services. For phones, generative AI could have a real impact once it's integrated with mobile apps for more customized gaming experiences, social media and content creation, Techsponential's Greengart said.

It's hard to tell what that means for users until app makers have generative AI tech on hand to tinker with and integrate into their apps. It's easier to extrapolate from how AI helps people right now. Roger Entner, an analyst at Recon Analytics, predicts that generative AI will help fix flaws in suboptimal photos, generate filters for social media and refine autocorrect -- all problems that exist today.

"Generative AI here creates a quality of use improvement that soon we will take for granted," Entner told CNET over email.

A Snapdragon 8 Gen 2 encased in a red puck in front of a rig used to test chips in production.

Current generative AI solutions rely on big server farms to answer queries at scale, but Qualcomm is confident that its on-device silicon can handle single-user needs. In Asghar's labs, the company's chips have handled AI models with 7 billion parameters (the internal values a model learns that shape the tone and accuracy of its output), which is far below the 175 billion parameters of OpenAI's GPT-3 model that powers ChatGPT but should suit mobile queries.
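Some quick, illustrative arithmetic (not Qualcomm's figures) shows why a 7-billion-parameter model is plausible on a phone while a 175-billion-parameter model is not:

```python
# Illustrative memory math (not Qualcomm's figures): roughly how much RAM
# model weights need at common precisions.

def weight_size_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, params in [("7B on-device model", 7), ("GPT-3 (175B)", 175)]:
    fp16 = weight_size_gb(params, 2.0)   # 16-bit weights
    int4 = weight_size_gb(params, 0.5)   # 4-bit quantized weights
    print(f"{name}: ~{fp16:.0f} GB at FP16, ~{int4:.1f} GB at 4-bit")

# A 7B model quantized to 4 bits (~3.5 GB of weights) can plausibly sit in a
# flagship phone's memory; a 175B model (~350 GB at FP16) clearly cannot.
```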

"We will actually be able to show that running on the device at the [Hawaii] summit," Asghar said.

The demo device will likely pack Qualcomm's next top-tier chip, presumably the Snapdragon 8 Gen 3 that will end up in next year's premium Android phones. The demo device running Stable Diffusion at MWC 2023 used the Snapdragon 8 Gen 2 announced at last year's Snapdragon Summit in Hawaii.

In an era of phones barely lasting through the day before needing to recharge, there's also concern over whether summoning the generative AI genie throughout the day will drain your battery even faster. We'll have to wait for real-world tests to see how phones implement and optimize the technology, but Asghar pointed out that the MWC 2023 demo ran queries for attendees all day without exhausting the battery or even getting warm to the touch. He believes Qualcomm's silicon is uniquely capable, with generative AI running mostly on a Snapdragon chipset's Hexagon processor and neural processing unit with "very good power consumption."

"I think there is going to be concern for those who do not have dedicated pieces of hardware to do this processing," Asghar said.

Asghar believes next year's premium Android phones powered by Qualcomm's silicon will be able to use generative AI, but it will take time for that capability to trickle down to cheaper phones. Much as the AI assistance for cleaning up images, audio and video on current phones works best at the top of the lineup and gets less effective on cheaper models, generative AI capabilities will be reduced (but still present) the further down you go in Qualcomm's chip catalog.

"Maybe you can do a 10-plus billion parameter model in the premium, and the tier below that might be lesser than that, if you're below that then it might be lesser than that," Asghar said. "So it will be a graceful degradation of those experiences, but they will extend into the other products as well."

As with 5G, Qualcomm may be first to a new technology with generative AI, but it won't be the only player. Apple has quietly been improving its on-device AI, with senior vice president of software Craig Federighi noting in a post-Worldwide Developers Conference chat that Apple swapped in a more powerful transformer language model to improve autocorrect. Apple has even reportedly been testing its own "Apple GPT" chatbot internally. The tech giant is said to be developing its own framework for creating large language models in order to compete in the AI space, which has heated up since OpenAI released ChatGPT to the public in late 2022.


Apple's AI could enter the race against Google's Bard AI and Microsoft's Bing AI, both of which have had limited releases this year for public testing. Those follow the more traditional "intelligent chatbot" model of generative AI enhancing software, but it's possible they'll arrive on phones through apps or be accessed through a web browser. Both Google and Microsoft are already integrating generative AI into their productivity platforms, so users will likely see their efforts first in mobile versions of Google Docs or Microsoft Office.

But for most phone owners, Qualcomm's chip-based generative AI could be the first impactful use of a new technology. We'll have to wait for the Snapdragon Summit to see how much our mobile experience may be changing as soon as next year.

See the original post:

Qualcomm's 'Holy Grail': Generative AI Is Coming to Phones Soon - CNET

The AI Tools Making Images Look Better – Quanta Magazine

It's one of the biggest clichés in crime and science fiction: An investigator pulls up a blurry photo on a computer screen and asks for it to be enhanced, and boom, the image comes into focus, revealing some essential clue. It's a wonderful storytelling convenience, but it's been a frustrating fiction for decades: blow up an image too much, and it becomes visibly pixelated. There isn't enough data to do more.

"If you just naively upscale an image, it's going to be blurry. There's going to be a lot of detail, but it's going to be wrong," said Bryan Catanzaro, vice president of applied deep learning research at Nvidia.
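A minimal sketch of what naive upscaling means in practice, assuming Pillow is installed and using a hypothetical small_photo.jpg as input: interpolation only redistributes the pixels that are already there.

```python
# What naive upscaling does: interpolation spreads existing pixels around but
# cannot add detail. Assumes Pillow is installed; "small_photo.jpg" is a
# hypothetical input file.

from PIL import Image

img = Image.open("small_photo.jpg")   # e.g. a 256 x 256 source image
w, h = img.size

nearest = img.resize((w * 4, h * 4), Image.Resampling.NEAREST)  # blocky
bicubic = img.resize((w * 4, h * 4), Image.Resampling.BICUBIC)  # smooth but soft

nearest.save("upscaled_nearest.png")
bicubic.save("upscaled_bicubic.png")

# A learned super-resolution model would instead predict plausible
# high-frequency detail -- which is where hallucination risk enters.
```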

Recently, researchers and professionals have begun incorporating artificial intelligence algorithms into their image-enhancing tools, making the process easier and more powerful, but there are still limits to how much data can be retrieved from any image. Luckily, as researchers push enhancement algorithms ever further, they are finding new ways to cope with those limits -- even, at times, finding ways to overcome them.

In the past decade, researchers started enhancing images with a new kind of AI model called a generative adversarial network, or GAN, which could produce detailed, impressive-looking pictures. "The images suddenly started looking a lot better," said Tomer Michaeli, an electrical engineer at the Technion in Israel. But he was surprised that images made by GANs showed high levels of distortion, which measures how close an enhanced image is to the underlying reality of what it shows. GANs produced images that looked pretty and natural, but they were actually making up, or "hallucinating," details that weren't accurate, which registered as high levels of distortion.

Michaeli watched the field of photo restoration split into two distinct sub-communities. "One showed nice pictures, many made by GANs. The other showed data, but they didn't show many images, because they didn't look nice," he said.

In 2017, Michaeli and his graduate student Yochai Blau looked into this dichotomy more formally. They plotted the performance of various image-enhancement algorithms on a graph of distortion versus perceptual quality, using a known measure for perceptual quality that correlates well with humans' subjective judgment. As Michaeli expected, some of the algorithms resulted in very high visual quality, while others were very accurate, with low distortion. But none had both advantages; you had to pick one or the other. The researchers dubbed this the perception-distortion trade-off.
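Here is a small, self-contained sketch of how the two axes of such a plot can be scored. Distortion is computed against the ground truth (mean squared error here); perceptual quality is normally judged by no-reference metrics or human raters, so the gradient-energy "sharpness" score below is only a crude stand-in for illustration.

```python
# Sketch of how the two axes of the perception-distortion plot can be scored.
# Distortion: a full-reference error against the ground truth (MSE here).
# Perceptual quality: normally a no-reference metric or human study; the
# gradient-energy "sharpness" score below is only a crude stand-in.

import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.mean((a - b) ** 2))

def sharpness_proxy(img: np.ndarray) -> float:
    gy, gx = np.gradient(img)
    return float(np.mean(gx ** 2 + gy ** 2))

rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64))

# A "safe" restoration: pixels pulled toward the mean, so little fine detail.
blurry = 0.5 * ground_truth + 0.5 * ground_truth.mean()
# A "GAN-like" restoration: crisp-looking, but detail replaced by made-up noise.
gan_like = ground_truth + 0.35 * rng.standard_normal((64, 64))

for name, restored in [("blurry", blurry), ("GAN-like", gan_like)]:
    print(f"{name}: distortion={mse(restored, ground_truth):.4f}, "
          f"sharpness={sharpness_proxy(restored):.4f}")
```

On this toy data, the "GAN-like" output scores higher on the sharpness proxy but also shows higher distortion, which is the directional shape of the trade-off Michaeli and Blau described.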

Michaeli also challenged other researchers to come up with algorithms that could produce the best image quality for a given level of distortion, to allow fair comparisons between the pretty-picture algorithms and the nice-stats ones. Since then, hundreds of AI researchers have reported on the distortion and perception qualities of their algorithms, citing the Michaeli and Blau paper that described the trade-off.

Sometimes, the implications of the perception-distortion trade-off aren't dire. Nvidia, for instance, found that high-definition screens weren't nicely rendering some lower-definition visual content, so in February it released a tool that uses deep learning to upscale streaming video. In this case, Nvidia's engineers chose perceptual quality over accuracy, accepting the fact that when the algorithm upscales video, it will make up some visual details that aren't in the original video. "The model is hallucinating. It's all a guess," Catanzaro said. "Most of the time it's fine for a super-resolution model to guess wrong, as long as it's consistent."

Read the original post:

The AI Tools Making Images Look Better - Quanta Magazine

NGA, SLU Host AI-Focused Geo-Resolution Conference – Saint Louis University

ST. LOUIS, MO -- The National Geospatial-Intelligence Agency and Saint Louis University will co-host the Geo-Resolution 2023 conference, "Digital Transformations: Navigating a World of Data from Seabed to Space," on Thursday, Sept. 28, at SLU's Busch Student Center. This year's theme focuses on the impact of artificial intelligence and new digital technologies on geospatial research and analysis.

Geo-Resolution is an annual conference that encourages collaboration between government, academic and industry partners to foster geospatial technology innovation and applications, connect geospatial experts and students and grow the geospatial ecosystem in the greater St. Louis region.

Geo-Resolution 2023 discussions will include:

Geo-Resolution is designed to provide participants, particularly students, access to geospatial experts from government, academia, innovation hubs, start-up companies and nonprofit organizations. Students will be able to meet local leaders from industry, academia and government to explore geospatial career opportunities.

This year's conference will feature:

The conference will also include a Young Mentors panel, a student poster session, a student geospatial career fair and networking opportunities.

Geo-Resolution 2023 is free and open to the public. The conference will be held in-person at Saint Louis University and streamed live on the conference website.

Advance registration is required.


NGA delivers world-class geospatial intelligence that provides a decisive advantage to policymakers, warfighters, intelligence professionals and first responders.

NGA is a unique combination of intelligence agency and combat support agency. It is the world leader in timely, relevant, accurate and actionable geospatial intelligence. NGA enables the U.S. intelligence community and the Department of Defense to fulfill the presidents national security priorities to protect the nation.

For more information about NGA, visit us online at http://www.nga.mil, on Instagram, LinkedIn, Facebook and Twitter.

Founded in 1818, Saint Louis University is one of the nation's oldest and most prestigious Catholic institutions. Rooted in Jesuit values and its pioneering history as the first university west of the Mississippi River, SLU offers more than 13,500 students a rigorous, transformative education of the whole person. At the core of the University's diverse community of scholars is SLU's service-focused mission, which challenges and prepares students to make the world a better, more just place.

Originally posted here:

NGA, SLU Host AI-Focused Geo-Resolution Conference - Saint Louis University

VeChain and SingularityNET team up on AI to fight climate change – Cointelegraph

Artificial intelligence firm SingularityNET and blockchain firm VeChain are the latest companies to marry blockchain with artificial intelligence -- this time with the aim of cutting carbon emissions.

Over the last year, the crypto industry has seen an increasing amount of collaboration between blockchain and AI technology.

On Aug. 24, VeChain -- a smart-contract-compatible blockchain used for supply-chain tracking -- announced a strategic collaboration with the decentralized AI services-sharing platform SingularityNET.

In a joint statement, the firms said the partnership will merge VeChain's enterprise data with SingularityNET's advanced AI algorithms to enhance the automation of manual processes and provide real-time data.

SingularityNET founder and CEO Ben Goertzel told Cointelegraph that blockchain and AI go hand-in-hand and can solve problems where traditional approaches often fail.

"The last few years have taught the world that when the right AI algorithms meet the right data on sufficient processing power, magic can happen," said Goertzel.

Goertzel explained the partnership could, for example, allow AI to identify new ways to use VeChains blockchain data to optimize carbon emission output and minimize pollution.

"Achieving a sustainable and environmentally positive economy is an extremely complex problem involving coordination of a large number of different economic players," he added.

Meanwhile, VeChain Chief Technology Officer Antonio Senatore added: "Blockchain and AI offer game-changing capabilities for industries and enterprises and are opening new avenues of operation."

Related: Here's how blockchain and AI combine to redefine data security

In July, Bitcoin miner Hive Blockchain changed its name and business strategy as part of its foray into the emerging field of AI. Hive Digital Technologies CEO Aydin Kilic told Cointelegraph in August that blockchain and AI are both pillars of Web3.

In June, Ethereum layer-2 scaling network Polygon announced its integration of AI technology. The AI interface called Polygon Copilot will help developers obtain analytics and insights for Dapps on the network.

Dr. Daoyuan Wu, an AI researcher at Nanyang Technological University in Singapore and a MetaTrust affiliate, told Cointelegraph that the inherent autonomy of AI aligns seamlessly with the decentralized and autonomous characteristics of blockchain and smart contracts.

MetaTrust Labs is working on a project called GPTScan, a tool that combines Generative Pre-trained Transformer (GPT) models with static analysis to detect logic vulnerabilities in smart contracts.

"GPTScan is the first tool of its kind that utilizes GPT to match candidate vulnerable functions based on code-level scenarios and properties," Wu added in an interview with Cointelegraph.
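GPTScan's internals aren't spelled out here, but the general recipe of pairing static analysis with a language model can be sketched as follows. The Solidity snippet, the candidate filter and the prompt are all hypothetical illustrations, not GPTScan's actual code or prompts.

```python
# Hedged sketch of the general recipe behind a GPT-plus-static-analysis
# scanner: a cheap static pass narrows a contract down to candidate functions,
# then a language model is asked whether each candidate matches a known
# vulnerability scenario. Everything here (the Solidity snippet, the filter,
# the prompt) illustrates the idea only; it is not GPTScan's actual pipeline.

import re

SOLIDITY_SOURCE = """
contract Vault {
    mapping(address => uint256) balances;
    function withdraw(uint256 amount) public {
        require(balances[msg.sender] >= amount);
        (bool ok, ) = msg.sender.call{value: amount}("");
        balances[msg.sender] -= amount;          // state updated after the call
    }
    function deposit() public payable { balances[msg.sender] += msg.value; }
}
"""

def candidate_functions(source: str) -> list[str]:
    """Static pass: keep only functions that make a low-level external call."""
    chunks = re.split(r"(?=function\s)", source)
    return [c.strip() for c in chunks
            if c.strip().startswith("function") and ".call{" in c]

def ask_llm(function_src: str, scenario: str) -> str:
    """Stand-in for a GPT call; a real tool would send this prompt to a model."""
    prompt = (f"Does this Solidity function match the scenario '{scenario}'?\n\n"
              f"{function_src}\n\nAnswer yes or no, with a one-line reason.")
    return "stubbed model response for prompt of length " + str(len(prompt))

for fn in candidate_functions(SOLIDITY_SOURCE):
    print(ask_llm(fn, "state update after external call (possible reentrancy)"))
```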



View original post here:

VeChain and SingularityNET team up on AI to fight climate change - Cointelegraph

Using Generative AI to Resurrect the Dead Will Create a Burden for … – WIRED

Given enough data, one can feel like it's possible to keep dead loved ones alive. With ChatGPT and other powerful large language models, it is feasible to create a more convincing chatbot of a dead person. But doing so, especially in the face of scarce resources and inevitable decay, ignores the massive amounts of labor that go into keeping the dead alive online.

Someone always has to do the hard work of maintaining automated systems, as demonstrated by the overworked and underpaid annotators and content moderators behind generative AI, and this is also true where replicas of the dead are concerned. From managing a digital estate after gathering passwords and account information, to navigating a slowly decaying inherited smart home, digital death care practices require significant upkeep. Content creators depend on the backend labor of caregivers and a network of human and nonhuman entities, from specific operating systems and devices to server farms, to keep digital heirlooms alive across generations. Updating formats and keeping those electronic records searchable, usable, and accessible requires labor, energy, and time. This is a problem for archivists and institutions, but also for individuals who might want to preserve the digital belongings of their dead kin.

And even with all of this effort, devices, formats, and websites also die, just as we frail humans do. Despite the fantasy of an automated home that can run itself in perpetuity or a website that can survive for centuries, planned obsolescence means these systems will most certainly decay. As people tasked with maintaining the digital belongings of dead loved ones can attest, there is a stark difference between what people think they want, or what they expect others to do, and the reality of what it means to help technologies persist over time. The mortality of both people and technology means that these systems will ultimately stop working.

Early attempts to create AI-backed replicas of dead humans certainly bear this out. Intellitar's Virtual Eternity, based in Scottsdale, Arizona, launched in 2008 and used images and speech patterns to simulate a human's personality, perhaps filling in for someone at a business meeting or chatting with grieving loved ones after a person's death. Writing for CNET, a reviewer dubbed Intellitar the product most likely to make children cry. But soon after the company went under in 2012, its website disappeared. LifeNaut, a project backed by the transhumanist organization Terasem -- which is also known for creating BINA48, a robotic version of Bina Aspen, the wife of Terasem's founder -- will purportedly combine genetic and biometric information with personal datastreams to simulate a full-fledged human being once technology makes it possible to do so. But the project's site itself relies on outmoded Flash software, indicating that the true promise of digital immortality is likely far off and will require updates along the way.

With generative AI, there is speculation that we might be able to create even more convincing facsimiles of humans, including dead ones. But this requires vast resources, including raw materials, water, and energy, pointing to the folly of maintaining chatbots of the dead in the face of catastrophic climate change. It also has astronomical financial costs: ChatGPT purportedly costs $700,000 a day to maintain, and some reports have speculated the expense could bankrupt OpenAI by 2024. This is not a sustainable model for immortality.

There is also the question of who should have the authority to create these replicas in the first place: a close family member, an employer, a company? Not everyone would want to be reincarnated as a chatbot. In a 2021 piece for the San Francisco Chronicle, the journalist Jason Fagone recounts the story of a man named Joshua Barbeau who produced a chatbot version of his long-dead fiancée Jessica using OpenAI's GPT-3. It was a way for him to cope with death and grief, but it also kept him invested in a close romantic relationship with a person who was no longer alive. This was also not the way that Jessica's other loved ones wanted to remember her; family members opted not to interact with the chatbot.

Go here to read the rest:

Using Generative AI to Resurrect the Dead Will Create a Burden for ... - WIRED