Lies, Damn Lies, and Generative Artificial Intelligence: How GAI … – Public Knowledge
By Lisa Macpherson | August 7, 2023
Generative artificial intelligence (AI) has exploded into popular consciousness since the release of ChatGPT to the general public for testing in November 2022. The term refers to machine learning systems that can be used to create new content in response to human prompts after being trained on vast amounts of data. Outputs of generative artificial intelligence may include audio (e.g., Amazon Polly and Murf.AI), code (e.g., CoPilot), images (e.g., Stable Diffusion, Midjourney, and Dall-E), text (e.g., ChatGPT, Llama), and videos (e.g., Synthesia). As has been the case for many advances in science and technology, we're hearing from all sides about the short- and long-term risks as well as the societal and economic benefits of these capabilities.
In this post, we'll discuss the specific risk that broad use of generative artificial intelligence systems will further distort the integrity of our news environment through the creation and spread of false information. We'll also discuss a range of solutions that have been proposed to protect the integrity of our information environment.
Highlighting the Risks of Generative AI for Disinformation
Generative artificial intelligence systems can compound the existing challenges in our information environment in at least three ways: by increasing the number of parties that can create credible disinformation narratives, by making those narratives less expensive to create, and by making them more difficult to detect. If social media made disinformation cheaper and easier to spread, generative AI will make it cheaper and easier to produce. And the traditional cues that alert researchers to false information, like language and syntax issues and cultural gaffes in foreign intelligence operations, will be missing.
ChatGPT, the consumer-facing application of the generative pre-trained transformer (GPT), has already been described as the most powerful tool for spreading misinformation that has ever been on the internet. Researchers at OpenAI, ChatGPT's parent company, have conveyed their own concerns that their systems could be misused by malicious actors motivated by the pursuit of monetary gain, a particular political agenda, and/or a desire to create chaos or confusion. Image generators, like Stability AI's Stable Diffusion, create such realistic images that they may undermine the classic entreaty to "believe your own eyes" in order to determine what is true and what is not.
This isn't just about "hallucinations," the term for when a generative model puts out factually incorrect or nonsensical information. Researchers have already shown that bad actors can use machine-generated propaganda to sway opinions. The impact of generative models on our information environment can be cumulative: Researchers are finding that using content from large language models to train other models pollutes the information environment and produces content that drifts further and further from reality. It all adds a scary new twist to the classic description of the internet as five websites, each consisting of screenshots of text from the other four. What if all those websites were actually training each other on false information, then feeding it to us?
These risks have already created momentum among policymakers to regulate generative AI. The Federal Trade Commission recently demanded that OpenAI provide detailed descriptions of all complaints it has received about its products making false, misleading, disparaging, or harmful statements about people. The White House, House, and Senate are holding hearings or calling for comments about the risks of generative AI in order to steer potential policy interventions. Legislators have called for content authenticity standards; notifications to users when generative AI is used to create content; impact and risk assessments; and certification of high-impact AI systems. And, inevitably, we've already heard "generative AI" and "Section 230" used together in a sentence. (Our position is that the large language models associated with generative AI do not enjoy Section 230 protections.)
So what should we do? It's already clear that a range of solutions will be both desirable and necessary to protect the integrity of our information environment and help restore trust in institutions. But, spoiler alert: few of them pertain specifically to disinformation generated by AI.
Technical Solutions
The explosion of focus on generative AI has ignited a parallel explosion in technological solutions to track digital provenance and ensure content authenticity; that is, tools to help detect which content is created with AI. These tools, some of which come from the creators of AI systems, can be applied at different places in the value chain. For example, Adobe's Firefly generative technology, which will be integrated into Google's Bard chatbot, attaches "nutrition labels" to the content it produces, including the date an image was made and the digital tools used to create it. The Coalition for Content Provenance and Authenticity, a consortium of major technology, media, and consumer products companies, has launched an interoperable verification standard for certifying the source and history (that is, provenance) of media content. Various systems for so-called digital watermarking (modifications of generated text or media that are invisible to people but can be detected by AI using cryptographic techniques) have also been proposed. Several companies, including Meta for its new Llama 2 product, encourage the use of classifiers that detect and filter outputs based on the meaning conveyed by the words chosen. An alternative technical approach to detecting inauthentic content downstream is the use of digital forensics tactics, like tracking the network or device address or conducting reverse image searches for content that has already been posted and shared.
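To make the watermarking idea concrete, below is a minimal sketch of a statistical text-watermark detector of the kind described in the research literature: the generator nudges its sampling toward a pseudorandom "green list" of tokens, and a detector later checks whether green tokens appear more often than chance would allow. The hash-based partition, parameter values, and function names here are our own illustration, not any vendor's actual scheme.

```python
import hashlib
import math

def is_green(prev_tok: int, cur_tok: int, vocab_size: int = 50_000,
             green_ratio: float = 0.5) -> bool:
    """Pseudorandomly assign cur_tok to the 'green list' seeded by the
    preceding token. Illustrative only; real schemes use a secret key."""
    digest = hashlib.sha256(f"{prev_tok}:{cur_tok}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % vocab_size < green_ratio * vocab_size

def watermark_z_score(token_ids: list[int], green_ratio: float = 0.5) -> float:
    """How far the observed count of green tokens deviates from chance.
    Unwatermarked text should score near zero; text generated with a
    bias toward green tokens should score well above it."""
    n = len(token_ids) - 1
    if n <= 0:
        return 0.0
    hits = sum(is_green(p, c) for p, c in zip(token_ids, token_ids[1:]))
    expected = green_ratio * n
    return (hits - expected) / math.sqrt(n * green_ratio * (1 - green_ratio))
```

A real deployment would key the hash with a secret, operate on the model's actual tokenizer output, and pick a decision threshold for the z-score; the point of the sketch is simply that detection is statistical rather than certain, which matters for the limitations discussed next.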
While each of these solutions has its own strengths and weaknesses, even in aggregate they are imperfect and may be outpaced by developments in the technology itself. Early tools, like OpenAI's own classifier, have already been retired because of their low rate of accuracy. Opt-in standards won't be adopted by bad actors; in fact, bad actors may copy, resave, shrink, or crop images, which obscures the signals that AI detectors rely on. Bad actors may also favor earlier, more basic versions of generative AI systems that lack the protections of newer versions. Like the content moderation systems of the dominant platforms, most of the detectors currently struggle with writing that is not in English and can sustain or amplify moderation bias against marginalized groups. In another parallel to content moderation, development of classifier systems can take a heavy toll on human workers. In short, it is unlikely these tools would win a technological arms race with motivated generators of disinformation. And some of these methods raise concerns that they may encourage platforms to detect and moderate certain forms of content too aggressively, threatening free expression.
Content Moderation Solutions
Another range of solutions has to do with how downstream companies, such as search engines and social media platforms, moderate content created by generative AI. Most of their approaches are really extensions of their existing strategies to mitigate disinformation. These include using fact-checking partnerships to verify the veracity of content; labeling problematic content as a means of adding friction to sharing; downranking content from repeat offenders; upranking trusted sources of information; and fingerprinting and sharing known AI-created content across platforms (similar to the processes that already exist for fingerprinting non-consensual intimate images and child sexual abuse materials). In their efforts to avoid partisan debates about censorship and bias, several of the major platforms have also shifted their emphasis from the content of posts to account and behavioral signals, like detecting networks of accounts that amplify each other's messages, large groups of accounts created at the same time, and hashtag flooding.
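To illustrate the fingerprinting approach, here is a minimal sketch of a simple perceptual hash: an image is reduced to a tiny grayscale thumbnail, each pixel is compared to the average brightness, and the resulting bit pattern serves as a compact fingerprint that platforms could share and match against uploads. Production hash-sharing systems use far more robust algorithms; the function names, file names, and matching threshold below are ours, purely for illustration.

```python
from PIL import Image  # pip install Pillow

def average_hash(path: str, hash_size: int = 8) -> int:
    """Compute a simple 'average hash' fingerprint for an image."""
    thumb = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(thumb.getdata())
    avg = sum(pixels) / len(pixels)
    bits = "".join("1" if p > avg else "0" for p in pixels)
    return int(bits, 2)

def hamming_distance(a: int, b: int) -> int:
    """Count of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical usage: compare an upload against a shared list of
# fingerprints of known AI-generated media and flag near-matches.
known = {average_hash("known_ai_image.png")}   # placeholder file name
upload = average_hash("new_upload.jpg")        # placeholder file name
flagged = any(hamming_distance(upload, k) <= 5 for k in known)
```

Because a small Hamming distance still counts as a match, the fingerprint survives minor edits like recompression, which is the property hash-sharing programs rely on; heavier edits, as noted above for AI detectors generally, can still defeat it.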
All of these methods may be helpful if lower cost, higher volume, and more difficult detection are the hallmarks of generative AI in disinformation. Platforms may also use risk assessments to determine where the potential harms are severe enough to warrant specific policies related to AI-generated content. (Elections and public health information are the most prevalent examples. When the stakes are that high, it may warrant prohibitions on certain uses of generative AI or manipulated media.) They could add information about AI-generated content (such as its prevalence, or the types moderated) to existing transparency reports. We would also favor policies that call for more accountability, including legal liability, for paid advertising. We don't have the same concerns about over-moderation of commercial speech.
But all these methods carry the same limits and risks as they do for other forms of content. That includes the risk of over-moderation, which invariably has a particular impact on marginalized communities. As generative AI comes into broader use, users may actually be posting content that is beneficial and entertaining, making strict moderation policies by search and social media platforms undesirable as well as legally problematic. Even when strict policies and enforcement are warranted, their value depends on platforms' willingness and ability to enforce them, including in languages other than English. Do we really want platforms to be the main line of defense against harmful disinformation narratives, given the platforms' history, including on topics of enormous public importance like COVID-19 and elections?
AI Industry Self-Regulation
Until or unless there are government regulations, the field of AI will be governed largely by the ethical frameworks, codes, and practices of its developers and users. (There are exceptions, such as when AI systems have outcomes that are discriminatory.) Virtually every AI developer has articulated its own principles for responsible AI development. These principles may encompass each stage of the product development process, from pretraining and training of data sets to setting boundaries for outputs, and incorporate principles like privacy and security, equity and inclusion, and transparency. Developers also articulate use policies that ostensibly govern what users can generate. For example, OpenAI's usage policies disallow disinformation, as well as hateful, harassing, or violent content and coordinated inauthentic behavior, among other things.
But these policies, no matter how well-intentioned, have significant limits. For example, researchers recently found that the guardrails of both closed systems, like ChatGPT, and open-sourced systems, like Meta's Llama 2 product, can be coaxed into generating biased, false, and violative responses. And, as in every other industry, voluntary standards and self-regulation are subject to daily trade-offs with growth and profit motives. This will be the case even when voluntary standards are agreed to collectively (as with a new industry-led body to develop safety standards) or secured by the White House (as with a new set of commitments announced last week). For the most part, we're talking about the same companies, even some of the same people, whose voluntary standards have proven insufficient to safeguard our privacy, moderate content that threatens democracy, ensure equitable outcomes, and prohibit harassment and hate speech.
Regulatory Solutions
Any discussion of how to regulate disinformation in the United States, no matter how virulent, and no matter how it's created, is bounded by the simple fact that most of it is constitutionally protected speech. Regardless, policymakers are actively exploring whether, or how, to regulate generative (and other) AI. New research shows public support for the federal government taking steps to restrict false information and extremely violent content online. In Public Knowledge's view: Proceed with caution. While there may be room and precedent for content standards for the most destructive "lawful but awful" disinformation (such as networked disinformation that threatens national security and public health and safety), in general user speech is protected speech and free expression values are paramount.
One framework, which begins by comparing AI to nuclear weapons, is grounded in the idea of incremental regulation; that is, regulation that recognizes and accounts for a breadth of use cases and potential benefits as well as harms. It encourages us to focus on applications of the technology, not bans or restrictions on the technology itself. Every sector and use case comes with its own set of ethical dilemmas, technical complexities, stakeholders and policy challenges, and potential transformational benefits from AI. For example, in the case of disinformation, Public Knowledge advocates for solutions that address the harms associated with disinformation, whether they originate with generative AI, Photoshop, troll farms, or your uncle Frank. The resulting policy solutions would encompass things like requirements for risk assessment frameworks and mitigation strategies; transparency on algorithmic decision-making and its outcomes; access to data for qualified researchers; guarantees of due process in content moderation; impact assessments that show how algorithmic systems perform against tests for bias; and enforcement of accountability for the platforms' business model (e.g., paid advertising).
We also need to account for the rapidity of innovation in this sector. One solution that Public Knowledge has favored is an expert and dedicated administrative agency for digital platforms. A dedicated agency should have the authority to conduct oversight and auditing of AI and other algorithmic decision-making products in order to protect consumers and promote civic discourse and democracy. But such an agency should also have broader authorities, including to enhance competition and empower the public to choose platforms and services whose policies align with their values. Data privacy protections are also relevant here, as they would disallow the customization and targeting of content that can make disinformation narratives so potent and so polarizing. But let's implement protections that cover all the data collection, exploitation, and surveillance uses we've discussed for so many years.
The Best Time To Act
To paraphrase an old expression, the best time to act to protect the integrity of our information environment was, well, in 2016; but the second-best time is now. There's been a lot of freaking out about the heightened risks of disinformation due to generative AI as the United States and 49 other countries enter another election cycle in 2024. But generative AI is only one of the new threats in our information environment.
Virtually all of the major platforms have rolled back disinformation policies and protections before the 2024 election cycle. A U.S. District Court judge recently issued a ruling and preliminary injunction limiting contact between Biden administration officials and social media platforms over certain online content, even content relating to national security and public health and safety. There is a powerful new counter-narrative in Congress and the judicial system about the government's role in content moderation, equating it with censorship. Social media platforms, and media in general, seem to be fragmenting. This could be good or bad: Will the popularity of alternative, sometimes highly partisan, platforms send the conspiracy theorists back underground, made less dangerous because they are less able to find one another, connect, and communicate? Could more cohesive online communities with more in common increase the civility of these platforms? Or will the end of a few dominant digital gatekeepers mean even greater sequestering and polarization? And what happens if Twitter (or X) does implode like the Titan submersible, and its wonky, highly influential user base of journalists, politicians, and experts disbands and can't find one another to connect the dots on world events?
It will take a whole-of-society approach to restore trust in our information environment, and we need to accelerate solutions that have already been proposed. We favor solutions that equip civil society to identify false information and allow all Americans to make informed choices about what information they share. We should enable research into how disinformation is seeded and spread and how to counteract it. Policymakers should create incentives for the technology platforms to change their policies and product design, and they should foster more competition and choice among media outlets. Civil society should convene stakeholders, including from the communities most impacted by misinformation, to research and design solutions, all while protecting privacy and freedom of expression. And we should use policy to solve the collapse of local news, since it has opened information voids that disinformation rushes in to fill.
Let's not waste a crisis, even if it's a false one. Let's focus the explosion of attention on generative AI and its threats to democracy into productive solutions to the challenges and harms of disinformation we've been facing for years.