Archive for the ‘Artificial Intelligence’ Category

Congress wrestles with AI's boost to campaigns, potential misuse – Roll Call

Lawmakers pushing ahead with some of the first bills governing artificial intelligence are confronting old problems as they deal with a new technology.

At a recent Senate committee markup of legislation that would prohibit the distribution of deceptive AI-generated content in campaigns for federal office and require disclosure when AI is used, some Republicans espoused support for the measures' ideals while voting against them, citing the potential limits on free speech.

"We have to balance the potential for innovation with the potential for deceptive or fraudulent use," Nebraska Republican Sen. Deb Fischer, ranking member of the Senate Rules and Administration Committee, said at the markup. "On top of that, we can't lose sight of the important protections our Constitution provides for free speech in this country. These two bills do not strike that careful balance."

Political battles are only likely to get more intense over AI as campaigns increasingly rely on it to fine-tune messages and find target audiences and others use it to spread disinformation.

The technology is here to stay, proponents say, because AI greatly increases efficiency.

"Campaigns can positively benefit from AI-derived messaging," said Mark Jablonowski, who is president of DSPolitical, a digital advertising company that works with Democratic candidates and progressive causes, and chief technology officer at its parent, Optimal. "Our clients are using AI successfully to create messaging tracks."

But consultants, lawmakers, and government officials say the same tools that boost efficiency in campaigns will spread disinformation or impersonate candidates, causing confusion among voters and likely eroding confidence in the electoral process.

Senate Majority Leader Charles E. Schumer, D-N.Y., echoed those concerns at the Rules markup.

"If deepfakes are everywhere and no one believes the results of the elections, woe is our democracy," he said. "This is so damn serious."

Sen. Amy Klobuchar, D-Minn., the committee's chairwoman, said AI tools have the potential to turbocharge the spread of disinformation and deceive voters.

The panel advanced a measure that would prohibit deceptive AI in campaigns for federal office and one that would require disclaimers when AI is used, both on 9-2 votes with GOP lawmakers casting the opposing votes.

Klobuchar said she would be open to changes to address concerns raised by Republicans.

A third measure requiring the Election Assistance Commission to develop guidelines on the uses and risks of AI advanced on an 11-0 vote.

Campaign workers may enter a few prompts into generative AI tools that then spit out 50 or 60 unique messaging tracks, with workers choosing the top three or four that really hit the mark, Jablonowski said in an email. "There are many efficiency gains helping campaigns do more with less and create a more meaningful message, which is very important in politics."

Consultants and digital advertising firms now have access to more than two dozen AI-based tools that assist with various aspects of political campaigns, ranging from those that generate designs, ads and video content to those that produce emails, op-eds and media monitoring platforms, according to Higher Ground Labs, a venture fund that invests in tech platforms to help progressive candidates and causes.

AI-generated content is becoming increasingly prevalent in political communications, particularly in content generation across images, video, audio, and text, Higher Ground Labs said in a May 23 report. "Human oversight remains critical to ensure quality, accuracy and ethical use," the report said.

The report cited one study that found that using AI tools to generate fundraising emails grew dollars raised per work hour by 350 percent to 440 percent. The tools helped save time without losing quality even when employed by less experienced staffers, the report said.

AI tools also are helping campaigns with audience targeting. In 2023, Boise, Idaho, Mayor Lauren McLean built a target audience group using an AI tool that proved to be more capable in identifying supporters and outperformed standard partisanship models, according to the Higher Ground report.

But even the consultants who rely on these new technologies are aware of the downsides.

"I won't sugarcoat it. As someone who has been in this space for two decades, this is the sort of Venn diagram I focus on," Jablonowski said, referring to the intersection of AI tools and those who might misuse them. "I think we're going to see a lot of good coming from AI this year, and we're going to see significant potential challenges from bad actors. This keeps me up at night."

Beyond legislation, the Federal Communications Commission is looking into new rules that would require disclosures on messages generated using AI tools.

"As artificial intelligence tools become more accessible, the Commission wants to make sure consumers are fully informed when the technology is used," FCC Chair Jessica Rosenworcel said in a May 22 statement. "Today, I've shared with my colleagues a proposal that makes clear consumers have a right to know when AI tools are being used in the political ads they see."

The FCC said the proposed rules are not intended to prohibit content generated by AI but to require disclosure when the technology is used.

States also are racing to pass laws that would require campaigns and candidates to disclose use of AI tools in their messages.

Alabama became the latest state to enact a law criminalizing the use of AI in election campaigning. The measure, passed last month, makes distributing AI-generated deepfakes that falsely show a person saying or doing something they did not a misdemeanor for a first offense and a felony for subsequent violations.

Florida legislation signed into law by Gov. Ron DeSantis in April likewise would impose prison terms for running AI-generated ads without disclosure.

Several other states have enacted laws requiring disclosure of AI in generating messages and ads and imposing civil penalties for failing to do so.

Deepfake messages are not theoretical. Last week, the FCC issued a proposed $6 million fine against Steve Kramer, a political consultant, for organizing a fake robocall in New Hampshire that authorities say reached 20,000 or more voters, in which an artificial-intelligence-doctored voice of President Joe Biden asked them to skip the state's primary in January. Kramer admitted he was behind the call in an interview with CBS News.

New Hampshire Attorney General John Formella charged Kramer with voter suppression, a felony, and with misdemeanor impersonation of a candidate. A spokesman for the attorney general's office said Kramer is set to be arraigned in early June.

Jablonowski argued that some bad actors may break the rules regardless of whether laws require disclosure because the payoff might be worth any potential consequences.

"It is particularly concerning that people who use generative AI maliciously are not going to be following the rules no matter what industry and regulators say," Jablonowski said. "Requiring folks to label content as being created by generative AI only works if people follow those rules with fidelity."

One way to stem the spread of fake messages is for social media platforms to curb them, Jablonowski said. Meta Platforms Inc., for example, requires disclosure for ads using AI. And the company has said it will label AI-generated images on Facebook, Instagram and Threads.

Nick Clegg, president of global affairs at Meta, told MIT Technology Review at a May 22 conference that the company has yet to see large-scale use of AI-generated deepfakes on its platforms.

"The interesting thing so far, and I stress so far, is not how much but how little AI-generated content [there is]," Clegg said at the conference.

Tools to detect AI-generated material are not perfect and still evolving, and watermarks or digital signatures indicating AI-generated content can be tampered with, he said.

In addition to AI-generated deepfakes, social media platforms are still grappling with old-fashioned misinformation on their platforms, Jablonowski said, creating likely confusion and distrust among voters. Despite laws and actions by platforms, "people bent on using AI to create confusion are going to do whatever they think they can get away with," said Jerry McNerney, a senior policy adviser at the law firm of Pillsbury Winthrop Shaw Pittman LLP.

McNerney is a former member of Congress who was co-chair of the Congressional Artificial Intelligence Caucus.

"Trying to keep ahead of [such bad actors] with specific prohibitions is going to be a losing battle," McNerney said in an interview, arguing that federal agencies and industry groups may have to come up with standards that are enforceable. "You need something more systemic."


Types of Artificial Intelligence That You Should Know in 2024 – Simplilearn

The use and scope of Artificial Intelligence don't need a formal introduction. Artificial Intelligence is no longer just a buzzword; it has become a reality that is part of our everyday lives. With companies building intelligent machines for diverse applications using AI, it is revolutionizing business sectors like never before. You will learn about the various stages and categories of artificial intelligence in this article on types of Artificial Intelligence.

Artificial Intelligence is the process of building intelligent machines from vast volumes of data. Systems learn from past data and experience and perform human-like tasks. It enhances the speed, precision, and effectiveness of human efforts. AI uses complex algorithms and methods to build machines that can make decisions on their own. Machine Learning and Deep Learning form the core of Artificial Intelligence.

AI is now being used in almost every sector of business.

Now that you know what AI really is, let's look at the different types of artificial intelligence.

Artificial Intelligence can be broadly classified into several types based on capabilities, functionalities, and technologies. Here's an overview of the different types of AI:

Narrow AI (Weak AI): This type of AI is designed to perform a narrow task (e.g., facial recognition, internet searches, or driving a car). Most current AI systems, including those that can play complex games like chess and Go, fall under this category. They operate under a limited, pre-defined range or set of contexts.

General AI (AGI): A type of AI endowed with broad human-like cognitive capabilities, enabling it to tackle new and unfamiliar tasks autonomously. Such a robust AI framework possesses the capacity to discern, assimilate, and utilize its intelligence to resolve any challenge without needing human guidance.

Superintelligent AI (ASI): This represents a future form of AI where machines could surpass human intelligence across all fields, including creativity, general wisdom, and problem-solving. Superintelligence is speculative and not yet realized.

Reactive machines: These AI systems do not store memories or past experiences for future actions. They analyze and respond to different situations. IBM's Deep Blue, which beat Garry Kasparov at chess, is an example.

Limited memory: These AI systems can make informed and improved decisions by studying the past data they have collected. Most present-day AI applications, from chatbots and virtual assistants to self-driving cars, fall into this category.

Theory of mind: This is a more advanced type of AI that researchers are still working on. It would entail understanding and remembering emotions, beliefs, and needs, and making decisions based on them. This type requires the machine to truly understand humans.

Self-aware AI: This represents the future of AI, where machines will have their own consciousness, sentience, and self-awareness. This type of AI is still theoretical and would be capable of understanding and possessing emotions, which could lead them to form beliefs and desires.

Machine learning (ML): AI systems capable of self-improvement through experience, without direct programming. They concentrate on creating software that can independently learn by accessing and utilizing data.

Deep learning: A subset of ML involving many layers of neural networks. It is used for learning from large amounts of data and is the technology behind voice control in consumer devices, image recognition, and many other applications.
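The "layers" idea can be illustrated with a toy Python sketch: each layer computes weighted sums of its inputs and applies a nonlinearity, and stacking layers lets a network build richer features. The weights below are made up for the example; in a real network they would be learned from data.

```python
def relu(v):
    """Nonlinearity applied between layers: negative values become 0."""
    return [max(0.0, x) for x in v]

def layer(inputs, weights):
    """One dense layer: a weighted sum of the inputs per output unit."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

w1 = [[0.5, -0.2], [0.1, 0.9]]   # layer 1: 2 inputs -> 2 hidden units
w2 = [[1.0, -1.0]]               # layer 2: 2 hidden units -> 1 output

x = [2.0, 1.0]                   # an input vector
hidden = relu(layer(x, w1))      # first layer's features
output = layer(hidden, w2)       # final prediction

print(round(output[0], 6))  # -0.3
```

A deep network is just this pattern repeated many times, with the weight matrices tuned by a training procedure rather than written by hand.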

Natural language processing (NLP): This AI technology enables machines to understand and interpret human language. It's used in chatbots, translation services, and sentiment analysis applications.

Robotics: This field involves designing, constructing, operating, and using robots, along with the computer systems that control them, provide sensory feedback, and process information.

Computer vision: This technology allows machines to interpret the world visually, and it's used in applications such as medical image analysis, surveillance, and manufacturing.

Expert systems: These AI systems answer questions and solve problems in a specific domain of expertise using rule-based systems.
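A rule-based expert system can be sketched in a few lines of Python. The rules below are invented for illustration (a toy triage domain); a real expert system would encode knowledge elicited from human experts, with no learning involved.

```python
# Each rule pairs a condition over the known facts with a conclusion.
# Rules are checked in order of specificity; the first match wins.
RULES = [
    (lambda f: "fever" in f and "rash" in f, "consider measles; see a doctor"),
    (lambda f: "fever" in f,                 "likely viral infection; rest"),
    (lambda f: True,                         "no rule matched; gather more facts"),
]

def diagnose(findings):
    """Return the advice of the first rule whose condition holds."""
    for condition, advice in RULES:
        if condition(findings):
            return advice

print(diagnose({"fever", "rash"}))  # consider measles; see a doctor
print(diagnose({"cough"}))          # no rule matched; gather more facts
```

Unlike machine learning, nothing here improves with data: the system is only as good as its hand-written rules.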


AI research has successfully developed effective techniques for solving a wide range of problems, from game playing to medical diagnosis.

There are many branches of AI, each with its own focus and set of techniques. Essential branches include machine learning, deep learning, natural language processing, robotics, computer vision, and expert systems.

We might be far from creating machines that can solve every problem and are self-aware. But we should focus our efforts on understanding how a machine can train and learn on its own and base its decisions on past experience.

I hope this article helped you understand the different types of artificial intelligence. If you are looking to start your career in Artificial Intelligence and Machine Learning, check out Simplilearn's Post Graduate Program in AI and Machine Learning.

Do you have any questions regarding this article? If so, please leave them in the comments section, and our team will help resolve your queries at the earliest!

An AI model is a mathematical model used to make predictions or decisions. There are several common types of AI models.

There are two main categories of AI.

The father of AI is John McCarthy. He is a computer scientist who coined the term "artificial intelligence" in 1955. McCarthy is also credited with developing the first AI programming language, Lisp.


The Scariest Part About Artificial Intelligence – The New Republic

This problem is finally getting a small piece of the attention it deserves, thanks to recent coverage by the Financial Times, Nature, and The Atlantic. But the tech industry's fossil fuel-like tactics of greenwashing, gaslighting, and refusing to comment are going to make thorough reporting on this difficult. The closest we've gotten to candor came when OpenAI founder Sam Altman admitted at Davos that A.I. will consume much more energy than expected, straining our grids. He admitted that the situation could become untenable: "There's no way to get there without a breakthrough."

Researching this issue gives one all the feelings of a dystopian twentieth-century sci-fi movie about parasitical robots stealing our human essence and ultimately killing us off. At every point, we want to yell at the screen, "Don't let the robots in there!" We wonder: Can't they be stopped?

If there were an A.I. abolition movement, I'd join it today, ideally advocating exuberantly cruel penalties for the tech moguls who have ensnared us in this destructive and frivolous gambit. But being of a more constructive bent, Green New Deal co-author and Massachusetts Senator Ed Markey last month introduced the Artificial Intelligence Environmental Impacts Act of 2024. It's unfortunately mild, calling upon government agencies to do what the industry isn't doing: measure and investigate A.I.'s environmental footprint. It's perhaps a politically feasible first step, especially given bipartisan social and cultural concerns about A.I.


Artificial intelligence vs machine learning: what’s the difference? – ReadWrite

There are so many buzzwords in the tech world these days that keeping up with the latest trends can be challenging. Artificial intelligence (AI) has been dominating the news, so much so that AI was named the most notable word of 2023 by Collins Dictionary. However, specific terms like machine learning have often been used instead of AI.

Introduced by American computer scientist Arthur Samuel in 1959, the term "machine learning" describes a computer's ability to learn without being explicitly programmed.

For one, machine learning (ML) is a subset of artificial intelligence (AI). While they are often used interchangeably, especially when discussing big data, these popular technologies have several distinctions, including differences in their scope, applications, and beyond.

Most people are now aware of this concept. Still, artificial intelligence actually refers to a collection of technologies integrated into a system, allowing it to think, learn, and solve complex problems. It has the capacity to copy cognitive abilities similar to human beings, enabling it to see, understand, and react to spoken or written language, analyze data, offer suggestions, and beyond.

Meanwhile, machine learning is just one area of AI that enables a machine or system to automatically learn and improve from experience. Rather than relying on explicit programming, it uses algorithms to sift through vast datasets, extract learning from the data, and then utilize this to make well-informed decisions. The "learning" part is that it improves over time through training and exposure to more data.

Machine learning models are the results or knowledge the program acquires by running an algorithm on training data. The more data used, the better the models performance.
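To make the "model" idea concrete, here is a minimal Python sketch with invented numbers: the learning algorithm is ordinary least squares, the training data is a handful of noisy points, and the resulting model is nothing more than the two learned parameters of a line.

```python
def fit_line(xs, ys):
    """Run the learning algorithm: return (w, b) minimizing squared error.
    The returned pair of numbers IS the trained model."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

def predict(model, x):
    """Use the trained model to make a prediction for a new input."""
    w, b = model
    return w * x + b

# Toy training data roughly following y = 2x + 1
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 10.8]

model = fit_line(xs, ys)
print(predict(model, 6))  # close to 13
```

With more (and more representative) training points, the fitted parameters get closer to the true relationship, which is the sense in which more data improves the model's performance.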

Machine learning is an aspect of AI that enables machines to take knowledge from data and learn from it. In contrast, AI represents the overarching principle of allowing machines or systems to understand, reason, act, or adapt like humans.

Hence, think of AI as the entire ocean, encompassing various forms of marine life. Machine learning is like a specific species of fish in that ocean. Just as this species lives within the broader environment of the ocean, machine learning exists within the realm of AI, representing just one of many elements or aspects. However, it is still a significant and dynamic part of the entire ecosystem.

Machine learning cannot impersonate human intelligence, which is not its aim. Instead, it focuses on building systems that can independently learn from and adapt to new data through identifying patterns. AIs goal, on the other hand, is to create machines that can operate intelligently and independently, simulating human intelligence to perform a wide range of tasks, from simple to highly complex ones.

For example, when you receive emails, your email service uses machine learning algorithms to filter out spam. The ML system has been trained on vast datasets of emails, learning to distinguish between spam and non-spam by recognizing patterns in the text, sender information, and other attributes. Over time, it adapts to new types of spam and your personal preferences, like which emails you mark as spam, continually improving its accuracy.
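A highly simplified Python sketch of this idea follows, with a tiny made-up training set standing in for the vast datasets real filters use. Word counts learned from labeled examples play the role of the trained model; real spam filters use far richer features and statistics.

```python
from collections import Counter

# Toy labeled training data (invented for the example)
spam_train = ["win money now", "free prize claim now", "free money"]
ham_train  = ["meeting at noon", "project status update", "lunch at noon?"]

# "Training": count how often each word appears in spam vs. non-spam
spam_counts = Counter(w for msg in spam_train for w in msg.split())
ham_counts  = Counter(w for msg in ham_train for w in msg.split())

def is_spam(message):
    """Label as spam if its words were seen more often in spam than ham."""
    words = message.lower().split()
    spam_score = sum(spam_counts[w] for w in words)
    ham_score  = sum(ham_counts[w] for w in words)
    return spam_score > ham_score

print(is_spam("claim your free prize"))   # True
print(is_spam("status update for lunch")) # False
```

Retraining on newly labeled messages (including the ones you mark as spam) updates the counts, which is the toy version of the filter adapting over time.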

In this scenario, your email provider may use AI to offer smart replies, sort emails into categories (like social, promotions, primary), and even prioritize essential emails. This AI system understands the context of your emails, categorizes them, and suggests short responses based on the content it analyzes. It mimics a high level of understanding and response generation that usually requires human intelligence.

There are three main types of machine learning: supervised, unsupervised, and reinforcement learning, along with specialized forms such as semi-supervised learning.

In supervised learning, the machine is taught by an operator. The user supplies the machine learning algorithm with a recognized dataset containing specific inputs paired with their correct outputs, and the algorithm has to figure out how to produce these outputs from the given inputs. Although the user is aware of the correct solutions, the algorithm needs to identify patterns, all while learning from them and making predictions. If the predictions have errors, the user has to correct them, and this cycle repeats until the algorithm reaches a substantial degree of accuracy or performance.
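The predict-correct-repeat cycle described above can be sketched with a toy perceptron in Python. The data is invented for the example: inputs paired with their known correct outputs, with the algorithm nudged toward the right answer whenever its prediction is wrong.

```python
# (input, correct output): label 1 when x is large, 0 when it is small
data = [(1, 0), (2, 0), (3, 0), (6, 1), (8, 1), (9, 1)]

w, b = 0, 0
for _ in range(20):                       # repeat the correction cycle
    for x, label in data:
        pred = 1 if w * x + b > 0 else 0  # the algorithm's guess
        error = label - pred              # known answer corrects the guess
        w += error * x                    # adjust only when wrong
        b += error

print([1 if w * x + b > 0 else 0 for x, _ in data])  # [0, 0, 0, 1, 1, 1]
```

After a handful of passes the corrections stop, meaning the learned rule reproduces every known answer; that is the "substantial degree of accuracy" at which the cycle ends.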

Semi-supervised learning falls between supervised and unsupervised learning, training on a mix of labeled and unlabeled data. Labeled data consists of information tagged with meaningful labels, allowing the algorithm to understand the data, whereas unlabeled data does not contain these informative tags. Using this mix, machine learning algorithms can be trained to assign labels to unlabeled data.

Unsupervised learning involves training the algorithm on a dataset without explicit labels or correct answers. The goal is for the model to identify patterns and relationships in the data by itself. It tries to learn the underlying structure of the data to categorize it into clusters or spread it along dimensions.
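A minimal Python sketch of this idea: two-means clustering of unlabeled one-dimensional points (toy data, invented for the example). No correct answers are supplied; the algorithm discovers the grouping on its own.

```python
# Unlabeled data with two obvious groups the algorithm must find itself
points = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]

c1, c2 = points[0], points[-1]   # start with two guessed cluster centers

for _ in range(10):
    # Assign each point to its nearest center...
    a = [p for p in points if abs(p - c1) <= abs(p - c2)]
    b = [p for p in points if abs(p - c1) > abs(p - c2)]
    # ...then move each center to the mean of its assigned points
    c1, c2 = sum(a) / len(a), sum(b) / len(b)

print(sorted([c1, c2]))  # [1.5, 10.5]
```

The final centers summarize the structure the algorithm found in the data, even though nothing ever told it which points belonged together.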

Finally, reinforcement learning looks at structured learning approaches, in which a machine learning algorithm is given a set of actions, parameters, and goals. The algorithm then has to navigate through various scenarios by experimenting with different strategies, assessing each outcome to identify the most effective approach. It employs a trial-and-error approach, drawing on previous experiences to refine its strategy and adjust its actions according to the given situation, all to achieve the best possible result.
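The trial-and-error loop can be sketched as a tiny two-armed bandit in Python. The payout probabilities are invented for the example and hidden from the agent, which must discover the better action purely by experimenting and tracking the rewards it observes.

```python
import random

random.seed(0)
true_payout = {"a": 0.2, "b": 0.8}   # hidden from the agent
estimate = {"a": 0.0, "b": 0.0}      # the agent's learned value of each action
pulls = {"a": 0, "b": 0}

for step in range(1000):
    if random.random() < 0.1:                    # occasionally explore
        action = random.choice(["a", "b"])
    else:                                        # otherwise exploit best estimate
        action = max(estimate, key=estimate.get)
    reward = 1 if random.random() < true_payout[action] else 0
    pulls[action] += 1
    # Update the running average of rewards observed for this action
    estimate[action] += (reward - estimate[action]) / pulls[action]

print(max(estimate, key=estimate.get))  # "b", the better action
```

The small exploration probability is what lets the agent keep testing alternative strategies instead of locking onto the first action that happens to pay off.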

In financial contexts, AI and machine learning serve as essential tools for tasks like identifying fraudulent activities, forecasting risks, and offering proactive financial guidance. AI-driven platforms can now offer personalized educational content based on an individual's financial behavior and needs. By delivering bite-sized, relevant information, these platforms help users make informed financial decisions, leading to better credit scores over time. Nvidia AI posted on X that generative AI is being incorporated into curricula.

During the Covid-19 pandemic, machine learning also gave insights into the most urgent events. These technologies are also powerful tools for cybersecurity, helping organizations protect themselves and their customers by detecting anomalies. Mobile app developers have integrated numerous algorithms to help make their apps fraud-resistant for financial institutions.



AI singularity may come in 2027 with artificial ‘super intelligence’ sooner than we think, says top scientist – Livescience.com

Humanity could create an artificial intelligence (AI) agent that is just as smart as humans in as soon as the next three years, a leading scientist has claimed.

Ben Goertzel, a computer scientist and CEO of SingularityNET, made the claim during the closing remarks at the Beneficial AGI Summit 2024 on March 1 in Panama City, Panama. He is known as the "father of AGI" after helping to popularize the term artificial general intelligence (AGI) in the early 2000s.

The best AI systems in deployment today are considered "narrow AI" because they may be more capable than humans in one area, based on training data, but can't outperform humans more generally. These narrow AI systems, which range from machine learning algorithms to large language models (LLMs) like ChatGPT, struggle to reason like humans and understand context.

However, Goertzel noted AI research is entering a period of exponential growth, and the evidence suggests that artificial general intelligence (AGI), where AI becomes just as capable as humans across several areas independent of the original training data, is within reach. This hypothetical point in AI development is known as the "singularity."

Goertzel suggested 2029 or 2030 could be the likeliest years when humanity will build the first AGI agent, but that it could happen as early as 2027.


If such an agent is designed to have access to and rewrite its own code, it could then very quickly evolve into an artificial super intelligence (ASI), which Goertzel loosely defined as an AI that has the cognitive and computing power of all of human civilization combined.

"No one has created human-level artificial general intelligence yet; nobody has a solid knowledge of when we're going to get there. I mean, there are known unknowns and probably unknown unknowns. On the other hand, to me it seems quite plausible we could get to human-level AGI within, let's say, the next three to eight years," Goertzel said.

He pointed to "three lines of converging evidence" to support his thesis. The first is modeling by computer scientist Ray Kurzweil in the book "The Singularity is Near" (Viking USA, 2005), which has been refined in his forthcoming book "The Singularity is Nearer" (Bodley Head, June 2024). In his book, Kurzweil built predictive models that suggest AGI will be achievable in 2029, largely centering on the exponential nature of technological growth in other fields.

Goertzel also pointed to improvements made to LLMs within the past few years, which have "woken up so much of the world to the potential of AI." He clarified that LLMs by themselves will not lead to AGI, because the way they represent knowledge doesn't reflect genuine understanding, but that LLMs may be one component in a broad set of interconnected architectures.

The third piece of evidence, Goertzel said, lay in his work building such an infrastructure, which he has called "OpenCog Hyperon," as well as associated software systems and a forthcoming AGI programming language, dubbed "MeTTa," to support it.

OpenCog Hyperon is a form of AI infrastructure that involves stitching together existing and new AI paradigms, including LLMs as one component. The hypothetical endpoint is a large-scale distributed network of AI systems based on different architectures that each help to represent different elements of human cognition from content generation to reasoning.

Such an approach is a model other AI researchers have backed, including Databricks CTO Matei Zaharia in a blog post he co-authored on Feb. 18 on the Berkeley Artificial Intelligence Research (BAIR) website.

Goertzel admitted, however, that he "could be wrong" and that we may need a "quantum computer with a million qubits or something."

"My own view is once you get to human-level AGI, within a few years you could get a radically superhuman AGI unless the AGI threatens to throttle its own development out of its own conservatism," Goertzel added. "I think once an AGI can introspect its own mind, then it can do engineering and science at a human or superhuman level. It should be able to make a smarter AGI, then an even smarter AGI, then an intelligence explosion. That may lead to an increase in the exponential rate beyond even what Ray [Kurzweil] thought."
