Archive for the ‘Artificial Intelligence’ Category

Demystifying AI: The Probability Theory Behind LLMs Like OpenAI’s ChatGPT – PYMNTS.com

When a paradigm shift occurs, it is not always obvious to those affected by it.

But there is no "eye of the storm" equivalent when it comes to generative artificial intelligence (AI): this shift is impossible to miss.

The technology is here. There are already various commercial products available for deployment, and organizations that can effectively leverage it in support of their business goals are likely to outperform their peers that fail to adopt the innovation.

Still, as with many innovations, uncertainty and institutional inertia reign supreme, which is why understanding how the large language models (LLMs) powering AI work is critical not just to piercing the black box of the technology's supposed inscrutability, but also to applying AI tools correctly within an enterprise setting.

The most important thing to understand about the foundational models powering today's AI interfaces and giving them their ability to generate responses is the simple fact that LLMs, like Google's Bard, Anthropic's Claude, OpenAI's ChatGPT and others, are just adding one word at a time.

Underneath the layers of sophisticated algorithmic calculations, that's all there is to it.

That's because at a fundamental level, generative AI models are built to generate reasonable continuations of text by drawing from a ranked list of words, each given different weighted probabilities based on the data set the model was trained on.
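That ranked-list mechanic is easy to see in miniature. The following is a toy Python sketch with an invented four-word vocabulary and hand-picked scores standing in for what a real model computes from billions of learned parameters: scores become probabilities via a softmax, the candidates are ranked, and one continuation is sampled.

```python
import math
import random

# Toy scores for candidate continuations of the prompt "The cat sat on
# the". Both the vocabulary and the numbers are invented for
# illustration; a real LLM derives them from its learned parameters.
scores = {"mat": 5.1, "sofa": 3.8, "roof": 2.9, "moon": 0.4}

def softmax(raw):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {w: math.exp(s) for w, s in raw.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

probs = softmax(scores)

# The ranked list of "reasonable continuations", highest probability first.
for word, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{word:>5}: {p:.3f}")

# Pick one word; generation is this step repeated, one word at a time.
next_word = random.choices(list(probs), weights=list(probs.values()))[0]
print("chosen continuation:", next_word)
```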

Read more: There Are a Lot of Generative AI Acronyms. Here's What They All Mean

While news of AI that can surpass human intelligence is helping fuel the technology's hype, the reality is driven far more by math than by myth.

"It is important for everyone to understand that AI learns from data. At the end of the day, [AI] is merely probabilities and statistics," Akli Adjaoute, AI pioneer and founder and general partner at venture capital fund Exponion, told PYMNTS in November.

But where do the probabilities that determine an AI system's output originate?

The answer lies within the AI model's training data. Peeking into the inner workings of an AI model reveals that it is not just the next reasonable word that is being identified, weighted, then generated: the process operates token by token, as AI models break words apart into smaller, more manageable units called tokens.
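Here is a minimal sketch of that splitting step, using a hand-written subword vocabulary and a greedy longest-match rule. Real models learn their token inventories from data (typically with methods such as byte-pair encoding), so this illustrates the idea rather than any particular model's tokenizer.

```python
# Invented subword vocabulary; real models learn theirs from training
# data instead of using a hand-written list like this one.
VOCAB = {"un", "break", "able", "token", "iz", "ation", "prob", "abil", "ity"}

def tokenize(word, vocab):
    """Greedy longest-match split of a word into known subword tokens."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: keep it as-is
            i += 1
    return tokens

for word in ["unbreakable", "tokenization", "probability"]:
    print(word, "->", tokenize(word, VOCAB))
```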

That is a big part of why prompt engineering for AI models is an emerging skill set. After all, different prompts produce different outputs based on the probabilities inherent to each reasonable continuation, meaning that to get the best output, you need to have a clear idea of where to point the provided input or query.

It also means that the data informing the weight given to each probabilistic outcome must be relevant to the query. The more relevant, the better.
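A simple way to see both points (the probabilities come from training data, and relevance matters) is to estimate next-word probabilities by counting adjacent word pairs in two tiny invented corpora. Conditioned on the same preceding word, the two "models" rank continuations completely differently.

```python
from collections import Counter

# Two tiny invented "training sets": one relevant to payments, one not.
payments = ("the payment was flagged as fraud . the payment was approved . "
            "the payment was flagged for review .").split()
generic = "the weather was sunny . the weather was mild .".split()

def next_word_probs(corpus, prev):
    """Estimate P(next word | prev) from simple pair counts."""
    counts = Counter(b for a, b in zip(corpus, corpus[1:]) if a == prev)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Same query word, very different weighted continuations: the ranking
# is entirely a product of what the "model" was trained on.
print("payments data:", next_word_probs(payments, "was"))
print("generic data: ", next_word_probs(generic, "was"))
```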

See also: Tailoring AI Solutions by Industry Key to Scalability

While PYMNTS Intelligence has found that more than eight in 10 business leaders (84%) believe generative AI will positively impact the workforce, generative AI systems are only as good as the data they're trained on. That's why the largest AI players are in an arms race to acquire the best training data sets.

"There's a long way to go before there's a futuristic version of AI where machines think and make decisions. Humans will be around for quite a while," Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March. "And the more that we can write software that has payments data at the heart of it to help humans, the better payments will get."

That's why, to train an AI model to perform to the necessary standard, many enterprises are relying on their own internal data to avoid compromising model outputs. By creating vertically specialized LLMs trained for industry use cases, organizations can deploy AI systems that are able to find the signal within the noise, as well as to be further fine-tuned to business-specific goals with real-time data.
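As a loose, hypothetical analogy for that fine-tuning step, one can extend the counting idea above: start from counts learned on broad data, then over-weight counts from an internal corpus so domain-specific usage dominates the rankings. Real fine-tuning adjusts neural network weights rather than counts, so this is only a sketch of the principle; the corpora and the weight are invented.

```python
from collections import Counter

def pair_counts(text):
    """Count adjacent word pairs in a tiny invented corpus."""
    words = text.split()
    return Counter(zip(words, words[1:]))

# Hypothetical broad data vs. internal enterprise data (both invented).
base = pair_counts("the charge was posted . the charge was high .")
internal = pair_counts("the charge was disputed . the charge was reversed .")

DOMAIN_WEIGHT = 5  # emphasize in-house data during the "fine-tune" pass
tuned = base.copy()
for pair, c in internal.items():
    tuned[pair] += DOMAIN_WEIGHT * c

# After tuning, continuations of "was" are dominated by domain usage.
after_was = {b: c for (a, b), c in tuned.items() if a == "was"}
total = sum(after_was.values())
print({w: round(c / total, 2) for w, c in after_was.items()})
```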

As Akli Adjaoute told PYMNTS back in November, "if you go into a field where the data is real, particularly in the payments industry, whether it's credit risk, whether it's delinquency, whether it's AML [anti-money laundering], whether it's fraud prevention, anything that touches payments, AI can bring a lot of benefit."

For all PYMNTS AI coverage, subscribe to the daily AI Newsletter.

Read the rest here:
Demystifying AI: The Probability Theory Behind LLMs Like OpenAI's ChatGPT - PYMNTS.com

The Urgent but Difficult Task of Regulating Artificial Intelligence – Amnesty International

By David Nolan, Hajira Maryam & Michael Kleinman, Amnesty Tech

The year 2023 marked a new era of AI hype, rapidly steering policy makers towards discussions on the safety and regulation of new artificial intelligence (AI) technologies. The feverish year in tech started with the launch of ChatGPT in late 2022 and ended with a landmark agreement on the EU AI Act being reached. Whilst the final text is still being ironed out in technical meetings over the coming weeks, early signs indicate the Western world's first AI rulebook goes some way towards protecting people from the harms of AI but still falls short in a number of crucial areas, failing to ensure human rights protections, especially for the most marginalised. This came soon after the UK Government hosted an inaugural AI Safety Summit in November 2023, where global leaders, key industry players, and select civil society groups gathered to discuss the risks of AI. Although the growing momentum and debate on AI governance is welcome and urgently needed, the key question for 2024 is whether these discussions will generate concrete commitments, focus on the most important present-day AI risks, and, critically, translate into further substantive action in other jurisdictions.

Whilst AI developments do present new opportunities and benefits, we must not ignore the documented dangers posed by AI tools when they are used as a means of societal control, mass surveillance and discrimination. All too often, AI systems are trained on massive amounts of private and public data, data which reflects societal injustices and often leads to biased outcomes that exacerbate inequalities. From predictive policing tools, to automated systems used in public sector decision-making to determine who can access healthcare and social assistance, to monitoring the movement of migrants and refugees, AI has flagrantly and consistently undermined the human rights of the most marginalised in society. Other forms of AI, such as fraud detection algorithms, have also disproportionately impacted ethnic minorities, who have endured devastating financial problems, as Amnesty International has documented, while facial recognition technology has been used by police and security forces to target racialised communities and entrench Israel's system of apartheid.

So, what makes regulation of AI complex and challenging? First, there is the vague nature of the term AI itself, which makes efforts to regulate the technology more cumbersome. There is no widespread consensus on the definition of AI, because the term does not refer to a singular technology but rather encapsulates a myriad of technological applications and methods. Because AI systems are used in many different domains across the public and private sectors, a large number of varied stakeholders are involved in their development and deployment; such systems are a product of labour, data, software and financial inputs, and any regulation must grapple with both upstream and downstream harms. Further, these systems cannot be strictly considered hardware or software; their impact comes down to the context in which they are developed and implemented, and regulation must take this into account.

As we enter 2024, now is the time not only to ensure that AI systems are rights-respecting by design, but also to guarantee that those who are impacted by these technologies are meaningfully involved in decision-making on how AI technology should be regulated, and that their experiences are continually surfaced and centred within these discussions.

Alongside the EU legislative process, the UK, US, and others have set out their own distinct roadmaps and approaches to identifying the key risks AI technologies present, and how they intend to mitigate them. Whilst these legislative processes have many complexities, this should not delay any efforts to protect people from the present and future harms of AI, and there are crucial elements that we, at Amnesty, know any proposed regulatory approach must contain. Regulation must be legally binding and centre the already documented harms to people subject to these systems. Commitments and principles on the responsible development and use of AI, the core of the current pro-innovation regulatory framework being pursued by the UK, do not offer adequate protection against the risks of emerging technology and must be put on a statutory footing.

Similarly, any regulation must include broader accountability mechanisms over and above the technical evaluations being pushed by industry. Whilst these may be a useful string in any regulatory toolkit's bow, particularly in testing for algorithmic bias, bans and prohibitions cannot be off the table for systems fundamentally incompatible with human rights, no matter how accurate or technically efficacious they purport to be.

Others must learn from the EU process and ensure there are no loopholes allowing public and private sector players to circumvent regulatory obligations; removing any exemptions for AI used within national security or law enforcement is critical to achieving this. It is also important that where future regulation limits or prohibits the use of certain AI systems in one jurisdiction, no loopholes or regulatory gaps allow the same systems to be exported to other countries where they could be used to harm the human rights of marginalised groups. This remains a glaring gap in the UK, US, and EU approaches, as they fail to take into account the global power imbalances of these technologies, especially their impact on communities in the Global Majority, whose voices are not represented in these discussions. There have already been documented cases of outsourced workers being exploited in Kenya and Pakistan by companies developing AI tools.

More than lip service by lawmakers, we need binding regulation that holds companies and other key industry players to account and ensures that profits do not come at the expense of human rights protections. International, regional and national governance efforts must complement and catalyse each other, and global discussions must not come at the expense of meaningful national regulation or binding regulatory standards; these are not mutually exclusive. This is the level at which accountability is served, and we must learn from past attempts to regulate tech, which means ensuring robust mechanisms are introduced to allow victims of AI-inflicted rights violations to seek justice.

Read the original here:
The Urgent but Difficult Task of Regulating Artificial Intelligence - Amnesty International

Comparing Student Reactions To Lectures In Artificial Intelligence And Physics – Science 2.0

In the past two weeks I visited two schools in Veneto to engage students with the topic of Artificial Intelligence, which is something everybody seems happy to hear about these days: on the 10th of January I visited a school in Vicenza, and on the 17th a school in Venice. In both cases there were about 50-60 students, but there was a crucial difference: while the school in Venice (the "Liceo Marco Foscarini", where I have given lectures in the past within the project called "Art and Science") was a classical lyceum and the high-schoolers who came to listen to my presentation were between 16 and 18 years old, the one in Vicenza was a middle school, and its attending students were between 11 and 13 years old.

Since the contents of the lecture could withstand virtually no change - I was too busy during these first few post-Christmas weeks - the two-pronged test was an effective testing ground to spot differences in the reactions of the two audiences. To be honest, I approached the first event worried that the content I was presenting would be a bit overwhelming for those young kids, so in hindsight we might imagine that the impression I got was biased by this "low expectations" attitude.

To make matters worse, because my lecture was the first in a series organized by a local academy, with the co-participation of the Comune of Vicenza, it had to follow speeches from the school director, the mayor of Vicenza, and a couple of other introductions - something I was sure would further drain the young audience's stamina and willingness to listen to a frontal lecture. In fact, I was completely flabbergasted.

Not only did the middle schoolers in Vicenza follow the 80-minute talk I had prepared with attention and in full silence; they also interrupted a few times with witty questions (as I had begged them to do, in fact). At the end of the presentation, I was hit by a rapid succession of questions ranging over the full contents of the lecture - from artificial intelligence to particle physics, to details about the SWGO experiment, astrophysics, and what not. I counted about 20 questions and then lost track. The flow continued after the end of the event, when some of the students, still not completely satisfied, came to meet me and ask for more detail.

Above, a moment during the lecture in Vicenza

When I gave the same lecture in Venice, I must say I did again receive several interesting questions. But in comparison, the Foscarini teenagers were clearly a bit less enthusiastic about the topic of the lecture on the whole. Maybe my assessment comes from the bias I mentioned earlier; and in part, I have much more experience with high-schoolers than with younger students, so I knew better what to expect and was not surprised by the outcome.

This comparison seems to align with what was once observed by none other than Carl Sagan. I have to thank Phil Warnell here, who, commenting on Facebook on a post I wrote there about my experience with middle schoolers, cited a piece from Sagan that is quite relevant.

I cannot but concur with Sagan's observation. I also believe that part of the unwillingness of high-schoolers to ask questions is due to the judgment of their peers. Until we are 12 or 13, most of us have not yet experienced the negative feedback that being participative in school events can bring, and we do not yet fear the reaction of our friends and not-so-friendly schoolmates. That kind of experience, it seems, grows a shell around us, making us a bit less willing to expose ourselves and speak up to discuss what we did not understand, or to express enthusiasm. I think that is a bit sad, but it is of course part of our early trajectory amid the experiences that form us and equip us with the vaccines we are going to need for the rest of our lives.

See original here:
Comparing Student Reactions To Lectures In Artificial Intelligence And Physics - Science 2.0

Critics Say Sweeping Artificial Intelligence Regulations Could Target Parody, Satire Such as South Park, Family Guy – R Street

"It's just not workable," a fellow at the R Street Institute, Shoshana Weissmann, tells the Sun. Although AI impersonation is a problem and fraud laws should protect against it, that's not what this law would do, she says.

The bill defines likeness as "the actual or simulated image or likeness of an individual, regardless of the means of creation, that is readily identifiable by virtue of face, likeness, or other distinguishing characteristic." It defines voice as "any medium containing the actual voice or a simulation of the voice of an individual, whether recorded or generated by computer, artificial intelligence, algorithm, or other digital technology, service, or device to the extent that an individual is readily identifiable from the sound of it."

"There's no exception for parody, and basically, the way they define digital creations is just so broad, it would cover cartoons," Ms. Weissmann says, adding that the bill would extend to shows such as South Park and Family Guy, which both do impersonations of people.

"It's understood that this isn't the real celebrity. When South Park made fun of Ben Affleck, it wasn't really Ben Affleck. And they even used his picture at one point, but it was clear they were making fun of him. But under the pure text of this law, that would be unlawful," she says.

If the bill were enacted, someone would sue immediately, she says, adding that it would not pass First Amendment scrutiny.

Lawmakers should be more careful to ensure these regulations don't run afoul of the Constitution, she says, but instead, they have haphazard legislation like this that just doesn't make any functional sense.

While the bill does include a section relating to a First Amendment defense, Ms. Weissmann says, it's essentially saying that after you're sued under the bill, you can use the First Amendment as a defense. But you can do that anyway; the bill doesn't change that.

Because of the threat of being dragged into court and spending thousands of dollars on lawyers, the bill would effectively chill speech, she notes.

One of the harms defined in the bill includes "severe emotional distress" of any person whose voice or likeness is used without consent.

"Let's say Ben Affleck said he had severe emotional distress because South Park parodied him," Ms. Weissmann says. "He could sue under this law. That's insane, absolutely insane."

The bill would be more workable if it were made more specific and narrowed to actual harms, and if it made sure that people couldn't sue over very obvious parodies, she says. The way it's drafted now, however, it is going to apply to a lot more than they intended, she adds.

See the rest here:
Critics Say Sweeping Artificial Intelligence Regulations Could Target Parody, Satire Such as South Park, Family Guy - R Street

Artificial Intelligence: Inevitable Integration in Enterprises | Top Stories | theweeklyjournal.com – The Weekly Journal

Given the accelerated pace at which Artificial Intelligence (AI) is entering various fields to enhance the way private and public agencies carry out their work, more and more people must be trained to understand the impact of the new technology on their lives.

According to CRANT's chief executive, Álvaro Meléndez, AI represents a transformation for marketing, in which brands will have to appeal to credibility at a time when consumers are vulnerable to the constant generation of content that is mostly not real.


"Artificial intelligence, beyond the superficial form, in which there has been a lot of talk about it helping you to generate images or video or text, obviously helps a lot because it makes the work easier and gives new opportunities, but there is a much deeper transformation that is what interests us and is that transformation that now all this is possible and much of what will be generated can be misleading ... it may be a lie," said Melndez.

Because the situation involves a new way of consuming information, the executive said it presents an opportunity for brands to use the tool responsibly and generate a positive impact through marketing.

"It's a different way of thinking about marketing. It's no longer about communicating a product or a service, but now it's about how you are showing reality," the executive commented.

To address the problem, Meléndez said that companies must educate themselves in the ethical use of AI and become a source of trust for the consumer.

While marketing is still in the early stages of AI adoption, he predicted that by the end of this year all companies will have incorporated it, creating a competitive gap with those that have not.

That is why Meléndez designed and delivered the "AI for Marketers" workshop, in collaboration with the agency de la Cruz, to provide a group of marketers with an explanatory framework covering the basic principles, ethics, tools, advantages and opportunities that AI provides, so that they are not left behind by the technology's advance.

"The goal is to facilitate a much deeper understanding of what artificial intelligence is and how it can be applied, both to enhance their work with their companies and their brands, but also to enhance their career. Artificial intelligence (AI) is not for tech people, it's not for data scientists. We all have to understand and master artificial intelligence," said the founder of the company dedicated to the creative application of artificial intelligence.

Results of AI in companies

Among the companies incorporating Artificial Intelligence as an efficiency strategy to generate higher-value content, Meléndez cited Tomorrow AI, which generates around 60,000 marketing materials monthly with a team of only four people.

"Another example is Duolingo, this company that teaches languages. They had to lay off - which is the downside - about a thousand people, because a lot of the content that Duolingo does, and the way they educate people, they can now do it through artificial intelligence," said the CRANT executive.

When asked by The Weekly Journal about the repercussions of AI for employment, Meléndez said that job losses are inevitable, because companies will understand that they can carry out tasks through technology.

Although many jobs will disappear, he maintained that a creative explosion will emerge that will give way to entrepreneurship.


"We will start to see companies doing things that we would never have imagined possible, and that is interesting because it will break the market and large established companies will disappear because others have solved it in a better way," said Melndez.

"If you are a person who has an idea and wants to execute it, but can't because you don't have the resources or because you don't know how to program, let's say you want to make an application, with artificial intelligence you will be able to make that application without knowing how to program and launch your company with almost no employees and without hiring anyone," he added.

At present, estimates by investment banking group Goldman Sachs on the rise of platforms that use AI suggest that 300 million jobs around the world could be automated and that, in the case of the United States, 25% to 50% of the workload could be replaced.

See original here:
Artificial Intelligence: inevitable integration enterprises | Top Stories | theweeklyjournal.com - The Weekly Journal