Archive for the ‘Artificial Intelligence’ Category

A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. – EdSurge

When Satya Nitta worked at IBM, he and a team of colleagues took on a bold assignment: Use the latest in artificial intelligence to build a new kind of personal digital tutor.

This was before ChatGPT existed, and fewer people were talking about the wonders of AI. But Nitta was working with what was perhaps the highest-profile AI system at the time, IBM's Watson. That AI tool had pulled off some big wins, including beating humans on the Jeopardy quiz show in 2011.

Nitta says he was optimistic that Watson could power a generalized tutor, but he knew the task would be extremely difficult. "I remember telling IBM top brass that this is going to be a 25-year journey," he recently told EdSurge.

He says his team spent about five years trying, and along the way they turned some small-scale attempts into learning products, such as a pilot chatbot assistant that was part of a Pearson online psychology courseware system in 2018.

But in the end, Nitta decided that even though the generative AI technology driving excitement these days brings new capabilities that will change education and other fields, the tech just isn't up to delivering on becoming a generalized personal tutor, and won't be for decades at least, if ever.

"We'll have flying cars before we will have AI tutors," he says. "It is a deeply human process that AI is hopelessly incapable of meeting in a meaningful way. It's like being a therapist or like being a nurse."

Instead, he co-founded a new AI company, called Merlyn Mind, that is building other types of AI-powered tools for educators.

Meanwhile, plenty of companies and education leaders these days are hard at work chasing that dream of building AI tutors. Even a recent White House executive order seeks to help the cause.

Earlier this month, Sal Khan, leader of the nonprofit Khan Academy, told the New York Times: "We're at the cusp of using A.I. for probably the biggest positive transformation that education has ever seen. And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor."

Khan Academy has been one of the first organizations to use ChatGPT to try to develop such a tutor, which it calls Khanmigo and which is currently in a pilot phase in a series of schools.

Khan's system does come with an off-putting warning, though, noting that it "makes mistakes sometimes." The warning is necessary because all of the latest AI chatbots suffer from what are known as "hallucinations," the word used to describe situations when the chatbot simply fabricates details because it doesn't know the answer to a question asked by a user.

AI experts are busy trying to offset the hallucination problem, and one of the most promising approaches so far is to bring in a separate AI chatbot to check the results of a system like ChatGPT to see if it has likely made up details. That's what researchers at Georgia Tech have been trying, for instance, hoping that their multi-chatbot system can get to the point where any false information is scrubbed from an answer before it is shown to a student. But it's not yet clear whether that approach can reach a level of accuracy that educators will accept.
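
In spirit, that kind of cross-checking is a generate-then-verify loop. The Python sketch below is only an illustration of the general pattern described above, not the Georgia Tech system; the call_llm helper, the prompts, and the fallback message are all hypothetical stand-ins.

```python
# Hypothetical sketch of a generate-then-verify pipeline: one model drafts an
# answer, a second model flags possible fabrications, and only unflagged
# answers reach the student. `call_llm` is a placeholder, not a real API.
def call_llm(prompt: str) -> str:
    """Stand-in for whatever chat-completion service the application uses."""
    raise NotImplementedError("wire this up to an actual LLM provider")

def tutor_answer(question: str) -> str:
    # Step 1: a tutoring model drafts an answer.
    draft = call_llm(f"You are a tutor. Answer the student's question:\n{question}")

    # Step 2: a separate verifier model looks for unsupported or invented claims.
    verdict = call_llm(
        "List any statements in the following answer that appear fabricated or "
        f"unsupported. Reply with the single word OK if there are none.\n\n{draft}"
    )

    # Step 3: only surface the draft if the verifier found nothing to flag.
    if verdict.strip().upper().startswith("OK"):
        return draft
    return "I'm not confident in that answer; please check with your teacher."
```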

At this critical point in the development of new AI tools, though, it's useful to ask whether a chatbot tutor is the right goal for developers to head toward. Or is there a better metaphor than "tutor" for what generative AI can do to help students and teachers?

Michael Feldstein spends a lot of time experimenting with chatbots these days. He's a longtime edtech consultant and blogger, and in the past he wasn't shy about calling out what he saw as excessive hype by companies selling edtech tools.

In 2015, he famously criticized promises about what was then the latest in AI for education: a tool from a company called Knewton. The CEO of Knewton, Jose Ferreira, said his product would be "like a robot tutor in the sky that can semi-read your mind, and figure out what your strengths and weaknesses are, down to the percentile." That led Feldstein to respond that the CEO was selling "snake oil" because, Feldstein argued, the tool was nowhere near living up to that promise. (The assets of Knewton were quietly sold off a few years later.)

So what does Feldstein think of the latest promises by AI experts that effective tutors could be on the near horizon?

"ChatGPT is definitely not snake oil; far from it," he tells EdSurge. "It is also not a robot tutor in the sky that can semi-read your mind. It has new capabilities, and we need to think about what kinds of tutoring functions today's tech can deliver that would be useful to students."

He does think tutoring is a useful way to view what ChatGPT and other new chatbots can do, though. And he says that comes from personal experience.

Feldstein has a relative who is battling a brain hemorrhage, and so he has been turning to ChatGPT to give him personal lessons in understanding the medical condition and his loved one's prognosis. As Feldstein gets updates from friends and family on Facebook, he says, he asks questions in an ongoing thread in ChatGPT to try to better understand what's happening.

"When I ask it in the right way, it can give me the right amount of detail about, 'What do we know today about her chances of being OK again?'" Feldstein says. "It's not the same as talking to a doctor, but it has tutored me in meaningful ways about a serious subject and helped me become more educated on my relative's condition."

While Feldstein says he would call that a tutor, he argues that it's still important that companies not oversell their AI tools. "We've done a disservice to say they're these all-knowing boxes, or they will be in a few months," he says. "They're tools. They're strange tools. They misbehave in strange ways, as do people."

He points out that even human tutors can make mistakes, but most students have a sense of what they're getting into when they make an appointment with a human tutor.

"When you go into a tutoring center in your college, they don't know everything. You don't know how trained they are. There's a chance they may tell you something that's wrong. But you go in and get the help that you can."

Whatever you call these new AI tools, he says, it will be useful to have an always-on helper you can ask questions of, even if its results are just a starting point for more learning.

What are new ways that generative AI tools can be used in education, if tutoring ends up not being the right fit?

To Nitta, the stronger role is to serve as an assistant to experts rather than a replacement for an expert tutor. In other words, instead of replacing, say, a therapist, he imagines that chatbots can help a human therapist summarize and organize notes from a session with a patient.

"That's a very helpful tool rather than an AI pretending to be a therapist," he says. Even though that may be seen as boring by some, he argues that the technology's superpower is automating things that humans don't like to do.

In the educational context, his company is building AI tools designed to help teachers, or to help human tutors, do their jobs better. To that end, Merlyn Mind has taken the unusual step of building its own so-called large language model from scratch designed for education.

Even then, he argues that the best results come when the model is tuned to support specific education domains by being trained with vetted datasets, rather than relying on ChatGPT and other mainstream tools that draw from vast amounts of information from the internet.

"What does a human tutor do well? They know the student, and they provide human motivation," he adds. "We're all about the AI augmenting the tutor."

Go here to see the original:
A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. - EdSurge

Demystifying AI: The Probability Theory Behind LLMs Like OpenAI’s ChatGPT – PYMNTS.com

When a paradigm shift occurs, it is not always obvious to those affected by it.

But there is no "eye of the storm" equivalent when it comes to generative artificial intelligence (AI).

The technology is here. There are already various commercial products available for deployment, and organizations that can effectively leverage it in support of their business goals are likely to outperform their peers that fail to adopt the innovation.

Still, as with many innovations, uncertainty and institutional inertia reign supreme, which is why understanding how the large language models (LLMs) powering AI work is critical not just to piercing the black box of the technology's supposed inscrutability, but also to applying AI tools correctly within an enterprise setting.

The most important thing to understand about the foundational models powering today's AI interfaces and giving them their ability to generate responses is the simple fact that LLMs, like Google's Bard, Anthropic's Claude, OpenAI's ChatGPT and others, are just adding one word at a time.

Underneath the layers of sophisticated algorithmic calculations, that's all there is to it.

That's because, at a fundamental level, generative AI models are built to generate reasonable continuations of text by drawing from a ranked list of words, each given a different weighted probability based on the data set the model was trained on.
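
As a toy illustration of that ranked-list idea, the minimal Python sketch below samples a single next word from a hand-made probability table. The prompt, vocabulary and weights are invented for the example; a real LLM computes a distribution over tens of thousands of tokens at every step.

```python
# A toy next-word sampler: pick one continuation from a ranked, weighted list.
# The probabilities here are made up purely for illustration.
import random

prompt = "The cat sat on the"
next_word_probs = {      # hypothetical P(next word | prompt)
    "mat": 0.55,
    "floor": 0.20,
    "sofa": 0.15,
    "roof": 0.07,
    "moon": 0.03,
}

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Generating text really is just repeating this step, one word at a time.
next_word = random.choices(words, weights=weights, k=1)[0]
print(prompt, next_word)
```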

Read more: There Are a Lot of Generative AI Acronyms. Here's What They All Mean

While news of AI that can surpass human intelligence is helping fuel the hype around the technology, the reality is far more driven by math than by myth.

"It is important for everyone to understand that AI learns from data. At the end of the day, [AI] is merely probability and statistics," Akli Adjaoute, AI pioneer and founder and general partner at venture capital fund Exponion, told PYMNTS in November.

But where do the probabilities that determine an AI system's output come from?

The answer lies within the AI model's training data. Peeking into the inner workings of an AI model reveals that it is not only the next reasonable word that is being identified, weighted and then generated, but that this process operates on sub-word pieces, as AI models break words apart into more manageable units called tokens.
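
To make that tokenization step concrete, here is a small sketch using OpenAI's open-source tiktoken library. The encoding name and example sentence are illustrative assumptions; other model families use different tokenizers.

```python
# A minimal sketch of sub-word tokenization with tiktoken (pip install tiktoken).
# "cl100k_base" is the publicly documented encoding for recent OpenAI models;
# this is illustrative only, since each model family has its own tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Probability theory underpins large language models")

print(ids)                              # integer token IDs the model actually sees
print([enc.decode([i]) for i in ids])   # the sub-word pieces those IDs represent
```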

That is a big part of why prompt engineering for AI models is an emerging skill set. After all, different prompts produce different outputs based on the probabilities inherent to each reasonable continuation, meaning that to get the best output, you need a clear idea of where to point the provided input or query.

It also means that the data informing the weight given to each probabilistic outcome must be relevant to the query. The more relevant, the better.

See also: Tailoring AI Solutions by Industry Key to Scalability

While PYMNTS Intelligence has found that more than eight in 10 business leaders (84%) believe generative AI will positively impact the workforce, generative AI systems are only as good as the data they're trained on. That's why the largest AI players are in an arms race to acquire the best training data sets.

"There's a long way to go before there's a futuristic version of AI where machines think and make decisions. Humans will be around for quite a while," Tony Wimmer, head of data and analytics at J.P. Morgan Payments, told PYMNTS in March. "And the more that we can write software that has payments data at the heart of it to help humans, the better payments will get."

That's why, to train an AI model to perform to the necessary standard, many enterprises are relying on their own internal data to avoid compromising model outputs. By creating vertically specialized LLMs trained for industry use cases, organizations can deploy AI systems that are able to find the signal within the noise, as well as to be further fine-tuned to business-specific goals with real-time data.
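
To picture what a "vertically specialized" model can mean in practice, here is a minimal sketch that fine-tunes an open base model on a vetted in-house corpus using the Hugging Face transformers and datasets libraries. The base model, file name, text field and hyperparameters are illustrative assumptions, not any particular vendor's recipe.

```python
# A hedged, minimal fine-tuning sketch (not any vendor's actual pipeline).
# Assumptions: a JSONL file of vetted internal documents with a "text" field,
# an open base model ("gpt2" as a stand-in), and toy hyperparameters.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # stand-in for whatever open base model is chosen
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Vetted, domain-specific documents (hypothetical file name and schema).
dataset = load_dataset("json", data_files="vetted_domain_corpus.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False gives standard causal (next-token) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting checkpoint is specialized to the vetted corpus
```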

As Akli Adjaoute told PYMNTS back in November, "if you go into a field where the data is real, particularly in the payments industry, whether it's credit risk, whether it's delinquency, whether it's AML [anti-money laundering], whether it's fraud prevention, anything that touches payments, AI can bring a lot of benefit."

Read the rest here:
Demystifying AI: The Probability Theory Behind LLMs Like OpenAI's ChatGPT - PYMNTS.com

The Urgent but Difficult Task of Regulating Artificial Intelligence – Amnesty International

By David Nolan, Hajira Maryam & Michael Kleinman, Amnesty Tech

The year 2023 marked a new era of AI hype, rapidly steering policy makers towards discussions on the safety and regulation of new artificial intelligence (AI) technologies. The feverish year in tech started with the launch of ChatGPT in late 2022 and ended with a landmark agreement on the EU AI Act being reached. Whilst the final text is still being ironed out in technical meetings over the coming weeks, early signs indicate that the Western world's first AI rulebook goes some way toward protecting people from the harms of AI but still falls short in a number of crucial areas, failing to ensure human rights protections, especially for the most marginalised. This came soon after the UK Government hosted an inaugural AI Safety Summit in November 2023, where global leaders, key industry players, and select civil society groups gathered to discuss the risks of AI. Although the growing momentum and debate on AI governance is welcome and urgently needed, the key question for 2024 is whether these discussions will generate concrete commitments and focus on the most important present-day AI risks, and, critically, whether they will translate into further substantive action in other jurisdictions.

Whilst AI developments do present new opportunities and benefits, we must not ignore the documented dangers posed by AI tools when they are used as a means of societal control, mass surveillance and discrimination. All too often, AI systems are trained on massive amounts of private and public data, data which reflects societal injustices and often leads to biased outcomes and exacerbated inequalities. From predictive policing tools, to automated systems used in public sector decision-making to determine who can access healthcare and social assistance, to monitoring the movement of migrants and refugees, AI has flagrantly and consistently undermined the human rights of the most marginalised in society. Other forms of AI, such as fraud detection algorithms, have also disproportionately impacted ethnic minorities, who have endured devastating financial problems, as Amnesty International has documented, while facial recognition technology has been used by police and security forces to target racialised communities and entrench Israel's system of apartheid.

So, what makes regulation of AI complex and challenging? First, there is the vague nature of the term "AI" itself, which makes efforts to regulate this technology more cumbersome. There is no widespread consensus on the definition of AI because the term does not refer to a singular technology but rather encapsulates a myriad of technological applications and methods. The use of AI systems in many different domains across the public and private sectors means a large number of varied stakeholders are involved in their development and deployment; such systems are a product of labour, data, software and financial inputs, and any regulation must grapple with both upstream and downstream harms. Further, these systems cannot be strictly considered as hardware or software; rather, their impact comes down to the context in which they are developed and implemented, and regulation must take this into account.

Alongside the EU legislative process, the UK, US and others have set out their distinct roadmaps and approaches to identifying the key risks AI technologies present, and how they intend to mitigate them. Whilst these legislative processes involve many complexities, this should not delay any efforts to protect people from the present and future harms of AI, and there are crucial elements that we, at Amnesty, know any proposed regulatory approach must contain. Regulation must be legally binding and centre the already documented harms to people subject to these systems. Commitments and principles on the responsible development and use of AI, the core of the current "pro-innovation" regulatory framework being pursued by the UK, do not offer adequate protection against the risks of emerging technology and must be put on a statutory footing.

Similarly, any regulation must include broader accountability mechanisms over and above the technical evaluations being pushed by industry. Whilst these may be a useful string in any regulatory toolkit's bow, particularly in testing for algorithmic bias, bans and prohibitions cannot be off the table for systems fundamentally incompatible with human rights, no matter how accurate or technically efficacious they purport to be.

Others must learn from the EU process and ensure there are no loopholes allowing public and private sector players to circumvent regulatory obligations; removing any exemptions for AI used within national security or law enforcement is critical to achieving this. It is also important that where future regulation limits or prohibits the use of certain AI systems in one jurisdiction, no loopholes or regulatory gaps allow the same systems to be exported to other countries where they could be used to harm the human rights of marginalised groups. This remains a glaring gap in the UK, US and EU approaches, as they fail to take into account the global power imbalances of these technologies, especially their impact on communities in the Global Majority, whose voices are not represented in these discussions. There have already been documented cases of outsourced workers in Kenya and Pakistan being exploited by companies developing AI tools.

As we enter 2024, now is the time not only to ensure that AI systems are rights-respecting by design, but also to guarantee that those who are impacted by these technologies are meaningfully involved in decision-making on how AI should be regulated, and that their experiences are continually surfaced and centred within these discussions. More than lip service from lawmakers, we need binding regulation that holds companies and other key industry players to account and ensures that profits do not come at the expense of human rights protections. International, regional and national governance efforts must complement and catalyse each other, and global discussions must not come at the expense of meaningful national regulation or binding regulatory standards; these are not mutually exclusive. This is the level at which accountability is served, and we must learn from past attempts to regulate tech, which means ensuring robust mechanisms are introduced to allow victims of AI-inflicted rights violations to seek justice.

Read the original here:
The Urgent but Difficult Task of Regulating Artificial Intelligence - Amnesty International

Comparing Student Reactions To Lectures In Artificial Intelligence And Physics – Science 2.0

In the past two weeks I visited two schools in Veneto to engage students with the topic of Artificial Intelligence, which is something everybody seems happy to hear about these days: on the 10th of January I visited a school in Vicenza, and on the 17th a school in Venice. In both cases there were about 50-60 students, but there was a crucial difference: while the school in Venezia (the "Liceo Marco Foscarini", where I have given lectures in the past within the project called "Art and Science") was a classical lyceum, and the high-schoolers who came to listen to my presentation were between 16 and 18 years old, the one in Vicenza was a middle school, and its attending students were between 11 and 13 years old. Since the contents of the lecture could withstand virtually no change - I was too busy during these first few post-Christmas weeks - the two-pronged test was an effective testing ground to spot differences in the reactions of the two audiences. To be honest, I approached the first event worrying that the content I was presenting to those young kids was going to be a bit overwhelming for them, so maybe in hindsight we could imagine that the impression I got was biased by this "low expectations" attitude.

To make matters worse, because my lecture was the first in a series organized by a local academy, with the co-participation of the Comune of Vicenza, it had to follow speeches from the school director, the mayor of Vicenza, and a couple of other introductions - something I was sure would further decrease the young audience's stamina and willingness to listen to a frontal lecture. In fact, I was completely flabbergasted.

Not only did the middle schoolers in Vicenza follow the 80-minute-long talk I had prepared with attention and in full silence; they also interrupted a few times with witty questions (as I had begged them to do, in fact). At the end of the presentation, I was hit by a rapid succession of questions ranging over the full contents of the lecture - from artificial intelligence to particle physics, to details about the SWGO experiment, astrophysics, and what not. I counted about 20 questions and then lost track. This continued after the end of the event, when some of the students, still not completely satisfied, came to meet me and ask for more detail.

Above, a moment during the lecture in Vicenza

When I gave the same lecture in Venice, I must say I did again receive several interesting questions. But in comparison, the Foscarini teenagers were clearly a bit less enthusiastic about the topic of the lecture as a whole. Maybe my assessment comes from the bias I was mentioning earlier; and in part, I have to say I have much more experience with high-schoolers than with younger students, so I knew better what to expect and was not surprised by the outcome.

This comparison seems to align with something once observed by none other than Carl Sagan. I have to thank Phil Warnell here, who, commenting on Facebook on a post I wrote there about my experience with middle schoolers, cited a piece from Sagan that is quite relevant:

I cannot but concur with what Sagan says in these two quotes. I also believe that part of the unwillingness of high-schoolers to ask questions is due to the judgment of their peers. Until we are 12 or 13, most of us have not yet had experience with the negative feedback we may get by being participative in school events, and we do not yet fear the reaction of our friends and not-so-friendly schoolmates. It seems that kind of experience grows a shell around us, making us a bit less willing to expose ourselves and speak up to discuss what we did not understand, or to express enthusiasm. I think that is a bit sad, but it is of course part of our early trajectory amid experiences that form us and equip us with the vaccines we are going to need for the rest of our lives.

See original here:
Comparing Student Reactions To Lectures In Artificial Intelligence And Physics - Science 2.0

Critics Say Sweeping Artificial Intelligence Regulations Could Target Parody, Satire Such as South Park, Family Guy – R Street

"It's just not workable," a fellow at the R Street Institute, Shoshana Weissmann, tells the Sun. Although AI impersonation is a problem and fraud laws should protect against it, that's not what this law would do, she says.

The bill defines "likeness" as "the actual or simulated image or likeness of an individual, regardless of the means of creation, that is readily identifiable by virtue of face, likeness, or other distinguishing characteristic." It defines "voice" as "any medium containing the actual voice or a simulation of the voice of an individual, whether recorded or generated by computer, artificial intelligence, algorithm, or other digital technology, service, or device, to the extent that an individual is readily identifiable from the sound of it."

"There's no exception for parody, and basically, the way they define digital creations is just so broad, it would cover cartoons," Ms. Weissmann says, adding that the bill would extend to shows such as South Park and Family Guy, which both do impersonations of people.

"It's understood that this isn't the real celebrity. When South Park made fun of Ben Affleck, it wasn't really Ben Affleck. And they even used his picture at one point, but it was clear they were making fun of him. But under the pure text of this law, that would be unlawful," she says.

If the bill were enacted, someone would sue immediately, she says, adding that it would not pass First Amendment scrutiny.

Lawmakers should be more careful to ensure these regulations don't run afoul of the Constitution, she says, but instead they have haphazard legislation like this that just doesn't make any functional sense.

While the bill does include a section relating to the First Amendment defense, Ms. Weissmann says, it essentially amounts to saying that after you're sued under the bill, you can use the First Amendment as a defense. But you can do that anyway; the bill doesn't change that.

Because of the threat of being dragged into court and spending thousands of dollars on lawyers, the bill would effectively chill speech, she notes.

The harms defined in the bill include "severe emotional distress" of any person whose voice or likeness is used without consent.

"Let's say Ben Affleck said he had severe emotional distress because South Park parodied him," Ms. Weissmann says. "He could sue under this law. That's insane, absolutely insane."

The bill would be more workable if it were made more specific and narrowed to actual harms, and if it ensured that people couldn't sue over very obvious parodies, she says. As drafted now, however, it is going to apply to a lot more than they intended, she adds.

See the rest here:
Critics Say Sweeping Artificial Intelligence Regulations Could Target Parody, Satire Such as South Park, Family Guy - R Street