Archive for the ‘Artificial Intelligence’ Category

The Terrifying A.I. Scam That Uses Your Loved One’s Voice – The New Yorker

On a recent night, a woman named Robin was asleep next to her husband, Steve, in their Brooklyn home, when her phone buzzed on the bedside table. Robin is in her mid-thirties with long, dirty-blond hair. She works as an interior designer, specializing in luxury homes. The couple had gone out to a natural-wine bar in Cobble Hill that evening, and had come home a few hours earlier and gone to bed. Their two young children were asleep in bedrooms down the hall. "I'm always, like, kind of one ear awake," Robin told me recently. When her phone rang, she opened her eyes and looked at the caller I.D. It was her mother-in-law, Mona, who never called after midnight. "I'm, like, maybe it's a butt-dial," Robin said. "So I ignore it, and I try to roll over and go back to bed. But then I see it pop up again."

She picked up the phone, and, on the other end, she heard Mona's voice wailing and repeating the words "I can't do it, I can't do it." "I thought she was trying to tell me that some horrible tragic thing had happened," Robin told me. Mona and her husband, Bob, are in their seventies. She's a retired party planner, and he's a dentist. They spend the warm months in Bethesda, Maryland, and winters in Boca Raton, where they play pickleball and canasta. Robin's first thought was that there had been an accident. Robin's parents also winter in Florida, and she pictured the four of them in a car wreck. "Your brain does weird things in the middle of the night," she said. Robin then heard what sounded like Bob's voice on the phone. (The family members requested that their names be changed to protect their privacy.) "Mona, pass me the phone," Bob's voice said, then, "Get Steve. Get Steve." Robin took this (that they didn't want to tell her while she was alone) as another sign of their seriousness. She shook Steve awake. "I think it's your mom," she told him. "I think she's telling me something terrible happened."

Steve, who has close-cropped hair and an athletic build, works in law enforcement. When he opened his eyes, he found Robin in a state of panic. "She was screaming," he recalled. "I thought her whole family was dead." When he took the phone, he heard a relaxed male voice (possibly Southern) on the other end of the line. "You're not gonna call the police," the man said. "You're not gonna tell anybody. I've got a gun to your mom's head, and I'm gonna blow her brains out if you don't do exactly what I say."

Steve used his own phone to call a colleague with experience in hostage negotiations. The colleague was muted, so that he could hear the call but wouldn't be heard. "You hear this???" Steve texted him. "What should I do?" The colleague wrote back, "Taking notes. Keep talking." The idea, Steve said, was to continue the conversation, delaying violence and trying to learn any useful information.

"I want to hear her voice," Steve said to the man on the phone.

The man refused. "If you ask me that again, I'm gonna kill her," he said. "Are you fucking crazy?"

"O.K.," Steve said. "What do you want?"

The man demanded money for travel; he wanted five hundred dollars, sent through Venmo. "It was such an insanely small amount of money for a human being," Steve recalled. "But also: I'm obviously gonna pay this." Robin, listening in, reasoned that someone had broken into Steve's parents' home to hold them up for a little cash. On the phone, the man gave Steve a Venmo account to send the money to. It didn't work, so he tried a few more, and eventually found one that did. The app asked what the transaction was for.

"Put in a pizza emoji," the man said.

After Steve sent the five hundred dollars, the man patched in a female voice (a girlfriend, it seemed) who said that the money had come through, but that it wasn't enough. Steve asked if his mother would be released, and the man got upset that he was bringing this up with the woman listening. "Whoa, whoa, whoa," he said. "Baby, I'll call you later." The implication, to Steve, was that the woman didn't know about the hostage situation. "That made it even more real," Steve told me. The man then asked for an additional two hundred and fifty dollars to get a ticket for his girlfriend. "I've gotta get my baby mama down here to me," he said. Steve sent the additional sum, and, when it processed, the man hung up.

By this time, about twenty-five minutes had elapsed. Robin cried and Steve spoke to his colleague. "You guys did great," the colleague said. He told them to call Bob, since Mona's phone was clearly compromised, to make sure that he and Mona were now safe. After a few tries, Bob picked up the phone and handed it to Mona. "Are you at home?" Steve and Robin asked her. "Are you O.K.?"

Mona sounded fine, but she was unsure of what they were talking about. "Yeah, I'm in bed," she replied. "Why?"

Artificial intelligence is revolutionizing seemingly every aspect of our lives: medical diagnosis, weather forecasting, space exploration, and even mundane tasks like writing e-mails and searching the Internet. But with increased efficiencies and computational accuracy has come a Pandora's box of trouble. Deepfake video content is proliferating across the Internet. The month after Russia invaded Ukraine, a video surfaced on social media in which Ukraine's President, Volodymyr Zelensky, appeared to tell his troops to surrender. (He had not done so.) In early February of this year, Hong Kong police announced that a finance worker had been tricked into paying out twenty-five million dollars after taking part in a video conference with people he thought were members of his firm's senior staff. (They were not.) Thanks to large language models like ChatGPT, phishing e-mails have grown increasingly sophisticated, too. Steve and Robin, meanwhile, fell victim to another new scam, which uses A.I. to replicate a loved one's voice. "We've now passed through the uncanny valley," Hany Farid, who studies generative A.I. and manipulated media at the University of California, Berkeley, told me. "I can now clone the voice of just about anybody and get them to say just about anything. And what you think would happen is exactly what's happening."

Robots aping human voices are not new, of course. In 1984, an Apple computer became one of the first that could read a text file in a tinny robotic voice of its own. "Hello, I'm Macintosh," a squat machine announced to a live audience, at an unveiling with Steve Jobs. "It sure is great to get out of that bag." The computer took potshots at Apple's main competitor at the time, saying, "I'd like to share with you a maxim I thought of the first time I met an I.B.M. mainframe: never trust a computer you can't lift." In 2011, Apple released Siri; inspired by Star Trek's talking computers, the program could interpret precise commands ("Play Steely Dan," say, or "Call Mom") and respond with a limited vocabulary. Three years later, Amazon released Alexa. Synthesized voices were cohabiting with us.

Still, until a few years ago, advances in synthetic voices had plateaued. They weren't entirely convincing. "If I'm trying to create a better version of Siri or G.P.S., what I care about is naturalness," Farid explained. "Does this sound like a human being and not like this creepy half-human, half-robot thing?" Replicating a specific voice is even harder. "Not only do I have to sound human," Farid went on. "I have to sound like you." In recent years, though, the problem began to benefit from more money, more data (importantly, troves of voice recordings online), and breakthroughs in the underlying software used for generating speech. In 2019, this bore fruit: a Toronto-based A.I. company called Dessa cloned the podcaster Joe Rogan's voice. (Rogan responded with awe and acceptance on Instagram at the time, adding, "The future is gonna be really fucking weird, kids.") But Dessa needed a lot of money and hundreds of hours of Rogan's very available voice to make their product. Their success was a one-off.

In 2022, though, a New York-based company called ElevenLabs unveiled a service that produced impressive clones of virtually any voice quickly; breathing sounds had been incorporated, and more than two dozen languages could be cloned. ElevenLabs's technology is now widely available. "You can just navigate to an app, pay five dollars a month, feed it forty-five seconds of someone's voice, and then clone that voice," Farid told me. The company is now valued at more than a billion dollars, and the rest of Big Tech is chasing closely behind. The designers of Microsoft's Vall-E cloning program, which débuted last year, used sixty thousand hours of English-language audiobook narration from more than seven thousand speakers. Vall-E, which is not available to the public, can reportedly replicate the voice and acoustic environment of a speaker with just a three-second sample.
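To make concrete how low the barrier Farid describes is, here is a rough sketch of a clone-then-synthesize workflow against ElevenLabs's REST API. The endpoint paths, field names, and sample file below are assumptions based on the general shape of the company's published v1 API, not details from the article; check the current documentation before relying on any of them.

```python
# Rough sketch of a voice-clone-then-synthesize workflow. Endpoint paths
# and field names are assumptions about ElevenLabs's v1 REST API, not
# confirmed details; consult the current API documentation.
import requests

BASE = "https://api.elevenlabs.io/v1"
HEADERS = {"xi-api-key": "YOUR_API_KEY"}  # placeholder credential

# 1. Upload a short voice sample to create a cloned voice.
with open("sample_45s.mp3", "rb") as sample:  # hypothetical local file
    resp = requests.post(
        f"{BASE}/voices/add",
        headers=HEADERS,
        data={"name": "demo-clone"},
        files={"files": sample},
    )
voice_id = resp.json()["voice_id"]

# 2. Ask for arbitrary text to be spoken in the cloned voice.
resp = requests.post(
    f"{BASE}/text-to-speech/{voice_id}",
    headers=HEADERS,
    json={"text": "Hello, it's me. Can you call me back?"},
)
with open("cloned.mp3", "wb") as out:
    out.write(resp.content)  # MP3 audio in the cloned voice
```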

Voice-cloning technology has undoubtedly improved some lives. The Voice Keeper is among a handful of companies that are now banking the voices of those suffering from voice-depriving diseases like A.L.S., Parkinson's, and throat cancer, so that, later, they can continue speaking with their own voice through text-to-speech software. A South Korean company recently launched what it describes as the first AI memorial service, which allows people to "live in the cloud" after their deaths and speak to future generations. The company suggests that this can alleviate "the pain of the death of your loved ones." The technology has other legal, if less altruistic, applications. Celebrities can use voice-cloning programs to loan their voices to record advertisements and other content: the College Football Hall of Famer Keith Byars, for example, recently let a chicken chain in Ohio use a clone of his voice to take orders. The film industry has also benefitted. Actors in films can now speak other languages (English, say, when a foreign movie is released in the U.S.). "That means no more subtitles, and no more dubbing," Farid said. "Everybody can speak whatever language you want." Multiple publications, including The New Yorker, use ElevenLabs to offer audio narrations of stories. Last year, New York's mayor, Eric Adams, sent out A.I.-enabled robocalls in Mandarin and Yiddish, languages he does not speak. (Privacy advocates called this a "creepy vanity project.")

But, more often, the technology seems to be used for nefarious purposes, like fraud. This has become easier now that TikTok, YouTube, and Instagram store endless videos of regular people talking. "It's simple," Farid explained. "You take thirty or sixty seconds of a kid's voice and log in to ElevenLabs, and pretty soon Grandma's getting a call in Grandson's voice saying, 'Grandma, I'm in trouble, I've been in an accident.'" A financial request is almost always the end game. Farid went on, "And here's the thing: the bad guy can fail ninety-nine per cent of the time, and they will still become very, very rich. It's a numbers game." The prevalence of these illegal efforts is difficult to measure, but, anecdotally, they've been on the rise for a few years. In 2020, a corporate attorney in Philadelphia took a call from what he thought was his son, who said he had been injured in a car wreck involving a pregnant woman and needed nine thousand dollars to post bail. (He found out it was a scam when his daughter-in-law called his son's office, where he was safely at work.) In January, voters in New Hampshire received a robocall in Joe Biden's voice telling them not to vote in the primary. (The man who admitted to generating the call said that he had used ElevenLabs software.) "I didn't think about it at the time that it wasn't his real voice," an elderly Democrat in New Hampshire told the Associated Press. "That's how convincing it was."
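Farid's "numbers game" is easy to make concrete with back-of-the-envelope arithmetic. In the sketch below, the call volume and success rate are invented figures; only the seven-hundred-and-fifty-dollar take (the five hundred plus two hundred and fifty dollars Steve sent) comes from the story.

```python
# Back-of-the-envelope arithmetic for the "numbers game." The call volume
# and success rate are invented; the $750 take per success is the $500 +
# $250 that Steve paid in the story above.
calls = 10_000          # assumed automated call attempts
success_rate = 0.01     # assumed: 99% of attempts fail
take_per_hit = 500 + 250
print(f"expected haul: ${calls * success_rate * take_per_hit:,.0f}")
# -> expected haul: $75,000
```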

Read the original:
The Terrifying A.I. Scam That Uses Your Loved One's Voice - The New Yorker

China's lawmakers walk fine line between AI development and tighter regulation – South China Morning Post

"We must establish a unified market for computing power services and the effective use of resources across the country," Yu, a CPPCC member, said.


His appeal resonated with other delegates, including telecoms equipment maker ZTE's senior vice-president Miao Wei and Ma Kui, general manager at China Mobile's Sichuan branch, who both called for increased investment in and more coordinated development of computing infrastructure. Miao and Ma are NPC delegates.

"Computing power has become the focus of international competition," said Ma, who also highlighted the imbalance of the Chinese AI industry, with research teams located mostly in first-tier cities such as Beijing and Shanghai but computing resources clustered in other, smaller cities.

The calls for a state-orchestrated computing infrastructure come after five Chinese government bodies, including MIIT and the National Development and Reform Commission, the country's top economic planner, issued a policy titled "East-West Compute Transfer" to coordinate computing resources between China's eastern and coastal provinces and its western inland regions.

But Zhang Yunquan, a CPPCC member and a research fellow from the Chinese Academy of Sciences, said the project would not help efforts to train large language models (LLMs) for AI, as it mainly serves traditional data centre and cloud computing demands.

Instead, Zhang proposed state-led efforts to coordinate academic and industrial resources to build up a sovereign LLM.

Cao Peng, chair of the technology committee at Chinese e-commerce giant JD.com and head of its cloud unit, called for the development of home-made AI chips to circumvent Washington's export controls.


Liu Qingfeng, chairman at iFlyTek, a Chinese AI specialist known for its voice recognition capability, called for a national-level approach to "systematically and rapidly propel our country's artificial general intelligence growth."

"We need to acknowledge the gap and consolidate resources from the state level to accelerate the catch-up [with US AI firms]," according to Liu.

Zeng Yi, a CPPCC member and head of China Electronics Corporation, warned that China was lagging in generative AI when it came to talent and basic scientific research. "We are all very anxious about being left behind," Zeng said.

Premier Li Qiang introduced an "AI+" initiative to integrate the power of AI across traditional sectors to drive economic growth, and to push for technology upgrades. Meanwhile, China's lawmakers and political advisers voiced concern about potential disruptions from AI, and called for effective regulation.

Lou Xiangping, head of China Mobile's branch in the central Henan province, proposed an accountability system to hold service providers such as operators of local ChatGPT-like services responsible for possible mishaps.

China has already implemented a registration system that requires local LLMs to apply for approval before providing public services. More than 40, or around one-fifth of the country's total number of LLMs, have been given the green light for public release.

Zhang Yi, a CPPCC member and senior partner at law firm King & Wood Mallesons, tabled his proposal about improving AI regulation but also cautioned that too many laws might hinder the development of the local industry.

In explaining his proposal to local media, Zhang said China needs to balance regulation and development through an approach that clearly defines what is illegal, while also allowing companies to innovate and explore new areas.

"As global AI competition intensifies, [we] need to be wary of how overbearing legal intervention could inhibit the healthy and orderly development of AI," he said.

Read the original here:
China's lawmakers walk fine line between AI development and tighter regulation - South China Morning Post

The benefits and risks of Artificial Intelligence – IT Brief Australia

In little more than 12 months, generative AI has evolved from being a technical novelty into a powerful business tool. However, senior IT managers believe the technology brings with it risks as well as benefits.

According to the Immuta 2024 State of Data Security Report, 88% of senior managers say their staff are already using AI tools, regardless of whether their organisation has a firm policy on adoption.

Asked to nominate the key IT security benefits offered by AI, respondents to the Immuta survey pointed to improved phishing attack identification and threat simulation as two of the biggest. Others included anomaly detection and better audits and reporting.
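The report does not describe implementations, but one of the named benefits, anomaly detection, is straightforward to sketch. Below is a minimal, illustrative example using scikit-learn's IsolationForest to flag unusual data-transfer events; the features, figures, and contamination rate are invented for illustration and are not from the report.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# The "activity" features are invented for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic training data: [hour_of_day, megabytes_transferred]
normal = np.column_stack([
    rng.normal(13, 2, 500),   # activity clustered around business hours
    rng.normal(50, 10, 500),  # typical transfer volumes
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new events: predict() returns 1 for inliers, -1 for anomalies.
events = np.array([[14, 55], [3, 900]])  # lunchtime vs. 3 a.m. bulk export
print(model.predict(events))             # e.g. [ 1 -1]
```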

When it came to identifying AI-related risks, inadvertent exposure of sensitive information by employees and unauthorised use of purpose-built models out of context were nominated by respondents. Additional named risks included the inadvertent exposure of sensitive data by large language models (LLMs) and the poisoning of training data.

Continuing growth

Despite these concerns, organisational uptake of AI appears likely to remain brisk. Analyst firm Gartner predicts that IT spending will increase more than 70% during the next year, and a significant portion will be invested in AI-related technologies and tools. Organisations will need to continue to embrace this new technology to remain competitive and relevant in today's economic landscape.

It's likely that 2024 will also become the year of the AI control system. Aside from the hype surrounding generative AI, there is a broader issue around developing a control system for the technology. This is because AI brings an entirely new paradigm where there is little or no human control. AI initiatives, therefore, won't get into full-scale production without a new form of control system in place.

At the same time, organisations will come to realise that, as AI usage increases, they need to focus even more attention on data security. As we have seen with governments around the world, there has also been an urgent need to enact new laws and regulations to ensure that data privacy and data security concerns with generative AI are addressed.

As the technology evolves, it will become clear that the key to harnessing the power of large-language model (LLM)-based AI lies in having a robust data governance framework. Such a framework is essential not only for guiding the ethical and secure use of LLMs but also for establishing standards for measuring their outputs and ensuring integrity.

The evolution of LLMs will open new avenues for applications in data analysis, customer service, and decision-making processes, further embedding LLMs into the fabric of data-driven industries.

The biggest winners when it comes to AI usage will be the organisations that create real value from better data engineering processes, leveraging models with their own data and business context. The key impact for these companies will be better knowledge management.

An ongoing reprioritisation and reassignment of resources

With the pace of change in technology and data usage likely to continue to increase, organisations will be forced to redirect resources into new data-related areas that will become priorities. Examples include data governance and compliance, data quality, and data integration.

Despite ongoing pressure to do more with less, organisations can't and won't halt investment in IT. These investments will be focussed on the critical building blocks that form the foundation of a modern data stack that is required to support AI initiatives.

Also, the traditional demarcation between data and application layers in an IT infrastructure will be replaced by a more integrated approach focused on data products. Rather than a few dozen apps, there will be hundreds of data products. Dubbed a data-centric architecture, this approach will allow organisations to extract greater value from their data resources and better support their operations.

By working closer to the data, data teams can reduce latency and improve performance, opening up new possibilities for real-time reporting and analytics. This, in turn, supports better decision-making and more efficient business processes.

The coming year will see some fundamental changes in the way businesses manage and work with AI and data. Those that take time to experiment with the technology and determine its best use cases will be best placed to extract maximum value and achieve optimal results.

Go here to see the original:
The benefits and risks of Artificial Intelligence - IT Brief Australia

Learn the ways of machine learning with Python through one of these 5 courses and specializations – Fortune

The fastest-growing jobs in the world right now are ones dealing with AI and machine learning. That's according to the World Economic Forum.

This should come as no surprise, as new technology that is revolutionizing the ways in which the globe works through automation and machine intelligence is being deployed practically daily.


Beyond having foundational skills in mathematics and computer science and soft skills like problem-solving and communication, core to the AI and machine learning space is programming, specifically Python. The programming language is one of the most in-demand for all tech experts.

"Python plays an integral part of machine learning specialists' everyday tasks," says Ratinder Paul Singh Ahuja, CTO and VP at Pure Storage. He specifically points to its diverse set of libraries and their relevant roles; a sketch of that stack in action appears below.
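Ahuja's list of libraries did not survive the page formatting, so as an illustration only, here is a minimal sketch of the kind of stack such roles typically lean on: NumPy for numerical arrays, pandas for tabular data, and scikit-learn for modelling. The dataset and feature names are invented.

```python
# Illustrative sketch of a common Python ML stack: numpy for arrays,
# pandas for tabular data, scikit-learn for modelling. Data is synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "hours_studied": rng.uniform(0, 10, 200),
    "prior_score": rng.uniform(40, 100, 200),
})
# Synthetic label: pass if a noisy combination of features is high enough.
df["passed"] = ((df.hours_studied * 5 + df.prior_score
                 + rng.normal(0, 10, 200)) > 90).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df[["hours_studied", "prior_score"]], df["passed"], random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```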

As you can imagine, best practices in the ever-changing AI field may differ depending on the day, task, and company. So, building foundational skills overall (and being able to differentiate yourself) is important in the space.

The good news for those looking to learn the ropes in the machine learning and Python space is that there are seemingly endless ways to gain knowledge online, even for free.

For those exploring the subject on their own, resources like W3Schools, Kaggle, and Google's crash course are good options. Even something as simple as watching YouTube videos and checking out GitHub can be useful.

"I think if you focus on core technical skills, and also the ability to differentiate, I think that there's still plenty of opportunity for AI enthusiasts to get into the market," says Rakesh Anigundi, Ryzen AI product lead at AMD.

Anigundi adds that because the field and job market are so complicated, even companies themselves are trying to figure out which skills are most useful for building products and solving problems. So, doing anything you can to stay ahead of the game can be part of what helps propel your career.

For those looking for a little bit of a deeper dive into machine learning with Python, Fortune has listed some of the options on the market; they're largely self-paced but vary slightly in terms of price and length.

Participants can watch hours of free videos about machine learning, and each lesson ends with a multiple-choice question. Users are provided with five different challenges to take on. The interactive projects include the creation of a book recommendation engine, a neural network SMS text classifier, and a cat and dog image classifier.

Cost: Free

Length: Self-paced; 36 lessons + 5 projects

Course examples: TensorFlow; Deep Learning Demystified

Hosted with edX, this introductory course allows students to learn about machine learning and AI straight from two of Harvard's expert computer science professors. Participants are exposed to topics like algorithms, neural networks, and natural language processing. Video transcripts are also notably available in nearly a dozen other languages. For those wanting to learn more, the course is part of Harvard's computer science for artificial intelligence professional certificate program.

Cost: Free (certificate available for $299)

Length: 6 weeks (4-5 hours/week)

Course learning goals: Explore advanced data science; train models; examine results; recognize data bias

Data scientists from IBM guide students through machine learning algorithms, Python classification techniques, and data regressions. Participants are recommended to have a working knowledge of Python, data analysis, and data visualization, as well as high school-level mathematics.

Cost: $49/month

Length: 12 hours (approximately)

Module examples: Regression; Classification; Clustering

With nearly 100 hours of content, instructors from Stanford University and DeepLearning.ai, including renowned AI and edtech leader Andrew Ng, walk students through the foundations of machine learning. The specialization also focuses on applications of AI in the real world, especially in Silicon Valley. Participants are recommended to have some basic coding experience and knowledge of high school-level mathematics.

Cost: $49/month

Length: 2 months (10 hours/week)

Course examples: Supervised Machine Learning: Regression and Classification; Advanced Learning Algorithms; Unsupervised Learning, Recommenders, Reinforcement Learning

A professor from the University of Michigan's school of information and college of engineering teaches students the ins and outs of machine learning, with discussion of regressions, classifications, neural networks, and more. The course is for individuals with some existing knowledge of the data and AI world. It is part of a larger specialization focused on data science methods and techniques.

Cost: $49/month

Length: 31 hours (approximately)

Course examples: Fundamentals of Machine Learning; Supervised Machine Learning; Evaluation


Link:
Learn the ways of machine learning with Python through one of these 5 courses and specializations - Fortune

A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. – EdSurge

When Satya Nitta worked at IBM, he and a team of colleagues took on a bold assignment: Use the latest in artificial intelligence to build a new kind of personal digital tutor.

This was before ChatGPT existed, and fewer people were talking about the wonders of AI. But Nitta was working with what was perhaps the highest-profile AI system at the time, IBM's Watson. That AI tool had pulled off some big wins, including beating humans on the Jeopardy! quiz show in 2011.

Nitta says he was optimistic that Watson could power a generalized tutor, but he knew the task would be extremely difficult. "I remember telling IBM top brass that this is going to be a 25-year journey," he recently told EdSurge.

He says his team spent about five years trying, and along the way they helped build some small-scale attempts into learning products, such as a pilot chatbot assistant that was part of a Pearson online psychology courseware system in 2018.

But in the end, Nitta decided that even though the generative AI technology driving excitement these days brings new capabilities that will change education and other fields, the tech just isn't up to delivering on becoming a generalized personal tutor, and won't be for decades at least, if ever.

"We'll have flying cars before we will have AI tutors," he says. "It is a deeply human process that AI is hopelessly incapable of meeting in a meaningful way. It's like being a therapist or like being a nurse."

Instead, he co-founded a new AI company, called Merlyn Mind, that is building other types of AI-powered tools for educators.

Meanwhile, plenty of companies and education leaders these days are hard at work chasing that dream of building AI tutors. Even a recent White House executive order seeks to help the cause.

Earlier this month, Sal Khan, leader of the nonprofit Khan Academy, told the New York Times: "We're at the cusp of using A.I. for probably the biggest positive transformation that education has ever seen. And the way we're going to do that is by giving every student on the planet an artificially intelligent but amazing personal tutor."

Khan Academy has been one of the first organizations to use ChatGPT to try to develop such a tutor, called Khanmigo, which is currently in a pilot phase in a number of schools.

Khan's system does come with an off-putting warning, though, noting that it "makes mistakes sometimes." The warning is necessary because all of the latest AI chatbots suffer from what are known as "hallucinations," the word used to describe situations when a chatbot simply fabricates details when it doesn't know the answer to a question asked by a user.

AI experts are busy trying to offset the hallucination problem, and one of the most promising approaches so far is to bring in a separate AI chatbot to check the results of a system like ChatGPT to see if it has likely made up details. That's what researchers at Georgia Tech have been trying, for instance, hoping that their multi-chatbot system can get to the point where any false information is scrubbed from an answer before it is shown to a student. But it's not yet clear that the approach can get to a level of accuracy that educators will accept.
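The article does not describe the Georgia Tech system's internals, but the general pattern is simple to sketch: one model drafts an answer, and a second model call reviews it for fabricated claims before a student sees it. The following is a minimal sketch using the OpenAI Python client; the prompts, model name, and PASS/FAIL protocol are invented for illustration and are not the researchers' actual design.

```python
# Hypothetical two-model check: a "tutor" model answers, a "verifier" model
# reviews the answer for fabricated details before it is shown to a student.
# Prompts and the PASS/FAIL protocol are invented; this is not the Georgia
# Tech system described in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def tutor_answer(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def verify(question: str, answer: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Check the answer below for fabricated or unsupported "
                "claims. Reply with exactly PASS or FAIL.\n\n"
                f"Question: {question}\nAnswer: {answer}"
            ),
        }],
    )
    return resp.choices[0].message.content.strip().startswith("PASS")

question = "When did the Apollo 11 mission land on the moon?"
answer = tutor_answer(question)
print(answer if verify(question, answer) else "Let me double-check that.")
```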

At this critical point in the development of new AI tools, though, it's useful to ask whether a chatbot tutor is the right goal for developers to head toward. Or is there a better metaphor than "tutor" for what generative AI can do to help students and teachers?

Michael Feldstein spends a lot of time experimenting with chatbots these days. He's a longtime edtech consultant and blogger, and in the past he wasn't shy about calling out what he saw as excessive hype by companies selling edtech tools.

In 2015, he famously criticized promises about what was then the latest in AI for education: a tool from a company called Knewton. The CEO of Knewton, Jose Ferreira, said his product would be "like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile." That led Feldstein to respond that the CEO was selling "snake oil" because, Feldstein argued, the tool was nowhere near living up to that promise. (The assets of Knewton were quietly sold off a few years later.)

So what does Feldstein think of the latest promises by AI experts that effective tutors could be on the near horizon?

"ChatGPT is definitely not snake oil, far from it," he tells EdSurge. "It is also not a robot tutor in the sky that can semi-read your mind. It has new capabilities, and we need to think about what kinds of tutoring functions today's tech can deliver that would be useful to students."

He does think tutoring is a useful way to view what ChatGPT and other new chatbots can do, though. And he says that comes from personal experience.

Feldstein has a relative who is battling a brain hemorrhage, and so Feldstein has been turning to ChatGPT to give him personal lessons in understanding the medical condition and his loved one's prognosis. As Feldstein gets updates from friends and family on Facebook, he says, he asks questions in an ongoing thread in ChatGPT to try to better understand what's happening.

"When I ask it in the right way, it can give me the right amount of detail about, 'What do we know today about her chances of being OK again?'" Feldstein says. "It's not the same as talking to a doctor, but it has tutored me in meaningful ways about a serious subject and helped me become more educated on my relative's condition."

While Feldstein says he would call that a tutor, he argues that it's still important that companies not oversell their AI tools. "We've done a disservice to say they're these all-knowing boxes, or they will be in a few months," he says. "They're tools. They're strange tools. They misbehave in strange ways, as do people."

He points out that even human tutors can make mistakes, but most students have a sense of what they're getting into when they make an appointment with a human tutor.

"When you go into a tutoring center in your college, they don't know everything. You don't know how trained they are. There's a chance they may tell you something that's wrong. But you go in and get the help that you can."

Whatever you call these new AI tools, he says, it will be useful to have an always-on helper that you can ask questions of, even if its results are just a starting point for more learning.

What are new ways that generative AI tools can be used in education, if tutoring ends up not being the right fit?

To Nitta, the stronger role is to serve as an assistant to experts rather than a replacement for an expert tutor. In other words, instead of replacing, say, a therapist, he imagines that chatbots can help a human therapist summarize and organize notes from a session with a patient.

"That's a very helpful tool rather than an AI pretending to be a therapist," he says. Even though that may be seen as boring by some, he argues that the technology's superpower is to automate things that humans don't like to do.

In the educational context, his company is building AI tools designed to help teachers, or to help human tutors, do their jobs better. To that end, Merlyn Mind has taken the unusual step of building its own so-called large language model from scratch, designed for education.

Even then, he argues that the best results come when the model is tuned to support specific education domains, by being trained with vetted datasets rather than relying on ChatGPT and other mainstream tools that draw from vast amounts of information from the internet.

"What does a human tutor do well? They know the student, and they provide human motivation," he adds. "We're all about the AI augmenting the tutor."

Go here to see the original:
A Technologist Spent Years Building an AI Chatbot Tutor. He Decided It Can't Be Done. - EdSurge