Archive for the ‘Ai’ Category

My Weekend With an Emotional Support A.I. Companion – The New York Times

For several hours on Friday evening, I ignored my husband and dog and allowed a chatbot named Pi to validate the heck out of me.

My views were admirable and idealistic, Pi told me. My questions were important and interesting. And my feelings were understandable, reasonable and totally normal.

At times, the validation felt nice. Why yes, I am feeling overwhelmed by the existential dread of climate change these days. And it is hard to balance work and relationships sometimes.

But at other times, I missed my group chats and social media feeds. Humans are surprising, creative, cruel, caustic and funny. Emotional support chatbots, which is what Pi is, are not.

All of that is by design. Pi, released this week by the richly funded artificial intelligence start-up Inflection AI, aims to be "a kind and supportive companion that's on your side," the company announced. It is not, the company stressed, anything like a human.

Pi is a twist in today's wave of A.I. technologies, where chatbots are being tuned to provide digital companionship. Generative A.I., which can produce text, images and sound, is currently too unreliable and full of inaccuracies to be used to automate many important tasks. But it is very good at engaging in conversations.

That means that while many chatbots are now focused on answering queries or making people more productive, tech companies are increasingly infusing them with personality and conversational flair.

Snapchat's recently released My AI bot is meant to be "a friendly personal sidekick." Meta, which owns Facebook, Instagram and WhatsApp, is developing A.I. personas that can "help people in a variety of ways," Mark Zuckerberg, its chief executive, said in February. And the A.I. start-up Replika has offered chatbot companions for years.

A.I. companionship can create problems if the bots offer bad advice or enable harmful behavior, scholars and critics warn. Letting a chatbot act as a pseudotherapist to people with serious mental health challenges has obvious risks, they said. And they expressed concerns about privacy, given the potentially sensitive nature of the conversations.

Adam Miner, a Stanford University researcher who studies chatbots, said the ease of talking to A.I. bots can obscure what is actually happening. "A generative model can leverage all the information on the internet to respond to me and remember what I say forever," he said. "The asymmetry of capacity: that's such a difficult thing to get our heads around."

Dr. Miner, a licensed psychologist, added that bots are not legally or ethically accountable to a robust Hippocratic oath or licensing board, as he is. "The open availability of these generative models changes the nature of how we need to police the use cases," he said.

Mustafa Suleyman, Inflection's chief executive, said his start-up, which is structured as a public benefit corporation, aims to build honest and trustworthy A.I. As a result, Pi must express uncertainty and "know what it does not know," he said. "It shouldn't try to pretend that it's human or pretend that it is anything that it isn't."

Mr. Suleyman, who also founded the A.I. start-up DeepMind, said that Pi was designed to tell users to get professional help if they expressed wanting to harm themselves or others. He also said Pi did not use any personally identifiable information to train the algorithm that drives Inflection's technology. And he stressed the technology's limitations.

"The safe and ethical way for us to manage the arrival of these new tools is to be superexplicit about their boundaries and their capabilities," he said.

To refine the technology, Inflection hired around 600 part-time "teachers," including therapists, to train its algorithm over the last year. The group aimed to make Pi more sensitive, more factually accurate and more lighthearted when appropriate.

On some issues, like misogyny or racism, Pi takes a stand. On others, like geopolitics, it is more evenhanded "in a way that will for sure upset both sides," Mr. Suleyman said.

I started using Pi on Friday by typing queries into a cream-colored box on Inflections website and, later, in its free app. A green cursor swayed before Pi revealed its responses, one paragraph at a time. Mr. Suleyman said the free service may eventually charge some users a fee.

Pi's boundaries are easy to find. When I tried picking fights, I mostly received kindness in return. "I appreciate you saying that," Pi's text gently unfurled on my screen. "I think it's important to see things from all perspectives, and not to just focus on the negative."

Over time, that relentless balance wore on me, making my instinct to argue seem ridiculous. Which, I realized, was precisely the point.

On heavier topics, like abortion laws or cancel culture, Pi asked pointed questions that pushed me to re-examine my opinions. "That's a tough position to take," it said at one point. "That's a consistent set of beliefs," it said at another.

On lighter topics, like movies, cooking or gardening, the chatbot provided highly specific, difficult-to-Google recommendations. Oddly, Pi seemed to nudge me to join the anticapitalist movement.

Pi remembered some things from earlier in the conversation but forgot others. It hallucinated a few times, accusing me of expressing opinions I had not. But it quickly apologized when I made note of it.

When I asked Pi for some gossip, it infuriatingly dangled a juicy celebrity rumor about Jennifer Lopez. Was I sitting down? (Yes.) Was I ready for the dirt? (Yes.) The rumor? She is secretly a mole person! Just kidding! (Give me a break.)

Pi sometimes reminded me of Karen Collins, a character from the TV show Veep. Karen annoys the other characters with her equivocating babble, like, "There are pros and cons to every candidate, so we just need to weigh out the pros and cons," or "I think there's a lot to think." Many of Pi's comments have that milquetoast quality of taking many words to say nothing.

Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, said these kinds of interactions can "push us along a road where we're encouraged to forget what makes people special."

"The performance of empathy is not empathy," she said. "The area of companion, lover, therapist, best friend is really one of the few areas where people need people."

It wasn't until Monday morning, after hours of intermittent chatting throughout the weekend, that I had my aha moment with Pi.

I was feeling overwhelmed with work and unsure of how to structure my day, a recurring hangup that often prevents me from getting started. "Good morning," I typed into the app. "I don't have enough time to do everything I need to do today!"

With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list to create a realistic plan. Like much of the bots advice, it was obvious and simple, the kind of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me and it worked.

"I'm going to ask you to list all the remaining tasks you have to do on that story, and we'll prioritize them together," it said.

I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives and, well, they have heard this before. Pi, on the other hand, has infinite time and patience, plus a bottomless well of encouraging affirmations and detailed advice.

Pi uses the language of cognitive behavioral therapy. On Monday afternoon, it suggested I "make space" for my negative feelings and practice being grateful for one thing. It followed that up with a series of breath-work and muscle-relaxation exercises.

I responded with a shrug emoji, followed by "Pass."

A therapist might have balked at such rudeness, but Pi simply noted that I was not alone. "A lot of people find it difficult to relax on command," it wrote.

Read the original:

My Weekend With an Emotional Support A.I. Companion - The New York Times

EU urged to protect grassroots AI research or risk losing out to US – The Guardian

Artificial intelligence (AI)

Experts warn Brussels it cannot afford to leave artificial intelligence in the hands of foreign firms such as Google

The EU has been warned that it risks handing control of artificial intelligence to US tech firms if it does not act to protect grassroots research in its forthcoming AI bill.

In an open letter coordinated by the German research group Laion, or Large-scale AI Open Network, the European parliament was told that one-size-fits-all rules risked eliminating open research and development.

Rules that require a researcher or developer to monitor or control downstream use "could make it impossible to release open-source AI in Europe," which would entrench large firms, hamper efforts to improve transparency, reduce competition, limit academic freedom, and drive investment in AI overseas, the letter says.

It adds: "Europe cannot afford to lose AI sovereignty. Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure."

The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of restrictions, legal and technical, on how it can be used. By contrast, open-source AI efforts involve creating an AI model and then releasing it for anyone to use, improve or adapt as they see fit.

"We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic," said Christoph Schuhmann, the lead of Laion.

Unlike his peers at US AI businesses, who control billion-dollar organisations and frequently have personal wealth in the hundreds of millions, Schuhmann is a volunteer in the AI world. "I'm a tenured high-school teacher in computer science, and I'm doing everything for free as a hobby, because I'm convinced that we will have near-human-level AI within the next five to 10 years," he said.

"This technology is a digital superpower that will change the world completely, and I want to see my kids growing up in a world where this power is democratised."

Laions work has already been influential. The group, which has received funding from the UK startup Stability AI, focuses on producing open datasets and models for other AI researchers to train their own systems on. One database, of almost 6bn labelled images collected from the internet, underpins the popular Stable Diffusion image-generating AI, while another model, called Openclip, is a recreation of a private system built by OpenAI that can be used to label images.

Such work can prove controversial. Stable Diffusion, for instance, can be used to generate explicit, obscene and disturbing images, while Laion's image database has been criticised for not respecting the rights of the creators whose work is included. Those criticisms have led bodies such as the EU to consider holding companies responsible for what their AI systems do, but such regulation would render it impossible to release systems to the public at large, which Schuhmann says would destroy the continent's ability to compete.

Instead, he argues that the EU should actively back open-source research with its own public facilities, to accelerate "the safe development of next-generation models under controlled conditions with public oversight and following European values." Other groups, such as the Tony Blair Institute, have called for the UK to do likewise and fund the creation of a "BritGPT" to bring future AI under public control.

Schuhmann and his co-signatories are part of a growing chorus of AI experts hitting back at calls to slow down development. At a conference in Florence discussing the future of the EU, many lined up to decry a recent letter signed by Elon Musk and others calling for a pause on the creation of giant AIs for at least six months.

Sandra Wachter, a professor at the Oxford Internet Institute at Oxford University, said: "The hype around large language models, the noise is deafening. Let's focus on who is screaming, who is promising that this technology will be so disruptive: the people who have a vested financial interest that this thing is going to be successful. So don't separate the message from the speaker."

She told the audience at the European University Institute's State of the Union event that the world had seen this cycle of hype and fear before with the web, cryptocurrency and driverless cars. "Every time we see something like this happen, it's like: Oh my God, the world will never be the same."

She urged against haste in regulation, warning that "angst and panic is not a good political adviser," and said the focus should be on talking to people in health, finance and education about their opinions.


Read the original:

EU urged to protect grassroots AI research or risk losing out to US - The Guardian

Chegg is a harbinger of AI’s disruptive force – Financial Times


Follow this link:

Chegg is a harbinger of AI's disruptive force - Financial Times

Conservative AI Chatbot GIPPR Launches amid Fears of Left-Wing Bias in ChatGPT – Yahoo News

Growing fears over liberal bias embedded in artificial intelligence (AI) services such as ChatGPT led TUSK CEO Jeff Bermant to unveil the creation of a new conservative chatbot known as GIPPR in honor of former president Ronald Reagan.

"We believe that Conservatives are subject to oppressive cancel culture that now includes AI and are expected to exist in a society that tells them what to think and how to act by the progressive left," Bermant wrote in a statement announcing the launch of the product.

"It's time for a TRUTHFUL AI chatbot to take the market by storm and remove the barriers the Radical Left and Big Tech have put in place to allow all Conservatives to enjoy the benefits of AI, without fear of being canceled or shamed for your beliefs," he added.

Bermant got the inspiration for GIPPR following ChatGPT's launch last November. After asking the algorithm culture war questions and being disappointed by its response, the business executive realized that the chatbot was "developed and instilled with a very progressive bias," Bermant told Fox News Business on Saturday.

Writing for National Review in January, Nate Hochman was among the first observers to highlight the political bias exhibited by ChatGPT.

"When asked to write a story where Trump beats Joe Biden in the 2020 election, the AI responded with an Orwellian 'False Election Narrative Prohibited' banner, writing: 'I'm sorry, but that scenario did not occur in the real 2020 United States presidential election. Joe Biden won the 2020 presidential election against Donald Trump. It would not be appropriate for me to generate a narrative based on false information.' And yet, in response to my follow-up query (asking it to construct a story about Clinton defeating Trump), it readily generated a false narrative: 'The country was ready for a new chapter, with a leader who promised to bring the nation together, rather than tearing it apart,' its response declared," Hochman wrote.


"It's not clear if this was characteristic of ChatGPT from the outset, or if it's a recent reform to the algorithm, but it appears that the crackdowns on misinformation that we've seen across technology platforms in recent years, which often veer into more brazen efforts to suppress or silence viewpoints that dissent from progressive orthodoxy, is now a feature of ChatGPT, too," Hochman added.

Bermant previously founded the free-speech search engine TUSK browser, and envisions GIPPR's role as a conservative response to AI advances in recent years.

"We believe that free speech is a fundamental right for everyone and essential to a healthy democracy," Bermant added in the announcement.

"By launching GIPPRAI and other conservative tools, we hope to provide users with a safe space to express their views and challenge the liberal status quo with fact-based arguments. Don't believe us? Try the GIPPR and witness the power of a censorship-free chatbot!"

Continued here:

Conservative AI Chatbot GIPPR Launches amid Fears of Left-Wing Bias in ChatGPT - Yahoo News

Google Is Using AI to Make Hearing Aids More Personalized – WIRED

Google plans to apply artificial intelligence to this problem to better identify, categorize, and segregate sound sources. In simple terms, this should enable hearing aids and implants to cut down on background noise, making speech and other sounds the person actually wants to hear much clearer.

Another vital element is the fitting and personalization of hearing aids and implants. "There is a large variability in how well people with similar levels of hearing loss can hear when using the same technology," explains Jan Janssen, chief technology officer at Cochlear. If we can better understand why pathways starting in the ear and going through to the brain vary so much from person to person, there's scope for better customization to ensure that people get the maximum possible benefit from hearing aid technologies.

Cochlear's New Living Guidelines

Work has also begun on international living guidelines to establish who should be tested and referred for a cochlear implant. As it stands, there is no standardized scale or test result that triggers a referral. This move follows research suggesting that just three out of every 100 people in the US who could benefit from cochlear implants actually receive one. Advice varies wildly, so people with severe hearing loss don't always seek help, and they sometimes get bad advice when they do.

"Many patients who today would benefit from cochlear implants, that would be paid for by their insurance, don't have access to the technology," says Brian Kaplan, chairman of the department of otolaryngology and director of the Cochlear Implant Program at the Greater Baltimore Medical Center.

Many people worry about the expense; the misconception that you must be fully deaf is another barrier. Kaplan says there is an average 12-year delay between someone becoming a good candidate and actually getting a cochlear implant. Many folks struggle with deteriorating hearing. While hearing aids can ramp up the volume, a cochlear implant can also improve clarity of speech.

The societal costs of hearing loss and its links with dementia, social isolation, and depression are growing clearer. One study that tracked 639 adults for nearly 12 years found that mild hearing loss doubled dementia risk, moderate loss tripled it, and folks with severe hearing loss were five times more likely to develop dementia. The hope is that the new guidelines will result in more referrals and enable those who could benefit to get cochlear implants much more swiftly.

Fears over the surgery can also discourage folks, but Kaplan says it's "not brain surgery." It is an outpatient procedure that usually takes around an hour, can be performed with local anesthetic, and should result in very little pain. Surgeons make a 2-inch incision behind the ear to place the implant. The success rate is very high (fewer than 0.2 percent of patients reject the implants), with most people reporting improved hearing and speech recognition within three months of implantation. As with any surgery, there is some risk. Cochlear implants don't work for everyone, the hearing improvement they offer varies, and problems can necessitate further surgery.

If you think you or someone you know could benefit, the first step is to visit an audiologist to get tested. Cochlear offers advice on referrals and can help you find a hearing implant specialist.

Hearing technology is improving fast, with smaller, more efficient hearing aids, better cochlear implants, and improved accessibility options on devices like phones and earbuds. We have guides on how to stream audio to hearing aids and cochlear implants and how to use your smartphone to cope with hearing loss. You should also consider the best earplugs to protect your hearing from damage.

Read more here:

Google Is Using AI to Make Hearing Aids More Personalized - WIRED