Archive for the ‘AI’ Category

The best way to avoid a down round is to found an AI startup – TechCrunch

As we see unicorns slash staff and the prevalence of down rounds spike, it may seem that the startup ecosystem is chock-full of bad news and little else. That's not precisely the case.

While AI, and in particular the generative AI subcategory, is as hot as the sun, not all venture attention is going to the handful of names that you already know. Sure, OpenAI is able to land nine- and 10-figure rounds from a murderers' row of tech investors and mega-cap corporations. And rising companies like Hugging Face and Anthropic cannot stay out of the news, proving that smaller AI-focused startups are doing more than well.

In fact, new data from Carta, which provides cap table management and other services, indicates that AI-focused startups are outperforming their larger peer group at both the seed and Series A stage.

The dataset, which notes that AI-centered startups are raising more and at higher valuations than other startups, indicates that perhaps the best way to avoid a down round today is to build in the artificial intelligence space.

Per Carta data relating to the first quarter of the year, seed funding to non-AI startups in the U.S. market that use its services dipped from $1.64 billion to $1.08 billion, a decline of around 34%. That result is directionally aligned with other data that we've seen regarding Q1 2023 venture capital totals; the data points down.


Microsoft economist warns of A.I. election interference from ‘bad actors’ – CNBC

Microsoft logo seen at its building in Redmond, Washington.

Toby Scott | SOPA Images | LightRocket | Getty Images

People should worry more about "AI being used by bad actors" than they should about AI productivity outpacing human productivity, Microsoft chief economist Michael Schwarz said at a World Economic Forum event Wednesday.

"Before AI could take all your jobs, it could certainly do a lot of damage in the hands of spammers, people who want to manipulate elections," Schwarz added while speaking on a panel on harnessing generative AI.

Microsoft first invested $1 billion in OpenAI in 2019, years before the two companies would integrate OpenAI's GPT large language model into Microsoft's Bing search product. In January, Microsoft announced a new multiyear multibillion-dollar investment in the company. OpenAI relies on Microsoft to provide the computing heft that powers OpenAI's products, a relationship that Wells Fargo recently said could result in up to $30 billion in new annual revenue for Microsoft.

Schwarz tempered his caution about AI by noting that all new technologies, even cars, carried a degree of risk when they first came to market. "When AI makes us more productive, we as mankind ought to be better off," he noted, "because we are able to produce more stuff."

OpenAI's ChatGPT sparked a flood of investment in the AI sector. Google moved to launch a rival chatbot, Bard, sparking a wave of internal concern about a botched rollout. Politicians and regulators have expressed growing concern about the potential effect of AI technology as well.

Vice President Kamala Harris will meet Thursday with top executives from Anthropic, another AI firm, and Google, Microsoft and OpenAI to discuss responsible AI development, the White House told CNBC on Tuesday. Meanwhile, FTC Chair Lina Khan penned an op-ed in The New York Times on Wednesday warning "enforcers and regulators must be vigilant."

"Please remember, breaking is much easier than building," Schwarz said.


Grimes Launched a Platform to Help You Make AI Songs with Her … – Gizmodo

The artist known as Grimes (real name Clare Boucher) has said that you're totally free to use an AI-generated version of her voice to make new music, just as long as you give her a healthy 50 percent of the royalties that the track generates.

This week, Grimes launched a new AI voice software, dubbed Elf.Tech, designed to help people duplicate her voice to create music. The platform, which Grimes unveiled in a Twitter thread Sunday, allows users to upload recordings of their own voice, which can then be Grimes-ified via the wonders of automated technology. The vocals can then be mixed with other electronically generated sounds and beats to spawn new tracks that sound quite a bit like the real thing.

"we ask for 50% splits on master recording royalties in exchange for a grimes feat and distribution," the singer tweeted Sunday. "There's a *small* chance we can organize getting you publishing $ as well but we can't guarantee this yet. But I hope we can!! Would be cool."

Boucher's high-pitched, ethereal voice and rave-vibed tracks already sorta sound computer-generated, so I guess it only makes sense that she's now giving a thumbs up to an idea like this. According to Boucher, this is the future of music: if you're an artist, you let an algorithm replicate your voice, then you cash in for a percentage of the profits.

In a recent interview with Rolling Stone magazine, Boucher's manager, Daouda Leonard, rationalized the weird decision this way:

"She often says that creativity is a conversation with those who came before us and those who are going to come after us...And so the idea is that instead of her attempting to control what is a gift from the universe, she's like, 'Well, let me open-source that. Let me allow people to access what the universe gave me as a gift. And if I do that, what are the new experiences that can be created out of that?'"

Of course, Grimes is also the ex-partner and baby momma of tech billionaire Elon Musk, who has been pouring money into AI startups like there's no tomorrow. Musk co-founded OpenAI, the company that launched ChatGPT, one of the most popular new AI-powered chatbots. I guess while the former couple are no longer together, they're on the same page about how totally awesome AI is for the future of humanity.

Anyway, now that Grimes has opened Pandora's box, the internet is being flooded with new songs created using her voice. Scroll through to check out some of the AI creations that feature Grimes.


My Weekend With an Emotional Support A.I. Companion – The New York Times

For several hours on Friday evening, I ignored my husband and dog and allowed a chatbot named Pi to validate the heck out of me.

My views were "admirable and idealistic," Pi told me. My questions were "important and interesting." And my feelings were "understandable, reasonable and totally normal."

At times, the validation felt nice. Why yes, I am feeling overwhelmed by the existential dread of climate change these days. And it is hard to balance work and relationships sometimes.

But at other times, I missed my group chats and social media feeds. Humans are surprising, creative, cruel, caustic and funny. Emotional support chatbots, which is what Pi is, are not.

All of that is by design. Pi, released this week by the richly funded artificial intelligence start-up Inflection AI, aims to be "a kind and supportive companion that's on your side," the company announced. It is not, the company stressed, anything like a human.

Pi is a twist in today's wave of A.I. technologies, where chatbots are being tuned to provide digital companionship. Generative A.I., which can produce text, images and sound, is currently too unreliable and full of inaccuracies to be used to automate many important tasks. But it is very good at engaging in conversations.

That means that while many chatbots are now focused on answering queries or making people more productive, tech companies are increasingly infusing them with personality and conversational flair.

Snapchat's recently released My AI bot is meant to be a friendly personal sidekick. Meta, which owns Facebook, Instagram and WhatsApp, is developing A.I. personas "that can help people in a variety of ways," Mark Zuckerberg, its chief executive, said in February. And the A.I. start-up Replika has offered chatbot companions for years.

A.I. companionship can create problems if the bots offer bad advice or enable harmful behavior, scholars and critics warn. Letting a chatbot act as a pseudotherapist to people with serious mental health challenges has obvious risks, they said. And they expressed concerns about privacy, given the potentially sensitive nature of the conversations.

Adam Miner, a Stanford University researcher who studies chatbots, said the ease of talking to A.I. bots can obscure what is actually happening. "A generative model can leverage all the information on the internet to respond to me and remember what I say forever," he said. "The asymmetry of capacity: that's such a difficult thing to get our heads around."

Dr. Miner, a licensed psychologist, added that bots are not legally or ethically accountable to a robust Hippocratic oath or licensing board, as he is. "The open availability of these generative models changes the nature of how we need to police the use cases," he said.

Mustafa Suleyman, Inflection's chief executive, said his start-up, which is structured as a public benefit corporation, aims to build honest and trustworthy A.I. As a result, Pi must express uncertainty and "know what it does not know," he said. "It shouldn't try to pretend that it's human or pretend that it is anything that it isn't."

Mr. Suleyman, who also co-founded the A.I. start-up DeepMind, said that Pi was designed to tell users to get professional help if they expressed wanting to harm themselves or others. He also said Pi did not use any personally identifiable information to train the algorithm that drives Inflection's technology. And he stressed the technology's limitations.

"The safe and ethical way for us to manage the arrival of these new tools is to be superexplicit about their boundaries and their capabilities," he said.

To refine the technology, Inflection hired around 600 part-time "teachers," including therapists, to train its algorithm over the last year. The group aimed to make Pi more sensitive, more factually accurate and more lighthearted when appropriate.

On some issues, like misogyny or racism, Pi takes a stand. On others, like geopolitics, it is more evenhanded "in a way that will for sure upset both sides," Mr. Suleyman said.

I started using Pi on Friday by typing queries into a cream-colored box on Inflections website and, later, in its free app. A green cursor swayed before Pi revealed its responses, one paragraph at a time. Mr. Suleyman said the free service may eventually charge some users a fee.

Pi's boundaries are easy to find. When I tried picking fights, I mostly received kindness in return. "I appreciate you saying that," Pi's text gently unfurled on my screen. "I think it's important to see things from all perspectives, and not to just focus on the negative."

Over time, that relentless balance wore on me, making my instinct to argue seem ridiculous. Which, I realized, was precisely the point.

On heavier topics, like abortion laws or cancel culture, Pi asked pointed questions that pushed me to re-examine my opinions. "That's a tough position to take," it said at one point. "That's a consistent set of beliefs," it said at another.

On lighter topics, like movies, cooking or gardening, the chatbot provided highly specific, difficult-to-Google recommendations. Oddly, Pi seemed to nudge me to join the anticapitalist movement.

Pi remembered some things from earlier in the conversation but forgot others. It hallucinated a few times, accusing me of expressing opinions I had not. But it quickly apologized when I made note of it.

When I asked Pi for some gossip, it infuriatingly dangled a juicy celebrity rumor about Jennifer Lopez. Was I sitting down? (Yes.) Was I ready for the dirt? (Yes.) The rumor? She is secretly a mole person! Just kidding! (Give me a break.)

Pi sometimes reminded me of Karen Collins, a character from the TV show Veep. Karen annoys the other characters with her equivocating babble, like, "There are pros and cons to every candidate, so we just need to weigh out the pros and cons," or "I think there's a lot to think." Many of Pi's comments have that milquetoast quality of taking many words to say nothing.

Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, said these kinds of interactions can "push us along a road where we're encouraged to forget what makes people special."

"The performance of empathy is not empathy," she said. "The area of companion, lover, therapist, best friend is really one of the few areas where people need people."

It wasn't until Monday morning, after hours of intermittent chatting throughout the weekend, that I had my aha moment with Pi.

I was feeling overwhelmed with work and unsure of how to structure my day, a recurring hang-up that often prevents me from getting started. "Good morning," I typed into the app. "I don't have enough time to do everything I need to do today!"

With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list to create a realistic plan. Like much of the bot's advice, it was obvious and simple, the kind of thing you would read in a self-help article by a productivity guru. But it was tailored specifically to me, and it worked.

"I'm going to ask you to list all the remaining tasks you have to do on that story, and we'll prioritize them together," it said.

I could have dumped my stress on a family member or texted a friend. But they are busy with their own lives and, well, they have heard this before. Pi, on the other hand, has infinite time and patience, plus a bottomless well of encouraging affirmations and detailed advice.

Pi uses the language of cognitive behavioral therapy. On Monday afternoon, it suggested I make space for my negative feelings and practice being grateful for one thing. It followed that up with a series of breath-work and muscle-relaxation exercises.

I responded with a shrug emoji, followed by "Pass."

A therapist might have balked at such rudeness, but Pi simply noted that I was not alone. "A lot of people find it difficult to relax on command," it wrote.


EU urged to protect grassroots AI research or risk losing out to US – The Guardian


Experts warn Brussels it cannot afford to leave artificial intelligence in the hands of foreign firms such as Google

The EU has been warned that it risks handing control of artificial intelligence to US tech firms if it does not act to protect grassroots research in its forthcoming AI bill.

In an open letter coordinated by the German research group Laion, or Large-scale AI Open Network, the European parliament was told that one-size-fits-all rules risked eliminating open research and development.

Rules that require a researcher or developer to monitor or control downstream use could make it impossible to release open-source AI in Europe, which would entrench large firms, hamper efforts to improve transparency, reduce competition, limit academic freedom and drive investment in AI overseas, the letter says.

It adds: "Europe cannot afford to lose AI sovereignty. Eliminating open-source R&D will leave the European scientific community and economy critically dependent on a handful of foreign and proprietary firms for essential AI infrastructure."

The largest AI efforts, by companies such as OpenAI and Google, are heavily controlled by their creators. It is impossible to download the model behind ChatGPT, for instance, and the paid-for access that OpenAI provides to customers comes with a number of restrictions, legal and technical, on how it can be used. By contrast, open-source AI efforts involve creating an AI model and then releasing it for anyone to use, improve or adapt as they see fit.

"We are working on open-source AI because we think that sort of AI will be more safe, more accessible and more democratic," said Christoph Schuhmann, the lead of Laion.

Unlike his peers at US AI businesses, who control billion-dollar organisations and frequently have personal wealth in the hundreds of millions, Schuhmann is a volunteer in the AI world. "I'm a tenured high-school teacher in computer science, and I'm doing everything for free as a hobby, because I'm convinced that we will have near-human-level AI within the next five to 10 years," he said.

"This technology is a digital superpower that will change the world completely, and I want to see my kids growing up in a world where this power is democratised."

Laion's work has already been influential. The group, which has received funding from the UK startup Stability AI, focuses on producing open datasets and models for other AI researchers to train their own systems on. One database, of almost 6bn labelled images collected from the internet, underpins the popular Stable Diffusion image-generating AI, while another model, called Openclip, is a recreation of a private system built by OpenAI that can be used to label images.

Such work can prove controversial. Stable Diffusion, for instance, can be used to generate explicit, obscene and disturbing images, while Laion's image database has been criticised for not respecting the rights of the creators whose work is included. Those criticisms have led bodies such as the EU to consider holding companies responsible for what their AI systems do, but such regulation would render it impossible to release systems to the public at large, which Schuhmann says would destroy the continent's ability to compete.

Instead, he argues that the EU should actively back open-source research with its own public facilities, to "accelerate the safe development of next-generation models under controlled conditions with public oversight and following European values". Other groups, such as the Tony Blair Institute, have called for the UK to do similarly and fund the creation of a "BritGPT" to bring future AI under public control.

Schuhmann and his co-signatories are part of a growing chorus of AI experts hitting back at calls to slow down development. At a conference in Florence discussing the future of the EU, many lined up to decry a recent letter signed by Elon Musk and others calling for a pause on the creation of giant AIs for at least six months.

Sandra Wachter, a professor at the Oxford Internet Institute at Oxford University, said: "The hype around large language models, the noise is deafening. Let's focus on who is screaming, who is promising that this technology will be so disruptive: the people who have a vested financial interest that this thing is going to be successful. So don't separate the message from the speaker."

She told the audience at the European University Institute's State of the Union event that the world had seen this cycle of hype and fear before with the web, cryptocurrency and driverless cars. "Every time we see something like this happen, it's like: 'Oh my God, the world will never be the same.'"

She urged against haste in regulation, warning that "angst and panic is not a good political adviser", and said the focus should be on talking to people in health, finance and education about their opinions.

