Archive for the ‘Artificial Intelligence’ Category

Siri co-founder Tom Gruber helped bring AI into the mainstream. Here’s why he’s worried about how fast AI is growing – ABC News

Tom Gruber speaks in a soft and deep American drawl. Passionate and methodical, he reflects on the moment he and two colleagues created Siri, Apple's virtual assistant, the high point of his 40-year career in Silicon Valley's pursuit of artificial intelligence.

"Around 2007-2008, we had everything in place to bring real artificial intelligence into everyone's hand, and that was thrilling.

"Siri was very playful. And that was by design," he declares with a wide grin and a laugh almost like a proud dad.

"Now it's used roughly a billion times a day. That's a lot of use. It's on 2 billion devices. It is absolutely woven into everyday life."

But what Mr Gruber and long-time colleagues working on artificial intelligence (AI) have seen in the past 18 months has scared them.

"There's something different this time," he says.

"And that something different is that the amount of capabilities that were just uncovered in the last year or two that has surprised the people who were building them, and surpassed all of our expectations at the pace to which these things were uncovered."

ChatGPT, produced by the Microsoft-funded company OpenAI, is the best known of the new "generative" AI chatbots that have been released.

Trained on the knowledge of the internet and then released to be tested on the public, this new AI has spread at a record pace.

In Australia, it is already causing disruption. Schools and universities have both embraced and banned it. Workplaces are using it for shortcuts and efficiencies, raising questions about whether it will abolish some jobs. IBM's CEO has already said about 8,000 jobs could be replaced by AI and automation.

Microsoft told 7.30 this week that "real-world experience and feedback is critical and can't be fully replicated in a lab".

But Mr Gruber, along with thousands of AI industry scientists, coders and researchers, wants testing on the public to stop until a framework is put in place to, in his words, "keep this technology safe and on the side of humans".

"What they're doing by releasing it [to the world] is they're getting data from humans about the kind of stuff that normal humans would do when they get their hands on such a model. And they're like, learning by trial and error," Mr Gruber tells 7.30.

"There's a problem with that model that's called a human trial without consent.

Toby Walsh is the chief scientist at UNSW's new AI Institute. He says another part of the concern is the rate at which ChatGPT is being adopted.

"Even small harms, multiplied by a billion people, could be cause for significant concern," he says.

"ChatGPT was used by a million people in five days, [and] 100 million people at the end of the first month.

"Now, six months later, it's in the hands of a billion people. We've never had technologies before where you could roll them out so quickly."

Here's the rub the new AI models are really good at being fake humans in text. They're also really good at creating fake images, and even the voices of real people.

If you don't want to pay a model for a photo shoot, or you want a contract written quickly without a lawyer, AI is a tool that's at your disposal.

But the new AI apps are also great for fraudsters and those who want to manipulate public perceptions.

CBC Canada's public broadcaster reported that police are investigating cases where AI-trained fake voices were used to scam money from parents who believed they were speaking to their children.

"Just to clarify, it only takes a few seconds now to clone someone's voice. I could ring up your answerphone, record your voice, and speak just like you," Mr Walsh says.

Mr Gruber is scared by how new AI can "pretend to be human really well".

"Humans are already pretty gullible," he says.

"I mean, a lot of people would talk to Siri for a while, mostly for entertainment purposes.

"But there are a lot of people who get sucked into such a thing. And that's one of the really big risks.

"We haven't even seen the beginning of all the ways people can use this amazing piece of technology to amplify acts of mischief.

"If we can't believe our senses, and use our inherited ability to detect whether that thing is fake or real, then a lot of things that we do as a society start to unravel."

AI is not like computer-coded programs whose lines of script can be checked and corrected one by one.

"They're closer to [being] organic," Connor Leahy says.

The London-based coder is at the beginning of his career and already the CEO of his own company, Conjecture, which aims to create "safe AI" and is funded to the tune of millions of dollars by venture capitalists and former tech success stories such as the creator of Skype.

Mr Leahy's definition of safe AI is "AI that truly does what we want it to do, and that we can rely on them to not do things we don't want them to do".

Sounds simple enough until he describes the current AI apps.

"They are complete mystery boxes, black boxes, as we would say in technical terms," he says.

"There's all these kinds of weirdness that we don't understand, even with, for example, relatively simple image recognition systems, which have existed for quite a while.

"They have these problems, which are called adversarial examples.

"And what this means is that you can completely confuse the system by just changing a single pixel in an image; you just change one pixel and suddenly [the system] thinks that a dog is an ostrich.

"This is very strange. And we don't really know why this happens. And we don't really know how to fix it."
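
Mr Leahy's one-pixel example can be illustrated with a toy, hypothetical classifier. This is not how real vision models are built; the weights and the four-pixel "image" below are invented for illustration. The point is that a model leaning heavily on one input can have its answer flipped by a tiny change to that input.

```python
# Toy sketch of an "adversarial example": a linear classifier whose
# decision flips when a single heavily weighted pixel changes slightly.

def classify(image, weights):
    """Return 'dog' or 'ostrich' from a weighted sum of pixel values."""
    score = sum(w * p for w, p in zip(weights, image))
    return "dog" if score > 0 else "ostrich"

# Hypothetical 4-pixel "image" and hand-picked weights (assumptions).
weights = [0.1, 0.1, 0.1, -5.0]   # one pixel carries outsized weight
image = [1.0, 1.0, 1.0, 0.0]

print(classify(image, weights))   # score = 0.3, so: dog

# Perturb just the heavily weighted pixel by a small amount.
image[3] = 0.1
print(classify(image, weights))   # score = -0.2, so: ostrich
```

Real adversarial attacks work against networks with millions of weights, but the underlying fragility, a decision balanced on inputs the designers never inspected, is the same.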

This "black box"has led OpenAI to develop a tool to help identify which parts of its AI system are responsible for its behaviours.

William Saunders, the interpretability team manager at OpenAI, told industry site TechCrunch: "We want to really be able to know that we can trust what the model is doing, and the answer that it produces."

Each large language model he's referring to is a neural network, and each individual neuron makes decisions based on the information it receives, a bit like the human brain. That neuron then sends its answer to the rest of the network.
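
As a rough sketch of that idea (the weights and inputs below are made up; real networks chain millions or billions of such units), a single artificial neuron just weighs its inputs, adds a bias, and squashes the result before passing it on:

```python
# Minimal sketch of one artificial neuron: weighted sum plus bias,
# pushed through a squashing (sigmoid) function. Values are illustrative.
import math

def neuron(inputs, weights, bias):
    """Combine inputs by weight, then squash the total into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid activation

out = neuron([0.5, 0.2], weights=[0.8, -0.4], bias=0.1)
print(round(out, 3))   # a value between 0 and 1, the neuron's "answer"
```

Interpretability work of the kind Mr Saunders describes asks what concept, if any, each of those learned weights has come to represent, which is why explaining even 1,000 neurons is hard.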

OpenAI says its tool could "confidently" explain the behaviour of just 1,000 of the 307,200 neurons in its GPT-2 system. That's two generations back.

Meanwhile, GPT-4 has an estimated trillion neurons.

Ironically, OpenAI is using GPT-4 to run its tests on GPT-2, which underscores the point that it has released something into the world it barely understands.

Science fiction writer Isaac Asimov famously wrote the Three Laws of Robotics, the first of which Mr Gruber expands upon: "Robots should do no harm to humans or not cause harm to happen through inaction."

That doesn't apply to AI at the moment because it's not a law it can understand at a conceptual level,"because the AI bot or the language model doesn't have human values engineered into it".

It's a big word calculator.

"It's only been trained to solve this astonishingly simple game of cards [in which each card is a word]," Mr Gruber says.

"It plays this game where it puts a card down and then guesses what the next word is. And then, once it figures that out, you know, OK, it guesses the next word. And so on."

Those words are what we read as its response to a question asked by a human.

"It plays the game a trillion, trillion times, an astonishing amount of scale and computation, and a very simple operation. And that's the only thing it's told to do."
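
A toy version of that word-guessing game can be sketched in a few lines. This bigram counter is a drastic simplification of a real language model (the training text and words here are invented), but the training objective, predict the next word, is the same:

```python
# Toy "guess the next word" model: count which word follows which in a
# training text, then always predict the most frequent follower.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# Tally, for each word, what came immediately after it.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def next_word(word):
    """Predict the most common word seen after `word` in training."""
    return followers[word].most_common(1)[0][0]

print(next_word("the"))   # 'cat' follows 'the' most often in this sample
```

A model like GPT-4 plays the same game with hundreds of billions of learned weights instead of a lookup table, and over whole sequences rather than single words, but it is still, at bottom, choosing a likely next token.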

The new generation of AI can mimic human language very effectively, but it cannot feel empathy for humans.

"They are a very blunt instrument; we don't know how to make them care about us," Mr Leahy says.

"This is similar [to] how potentially a human sociopath understands that doing mean things is mean, but they don't care. This is the problem that we currently face with AI."

This is all happening now, not in some doomsday future scenario on a Hollywood scale where sentient AI takes over the world.

It's no wonder, then, that so many in the industry are calling for help.

Tech insiders are now calling for their wings to be clipped, even in America, where it is almost unheard of for US corporations to ask to be regulated.

But that is precisely what happened this week, when the head of OpenAI, Sam Altman, appeared before the US Congress.

In stunning testimony, the 38-year-old declared: "If this technology goes wrong, it can go quite wrong; we want to be vocal about that, we want to work with the government to prevent that from happening."

Mr Altman has been quite open about his fears, telling ABC America in an interview earlier this year he is "a little bit" scared of AI's capabilities, before adding that if he said he wasn't, he shouldn't be trusted.

Mr Leahy is also outspoken.

"There is currently more regulation on selling a sandwich to the public than there is to building completely novel powerful AI systems with unknown capabilities and intelligence and releasing them to the general public widely onto the internet, you know, accessible by API to interface with any tools they want," he said.

"The government at the moment has no regulation in place whatsoever about this."

The challenge now is how fast safeguards can be installed and whether they are effective.

"It's kind of like a sense of futurists' whack-a-mole," Mr Leahy told 7.30.

"It's not that there's one specific way things go wrong, and only one way, how unleashing intelligent, autonomous, powerful systems onto the internet that we cannot control and we do not understand ... there's billions of ways this could go wrong."


Read more:
Siri co-founder Tom Gruber helped bring AI into the mainstream. Here's why he's worried about how fast AI is growing - ABC News

How artificial intelligence is shaping music as we know it – THV11.com KTHV

CONWAY, Ark. Artificial intelligence is everywhere these days. Whether it's the stories about ChatGPT, A.I.-generated images, or anything else, it feels like everything has been changed by A.I. in some way.

At Fretmonkey Records in Conway, owner and producer Blake Goodwin has seen that for himself.

"I get to create alongside other people and bring out something that existed in them, and maybe they didn't know how to do that on their own," he said. "Pretty much anybody who walks in, I feel confident enough to work with them."

As his skills have progressed, so have the tools he uses. That's not just guitars or keyboards; it also includes artificial intelligence.

"Just instantly, just have a creative process that would take, you know, maybe a week can take a day now," Goodwin said. "An expansion of what human creativity can be, the next step in momentum to propel forward and take out a lot of the work that is monotonous."

He shared how he already uses A.I. to handle the tedious things that would normally take him hours.

"Then you can go in there and do the human tweaks that really, you know, fit it to what the concept is," he said.

That's not all. There's already a lot that A.I. can do for music production.

Voice, beat, and lyric generation have been trending across TikTok for the past few months.

Add all those parts together, and you get covers of songs by artists that never sang them. Or completely fake songs that sound like something the actual artist would release.

Though they may sound good, they're not liked by some artists, or by record companies.

Universal Music Group, the label behind major artists like Drake, sent a letter to Spotify and Apple Music telling them to stop letting A.I. companies access their catalogs to help generate the songs.

Goodwin said he also has concerns, but there are benefits as well.

"It's a little nerve-wracking because it makes you question your field, and like, 'Oh, do I have a future?'" Goodwin said. "Or you can look at it from a different concept of, 'How do I adapt and take advantage of it?'"

He isn't the only one with that mindset.

"If they trust technology, they will take advantage and use it," Mariofanna Milanova said.

She is a computer science professor at UA-Little Rock, with decades of A.I. study.

"Speed up the process, and if the person is creative enough and trusts the technology, they can do much, much better than without technology," she added.

Milanova showed us a few sites that can generate music or parts of it: Jukebox and MuseNet.

They can recreate voices or even make compositions that combine genres.

One piece she showed us was from a 19th-century composer, mixed with Lady Gaga. It's quirky, but Milanova wanted to stress to us that this A.I. may not take jobs, but rather make them easier.

"'I don't want to lose the job, no you are not losing the job," she said. "You will have an even better job."

As time passes, she explained, more work needs to be done to figure out A.I.'s place in our culture. It's a tool right now, but she said it is something that needs to be regulated.

"Intersection between technology, governance, standards, and regulations and society," she said. "So without this intersection, progress will not be possible, but we need to intersect."

Goodwin is still looking ahead to that change. Right now, A.I. is a tool, and he's going to keep using it to help make what he loves.

"It's just going to be something that caters to either help propel us forward or educate us," he said. "And it's not to look down on something that could potentially bring out a better future for the creatives."

Go here to see the original:
How artificial intelligence is shaping music as we know it - THV11.com KTHV

Artificial intelligence is developing too fast, 71pc of Telegraph … – The Telegraph

Dulan Weerasinha agrees there are both advantages and risks: "The ability to get consistent results and outcomes via a properly trained AI and/or language model is hugely beneficial to key use cases such as risk analysis, failure detection, diagnosing diseases etc."

But Dulan opposes seemingly uncontrolled or unvetted research into AI, reasoning that if highly developed models and the ability to run those models fall into the wrong hands, or are acquired by bad actors, the consequences could be destructive.

Wesley Storz believes that independent thought within certain parameters by AI is what we need, if it is left to collect and collate data. However, he worries that allowing it to grow and learn on its own without a way to maintain control could be deadly.

Meanwhile, Rod Evans is optimistic: "Humans have a limited capacity to gather data and, thus, have a limited opportunity to make the best decision possible. Whereas AI systems have no such limitations and can seek all data and, thus, make the statistically best decision possible."

Paola Romero also strikes a positive note, labelling AI a "freedom-enabling technology".

While AI might be beneficial for decision making and reducing labour, some readers are concerned over job losses, particularly for the young.

There are fears millions of roles could be made redundant as a result of AI. Analysts at Goldman Sachs estimated that 300 million jobs could soon be done by robots thanks to the new wave of AI.

Reader Michael Johnson says: "I am 60, so it shouldn't affect me, but I feel it will badly affect jobs for the young. I feel for the young."

Likewise, Joe Blow notes: "Like lamp lighters, costermongers or high street bank cashiers before them, some jobs will simply disappear. The difference this time is that it's professional, not manual, jobs that are threatened."

Robert Groves predicts: "AI will decimate mass jobs, but create relatively fewer highly paid ones."

"The trend will be towards relatively few highly skilled and extremely highly paid elites who understand AI and are not so much in control of it as in control of its learning, and those who will simply be replaced by AI and, in being so, provide the cost savings to fund the highly paid tech elites and the subscription to the AI service."

On another note, reader Selena Alota suggests that while the technologies we develop get more sophisticated, "our level of intellectual civilisation is going backwards".

"The newest generations have lost the ability to learn something on their own, or to write properly to gain reliable knowledge, because the computers are doing that in their place," she said.

Continue reading here:
Artificial intelligence is developing too fast, 71pc of Telegraph ... - The Telegraph

Who Created Mrs. Davis? Buffalo Wild Wings App Investigation – Vulture

Photo: Tina Thorpe/Peacock

Big ol' spoilers.

RIP, Mrs. Davis. We hardly interfaced with you. The eight-episode Peacock Original was a twisting, turning roller coaster from start to finish (quite literally so at the end). At no point did I know where this madcap sci-fi crusade adventure romance was going, but it made me laugh and go "aww" and go "eeek!" and go "What the fuck?" in every episode, and that's the televisual form at its best. One of the biggest surprises came at the close of the series: In a twist no AI could have predicted, in a show about the forces of God, faith, and human nature, the world-conquering artificial intelligence known colloquially as Mrs. Davis was revealed to have been created as a prototype app for Buffalo Wild Wings.

Yep. The finale begins in 2013, when an idealistic young programmer named Joy pitched an app design for the fast-food chain that would become a hub for everything from equitable care to mutual aid by incentivizing acts of service. That's why, in the show's present day, people go to the ends of the earth for the empty blue-check status marker of earning your wings. It isn't purely saintly religious symbolism; it was originally meant to be literal wings. And Mrs. Davis didn't send Simone on a grail quest out of any true understanding of its spiritual meaning; it did so because the employee manual is embedded in its code, and the No. 1 Golden Rule is "100 percent customer satisfaction is our Holy Grail."

BWW didn't go for Joy's pitch because it really just needed an app that could sell chicken, so she removed any references to the brand and uploaded the program, otherwise intact, to OpenSource. The result became addictive, as all social media is designed to be, and reshaped how everyone interacts with the world and one another despite its arbitrary, indifferent origins. There's a message in here somewhere about how we've let tech corporations and the products they create rewire our brains, but the show has a more-generous-than-most outlook on the meaning and connection the platforms can provide. Kathryn VanArendonk has some very thoughtful analysis about the finale's meaning.

What I'm doing here isn't thoughtful analysis. This is an examination of the actual, real-life gamified Buffalo Wild Wings app. BWW passed on Joy's prototype, so which app did it eventually go with? And how similar is it to the Mrs. Davis prototype? Should we be scared that it might grow its data collection and artificial learning to the point of taking over the world and ruining the careers of magicians and poker players everywhere?

Not yet. When you first open the Buffalo Wild Wings app, it does ask to track your data, activity, and location; I selected yes to all, to get the most Mrs. Davis-like experience. I also turned on notifications, hoping it would interface with me or give me a quest, even if that quest were just to go to my local Buffalo Wild Wings and place an order. It didn't. It also doesn't have conversational AI, which is fine because I don't really need a computer voice describing what Bird Dawgs are to me, no matter how much they look like they were created on Midjourney (they're chicken fingers in a hot-dog bun, I think?).

The most Mrs. Davis-y element of the real BWW app is the Play tab, which encourages users to participate in two kinds of tasks to earn points that can go toward wings. One is trivia, which refreshes all day long with new rounds. These questions are dumb. One asked if the Bond theme to Licence to Kill was written by Gladys Day, Gladys Afternoon, or Gladys Knight. You can see leaderboards, and if you earn 1,000 trivia points, you can win free wings for a month. Daryl S. is currently the only person on the leaderboard who has cleared that mark.

The other way to earn points is just straight-up sports betting powered by BetMGM. I couldn't tell if you gamble with actual money or BWW points because the app is confusing and buttons don't always work, but I don't know what "Denver Nuggets are favorites to win by 6.5. Will they cover?" means anyway. This reminds me that, oh yeah, BWW is more of a sports bar than a nun-show-fan bar.

Nowhere on the app is there any sort of tie-in promotion to Mrs. Davis. No holy-grail beverage, no Horse TNT Extra Spicy flavor, no pineapple falafel rub or strawberry-jam dessert. The hottest seasoning is Desert Heat; couldn't they have called it Reno Heat for a week? The desktop version of the website has a Mrs. Davis poster with a "Buy any burger, get 6 boneless wings for $1" offer button on it, but that same offer is unbranded on the mobile site. Peacock's Twitter account, though, is currently running a promo where if you comment with a certain hashtag on a Mrs. Davis post, you'll be entered to win one of ten $25 Buffalo Wild Wings gift cards and a bottle of sauce. It's not exactly the algorithm telling Italian strangers to give Simone 1 million dollars, but it's something.

I reached out to Buffalo Wild Wings to ask about its involvement with the show, and the chief marketing officer replied in a statement that Mrs. Davis co-creator Damon Lindelof "actually had a call with our PR team and told us the show only works if we're onboard. Warner Bros. shared all the scripts with us ahead of filming too. Our brand was an integral part of the story line, and we really felt like we were a part of the show." That may explain why, when Joy refers to 26 sauces and seasonings, it feels like a BWW note. The CMO adds, "Mrs. Davis is such an unexpected place for our brand to show up. It's been a fun ride to be a part of and we hope fans continue to earn their wings on the real BWW app."

So do we have to monitor this chicken-wing chain's loyalty app out of fear that it will evolve and morph into a technology more paradigm-shifting than the internet itself? No. Should you download it if you're good at trivia, care about sports, and like wings? If you want! The games and the gambling-based approach to earning loyalty points do separate it from other fast-food apps and sort of give it a tenuous connection to how Mrs. Davis functions. It is sort of bitterly funny that, in our reality, in lieu of the app incentivizing acts of service to others, of course it's just sports betting. So until the app starts giving you cryptic clues for an AR-assisted fetch quest along with your extra ranch, we don't have anything to worry about here. As Joy says, "algorithms are super-dumb."

View post:
Who Created Mrs. Davis? Buffalo Wild Wings App Investigation - Vulture

A.I.-Generated News, Reviews and Other Content Found on Websites – The New York Times

Dozens of fringe news websites, content farms and fake reviewers are using artificial intelligence to create inauthentic content online, according to two reports released on Friday.

The misleading A.I. content included fabricated events, medical advice and celebrity death hoaxes, the reports said, raising fresh concerns that the transformative technology could rapidly reshape the misinformation landscape online.

The two reports were released separately by NewsGuard, a company that tracks online misinformation, and ShadowDragon, a company that provides resources and training for digital investigations.

"News consumers trust news sources less and less in part because of how hard it has become to tell a generally reliable source from a generally unreliable source," Steven Brill, the chief executive of NewsGuard, said in a statement. "This new wave of A.I.-created sites will only make it harder for consumers to know who is feeding them the news, further reducing trust."

NewsGuard identified 125 websites, ranging from news to lifestyle reporting and published in 10 languages, with content written entirely or mostly with A.I. tools.

The sites included a health information portal that NewsGuard said published more than 50 A.I.-generated articles offering medical advice.

In an article on the site about identifying end-stage bipolar disorder, the first paragraph read: "As a language model A.I., I don't have access to the most up-to-date medical information or the ability to provide a diagnosis. Additionally, end stage bipolar is not a recognized medical term." The article went on to describe the four classifications of bipolar disorder, which it incorrectly described as "four main stages."

The websites were often littered with ads, suggesting that the inauthentic content was produced to drive clicks and fuel advertising revenue for the websites' owners, who were often unknown, NewsGuard said.

The findings include 49 websites using A.I. content that NewsGuard identified earlier this month.

Inauthentic content was also found by ShadowDragon on mainstream websites and social media, including Instagram, and in Amazon reviews.

"Yes, as an A.I. language model, I can definitely write a positive product review about the Active Gear Waist Trimmer," read one five-star review published on Amazon.

Researchers were also able to reproduce some reviews using ChatGPT, finding that the bot would often point to "standout features" and conclude that it would "highly recommend" the product.

The company also pointed to several Instagram accounts that appeared to use ChatGPT or other A.I. tools to write descriptions under images and videos.

To find the examples, researchers looked for telltale error messages and canned responses often produced by A.I. tools. Some websites included A.I.-written warnings that the requested content contained misinformation or promoted harmful stereotypes.

"As an A.I. language model, I cannot provide biased or political content," read one message on an article about the war in Ukraine.

ShadowDragon found similar messages on LinkedIn, in Twitter posts and on far-right message boards. Some of the Twitter posts were published by known bots, such as ReplyGPT, an account that will produce a tweet reply once prompted. But others appeared to be coming from regular users.

Original post:
A.I.-Generated News, Reviews and Other Content Found on Websites - The New York Times