Archive for the ‘Ai’ Category

How to Direct A.I. Chatbots to Make Them More Useful – The New York Times

Anyone seduced by A.I.-powered chatbots like ChatGPT and Bard (wow, they can write essays and recipes!) eventually runs into what are known as hallucinations, the tendency for artificial intelligence to fabricate information.

The chatbots, which guess what to say based on information obtained from all over the internet, can't help but get things wrong. And when they fail (by publishing a cake recipe with wildly inaccurate flour measurements, for instance) it can be a real buzzkill.

Yet as mainstream tech tools continue to integrate A.I., it's crucial to get a handle on how to use it to serve us. After testing dozens of A.I. products over the last two months, I concluded that most of us are using the technology in a suboptimal way, largely because the tech companies gave us poor directions.

The chatbots are the least beneficial when we ask them questions and then hope whatever answers they come up with on their own are true, which is how they were designed to be used. But when directed to use information from trusted sources, such as credible websites and research papers, A.I. can carry out helpful tasks with a high degree of accuracy.

"If you give them the right information, they can do interesting things with it," said Sam Heutmaker, the founder of Context, an A.I. start-up. "But on their own, 70 percent of what you get is not going to be accurate."

With the simple tweak of advising the chatbots to work with specific data, they generated intelligible answers and useful advice. That transformed me over the last few months from a cranky A.I. skeptic into an enthusiastic power user. When I went on a trip using a travel itinerary planned by ChatGPT, it went well because the recommendations came from my favorite travel websites.

Directing the chatbots to specific high-quality sources like websites from well-established media outlets and academic publications can also help reduce the production and spread of misinformation. Let me share some of the approaches I used to get help with cooking, research and travel planning.
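The "work with specific data" approach described above amounts to grounding the model in excerpts you supply. Here is a minimal sketch of how such a grounded request can be assembled; the source name and snippet are invented placeholders, and a real workflow would pass the resulting string to whichever chatbot or API you use.

```python
# Sketch: build a prompt that confines a chatbot to trusted excerpts.
# The source name and snippet below are hypothetical examples.

def build_grounded_prompt(question: str, sources: dict) -> str:
    """Assemble a prompt instructing the model to answer only from
    the supplied excerpts, citing each source by name."""
    excerpt_block = "\n\n".join(
        f"[{name}]\n{text}" for name, text in sources.items()
    )
    return (
        "Answer the question using ONLY the excerpts below. "
        "Cite the source name in brackets for every claim. "
        "If the excerpts do not contain the answer, say so.\n\n"
        f"{excerpt_block}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "What is a dog-friendly activity in Mendocino County?",
    {"Thrillist": "Pennyroyal Farm offers wine and cheese tastings; "
                  "leashed dogs are welcome on the patio."},
)
print(prompt)
```

The instruction to admit when the excerpts lack an answer is what curbs hallucination: the model is given an explicit alternative to inventing one.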

Chatbots like ChatGPT and Bard can write recipes that look good in theory but don't work in practice. In an experiment by The New York Times's Food desk in November, an early A.I. model created recipes for a Thanksgiving menu that included an extremely dry turkey and a dense cake.

I also ran into underwhelming results with A.I.-generated seafood recipes. But that changed when I experimented with ChatGPT plug-ins, which are essentially third-party apps that work with the chatbot. (Only subscribers who pay $20 a month for access to GPT-4, the latest version of the chatbot, can use plug-ins, which can be activated in the settings menu.)

On ChatGPT's plug-ins menu, I selected Tasty Recipes, which pulls data from the Tasty website owned by BuzzFeed, a well-known media site. I then asked the chatbot to come up with a meal plan including seafood dishes, ground pork and vegetable sides using recipes from the site. The bot presented an inspiring meal plan, including lemongrass pork banh mi, grilled tofu tacos and everything-in-the-fridge pasta; each meal suggestion included a link to a recipe on Tasty.

For recipes from other publications, I used Link Reader, a plug-in that let me paste in a web link to generate meal plans using recipes from other credible sites like Serious Eats. The chatbot pulled data from the sites to create meal plans and told me to visit the websites to read the recipes. That took extra work, but it beat an A.I.-concocted meal plan.

When I did research for an article on a popular video game series, I turned to ChatGPT and Bard to refresh my memory on past games by summarizing their plots. They messed up on important details about the games' stories and characters.

After testing many other A.I. tools, I concluded that for research, it was crucial to focus on trusted sources and to quickly double-check the data for accuracy. I eventually found a tool that delivers that: Humata.AI, a free web app that has become popular among academic researchers and lawyers.

The app lets you upload a document such as a PDF, and from there a chatbot answers your questions about the material alongside a copy of the document, highlighting relevant portions.

In one test, I uploaded a research paper I found on PubMed, a government-run search engine for scientific literature. The tool produced a relevant summary of the lengthy document in minutes, a process that would have taken me hours, and I glanced at the highlights to double-check that the summaries were accurate.
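Under the hood, tools of this kind retrieve the passages of the document most relevant to a question and surface them as highlights. The toy version below scores passages by simple word overlap with the question; commercial products use learned embeddings instead, and the sample "paper" text is invented for illustration.

```python
# Toy retrieve-and-highlight step behind document Q&A tools:
# split the text into passages, score each by word overlap with
# the question, and surface the best match.
import re

def best_passage(document: str, question: str) -> str:
    passages = [p.strip() for p in document.split("\n\n") if p.strip()]
    q_words = set(re.findall(r"\w+", question.lower()))

    def overlap(passage: str) -> int:
        # Count question words that also appear in the passage.
        return len(q_words & set(re.findall(r"\w+", passage.lower())))

    return max(passages, key=overlap)

paper = (
    "Methods: We enrolled 120 participants in a randomized trial.\n\n"
    "Results: Daily exercise reduced reported fatigue by 23 percent.\n\n"
    "Discussion: Larger samples are needed to confirm the effect."
)
print(best_passage(paper, "How much did exercise reduce fatigue?"))
# Prints the "Results" passage, the only one sharing words with the question.
```

Pairing the model's answer with the retrieved passage is what makes the double-checking step described above fast: the supporting text is already on screen.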

Cyrus Khajvandi, a founder of Humata, which is based in Austin, Texas, developed the app when he was a researcher at Stanford and needed help reading dense scientific articles, he said. The problem with chatbots like ChatGPT, he said, is that they rely on outdated models of the web, so the data may lack relevant context.

When a Times travel writer recently asked ChatGPT to compose a travel itinerary for Milan, the bot guided her to visit a central part of town that was deserted because it was an Italian holiday, among other snafus.

I had better luck when I requested a vacation itinerary for me, my wife and our dogs in Mendocino County, Calif. As I did when planning a meal, I asked ChatGPT to incorporate suggestions from some of my favorite travel sites, such as Thrillist, which is owned by Vox Media, and The Times's travel section.

Within minutes, the chatbot generated an itinerary that included dog-friendly restaurants and activities, including a farm with wine and cheese pairings and a train to a popular hiking trail. This spared me several hours of planning, and most important, the dogs had a wonderful time.

Google and OpenAI, which works closely with Microsoft, say they are working to reduce hallucinations in their chatbots, but we can already reap A.I.'s benefits by taking control of the data that the bots rely on to come up with answers.

To put it another way: "The main benefit of training machines with enormous data sets is that they can now use language to simulate human reasoning," said Nathan Benaich, a venture capitalist who invests in A.I. companies. The important step for us, he said, is to pair that ability with high-quality information.

Elon Musk Launched His Own AI Startup: Here's Musk's Net Worth – Investopedia

Elon Musk recently announced his latest startup, xAI, will be focused on artificial intelligence. According to the company's website, the goal of xAI is to "understand the true nature of the universe." The xAI team will be led by Musk and others who have previously worked with OpenAI, DeepMind, Google Research, Microsoft Research, and Tesla.

Musk's new AI venture is the latest in a list of companies he has founded and leads, including Tesla, SpaceX, and The Boring Company.

As of July 2023, Musk is the richest person in the world, with a net worth of $254 billion. Here's how the Tesla CEO and Twitter owner made his billions.

Tesla is the largest carmaker in the world by market value. The company builds and designs fully electric vehicles (EV) and energy generation and storage systems. Its cars include sedans and compact and mid-size SUVs.

Tesla (TSLA) was founded in 2003 as Tesla Motors by Martin Eberhard and Marc Tarpenning. Musk invested in the company and was a member of the board starting in 2004, and later became CEO in 2008.

Musk was allowed to claim the title of cofounder, thanks to an out-of-court settlement. Tesla went public in an initial public offering (IPO) on June 29, 2010. In 2021, Tesla moved its headquarters from its native Palo Alto, California, to Austin, Texas.

In July 2023, Tesla unveiled its first Cybertruck built in its Texas factory, almost two years behind the original schedule.

Musk has a 13% ownership stake in Tesla, worth $108 billion. In 2022, Tesla's total revenue was $81.46 billion.

Musk is also the cofounder and CEO of SpaceX, a rocket manufacturing company that counts NASA as one of its clients and helps resupply the International Space Station.

SpaceX is valued at $137 billion as of January 2023 and raised $2.2 billion in 2022, making it the most valuable private company in the country. Musk owns 42% of SpaceX, which launched its 200th rocket in January and has more than 1 million subscribers for its Starlink internet service.

In April 2022, Musk bought Twitter for $44 billion after threatening a hostile takeover. The deal was finalized in October 2022, after Twitter sued Musk for trying to back out of the deal.

His takeover has been controversial, as he laid off half of the company's workforce and added a paid subscription service ($8 per month) for anyone who wants their account verified. Musk owns about 79% of Twitter. The company is valued at about $20 billion as of March 2023.

Musk is also the founder of The Boring Company, a tunnel construction company that aims to solve traffic by building freight tunnels. The company raised $675 million in April 2022, at a valuation of $5.7 billion, according to Forbes.

Musk also co-founded a company called Neuralink which designed a "brain-computer interface," a chip that can be implanted into the brain. Neuralink is valued at about $5 billion, according to reporting by Reuters.

An A.I. Supercomputer Whirs to Life, Powered by Giant Computer … – The New York Times

Inside a cavernous room this week in a one-story building in Santa Clara, Calif., six-and-a-half-foot-tall machines whirred behind white cabinets. The machines made up a new supercomputer that had become operational just last month.

The supercomputer, which was unveiled on Thursday by Cerebras, a Silicon Valley start-up, was built with the company's specialized chips, which are designed to power artificial intelligence products. The chips stand out for their size (about that of a dinner plate, or 56 times as large as a chip commonly used for A.I.). Each Cerebras chip packs the computing power of hundreds of traditional chips.

Cerebras said it had built the supercomputer for G42, an A.I. company. G42 said it planned to use the supercomputer to create and power A.I. products for the Middle East.

"What we're showing here is that there is an opportunity to build a very large, dedicated A.I. supercomputer," said Andrew Feldman, the chief executive of Cerebras. He added that his start-up wanted to show the world that this work "can be done faster, it can be done with less energy, it can be done for lower cost."

Demand for computing power and A.I. chips has skyrocketed this year, fueled by a worldwide A.I. boom. Tech giants such as Microsoft, Meta and Google, as well as myriad start-ups, have rushed to roll out A.I. products in recent months after the A.I.-powered ChatGPT chatbot went viral for the eerily humanlike prose it could generate.

But making A.I. products typically requires significant amounts of computing power and specialized chips, leading to a ferocious hunt for more of those technologies. In May, Nvidia, the leading maker of chips used to power A.I. systems, said appetite for its products, known as graphics processing units, or GPUs, was so strong that its quarterly sales would be more than 50 percent above Wall Street estimates. The forecast sent Nvidia's market value soaring above $1 trillion.

"For the first time, we're seeing a huge jump in the computer requirements because of A.I. technologies," said Ronen Dar, a founder of Run:AI, a start-up in Tel Aviv that helps companies develop A.I. models. That has created a huge demand for specialized chips, he added, and companies have rushed to secure access to them.

To get their hands on enough A.I. chips, some of the biggest tech companies, including Google, Amazon, Advanced Micro Devices and Intel, have developed their own alternatives. Start-ups such as Cerebras, Graphcore, Groq and SambaNova have also joined the race, aiming to break into the market that Nvidia has dominated.

Chips are set to play such a key role in A.I. that they could change the balance of power among tech companies and even nations. The Biden administration, for one, has recently weighed restrictions on the sale of A.I. chips to China, with some American officials saying China's A.I. abilities could pose a national security threat to the United States by enhancing Beijing's military and security apparatus.

A.I. supercomputers have been built before, including by Nvidia. But it's rare for start-ups to create them.

Cerebras, which is based in Sunnyvale, Calif., was founded in 2016 by Mr. Feldman and four other engineers, with the goal of building hardware that speeds up A.I. development. Over the years, the company has raised $740 million, including from Sam Altman, who leads the A.I. lab OpenAI, and venture capital firms such as Benchmark. Cerebras is valued at $4.1 billion.

Because the chips that are typically used to power A.I. are small (often the size of a postage stamp), it takes hundreds or even thousands of them to process a complicated A.I. model. In 2019, Cerebras took the wraps off what it claimed was the largest computer chip ever built, and Mr. Feldman has said its chips can train A.I. systems between 100 and 1,000 times as fast as existing hardware.

G42, the Abu Dhabi company, started working with Cerebras in 2021. It used a Cerebras system in April to train an Arabic version of ChatGPT.

In May, G42 asked Cerebras to build a network of supercomputers in different parts of the world. Talal Al Kaissi, the chief executive of G42 Cloud, a subsidiary of G42, said the cutting-edge technology would allow his company to make chatbots and to use A.I. to analyze genomic and preventive care data.

But the demand for GPUs was so high that it was hard to obtain enough to build a supercomputer. Cerebras's technology was both available and cost-effective, Mr. Al Kaissi said. So Cerebras used its chips to build the supercomputer for G42 in just 10 days, Mr. Feldman said.

"The time scale was reduced tremendously," Mr. Al Kaissi said.

Over the next year, Cerebras said, it plans to build two more supercomputers for G42 (one in Texas and one in North Carolina) and, after that, six more distributed across the world. It is calling this network Condor Galaxy.

Start-ups are nonetheless likely to find it difficult to compete against Nvidia, said Chris Manning, a computer scientist at Stanford whose research focuses on A.I. That's because people who build A.I. models are accustomed to using software that works on Nvidia's A.I. chips, he said.

Other start-ups have also tried entering the A.I. chips market, yet many have effectively failed, Dr. Manning said.

But Mr. Feldman said he was hopeful. Many A.I. businesses do not want to be locked in only with Nvidia, he said, and there is global demand for other powerful chips like those from Cerebras.

"We hope this moves A.I. forward," he said.

Generative AI bots will change how we write forever and that's a good thing – The Hill

Is generative artificial intelligence (GenAI) really destroying writing?

There’s been a widespread argument that the technology is allowing high school and college students to easily cheat on their essay assignments. Some teachers across the country are scrambling to ban students from using writing applications like OpenAI’s ChatGPT, Bard AI, Jasper and Hugging Face, while others explore ways to integrate these emerging technologies.

But things are getting a little too panicky too quickly.

While media reports have cast GenAI writing bots as the “death” of high school and college writing, knee-jerk responses to these emerging technologies have been shortsighted. The public is failing to see the bigger picture — not just about GenAI writing bots but about the very ideas of GenAI and writing in general. 

When it comes to technology and writing, public cries about moral crises are not new. We’ve heard the same anxious arguments about every technology that has ever interacted with the production and teaching of writing — from Wikipedia and word processors to spell checkers, citation generators, chalkboards, the printing press, copy machines and ballpoint pens.

Remember the outrage over Wikipedia in the early 2000s, and the fear that students might use it to avoid conducting “actual research” when writing? Teachers and educational institutions then held meetings and filled syllabi with rules banning students from accessing Wikipedia.

Within a decade of Wikipedia’s introduction, however, the educational outrage has dissipated and the use of the site in classroom assignments is now commonplace. This is proof that all technologies — not just digital or writing technologies — have two possible paths: either they become ubiquitous and naturalized into how we do things, or they become obsolete. In most cases, they become obsolete because another technology surpasses the old technology’s usefulness. 

GenAI writing bots are not destroying writing; they are reinvigorating it. Ultimately, we shouldn’t be so concerned about how students might use ChatGPT or Bard AI or the others to circumvent hegemonic educational values. Instead, we should be thinking about how we can prepare our students and the future workforce for ethically using these technologies. Resisting these changes in defense of wholesale nostalgia for how we learned or taught writing is tantamount to behaving like the proverbial ostrich with its head in the sand.  

So, what will come next with GenAI for writing?

Right now, it is clear that ChatGPT can produce fundamental writing that is generic. However, as companies develop algorithms that are discipline-specific, GenAI writing bots will start building more complex abilities and producing more dynamic writing. Just as “Social Media Marketing Manager” evolved into a now-familiar job as online commerce emerged, so too will we see “Prompt Engineer” (someone who can prompt GenAI to deliver useful outcomes) become a prevalent career path throughout the next decade.
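In practice, the prompt engineering described above often means replacing a bare request with a reusable template that pins down audience, format, and constraints. The sketch below illustrates the idea; the field names and the sample product are invented for illustration, not an industry standard.

```python
# Sketch: a reusable prompt template of the kind a "Prompt Engineer"
# might maintain for an industry's content pipeline. All field names
# here are illustrative.

def product_copy_prompt(product: str, audience: str, word_limit: int) -> str:
    """Build a constrained content request instead of a bare one."""
    return (
        f"Write a product description for {product}.\n"
        f"Audience: {audience}.\n"
        f"Constraints: at most {word_limit} words, no superlatives, "
        "end with a one-sentence call to action."
    )

print(product_copy_prompt(
    "a two-person backpacking tent", "first-time campers", 80))
```

The value is consistency: the same template can generate thousands of on-brand variations by swapping in different products and audiences, which is exactly the high-volume content need the outdoor-recreation example describes.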

For example, think about the U.S. outdoor recreational industry, which accounts for 1.9 percent of the Gross Domestic Product (GDP) and amounts to about $454 billion per year. This is an industry — like many others — that relies on the ability to rapidly produce nearly endless content in the form of magazines, product descriptions, travel guides, advertisements, videos, reviews and social media posts. When this industry further develops GenAI writing bots specific to its needs, or when tech companies develop these bots and sell access to them, the bots will evolve to produce the writing that is both needed and effective. Students will need to know how to write the prompts that will guide GenAI-driven content in those industries. 

Subscription GenAI services will inevitably become the norm for much of the content produced for commercial consumption, and many companies will build their own writing bots for their specific and private needs. Companies like Jasper AI are banking on this, and with nearly 1,000 new GenAI platforms launching each week, the model appears to be heading toward subscription-based access to proprietary GenAI platforms. Thus, schools and colleges will need to develop new ways to understand the role of writing in education, surrender ingrained beliefs about teaching writing, and teach students how to operate in the GenAI-supported environments of the future. 

Fortunately, not all educational institutions or teachers are jumping aboard the anti-AI bandwagon. Institutions like the University of Florida (UF), with its forward-thinking AI Initiative, are using this moment of technophobic reaction to critically engage the role of AI in all teaching and learning situations. Rather than imposing restrictions, UF administrators are holding roundtables and symposia about how to address GenAI writing bots in classrooms. 

When it comes down to it, GenAI is not the enemy of writers or writing instructors. It is just a new technological teaching tool, and we can learn something from it if we listen.

Sidney I. Dobrin, Ph.D., is a professor and the chair of the Department of English at the University of Florida. He is the director of the Trace Innovation Initiative, a member of the Florida Institute for National Security, and an Adobe Digital Thought Leader. He is also the author of “Talking About Generative AI: A Guide for Educators and AI and Writing.”

A Blessing and a Boogeyman: Advertisers Warily Embrace A.I. – The New York Times

The advertising industry is in a love-hate relationship with artificial intelligence.

In the past few months, the technology has made ads easier to generate and track. It is writing marketing emails with subject lines and delivery times tailored to specific subscribers. It gave an optician the means to set a fashion shoot on an alien planet and helped Denmark's tourism bureau animate famous tourist sites. Heinz turned to it to generate recognizable images of its ketchup bottle, then paired them with the symphonic theme that charts human evolution in the film "2001: A Space Odyssey."

A.I., however, has also plunged the marketing world into a crisis. Much has been made about the technology's potential to limit the need for human workers in fields such as law and financial services. Advertising, already racked by inflation and other economic pressures as well as a talent drain due to layoffs and increased automation, is especially at risk of an overhaul-by-A.I., marketing executives said.

The conflicting attitudes suffused a co-working space in downtown San Francisco where more than 200 people gathered last week for an A.I. for marketers event. Copywriters expressed worry and skepticism about chatbots capable of writing ad campaigns, while start-up founders pitched A.I. tools for automating the creative process.

"It really doesn't matter if you are fearful or not: The tools are here, so what do we do?" said Jackson Beaman, whose AI User Group organized the event. "We could stand here and not do anything, or we can learn how to apply them."

Machine learning, a subset of artificial intelligence that uses data and algorithms to imitate how humans learn, has quietly powered advertising for years. Madison Avenue has used it to target specific audiences, sell and buy ad space, offer user support, create logos and streamline its operations. (One ad agency has a specialized A.I. tool, called the Big Lebotski, to help clients compose ad copy and boost their profile on search engines.)

Enthusiasm came gradually. In 2017, when the advertising group Publicis introduced Marcel, an A.I. business assistant, its peers responded with what it described as "outrage, jest and negativity."

At last month's Cannes Lions International Festival of Creativity, the glittering apex of the advertising industry calendar, Publicis got its "I told you so" moment. Around the festival, where the agenda was stuffed with panels about A.I. being unleashed and affecting the future of creativity, the company plastered artificially generated posters that mocked the original reactions to Marcel.

"Is it OK to talk about A.I. at Cannes now?" the ads joked.

The answer is clear. The industry has wanted to discuss little else since late last year, when OpenAI released its ChatGPT chatbot and set off a global arms race around generative artificial intelligence.

McDonald's asked the chatbot to name "the most iconic burger in the world" and splashed the answer (the Big Mac) across videos and billboards, drawing A.I.-generated retorts from fast-food rivals. Coca-Cola recruited digital artists to generate 120,000 riffs on its brand imagery, including its curved bottle and swoopy logo, using an A.I. platform built in part by OpenAI.

The surge of A.I. experimentation has brought to the fore a host of legal and logistical challenges, including the need to protect reputations and avoid misleading consumers.

A recent campaign from Virgin Voyages allowed users to prompt a digital avatar of Jennifer Lopez to issue customized video invitations to a cruise, including the names of potential guests. But, to prevent Ms. Lopez from appearing to use inappropriate language, the avatar could say only names from a preapproved list and otherwise defaulted to terms like "friend" and "sailor."

"It's still in the early stages. There were challenges to get the models right, to get the look right, to get the sound right, and there are very much humans in the loop throughout," said Brian Yamada, the chief innovation officer of VMLY&R, the agency that produced the campaign for Virgin.

Elaborate interactive campaigns like Virgin's make up a minority of advertising; 30-second video clips and captioned images, often with variations lightly adjusted for different demographics, are much more common. In recent months, several large tech companies, including Meta, Google and Adobe, have announced artificial intelligence tools to handle that sort of work.

Major advertising companies say the technology could streamline a bloated business model. The ad group WPP is working with the chip maker Nvidia on an A.I. platform that could, for example, allow car companies to easily incorporate footage of a vehicle into scenes customized for local markets without laboriously filming different commercials around the world.

To many of the people who work on such commercials, A.I.'s advance feels like looming obsolescence, especially in the face of several years of slowing growth and a shift in advertising budgets from television and other legacy media to programmatic ads and social platforms. The media agency GroupM predicted last month that artificial intelligence was likely to influence at least half of all advertising revenue by the end of 2023.

"There's little doubt that the future of creativity and A.I. will be increasingly intertwined," said Philippe Krakowsky, the chief executive of the Interpublic Group of Companies, an ad giant.

IPG, which was hiring chief A.I. officers and similar executives years before ChatGPT's debut, now hopes to use the technology to deliver highly personalized experiences.

"That said, we need to apply a very high level of diligence and discipline, and collaborate across industries, to mitigate bias, misinformation and security risk in order for the pace of advancement to be sustained," Mr. Krakowsky added.

A.I.'s ability to copy and deceive, which has already found widespread public expression in political marketing from Gov. Ron DeSantis of Florida and others, has alarmed many advertising executives. They are also concerned about intellectual property issues and the direction and speed of A.I. development. Several ad agencies joined organizations such as the Coalition for Content Provenance and Authenticity, which wants to trace content from its origins, and the Partnership on AI, which aims to keep the technology ethically sound.

Amid the doom and gloom, the agency Wunderman Thompson decided this spring to take A.I. down a peg.

In an Australian campaign for Kit Kat candy bars, the agency used text and image generators from OpenAI to create intentionally awkward ads with the tagline "AI made this ad so we could have a break." In one, warped figures chomped on blurry chocolate bars over a script narrated in a mechanical monotone: "Someone hands them a Kit Kat bar. They take a bite."

The campaign would be trickier to pull off now, in part because the fast-improving technology has erased many of the flaws present just a few months ago, said Annabelle Barnum, the general manager for Wunderman Thompson in Australia. Still, she said, humans will always be key to the advertising process.

"Creativity comes from real human insight. A.I. is always going to struggle with that because it relies purely on data to make decisions," she said. "So while it can enhance the process, ultimately it will never be able to take away anything that creators can really do, because that humanistic element is required."
