Archive for the ‘Ai’ Category

Meta’s CFO Said That Generative AI Will Not Be a Meaningful Driver of Revenue in 2024 — Does This Mean AI Is All … – The Motley Fool

In today's video, I discuss the recent updates affecting Meta Platforms (META). Check out the short video to learn more, consider subscribing, and click the special offer link below.

*Stock prices used were the after-market prices of Feb. 1, 2024. The video was published on Feb. 1, 2024.

Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Jose Najarro has positions in Meta Platforms and Nvidia. The Motley Fool has positions in and recommends Meta Platforms and Nvidia. The Motley Fool has a disclosure policy. Jose Najarro is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link, they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.

Go here to see the original:

Meta's CFO Said That Generative AI Will Not Be a Meaningful Driver of Revenue in 2024 -- Does This Mean AI Is All ... - The Motley Fool

I Tested a Next-Gen AI Assistant. It Will Blow You Away – WIRED

The most famous virtual valets around today (Siri, Alexa, and Google Assistant) are a lot less impressive than the latest AI-powered chatbots like ChatGPT or Google Bard. When the fruits of the recent generative AI boom get properly integrated into those legacy assistant bots, they will surely get much more interesting.

To get a preview of what's next, I took an experimental AI voice helper called vimGPT for a test run. When I asked it to subscribe to WIRED, it got to work with impressive skill, finding the correct web page and accessing the online form. If it had access to my credit card details, I'm pretty sure it would have nailed it.

Although hardly an intelligence test for a human, buying something online on the open web is a lot more complicated and challenging than the tasks that Siri, Alexa, or the Google Assistant typically handle. (Setting reminders and getting sports results are so 2010.) It requires making sense of the request, accessing the web to find the correct site, and then correctly interacting with the relevant page or forms. My helper correctly navigated to WIRED's subscription page and even found the form there (presumably impressed by the prospect of receiving all of WIRED's entertaining and insightful journalism for only $1 a month) but fell at the final hurdle because it lacked a credit card. VimGPT uses Chromium, Google's open-source browser, which doesn't store user information. My other experiments showed that the agent is, however, very adept at searching for funny cat videos or finding cheap flights.

VimGPT is an experimental open-source program built by Ishan Shah, a lone developer; it is not a product in development, but you can bet that Apple, Google, and others are doing similar experiments with a view to upgrading Siri and other assistants. VimGPT is built on GPT-4V, the multimodal version of OpenAI's famous language model. By analyzing a request, it can determine what to click on or type more reliably than text-only software, which has to make sense of the web by untangling messy HTML. "A year from now, I would expect the experience of using a computer to look very different," says Shah, who says he built vimGPT in only a few days. "Most apps will require less clicking and more chatting, with agents becoming an integral part of browsing the web."
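Mechanically, an agent like this runs a tight perception-and-action loop. The sketch below is a hedged illustration of that loop, not vimGPT's actual code: it assumes Playwright for driving Chromium and OpenAI's GPT-4V chat endpoint, and the prompt, the JSON action schema, and the run helper are all invented for the example.

```python
# Illustrative screenshot -> model -> action loop; NOT vimGPT's real code.
# Assumes `pip install openai playwright` and OPENAI_API_KEY in the env.
import base64
import json

from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()

PROMPT = (
    "You control a web browser. Given a goal and a screenshot of the "
    "current page, reply with JSON only: "
    '{"action": "click"|"type"|"done", "x": int, "y": int, "text": str}'
)

def next_action(goal: str, screenshot_png: bytes) -> dict:
    """Ask the multimodal model what to do next, from pixels alone."""
    b64 = base64.b64encode(screenshot_png).decode()
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": f"{PROMPT}\nGoal: {goal}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
        max_tokens=200,
    )
    # A real agent would validate the reply; this sketch trusts it.
    return json.loads(resp.choices[0].message.content)

def run(goal: str, start_url: str, max_steps: int = 15) -> None:
    """Drive a real Chromium session until the model reports 'done'."""
    with sync_playwright() as p:
        page = p.chromium.launch(headless=False).new_page()
        page.goto(start_url)
        for _ in range(max_steps):
            act = next_action(goal, page.screenshot())
            if act["action"] == "done":
                break
            if act["action"] == "click":
                page.mouse.click(act["x"], act["y"])
            elif act["action"] == "type":
                page.keyboard.type(act["text"])

if __name__ == "__main__":
    run("Find the subscription page", "https://www.wired.com")
```

The contrast with legacy assistants lives in next_action: the model decides what to do from the rendered pixels of the page rather than by parsing its HTML.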

Shah is not the only person who believes that the next logical step after chatbots like ChatGPT is agents that use computers and roam the web. Ruslan Salakhutdinov, a professor at Carnegie Mellon University who was Apple's director of AI research from 2016 to 2020, believes that Siri and other assistants are in line for an almighty AI upgrade. "The next evolution is going to be agents that can get useful tasks done," Salakhutdinov says. Hooking Siri up to AI like that powering ChatGPT would be useful, he says, but "it will be so much more impactful if I ask Siri to do stuff, and it just goes and solves my problems for me."

Salakhutdinov and his students have developed several simulated environments designed for testing and honing the skills of AI helpers that can get things done. They include a dummy ecommerce website, a mocked-up version of a Reddit-like message board, and a website of classified ads. This virtual testing ground for putting agents through their paces is called VisualWebArena.
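Benchmarks of this kind typically pair each natural-language goal with a programmatic check of the environment's final state. The sketch below is a hypothetical illustration of that pattern, not VisualWebArena's actual API; the Task class, sandbox URLs, and checks are all invented.

```python
# Hypothetical agent-evaluation harness in the spirit of VisualWebArena.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    goal: str                      # instruction handed to the agent
    start_url: str                 # sandboxed site the episode starts on
    check: Callable[[dict], bool]  # inspects the final environment state

TASKS = [
    Task("Buy the cheapest USB cable",
         "http://sandbox.local/shop",
         lambda state: state.get("ordered") == "usb-cable-basic"),
    Task("Reply to the top post on the message board",
         "http://sandbox.local/forum",
         lambda state: state.get("replies", 0) >= 1),
]

def evaluate(agent: Callable[[str, str], dict]) -> float:
    """Return the fraction of tasks whose end state passes its check."""
    passed = sum(t.check(agent(t.goal, t.start_url)) for t in TASKS)
    return passed / len(TASKS)
```

An agent is then just a function from a goal and a start URL to a final environment state, and its score is the fraction of checks that pass.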

The rest is here:

I Tested a Next-Gen AI Assistant. It Will Blow You Away - WIRED

Tim Cook says big Apple AI announcement is coming later this year – Mashable

Apple is almost ready to show us what it can do in the artificial intelligence space.

During Apple's earnings call on Thursday, Apple CEO Tim Cook said that the company will make an AI announcement sometime in 2024.

"As we look ahead, we will continue to invest in these and other technologies that shape our future. That includes AI, where we continue to spend a tremendous amount of time and effort, and we're excited to share details of our ongoing work in that space later this year," said Cook.

As is typical of Cook, who is always extremely careful not to unveil anything in advance, almost all details were absent. But he did confirm he's talking about generative AI, best known to most folks through OpenAI's super-smart assistant, ChatGPT.

"Let me just say that I think there's a huge opportunity for Apple with generative AI and with AI, without getting into many more details or getting out ahead of myself," said Cook.

Cook's remarks seemingly confirm Bloomberg's recent report, which claimed that the upcoming iOS 18 will be one of the biggest iOS updates in Apple's history. The update, which will likely be announced during Apple's WWDC conference in June, will reportedly include a smarter Siri, as well as more AI integration into basic iOS features, including Messages and Apple Music.

Cook's announcement also comes on the eve of an incredibly important product launch for Apple. The company's first spatial computer, the $3,499 Vision Pro, launches on Friday. The VR/AR headset comes with some AI features; for example, Personas, the virtualized versions of the headset's wearers, are created using AI models. But some analysts believe that the Vision Pro will mark Apple's biggest push into AI yet, one that will eventually include a separate AI App Store.

Go here to see the original:

Tim Cook says big Apple AI announcement is coming later this year - Mashable

Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden's Landmark Executive … – The White House

Three months ago, President Biden issued a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Order directed sweeping action to strengthen AI safety and security, protect Americans' privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more.

Today, Deputy Chief of Staff Bruce Reed will convene the White House AI Council, consisting of top officials from a wide range of federal departments and agencies. Agencies reported that they have completed all of the 90-day actions tasked by the E.O. and advanced other vital directives that the Order tasked over a longer timeframe.

Taken together, these activities mark substantial progress in achieving the EO's mandate to protect Americans from the potential risks of AI systems while catalyzing innovation in AI and beyond. Visit ai.gov to learn more.

Managing Risks to Safety and Security

The Executive Order directed a sweeping range of actions within 90 days to address some of AI's biggest threats to safety and security. These included setting key disclosure requirements for developers of the most powerful systems, assessing AI's risks for critical infrastructure, and hindering foreign actors' efforts to develop AI for harmful purposes. To mitigate these and other risks, agencies have:

Innovating AI for Good

To seize AI's enormous promise and deepen the U.S. lead in AI innovation, President Biden's Executive Order directed increased investment in AI innovation and new efforts to attract and train workers with AI expertise. Over the past 90 days, agencies have:

The table below summarizes many of the activities federal agencies have completed in response to the Executive Order.

###

Read the original post:

Fact Sheet: Biden-Harris Administration Announces Key AI Actions Following President Biden's Landmark Executive ... - The White House

In the AI science boom, beware: your results are only as good as your data – Nature.com

Hunter Moseley says that good reproducibility practices are essential to fully harness the potential of big data. Credit: Hunter N.B. Moseley

We are in the middle of a data-driven science boom. Huge, complex data sets, often with large numbers of individually measured and annotated features, are fodder for voracious artificial intelligence (AI) and machine-learning systems, with details of new applications being published almost daily.

But publication in itself is not synonymous with factuality. Just because a paper, method or data set is published does not mean that it is correct and free from mistakes. Without checking for accuracy and validity before using these resources, scientists will surely encounter errors. In fact, they already have.

In the past few months, members of our bioinformatics and systems-biology laboratory have reviewed state-of-the-art machine-learning methods for predicting the metabolic pathways that metabolites belong to, on the basis of the molecules' chemical structures [1]. We wanted to find, implement and potentially improve the best methods for identifying how metabolic pathways are perturbed under different conditions: for instance, in diseased versus normal tissues.

We found several papers, published between 2011 and 2022, that demonstrated the application of different machine-learning methods to a gold-standard metabolite data set derived from the Kyoto Encyclopedia of Genes and Genomes (KEGG), which is maintained at Kyoto University in Japan. We expected the algorithms to improve over time, and saw just that: newer methods performed better than older ones did. But were those improvements real?

Scientific reproducibility enables careful vetting of data and results by peer reviewers as well as by other research groups, especially when the data set is used in new applications. Fortunately, in keeping with best practices for computational reproducibility, two of the papers [2,3] in our analysis included everything that is needed to put their observations to the test: the data set they used, the computer code they wrote to implement their methods and the results generated from that code. Three of the papers [2-4] used the same data set, which allowed us to make direct comparisons. When we did so, we found something unexpected.

It is common practice in machine learning to split a data set in two and to use one subset to train a model and another to evaluate its performance. If there is no overlap between the training and testing subsets, performance in the testing phase will reflect how well the model learns and performs. But in the papers we analysed, we identified a catastrophic data leakage problem: the two subsets were cross-contaminated, muddying the ideal separation. More than 1,700 of 6,648 entries from the KEGG COMPOUND database (about one-quarter of the total data set) were represented more than once, corrupting the cross-validation steps.
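To make the failure concrete, here is a minimal sketch of the guard that would have caught it: collapse duplicate identifiers before splitting, then assert that the two subsets are disjoint. The identifiers mimic KEGG compound IDs, but the fields and labels are invented for illustration.

```python
# Sketch of a data-leakage guard; field names and labels are hypothetical.
import random
from collections import Counter

def find_duplicates(ids: list[str]) -> set[str]:
    """Identifiers that appear more than once in the raw data."""
    return {i for i, n in Counter(ids).items() if n > 1}

def dedup_and_split(entries: dict[str, str], test_frac: float = 0.2):
    """Split unique entries so no compound lands in both subsets."""
    ids = sorted(entries)              # dict keys are unique by construction
    random.Random(0).shuffle(ids)      # fixed seed, for reproducibility
    n_test = int(len(ids) * test_frac)
    test = {i: entries[i] for i in ids[:n_test]}
    train = {i: entries[i] for i in ids[n_test:]}
    assert not (train.keys() & test.keys()), "leakage: overlapping entries"
    return train, test

raw_ids = ["C00031", "C00022", "C00031", "C00025", "C00041"]  # C00031 twice
print(find_duplicates(raw_ids))                  # {'C00031'}
entries = {i: "pathway-label" for i in raw_ids}  # duplicate keys collapse
train, test = dedup_and_split(entries, test_frac=0.25)
```

With duplicates collapsed onto a single key, no compound can appear in the training set under one copy of itself and in the test set under another.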


When we removed the duplicates in the data set and applied the published methods again, the observed performance was less impressive than it had first seemed. There was a substantial drop, from 0.94 to 0.82, in the F1 score, a machine-learning evaluation metric that is similar to accuracy but is calculated in terms of precision and recall. A score of 0.94 is reasonably high and indicates that the algorithm is usable in many scientific applications. A score of 0.82, however, suggests that it can be useful, but only for certain applications and only if handled appropriately.
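For readers unfamiliar with the metric, F1 is the harmonic mean of precision and recall. The counts in the sketch below are invented purely to show the arithmetic and how memorized duplicates in a test set can flatter the score; only the formula itself is standard.

```python
# F1 as the harmonic mean of precision and recall; counts are invented.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)  # correct positives / predicted positives
    recall = tp / (tp + fn)     # correct positives / actual positives
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(tp=94, fp=6, fn=6), 2))    # 0.94: leaked evaluation
print(round(f1_score(tp=82, fp=18, fn=18), 2))  # 0.82: after deduplication
```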

It is, of course, unfortunate that these studies were published with flawed results stemming from the corrupted data set; our work calls their findings into question. But because the authors of two of the studies followed best practices in computational scientific reproducibility and made their data, code and results fully available, the scientific method worked as intended, and the flawed results were detected and (to the best of our knowledge) are being corrected.

The third team, as far as we can tell, included neither their data set nor their code, making it impossible for us to properly evaluate their results. If all of the groups had neglected to make their data and code available, this data-leakage problem would have been almost impossible to catch. That would be a problem not just for the studies that were already published, but also for every other scientist who might want to use that data set for their own work.

More insidiously, the erroneously high performance reported in these papers could dissuade others from attempting to improve on the published methods, because they would incorrectly find their own algorithms lacking by comparison. Equally troubling, it could complicate journal publication, because demonstrating improvement is often a requirement for successful review, potentially holding back research for years.

So, what should we do with these erroneous studies? Some would argue that they should be retracted. We would caution against such a knee-jerk reaction, at least as a blanket policy. Because two of the three papers in our analysis included the data, code and full results, we could evaluate their findings and flag the problematic data set. On one hand, that behaviour should be encouraged, for instance by allowing the authors to publish corrections. On the other, retracting studies with both highly flawed results and little or no support for reproducible research would send the message that scientific reproducibility is not optional. Furthermore, demonstrating support for full scientific reproducibility provides a clear litmus test for journals to use when deciding between correction and retraction.

Now, scientific data are growing more complex every day. Data sets used in complex analyses, especially those involving AI, are part of the scientific record. They should be made available, along with the code with which to analyse them, either as supplemental material or through open data repositories, such as Figshare (which has partnered with Springer Nature, the publisher of Nature, to facilitate data sharing in published manuscripts) and Zenodo, which can ensure data persistence and provenance. But those steps will help only if researchers also learn to treat published data with some scepticism, if only to avoid repeating others' mistakes.
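One low-cost habit that operationalizes that scepticism: verify a checksum of any downloaded data set against the digest the authors deposited before building on it, so you know you are analysing the exact bytes that were published. A minimal sketch follows; the file name and expected digest are placeholders.

```python
# Verify data provenance before analysis; path and digest are placeholders.
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "digest-published-alongside-the-deposited-data"  # placeholder
digest = sha256_of("metabolite_dataset.csv")                # placeholder path
if digest != EXPECTED:
    raise SystemExit(f"data set differs from the deposited version: {digest}")
```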

See the original post here:

In the AI science boom, beware: your results are only as good as your data - Nature.com