Archive for the ‘AI’ Category

These are JPMorgan’s top AI stock picks outside of the chip space – CNBC

Stocks tied to the artificial intelligence frenzy might be trading at a premium, but that doesn't mean they're unattractive, according to JPMorgan. The AI trend has propelled stocks to new heights. This year alone, the two biggest outperformers in the Magnificent Seven cohort, Nvidia and Meta Platforms, have climbed 90% and 44%, respectively. On the other hand, the underperformance of Tesla and Apple, down 31% and 10%, respectively, also speaks volumes: The days of merely riding the wave of these Big Tech names seem to be over, and companies will have to start proving that they have a fundamental AI story.

Indeed, JPMorgan analyst Samik Chatterjee noted that within the hardware and networking space, the "AI Group" of stocks is trading at a 60% premium compared to its historical trading average. For comparison, the non-AI group of stocks is only at a 10% premium. Overall, Chatterjee said that the AI cohort is trading at a 55% premium relative to the non-AI basket. While that may be the case, the top AI picks still remain attractive, including a few names outside of the chip space, he wrote. In the same note, Chatterjee released a list of his top AI stock picks.

One of Chatterjee's top AI picks is PC and server manufacturer Dell. The stock "trades at one of the most inexpensive multiples relative to the AI Group despite the robust premium relative to its own historical average," he wrote. The AI bull case for Dell, which JPMorgan rates as overweight, is around its GPU-based server sales, the analyst said. Dell has rallied roughly 47% this year. On March 1, shares surged 31%, Dell's best day since its 2018 return to the stock market, after the company beat earnings and revenue estimates in its latest quarter.

The analyst also likes Arista Networks as an AI pick, which has a 50% revenue exposure to the cloud trend. The AI bull case for the company, which is also rated overweight, is tied to Ethernet adoption in back-end networks, Chatterjee said.
Arista is also cheaper than its peers, trading at just a 28% premium versus a 60% average, the analyst said. Arista has soared 30% this year. Goldman Sachs analyst Michael Ng recently stated his confidence that the stock can outperform Wall Street's earnings predictions.

See the original post:

These are JPMorgan's top AI stock picks outside of the chip space - CNBC

16 Changes to the Way Enterprises Are Building and Buying Generative AI – Andreessen Horowitz

Generative AI took the consumer landscape by storm in 2023, reaching over a billion dollars of consumer spend in record time. In 2024, we believe the revenue opportunity will be multiples larger in the enterprise.

Last year, while consumers spent hours chatting with new AI companions or making images and videos with diffusion models, most enterprise engagement with genAI seemed limited to a handful of obvious use cases and shipping "GPT-wrapper" products as new SKUs. Some naysayers doubted that genAI could scale into the enterprise at all. Aren't we stuck with the same 3 use cases? Can these startups actually make any money? Isn't this all hype?

Over the past couple months, we've spoken with dozens of Fortune 500 and top enterprise leaders, and surveyed 70 more, to understand how they're using, buying, and budgeting for generative AI. We were shocked by how significantly the resourcing and attitudes toward genAI had changed over the last 6 months. Though these leaders still have some reservations about deploying generative AI, they're also nearly tripling their budgets, expanding the number of use cases that are deployed on smaller open-source models, and transitioning more workloads from early experimentation into production.

This is a massive opportunity for founders. We believe that AI startups who 1) build for enterprises' AI-centric strategic initiatives while anticipating their pain points, and 2) move from a services-heavy approach to building scalable products will capture this new wave of investment and carve out significant market share.

As always, building and selling any product for the enterprise requires a deep understanding of customers' budgets, concerns, and roadmaps. To clue founders into how enterprise leaders are making decisions about deploying generative AI (and to give AI executives a handle on how other leaders in the space are approaching the same problems they have), we've outlined 16 top-of-mind considerations about resourcing, models, and use cases from our recent conversations with those leaders below.

In 2023, the average spend across foundation model APIs, self-hosting, and fine-tuning models was $7M across the dozens of companies we spoke to. Moreover, nearly every single enterprise we spoke with saw promising early results of genAI experiments and planned to increase their spend anywhere from 2x to 5x in 2024 to support deploying more workloads to production.

Last year, much of enterprise genAI spend unsurprisingly came from innovation budgets and other typically one-time pools of funding. In 2024, however, many leaders are reallocating that spend to more permanent software line items; fewer than a quarter reported that genAI spend will come from innovation budgets this year. On a much smaller scale, we've also started to see some leaders deploying their genAI budget against headcount savings, particularly in customer service. We see this as a harbinger of significantly higher future spend on genAI if the trend continues. One company cited saving ~$6 for each call served by their LLM-powered customer service (for a total of ~90% cost savings) as a reason to increase their investment in genAI eightfold. Here's the overall breakdown of how orgs are allocating their LLM spend:
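As a back-of-envelope check, the two figures the company cited (~$6 saved per call, ~90% total savings) imply roughly the following per-call economics. Treating the $6 as 90% of the original per-call cost is our assumption, not a figure from the report:

```python
# Back-of-envelope check of the cited customer-service savings.
# Assumption (not stated in the source): the ~$6 saved represents ~90%
# of the pre-LLM per-call cost.

savings_per_call = 6.00   # dollars saved per LLM-served call (cited)
savings_fraction = 0.90   # share of cost eliminated (cited)

# If $6 is 90% of the original cost, the pre-LLM cost per call was:
original_cost = savings_per_call / savings_fraction
# ...and the remaining per-call cost with the LLM is the other ~10%:
llm_cost = original_cost - savings_per_call

print(f"original ≈ ${original_cost:.2f}/call, with LLM ≈ ${llm_cost:.2f}/call")
# → original ≈ $6.67/call, with LLM ≈ $0.67/call
```

At those margins, even a modest call volume makes the eightfold investment increase easy to justify.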

Enterprise leaders are currently mostly measuring ROI by increased productivity generated by AI. While they are relying on NPS and customer satisfaction as good proxy metrics, they're also looking for more tangible ways to measure returns, such as revenue generation, savings, efficiency, and accuracy gains, depending on their use case. In the near term, leaders are still rolling out this tech and figuring out the best metrics to use to quantify returns, but over the next 2 to 3 years ROI will be increasingly important. While leaders are figuring out the answer to this question, many are taking it on faith when their employees say they're making better use of their time.

Simply having an API to a model provider isn't enough to build and deploy generative AI solutions at scale. It takes highly specialized talent to implement, maintain, and scale the requisite computing infrastructure. Implementation alone accounted for one of the biggest areas of AI spend in 2023 and was, in some cases, the largest. One executive mentioned that LLMs are probably a quarter of the cost of building use cases, with development costs accounting for the majority of the budget. In order to help enterprises get up and running on their models, foundation model providers offered and are still providing professional services, typically related to custom model development. We estimate that this made up a sizable portion of revenue for these companies in 2023 and, in addition to performance, is one of the key reasons enterprises selected certain model providers. Because it's so difficult to get the right genAI talent in the enterprise, startups who offer tooling to make it easier to bring genAI development in house will likely see faster adoption.

Just over 6 months ago, the vast majority of enterprises were experimenting with 1 model (usually OpenAI's) or 2 at most. When we talk to enterprise leaders today, they're all testing (and in some cases, even using in production) multiple models, which allows them to 1) tailor to use cases based on performance, size, and cost, 2) avoid lock-in, and 3) quickly tap into advancements in a rapidly moving field. This third point was especially important to leaders, since the model leaderboard is dynamic and companies are excited to incorporate both current state-of-the-art models and open-source models to get the best results.

We'll likely see even more models proliferate. In the table below, drawn from survey data, enterprise leaders reported a number of models in testing, which is a leading indicator of the models that will be used to push workloads to production. For production use cases, OpenAI still has dominant market share, as expected.

This is one of the most surprising changes in the landscape over the past 6 months. We estimate the market share in 2023 was 80%–90% closed source, with the majority of share going to OpenAI. However, 46% of survey respondents mentioned that they prefer or strongly prefer open source models going into 2024. In interviews, nearly 60% of AI leaders noted that they were interested in increasing open source usage or switching when fine-tuned open source models roughly matched the performance of closed-source models. In 2024 and onwards, then, enterprises expect a significant shift of usage towards open source, with some expressly targeting a 50/50 split, up from the 80% closed/20% open split in 2023.

Control (security of proprietary data and understanding why models produce certain outputs) and customization (ability to effectively fine-tune for a given use case) far outweighed cost as the primary reasons to adopt open source. We were surprised that cost wasn't top of mind, but it reflects leadership's current conviction that the excess value created by generative AI will likely far outweigh its price. As one executive explained, "getting an accurate answer is worth the money."

Enterprises still aren't comfortable sharing their proprietary data with closed-source model providers out of regulatory or data security concerns, and unsurprisingly, companies whose IP is central to their business model are especially conservative. While some leaders addressed this concern by hosting open source models themselves, others noted that they were prioritizing models with virtual private cloud (VPC) integrations.

In 2023, there was a lot of discussion around building custom models like BloombergGPT. In 2024, enterprises are still interested in customizing models, but with the rise of high-quality open source models, most are opting not to train their own LLM from scratch and instead use retrieval-augmented generation (RAG) or fine-tune an open source model for their specific needs.
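The retrieval-augmented generation approach most enterprises are opting for can be sketched in a few lines: retrieve the documents most relevant to a query, then prepend them to the prompt sent to the model. Word overlap below is a toy stand-in for the vector-embedding retrieval real systems use, and the documents and helper names are hypothetical:

```python
# Minimal sketch of the RAG pattern: retrieve relevant context, then build an
# augmented prompt. Word-overlap scoring is a stand-in for embedding search.
import re

def words(text: str) -> set[str]:
    """Lowercase alphanumeric tokens of a string."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of words shared with the query."""
    return len(words(query) & words(doc))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the prompt with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The annual report covers fiscal year 2023.",
    "Shipping takes 5-7 business days.",
]
print(build_prompt("What is the refund policy?", docs))
```

The appeal is that the model itself stays untouched: proprietary data enters only at query time, which is much cheaper than training an LLM from scratch.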

In 2023, many enterprises bought models through their existing cloud service provider (CSP) for security reasons (leaders were more concerned about closed-source models mishandling their data than their CSPs) and to avoid lengthy procurement processes. This is still the case in 2024, which means that the correlation between CSP and preferred model is fairly high: Azure users generally preferred OpenAI, while Amazon users preferred Anthropic or Cohere. As we can see in the chart below, of the 72% of enterprises who use an API to access their model, over half used the model hosted by their CSP. (Note that over a quarter of respondents did self-host, likely in order to run open source models.)

While leaders cited reasoning capability, reliability, and ease of access (e.g., on their CSP) as the top reasons for adopting a given model, leaders also gravitated toward models with other differentiated features. Multiple leaders cited the 200K context window as a key reason for adopting Anthropic, for instance, while others adopted Cohere because of its early-to-market, easy-to-use fine-tuning offering.

While large swathes of the tech community focus on comparing model performance to public benchmarks, enterprise leaders are more focused on comparing the performance of fine-tuned open-source models and fine-tuned closed-source models against their own internal sets of benchmarks. Interestingly, despite closed-source models typically performing better on external benchmarking tests, enterprise leaders still gave open-source models relatively high NPS (and in some cases higher) because they're easier to fine-tune to specific use cases. One company found that after fine-tuning, Mistral and Llama perform almost as well as OpenAI but at much lower cost. By these standards, model performance is converging even more quickly than we anticipated, which gives leaders a broader range of very capable models to choose from.

Most enterprises are designing their applications so that switching between models requires little more than an API change. Some companies are even pre-testing prompts so the change happens literally at the flick of a switch, while others have built model gardens from which they can deploy models to different apps as needed. Companies are taking this approach in part because they've learned some hard lessons from the cloud era about the need to reduce dependency on providers, and in part because the market is evolving at such a fast clip that it feels unwise to commit to a single vendor.
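The "little more than an API change" design is typically a thin routing layer in front of the vendor SDKs. A minimal sketch, where the provider names and stub clients are hypothetical stand-ins for real SDK calls:

```python
# Sketch of a provider-agnostic model layer: apps call complete(), and the
# provider is a config value. The registry entries are stubs; in practice
# each would wrap a vendor SDK (names here are hypothetical).
from typing import Callable

PROVIDERS: dict[str, Callable[[str], str]] = {
    "openai": lambda prompt: f"[openai] {prompt}",
    "anthropic": lambda prompt: f"[anthropic] {prompt}",
    "local-llama": lambda prompt: f"[local-llama] {prompt}",
}

def complete(prompt: str, provider: str = "openai") -> str:
    """Route the prompt to whichever provider is configured."""
    return PROVIDERS[provider](prompt)

# Switching vendors is a one-argument (or one-config-line) change:
print(complete("Summarize this contract.", provider="anthropic"))
# → [anthropic] Summarize this contract.
```

Keeping prompts pre-tested against each entry in the registry is what makes the "flick of a switch" claim credible: the swap is validated before it is ever needed.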

Enterprises are overwhelmingly focused on building applications in house, citing the lack of battle-tested, category-killing enterprise AI applications as one of the drivers. After all, there aren't Magic Quadrants for apps like this (yet!). The foundation models have also made it easier than ever for enterprises to build their own AI apps by offering APIs. Enterprises are now building their own versions of familiar use cases, such as customer support and internal chatbots, while also experimenting with more novel use cases, like writing CPG recipes, narrowing the field for molecule discovery, and making sales recommendations. Much has been written about the limited differentiation of "GPT wrappers," or startups building a familiar interface (e.g., chatbot) for a well-known output of an LLM (e.g., summarizing documents); one reason we believe these will struggle is that AI has further reduced the barrier to building similar applications in-house. However, the jury is still out on whether this will shift when more enterprise-focused AI apps come to market. One leader noted that though they were building many use cases in house, they're optimistic there will be new tools coming up and would prefer to use the best out there. Others believe that genAI is an increasingly strategic tool that allows companies to bring certain functionalities in-house instead of relying, as they traditionally have, on external vendors. Given these dynamics, we believe that the apps that innovate beyond the "LLM + UI" formula and significantly rethink the underlying workflows of enterprises, or help enterprises better use their own proprietary data, stand to perform especially well in this market.

That's because 2 primary concerns about genAI still loom large in the enterprise: 1) potential issues with hallucination and safety, and 2) public relations issues with deploying genAI, particularly into sensitive consumer sectors (e.g., healthcare and financial services). The most popular use cases of the past year were either focused on internal productivity or routed through a human before getting to a customer, like coding copilots, customer support, and marketing. As we can see in the chart below, these use cases are still dominating in the enterprise in 2024, with enterprises pushing totally internal use cases like text summarization and knowledge management (e.g., internal chatbot) to production at far higher rates than sensitive human-in-the-loop use cases like contract review, or customer-facing use cases like external chatbots or recommendation algorithms. Companies are keen to avoid the fallout from generative AI mishaps like the Air Canada customer service debacle. Because these concerns still loom large for most enterprises, startups who build tooling that can help control for these issues could see significant adoption.

By our calculations, we estimate that the model API (including fine-tuning) market ended 2023 around $1.5B–2B run-rate revenue, including spend on OpenAI models via Azure. Given the anticipated growth in the overall market and concrete indications from enterprises, spend on this area alone will grow to at least $5B run-rate by year end, with significant upside potential. As we've discussed, enterprises have prioritized genAI deployment, increased budgets and reallocated them to standard software lines, optimized use cases across different models, and plan to push even more workloads to production in 2024, which means they'll likely drive a significant chunk of this growth.

Over the past 6 months, enterprises have issued a top-down mandate to find and deploy genAI solutions. Deals that used to take over a year to close are being pushed through in 2 or 3 months, and those deals are much bigger than they've been in the past. While this post focuses on the foundation model layer, we also believe this opportunity in the enterprise extends to other parts of the stack: from tooling that helps with fine-tuning, to model serving, to application building, and to purpose-built AI native applications. We're at an inflection point in genAI in the enterprise, and we're excited to partner with the next generation of companies serving this dynamic and growing market.

Link:

16 Changes to the Way Enterprises Are Building and Buying Generative AI - Andreessen Horowitz

The first batch of Rabbit R1 AI devices will be shipping next week – TechRadar

The Rabbit R1 wowed the tech world at CES 2024 earlier this year, and it's now been confirmed that the first 10,000 of these little AI-powered gadgets are going to be heading to the first people who preordered them in the US and Canada from Sunday, March 31.

As per a Rabbit post on social media (via Engadget), the first batch of devices will start leaving the factory on that date, though they may take three weeks or so to get into the hands of customers, due to various international and US customs processes.

If you were one of the first 10,000 people in the US to get your name down for a Rabbit R1, you can expect it around April 24th, Rabbit says. Of course there's always the chance of further delays, but that's the current estimate.

According to the FAQ on the Rabbit website, the second batch of orders will be shipping in April and May, with the third batch heading to customers during May and June, for US and Canada addresses. If you're in the UK or EU, shipping is expected to start by late April.

If you're completely new to the Rabbit R1, it functions a little like a smartphone, only there's an AI assistant doing all the jobs that apps normally do: queueing up music, taking photos, booking hotels, and so on.

In fact, the Rabbit software is clever enough to interact with your mobile apps, once you've shown it what to do. It's an interesting new take on the pocket computer, and it's attracted a lot of early buzz in the industry.

We know the Rabbit R1 is going to be powered, at least in part, by the Perplexity AI engine: this means you'll be able to chat with the device in the same way as you would with ChatGPT or with Copilot from Microsoft.


You can still order the Rabbit R1 from Rabbit for $199 (about £160 / AU$305), though it might be a while before you get it. Rabbit CEO Jesse Lyu recently shared a demo of the device in action, if you want to get a feel for how it works.

View post:

The first batch of Rabbit R1 AI devices will be shipping next week - TechRadar

7 great Google Gemini AI prompts to try this weekend – Tom’s Guide

Spring is in the air, the trees are starting to become more green and the weather is getting warmer in the northern hemisphere. As thoughts turn to picnics and outdoor adventure, why not turn to technology for inspirational ways to mark the changing season?

Google Gemini is an incredibly powerful artificial intelligence tool, but as with any tool, you can often suffer from the same blank page problem, whether you're opening it for the first time ever or just the first time in a day.

That is in part why I created the Prompt_Jitsu column: a way to share prompt ideas that anyone can try and possibly get inspiration to do something fun. This week I am turning to Google Gemini again, but with the idea of spring as inspiration.

Prompting any artificial intelligence chatbot can be hit and miss. More often than in the past, you do get exactly what you'd expect, but sometimes it throws you a curveball. Check out our guide to using Google Gemini if you haven't tried this chatbot before.

I've tested each of these prompts in the free and Gemini Advanced versions of the Google chatbot and they worked fairly well, but if you get something completely obscure, I'd love to hear about it.

We're going to start with an image prompt. Google Gemini can't generate images in the UK or much of Europe, so you'd need a VPN; otherwise the same prompt will generate a descriptive scene rather than a picture, which can be fun in its own right.

The prompt I've picked: "Generate a colorful Spring-themed image featuring a picnic in a field of wildflowers."


This will create a cartoon-like generic image of a picnic scene. You could further refine the prompt by replying to the images with something like "add the word Spring to the picture." If you put the word in quotation marks, it will improve the text quality.

Next up, we need some food to go with that picnic. We're going to do this over a couple of prompts. The first is a simple one and will get a different result most times you try it, although I've been wrong about that in previous weeks.

The prompt: "Suggest a delicious savory dish for a Spring picnic". Gemini should give you something like a salad, skewers or a frittata, and this is where prompt two comes into play, as this is not a "one shot" idea.

Next up you'll need to pick one of those recipes, or ask it to suggest some more. If you're happy, say "give me the full recipe for x". In my case, I asked for the full frittata recipe.

The recipe required parmesan cheese, milk, eggs, thyme, oil, onion, asparagus, peas, goat cheese and herbs. The full recipe is on the Prompt_Jitsu GitHub repo.

We've got the picture and the food, now we need something else to do on the day. This is something Google Gemini can be good at, so I've asked it to "plan a perfect Spring day itinerary for a family with kids ages 5-12."

You can adjust the ages or even remove the kids part completely and ask for a fun day out on your own or with a partner. You could even use Gemini to plan a day with the boys or girls; it should be able to adapt.

For me, it suggested that between 9 a.m. and midday we "embrace the outdoors" with a visit to a park or botanical garden and let the kids run free. After our amazing picnic, the chatbot said we should visit an art studio or pottery place for the afternoon.

After we wind down at home after a busy, fun-filled day, why not enjoy a story about the spring? This is something Gemini is very good at doing.

Use this prompt: "Write a short, whimsical story about a talking flower that blooms in the Spring." Or adapt it to suit your own circumstances, even putting the names of your children into the prompt. For example, have the chatbot name the flower after your kid.

I didn't do this and Gemini called the flower Primrose. She told of faraway lands where exotic flowers bloomed and of whispering rain showers painting the meadow green. You can read it on my GitHub.

After all that talk of talking flowers and from inspiration gained while being out and about in our fictional family day in the botanical garden, let's design a garden.

The prompt: "What are some tips for planning a successful Spring garden, including recommended plants and layout ideas?" This is another example where multiple prompts will be needed to get exactly what you want.

However, from a single prompt I got planning tips such as "knowing your zone, doing a sunlight assessment and preparing the soil," as well as the idea of planting seasonal vegetables and herbs. Gemini also said to plant tulips, daffodils and hyacinths.

If you want to get the best out of your garden though, check out this guide to preparing your garden for spring from Tom's Guide's homes content editor Cynthia Lawrence.

We all know that children don't get to sleep after the story, they always want one more thing. So why not a haiku about watching baby animals at play?

The prompt: "Create a Spring-themed haiku about the joy of watching baby animals play." You could swap "baby animals play" with anything you like: what about watching monster trucks play, or watching dolphins play?

Gemini gave me: "Soft chirps fill the breeze / Fluffy chicks chase butterflies / Spring's heart beats anew."

And for monster trucks: "Mud splatters like blooms / Giant tires churn earth reborn / Spring roars, a joyful wreck."

For this prompt, you want to make sure you've got the Google Maps extension turned on. To do this, click Settings, then extensions on the left-hand menu and toggle the Google Maps button to on.

We'll use Denver, Colorado as our location for this prompt, as there aren't many places to go hiking around me. The prompt: "I want to go on a scenic Spring hike this weekend. Can you suggest the top 3 hiking trails within a 30-mile radius of Denver, Colorado? Please provide trail details like length, difficulty level, notable features, and directions to the trailhead using Google Maps."

This will trigger the Google Maps extension, and Gemini suggested three top-rated hikes in the area, showed them as icons on a map and gave me all the difficulty level details. You can see my full interaction for this prompt on Gemini.

If you enjoyed the prompts this week, why not share your output with us, and then try out a series of Google Gemini prompt ideas, or even make a story, song and images from previous weeks.

Go here to see the original:

7 great Google Gemini AI prompts to try this weekend - Tom's Guide

Microsoft and NVIDIA announce major integrations to accelerate generative AI for enterprises everywhere – Stories – Microsoft

REDMOND, Wash., and SAN JOSE, Calif., March 18, 2024 - At GTC on Monday, Microsoft Corp. and NVIDIA expanded their longstanding collaboration with powerful new integrations that leverage the latest NVIDIA generative AI and Omniverse technologies across Microsoft Azure, Azure AI services, Microsoft Fabric and Microsoft 365.

"Together with NVIDIA, we are making the promise of AI real, helping drive new benefits and productivity gains for people and organizations everywhere," said Satya Nadella, chairman and CEO, Microsoft. "From bringing the GB200 Grace Blackwell processor to Azure, to new integrations between DGX Cloud and Microsoft Fabric, the announcements we are making today will ensure customers have the most comprehensive platforms and tools across every layer of the Copilot stack, from silicon to software, to build their own breakthrough AI capability."

"AI is transforming our daily lives, opening up a world of new opportunities," said Jensen Huang, founder and CEO of NVIDIA. "Through our collaboration with Microsoft, we're building a future that unlocks the promise of AI for customers, helping them deliver innovative solutions to the world."

Advancing AI infrastructure

Microsoft will be one of the first organizations to bring the power of NVIDIA Grace Blackwell GB200 and advanced NVIDIA Quantum-X800 InfiniBand networking to Azure, delivering cutting-edge trillion-parameter foundation models for natural language processing, computer vision, speech recognition and more.

Microsoft is also announcing the general availability of its Azure NC H100 v5 virtual machine (VM) series, based on the NVIDIA H100 NVL platform. Designed for midrange training and inferencing, the NC series offers customers two classes of VMs with one or two NVIDIA H100 94GB PCIe Tensor Core GPUs, and supports NVIDIA Multi-Instance GPU (MIG) technology, which allows customers to partition each GPU into up to seven instances, providing flexibility and scalability for diverse AI workloads.

Healthcare and life sciences breakthroughs

Microsoft is expanding its collaboration with NVIDIA to transform healthcare and life sciences through the integration of cloud, AI and supercomputing technologies. By harnessing the power of Microsoft Azure alongside NVIDIA DGX Cloud and the NVIDIA Clara suite of microservices, healthcare providers, pharmaceutical and biotechnology companies, and medical device developers will soon be able to innovate rapidly across clinical research and care delivery with improved efficiency.

Industry leaders such as Sanofi and the Broad Institute of MIT and Harvard, industry ISVs such as Flywheel and SOPHiA GENETICS, academic medical centers like the University of Wisconsin School of Medicine and Public Health, and health systems like Mass General Brigham are already leveraging cloud computing and AI to drive transformative changes in healthcare and to enhance patient care.

Industrial digitalization

NVIDIA Omniverse Cloud APIs will be available first on Microsoft Azure later this year, enabling developers to bring increased data interoperability, collaboration, and physics-based visualization to existing software applications. At NVIDIA GTC, Microsoft is demonstrating a preview of what is possible using Omniverse Cloud APIs on Microsoft Azure. Using an interactive 3D viewer in Microsoft Power BI, factory operators can see real-time factory data overlaid on a 3D digital twin of their facility to gain new insights that can speed up production.

NVIDIA Triton Inference Server and Microsoft Copilot

NVIDIA GPUs and NVIDIA Triton Inference Server help serve AI inference predictions in Microsoft Copilot for Microsoft 365. Copilot for Microsoft 365, soon available as a dedicated physical keyboard key on Windows 11 PCs, combines the power of large language models with proprietary enterprise data to deliver real-time contextualized intelligence, enabling users to enhance their creativity, productivity and skills.

From AI training to AI deployment

NVIDIA NIM inference microservices are coming to Azure AI to turbocharge AI deployments. Part of the NVIDIA AI Enterprise software platform, also available on the Azure Marketplace, NIM provides cloud-native microservices for optimized inference on more than two dozen popular foundation models, including NVIDIA-built models that users can experience at ai.nvidia.com. For deployment, the microservices deliver prebuilt, run-anywhere containers powered by NVIDIA AI Enterprise inference software including Triton Inference Server, TensorRT and TensorRT-LLM to help developers speed time to market of performance-optimized production AI applications.

About NVIDIA

Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling industrial digitalization across markets. NVIDIA is now a full-stack computing infrastructure company with data-center-scale offerings that are reshaping industry. More information at https://nvidianews.nvidia.com/.

About Microsoft

Microsoft (Nasdaq: MSFT; @microsoft) enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.

For more information, press only:

Microsoft Media Relations, WE Communications for Microsoft, (425) 638-7777, [emailprotected]

Natalie Hereth, NVIDIA Corporation, [emailprotected]

Note to editors: For more information, news and perspectives from Microsoft, please visit Microsoft Source at http://news.microsoft.com/source. Web links, telephone numbers and titles were correct at time of publication but may have changed. For additional assistance, journalists and analysts may contact Microsoft's Rapid Response Team or other appropriate contacts listed at https://news.microsoft.com/microsoft-public-relations-contacts.

NVIDIA forward-looking statements

Certain statements in this press release including, but not limited to, statements as to: the benefits, impact, performance, features, and availability of NVIDIA's products and technologies, including NVIDIA Grace Blackwell Superchip, NVIDIA DGX Cloud, NVIDIA Omniverse Cloud APIs, NVIDIA AI and Accelerated Computing Platforms, and NVIDIA Generative AI Microservices; the benefits and impact of NVIDIA's collaboration with Microsoft, and the features and availability of its services and offerings; AI transforming our daily lives, the way we work and opening up a world of new opportunities; and building a future that unlocks the promise of AI for customers and brings transformative solutions to the world through NVIDIA's continued collaboration with Microsoft are forward-looking statements that are subject to risks and uncertainties that could cause results to be materially different than expectations. Important factors that could cause actual results to differ materially include: global economic conditions; NVIDIA's reliance on third parties to manufacture, assemble, package and test NVIDIA's products; the impact of technological development and competition; development of new products and technologies or enhancements to NVIDIA's existing product and technologies; market acceptance of NVIDIA's products or NVIDIA partners' products; design, manufacturing or software defects; changes in consumer preferences or demands; changes in industry standards and interfaces; unexpected loss of performance of NVIDIA's products or technologies when integrated into systems; as well as other factors detailed from time to time in the most recent reports NVIDIA files with the Securities and Exchange Commission, or SEC, including, but not limited to, its annual report on Form 10-K and quarterly reports on Form 10-Q. Copies of reports filed with the SEC are posted on the company's website and are available from NVIDIA without charge.
These forward-looking statements are not guarantees of future performance and speak only as of the date hereof, and, except as required by law, NVIDIA disclaims any obligation to update these forward-looking statements to reflect future events or circumstances.

Many of the products and features described herein remain in various stages and will be offered on a when-and-if-available basis. The statements above are not intended to be, and should not be interpreted as, a commitment, promise, or legal obligation, and the development, release, and timing of any features or functionalities described for our products is subject to change and remains at the sole discretion of NVIDIA. NVIDIA will have no liability for failure to deliver or delay in the delivery of any of the products, features or functions set forth herein.

© 2024 NVIDIA Corporation. All rights reserved. NVIDIA, the NVIDIA logo, DGX, NVIDIA Clara, NVIDIA NIM, NVIDIA Omniverse, NVIDIA Triton Inference Server, and TensorRT are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and/or other countries. Other company and product names may be trademarks of the respective companies with which they are associated. Features, pricing, availability, and specifications are subject to change without notice.

See the original post:

Microsoft and NVIDIA announce major integrations to accelerate generative AI for enterprises everywhere - Stories - Microsoft