
PLTR Stock Outlook: Is Palantir's AI Hype Worth the Premium? – InvestorPlace


No technology company of our time inspires as much hope and fear as Palantir (NASDAQ:PLTR). This is true for its technology, a deep-learning database used primarily by the military. It's also true for PLTR stock and its prospects.

Bulls see Palantir worth $50 per share, bears barely $10. (It was selling at $26 on July 24.)

When it comes to technology, optimists see Palantir as a revolutionary defender of freedom and a magic bullet for productivity. Pessimists see Palantir as an overhyped, dangerous, authoritarian scam.

I believe it's none of those things. I also believe it's a speculative buy for a young investor.

Let's take a closer look.


Palantir happily took the label of "artificial intelligence" after ChatGPT arrived. At the time, the stock was selling for under $8 and the company was struggling to define itself as a data analysis company.

It's more of a "Machine Internet" company. It combines what's known about all your assets and offers strategic insights into deploying them, in real time.

The Pentagon loves it. Every month, it seems, Palantir is bagging another major contract, with more secret information being made available to more people. The latest is a $480 million, five-year deal for Maven, fusing data from intelligence, surveillance and reconnaissance systems.

Palantir's software can identify and optimize what our side has while identifying and strategizing against what the other side has. Dan Ives of Wedbush sees it as worth $50 per share.

There are also civilian applications, for both government and commercial accounts. The company has strong relationships with both Oracle (NASDAQ:ORCL) and Microsoft (NASDAQ:MSFT). Its ability to coordinate hospital work won it a deal with England's National Health Service last year.


The pessimistic view starts with Palantir being primarily a military contractor.

Most military contractors have limited growth but are highly profitable, because it's difficult to get out of the military box. Palantir grew just 17% last year and has been only marginally profitable for about a year. While it earned a profit last year, it also reported negative cash flow of $1.78 billion.

Palantir's selling point with the military is that it is a highly proprietary system. That's great if you're in the secrets business. It's not so great if you're a hospital, or if something is broken and you need to fix it.

While other defense software contractors sell for 13-15 times sales, Palantir sells for closer to 26 times sales, even amid the latest sell-off. It's also vulnerable to what Gartner (NYSE:IT) calls the "trough of disillusionment," the realization that AI may not fully justify the current hype.

CEO Alex Karp, who despite doing a good job seems overpaid at $1.1 billion, brags about Palantir's commercial revenue growth in his most recent stockholder letter, but it's still just 24% of the business. Palantir remains, and likely will always remain, a military-first company. That's why analysts have been saying it is "priced to perfection." That's analyst-speak for limited upside.


Most AI companies remain focused on the interface between people and data. I like the fact that Palantir is focused on the interface between machines and data.

It's this interface that gives Palantir value and should give speculators at sites like Stocktwits hope. The best AI systems today aren't focused on replacing people so much as doing what people can't. People can't yet penetrate the fog of war.

Its by sticking to a clear, coherent strategy that the best companies, and generals, win. Palantir has that. The question is whether it has enough runway, earned serving the war machine, to justify its valuation.

This depends on its ability to grow the commercial side of the business. Look closely at those numbers when it next reports on Aug. 5. If they're good, go long.

On the date of publication, the responsible editor did not have (either directly or indirectly) any positions in the securities mentioned in this article.

As of this writing, Dana Blankenhorn had a LONG position in MSFT. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Dana Blankenhorn has been a financial and technology journalist since 1978. He is the author of Technology's Big Bang: Yesterday, Today and Tomorrow with Moore's Law, available at the Amazon Kindle store. Write him at danablankenhorn@gmail.com, tweet him at @danablankenhorn, or subscribe to his free Substack newsletter.


NIH findings shed light on risks and benefits of integrating AI into medical decision-making – National Institutes of Health (NIH)

News Release

Tuesday, July 23, 2024

AI model scored well on medical diagnostic quiz, but made mistakes explaining answers.

Researchers at the National Institutes of Health (NIH) found that an artificial intelligence (AI) model solved medical quiz questions, designed to test health professionals' ability to diagnose patients based on clinical images and a brief text summary, with high accuracy. However, physician-graders found the AI model made mistakes when describing images and explaining how its decision-making led to the correct answer. The findings, which shed light on AI's potential in the clinical setting, were published in npj Digital Medicine. The study was led by researchers from NIH's National Library of Medicine (NLM) and Weill Cornell Medicine, New York City.

"Integration of AI into health care holds great promise as a tool to help medical professionals diagnose patients faster, allowing them to start treatment sooner," said NLM Acting Director Stephen Sherry, Ph.D. "However, as this study shows, AI is not advanced enough yet to replace human experience, which is crucial for accurate diagnosis."

The AI model and human physicians answered questions from the New England Journal of Medicine (NEJM) Image Challenge. The challenge is an online quiz that provides real clinical images and a short text description that includes details about the patient's symptoms and presentation, then asks users to choose the correct diagnosis from multiple-choice answers.

The researchers tasked the AI model with answering 207 image challenge questions and providing a written rationale to justify each answer. The prompt specified that the rationale should include a description of the image, a summary of relevant medical knowledge, and step-by-step reasoning for how the model chose the answer.
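That three-part rationale requirement can be sketched as a prompt template. The wording below is illustrative only, assuming a hypothetical `build_rationale_prompt` helper; the study's exact prompt text is not reproduced in this release:

```python
def build_rationale_prompt(case_summary: str, choices: list[str]) -> str:
    """Assemble a prompt mirroring the three-part rationale the study required.

    This is a reconstruction of the structure described in the release,
    not the researchers' actual prompt.
    """
    # Label the multiple-choice answers A, B, C, ... as in the NEJM quiz format
    options = "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
    return (
        "You are answering a clinical image challenge question.\n"
        f"Patient summary: {case_summary}\n"
        f"Answer choices:\n{options}\n\n"
        "In your written rationale you must:\n"
        "1. Describe the image.\n"
        "2. Summarize the relevant medical knowledge.\n"
        "3. Give step-by-step reasoning for the answer you choose.\n"
    )

prompt = build_rationale_prompt(
    "A 45-year-old presents with two lesions on the arm.",
    ["Psoriasis", "Lichen planus", "Tinea corporis"],
)
```

The same prompt string would then be sent to the multimodal model alongside the clinical image for each of the 207 questions.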

Nine physicians from various institutions, each with a different medical specialty, were recruited and answered their assigned questions first in a closed-book setting (without referring to any external materials such as online resources) and then in an open-book setting (using external resources). The researchers then provided the physicians with the correct answer, along with the AI model's answer and corresponding rationale. Finally, the physicians were asked to score the AI model's ability to describe the image, summarize relevant medical knowledge, and provide its step-by-step reasoning.

The researchers found that the AI model and physicians scored highly in selecting the correct diagnosis. Interestingly, the AI model selected the correct diagnosis more often than physicians in closed-book settings, while physicians with open-book tools performed better than the AI model, especially when answering the questions ranked most difficult.

Importantly, based on physician evaluations, the AI model often made mistakes when describing the medical image and explaining its reasoning behind the diagnosis, even in cases where it made the correct final choice. In one example, the AI model was provided with a photo of a patient's arm with two lesions. A physician would easily recognize that both lesions were caused by the same condition. However, because the lesions were presented at different angles, causing the illusion of different colors and shapes, the AI model failed to recognize that both lesions could be related to the same diagnosis.

The researchers argue that these findings underscore the importance of evaluating multi-modal AI technology further before introducing it into the clinical setting.

"This technology has the potential to help clinicians augment their capabilities with data-driven insights that may lead to improved clinical decision-making," said NLM Senior Investigator and corresponding author of the study, Zhiyong Lu, Ph.D. "Understanding the risks and limitations of this technology is essential to harnessing its potential in medicine."

The study used an AI model known as GPT-4V (Generative Pre-trained Transformer 4 with Vision), which is a multimodal AI model that can process combinations of multiple types of data, including text and images. The researchers note that while this is a small study, it sheds light on multi-modal AI's potential to aid physicians' medical decision-making. More research is needed to understand how such models compare to physicians' ability to diagnose patients.

The study was co-authored by collaborators from NIHs National Eye Institute and the NIH Clinical Center; the University of Pittsburgh; UT Southwestern Medical Center, Dallas; New York University Grossman School of Medicine, New York City; Harvard Medical School and Massachusetts General Hospital, Boston; Case Western Reserve University School of Medicine, Cleveland; University of California San Diego, La Jolla; and the University of Arkansas, Little Rock.

The National Library of Medicine (NLM) is a leader in research in biomedical informatics and data science and the world's largest biomedical library. NLM conducts and supports research in methods for recording, storing, retrieving, preserving, and communicating health information. NLM creates resources and tools that are used billions of times each year by millions of people to access and analyze molecular biology, biotechnology, toxicology, environmental health, and health services information. Additional information is available at https://www.nlm.nih.gov.

About the National Institutes of Health (NIH): NIH, the nation's medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

NIH…Turning Discovery Into Health

Qiao Jin, et al. Hidden Flaws Behind Expert-Level Accuracy of Multimodal GPT-4 Vision in Medicine. npj Digital Medicine. DOI: 10.1038/s41746-024-01185-7 (2024).

###


Big Tech says AI is booming. Wall Street is starting to see a bubble. – The Washington Post

SAN FRANCISCO – A growing group of Wall Street analysts and tech investors is beginning to sound the alarm that the immense amount of money being poured into artificial intelligence by Big Tech companies, stock market investors and venture-capital firms could be leading to a financial bubble.

On Tuesday, analysts on Google's quarterly conference call peppered chief executive Sundar Pichai with questions about when the company's $12-billion-a-quarter investment in AI would begin paying off. And in the past few weeks, big Wall Street investment banks, including Goldman Sachs and Barclays, as well as VCs such as Sequoia Capital, have issued reports raising concerns about the sustainability of the AI gold rush, arguing that the technology might not be able to make the kind of money to justify the billions being invested into it. Stock prices for big AI names including Google, Microsoft and Nvidia are all up significantly this year.

Despite its expensive price tag, the technology is "nowhere near where it needs to be in order to be useful," Jim Covello, Goldman Sachs's most senior stock analyst and a 30-year veteran of covering tech companies, said in a recent report about AI. "Overbuilding things the world doesn't have use for, or is not ready for, typically ends badly."

Covello's comments are in sharp contrast to a different Goldman Sachs report from just over a year ago, in which some of the bank's economists said AI could automate 300 million jobs around the world and increase global economic output by 7 percent in the next 10 years, spurring a spate of news coverage about the disruptive potential of AI.

Barclays said Wall Street analysts are expecting Big Tech companies to spend around $60 billion a year on developing AI models by 2026, but reap only around $20 billion a year in revenue from AI by that point. That kind of investment would be enough to power 12,000 products of a similar size to OpenAI's ChatGPT, Barclays analysts wrote in a recent report.

OpenAI released ChatGPT in November 2022, kicking off a race in Silicon Valley to build new AI products and get people to use them. Big Tech companies are spending tens of billions of dollars on the technology. Retail investors have bid up the price of those companies and their suppliers, especially Nvidia, which makes the computer chips used to train AI models. Year to date, shares of Google parent Alphabet are up 25 percent, Microsoft is up 15 percent, and Nvidia shares are up 140 percent.

Venture capitalists have also poured billions more into thousands of AI start-ups. The AI boom has helped contribute to the $55.6 billion that venture investors put into U.S. start-ups in the second quarter of 2024, the highest amount in a single quarter in two years, according to venture capital data firm PitchBook.

Tech executives insist that AI will change whole swaths of modern life, in the same way the internet or mobile phones did. AI technology has indeed improved drastically and is already being used to translate documents, write emails and help programmers code. But concern over whether the tech industry will be able to recoup the billions of dollars it's investing in AI, anytime soon or ever, has risen among some firms that only last year were heralding the boom.

"We do expect lots of new services but probably not 12,000 of them," Barclays analysts wrote. "We sense that Wall Street is growing increasingly skeptical."

In April, Meta, Google and Nvidia all signaled their commitment to going all in on AI by telling investors during quarterly earnings calls that they would ramp up the amount of money they're spending on building data centers to train and run AI algorithms. Google reiterated Tuesday it would spend more than $12 billion a quarter on its AI build-out. Microsoft and Meta are due to report their own earnings next week and may give further indication about their AI road maps.

Pichai said Tuesday that it would take time for AI products to mature and become more useful. He acknowledged the high cost of AI but said even if the AI boom slows down, the data centers and computer chips the company was buying could be put to other uses.

"The risk of underinvesting is dramatically greater than the risk of overinvesting for us," Pichai said. "Not investing to be at the front here has much more significant downsides."

A spokesperson for Microsoft declined to comment. A spokesperson for Meta did not respond to a request for comment.

Vinod Khosla, who co-founded computer network systems company Sun Microsystems and is one of Silicon Valley's most influential venture-capital investors, compared AI to personal computers, the internet and mobile phones in terms of how much it would affect society.

"These are all fundamentally new platforms. In each of these, every new platform causes a massive explosion in applications," Khosla said. The rush into AI might cause a financial bubble where investors lose money, but that doesn't mean the underlying technology won't continue to grow and become more important, he said.

"There was a dot-com bubble, according to Goldman Sachs, because prices went up and prices went down. According to me, internet traffic didn't go down at all."

As AI changes the way people work, do business and interact with one another, many start-ups will fail, he said. But overall the industry will make money on AI. He predicts there will eventually be multiple trillion-dollar businesses in AI, such as humanoid robots, AI assistants and programs that can completely replicate the work of highly paid software engineers.

But so far, AI is not contributing to an increase in venture capital getting a return on those investments. The amount of money made in venture capital exits, which represent initial public offerings or acquisitions of tech start-ups, fell to $23.6 billion in the second quarter, down slightly from $25.4 billion the previous quarter, according to PitchBook.

The tech industry would need to generate around $600 billion in revenue a year to make up for all the money being invested in AI right now, yet it is nowhere close to that number, David Cahn, a partner at venture firm Sequoia Capital, wrote in a blog post last month.

"Speculative frenzies are part of technology, and so they are not something to be afraid of," Cahn said. "But we need to make sure not to believe in the delusion that has now spread from Silicon Valley to the rest of the country, and indeed the world. That delusion says that we're all going to get rich quick."

Microsoft's and Google's revenues are growing, especially in their cloud businesses, where they sell access to AI algorithms and the storage space to use them. Executives from the companies say AI is driving new interest in their products and will become a major moneymaker in the future. But some analysts point out that there have been very few hugely successful stand-alone products besides OpenAI's ChatGPT and Microsoft's coding assistant GitHub Copilot.

"Wall Street is growing increasingly skeptical given that ChatGPT and GitHub Copilot are the two breakout successes in consumer and enterprise thus far, 20 months in," the Barclays analysts wrote in their report.

The cost of developing and running AI programs will come down as other companies compete with Nvidia and the technology becomes more efficient, said Vineet Jain, CEO of Egnyte, an AI and data management company. For now, the cost of providing AI products is too expensive, and he doesn't expect to make any AI-specific revenue this year. But as costs go down and demand continues to rise, that will change, Jain said.

"The value proposition is absolutely there, but the expectation right now is still unrealistic," he said, referring to the frenzy to sell AI products to consumers and businesses.

Some start-ups have already come down from the heights of the early part of the AI boom. Inflection AI, a start-up founded by veterans of Google's famous DeepMind AI lab, raised $1.3 billion last year to build out its chatbot business. But in March, the company's founders left for jobs at Microsoft, taking some of their top employees with them to the tech giant. Other AI companies, like Stability AI, which was one of the first companies to build a widely popular AI image generator, have had to lay off workers. The industry is also facing lawsuits and regulatory challenges.

Bigger companies like Google and Microsoft will be able to keep spending money until demand for AI products increases, but smaller start-ups that have taken on a lot of venture capital might not survive the transition, Jain said.

"It's like a soufflé that keeps popping up and popping up; it has to come down a bit."


New AI Task Force Led By Michigan and Arizona Combats Deep Fakes and Election Misinformation in US – Good News Network


In January, during a Democratic primary, thousands of voters received a robocall that used artificial intelligence to impersonate President Biden discouraging them from voting.

The political consultant responsible is now facing millions of dollars in fines and jail time for 13 felony counts of voter suppression and 13 counts of impersonating a candidate, a misdemeanor.

To counter this new threat of AI deep fakes and misinformation in US elections, a new Artificial Intelligence Task Force is bringing together state and local elected officials to focus on ways to combat malicious AI-generated activity that threatens the democratic process.

Arizona Secretary of State Adrian Fontes doesn't speak German, but he created a deepfake that makes it nearly impossible to tell that it isn't actually Fontes speaking, all to demonstrate just how alarmingly lifelike and manipulative AI-generated content can be.

Fontes, along with Michigan Secretary of State Jocelyn Benson and Minnesota Secretary of State Steve Simon, is leading the fight to prepare election workers and voters in their states to be vigilant and savvy against AI threats.

They are part of a coalition of secretaries of state working with the task force, created by the NewDEAL Forum, to develop tools and best practices to combat AI disinformation this election season.

"In Michigan, we've enacted legislation to make it a crime for someone to knowingly distribute materially-deceptive deep fakes that are generated by AI when there is an intent behind it of harming the reputation of or the electoral prospects of a candidate," Secretary of State Benson told Democracy Docket, a digital news platform founded by attorney Marc Elias dedicated to voting rights and elections in the courts.

The new law, passed in November, makes that crime a felony.


"In addition to that, we require any political advertisements that are generated in whole or substantially with the use of AI to include a statement that the ad was generated by artificial intelligence. That disclaimer requirement helps equip citizens with the knowledge of how to be critical consumers."

Arizona and Michigan, both important swing states, have developed tabletop exercises to train election clerks to identify AI and to practice linking clerks with law enforcement and first responders, both for security and to rapidly respond to issues that may occur around voting on or before election day, and to be prepared to stop the negative impact of AI from spreading.

A NewDEAL Forum poll conducted in Arizona in April found that only 41% of respondents knew anything about AI and elections.

"Generative AI presents both tremendous opportunities and significant challenges," said New York State Assemblymember Alex Bores, Co-Chair of the NewDEAL Forum AI Task Force and one of the few state legislators with a computer science background. "Our goal is to craft policies to harness AI's potential to improve public services while proactively preparing for the threats and unforeseen challenges it poses to our democratic institutions."

In March, they published a report that outlines best practices for election officials, from secretaries of state to county election workers, to mitigate the negative impacts of AI in elections. The advice includes more short-term practices, like public information campaigns about the threats, and protocols for a rapid response when they do arise.


The document also suggests legislation that state politicians can pass to help protect democracy from AI threats.

According to Democracy Docket, at least 40 states are introducing legislation to regulate the use of AI, but only 18 have laws that specifically address election-related AI, and thankfully, Michigan is now one of them.




OpenAI announces SearchGPT, its AI-powered search engine – The Verge

OpenAI is announcing its much-anticipated entry into the search market, SearchGPT, an AI-powered search engine with real-time access to information across the internet.

The search engine starts with a large text box that asks the user "What are you looking for?" But rather than returning a plain list of links, SearchGPT tries to organize and make sense of them. In one example from OpenAI, the search engine summarizes its findings on music festivals and then presents short descriptions of the events followed by an attribution link.

In another example, it explains when to plant tomatoes before breaking down different varieties of the plant. After the results appear, you can ask follow-up questions or click the sidebar to open other relevant links. There's also a feature called "visual answers," but OpenAI didn't get back to The Verge before publication on exactly how this works.

SearchGPT is just a prototype for now. The service is powered by the GPT-4 family of models and will only be accessible to 10,000 test users at launch, OpenAI spokesperson Kayla Wood tells The Verge. Wood says that OpenAI is working with third-party partners and using direct content feeds to build its search results. The goal is to eventually integrate the search features directly into ChatGPT.

It's the start of what could become a meaningful threat to Google, which has rushed to bake AI features into its search engine, fearing that users will flock to competing products that offer the tools first. It also puts OpenAI in more direct competition with the startup Perplexity, which bills itself as an "AI answer engine." Perplexity has recently come under criticism for an AI summaries feature that publishers claimed was directly ripping off their work.

OpenAI seems to have taken note of the blowback and says it's taking a markedly different approach. In a blog post, the company emphasized that SearchGPT was developed in collaboration with various news partners, which include organizations like the owners of The Wall Street Journal, The Associated Press, and Vox Media, the parent company of The Verge. "News partners gave valuable feedback, and we continue to seek their input," Wood says.

Publishers will have "a way to manage how they appear in OpenAI search features," the company writes. They can opt out of having their content used to train OpenAI's models and still be surfaced in search.
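In practice, OpenAI documents separate crawlers for training (GPTBot) and for search (OAI-SearchBot), so a publisher could in principle express "no training, but still searchable" in robots.txt. This is a sketch based on those published crawler names, not OpenAI's official guidance:

```text
# Opt out of model training: block OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Remain discoverable in SearchGPT: allow OpenAI's search crawler
User-agent: OAI-SearchBot
Allow: /
```

Under the Robots Exclusion Protocol, each crawler follows the rule group matching its own user-agent, so the two directives operate independently.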


SearchGPT is "designed to help users connect with publishers by prominently citing and linking to them in searches," according to OpenAI's blog post. "Responses have clear, in-line, named attribution and links so users know where information is coming from and can quickly engage with even more results in a sidebar with source links."

Releasing its search engine as a prototype helps OpenAI in a few different ways. First, if SearchGPT's results are wildly incorrect, like when Google rolled out AI Overviews and told us to put glue on our pizza, it's easier to say: well, it's a prototype! There's also potential for getting attributions wrong, or maybe wholesale ripping off articles, as Perplexity was accused of doing.

This new product has been whispered about for months now, with The Information reporting about its development in February, then Bloomberg reporting more in May. We reported at the same time that OpenAI had been aggressively trying to poach Google employees for a search team. Some X users also noticed a new website OpenAI has been working on that hinted toward the move.

OpenAI has slowly been bringing ChatGPT more in touch with the real-time web. When GPT-3.5 was released, the AI model was already months out of date. Last September, OpenAI released a way for ChatGPT to browse the internet, called Browse with Bing, but it appears a lot more rudimentary than SearchGPT.

The rapid advancements by OpenAI have won ChatGPT millions of users, but the company's costs are adding up. The Information reported this week that OpenAI's AI training and inference costs could reach $7 billion this year, with the millions of users on the free version of ChatGPT only further driving up compute costs. SearchGPT will be free during its initial launch, and since the feature appears to have no ads right now, it's clear the company will have to figure out monetization soon.
