Archive for the ‘AI’ Category

Understanding AI’s Impact on Space Data with Planet’s Head of Product – Payload

Since the public release of ChatGPT just over a year ago, AI has driven technology investment, with some $27B deployed to companies leveraging the latest developments in machine learning.

So what? Space engineers are no strangers to AI writ large (who do you think is flying the Dragon spacecraft, or landing Falcon 9 boosters?), but the latest developments promise immediate impact for companies collecting sensor data in space.

Background: The innovation behind ChatGPT, an advance in brain-inspired learning models called transformer neural networks, is only as valuable as the data used to train it. Last year, Salesforce CEO Marc Benioff told investors that AI will only deliver customer value for companies that have the data. Deloitte, the global consulting firm, argues that generative AI will be transformative for firms that collect unique data sets with satellites, which include Maxar, Planet Labs, Spire, Capella, ICEYE and many others.

Planet Labs, a space data firm operating roughly 200 satellites in orbit, has long premised its business model on using machine learning to divine insights from its daily scans of the Earth. Payload spoke to Nate Gonzalez, Planet's head of product, to understand what the last year of AI hype means for Earth observation companies like his.

This transcript has been edited for length and clarity.

Have you noticed a change in the vibe of the tech community in the last year because of these advances in AI?

Yeah, which is funny, 'cause I think trying to pull apart AI from [machine learning] and [large language models] is kind of a veneer. The reality is everything is now powered by AI, or everything has a flavor of it, because that's where the dollars are coming from and it's a hard market.

We're in that cycle, and what I take a lot of comfort in is that our team knows it all really well. We've been doing it for a long time. We have experts in their fields of how to do these operations, and now there is a new understanding of where that field has gone that they can jump to very quickly, and we can pivot on that and create real product and value there. But it is funny to see how many folks are now AI experts on their LinkedIn.

What has changed for Planet, which has been training models on its data to do things like detect roads, buildings and ships since 2019?

It's not fundamentally different, but it's accelerated some of the core work. It's built on transformer networks, a different methodology of machine learning that is just vastly more powerful for the specific applications we're starting to see. That's where we've jumped two, three, four years into the future. When I came to the role [in Aug. 2021], I wrote a two-year strategy, looked at what was on the horizon, and we're already beyond that.

What's an example of how transformer models, the advance in neural networks underlying tools like ChatGPT, work for Planet?

If I looked at the roadmap 18 months ago, we were thinking about: when are we gonna be able to sit down and input text like, "I would really like to monitor this county in Ireland to tell me when farmers are harvesting their crops, and I would like an alert every time a threshold is tripped over X"?

It was a trickier problem: okay, we have all the data, now we can go run and train models, and we're gonna have to be very specific. [But] just leveraging the transformer network model, what's happening with generative AI, allows you to search very quickly. And if you think about the depth of our archive, we have more than 50 petabytes of storage.

[When] a customer tells us they want to go access it, we can then run roughly 10x transformations on any bit that sits there. So we get 500 petabytes of different data assets that can be created and leveraged. It's like there's just this mountain of data to sift through, and you start getting into your embeddings work. It allows you to find patterns much more quickly, beyond just what you're necessarily looking for. And it allows that chat interface to be almost just the front end to a deeper ML-driven search experience.
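The embeddings workflow Gonzalez describes, reducing archive imagery to vectors so that a query can be matched against them by similarity, can be sketched roughly as follows. This is an illustrative toy, not Planet's actual pipeline; the tile IDs and vectors are invented, and real embeddings would come from a trained vision model rather than being hand-written.

```python
from math import sqrt

# Toy archive: each image tile has been reduced to an embedding vector.
# In a real system these would come from a trained model; here they are
# hand-made 3-D vectors for illustration.
archive = {
    "tile_farmland_a": [0.9, 0.1, 0.0],
    "tile_farmland_b": [0.8, 0.2, 0.1],
    "tile_ocean":      [0.0, 0.1, 0.9],
    "tile_city":       [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_embedding, archive, k=2):
    """Return the k tile IDs whose embeddings are most similar to the query."""
    ranked = sorted(
        archive,
        key=lambda tile: cosine_similarity(query_embedding, archive[tile]),
        reverse=True,
    )
    return ranked[:k]

# A text query like "farmland being harvested" would itself be embedded
# by a model; here we fake the query vector directly.
query = [0.85, 0.15, 0.05]
print(search(query, archive))  # → ['tile_farmland_a', 'tile_farmland_b']
```

A chat front end over such a search only has to translate the user's request into a query embedding; the ranking itself is a nearest-neighbor lookup over the precomputed archive vectors.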

We've seen some impressive demonstrations; Planet partnered with Synthetaic to find the origins of 2023's Chinese spy balloon, and at an event last year Planet tracked down the bouncy castles covering Chinese missile silos based on a Google image search.

That's what we're uniquely well-positioned to do: you found the thing, let's help you analyze that, let's help you monitor it over time. Right now there's still a back and forth between Planet and a partner and the customer to say: you care about this specific thing, the partner will help find it, or identify it, or train some of those models very quickly. We will then find that in the data, serve your results, and allow you to monitor that going forward.

Is it as simple as plugging a generic model into Planet's data?

It's hard for Planet to think about training models: generic AI or ML models in geospatial are not that valuable, because a lot of the time they are very specific to the use case of the individual customer. That's where working with partners, and having infrastructure that allows partners to work and build with us, is really valuable, because we have customers with all kinds of different demands and use cases. If you care about harvesting in the northern hemisphere versus the southern hemisphere, the markers you're looking for in the data are slightly different, just given the growing seasons and the types of crops.

What about the dreaded issue of AI hallucinations, or errors in models more broadly?

Almost invariably there's a human in the loop in the training phase. Even in the balloon instance it's: how are we gonna potentially find a balloon? Draw a rough picture of it, see if we can find the thing, see how many instances we find, and then have a couple of people validate: yep, that's actually what we are looking for.

That's also why the answer is not necessarily to go hire an army of people, because you still have to be mindful of how much time we want to spend training on something specific for a client versus a partner.

Planet's data set seems obviously valuable for training machine learning models. Is it tempting to license it to large AI companies?

We have a unique set of data that does not exist elsewhere, so we want to be prudent in how we use it. If you fully open it up and somebody trains a model where they no longer need your observations, or they can take some other observation and it's good enough, you don't want to disintermediate yourself.

We are partnering liberally because we want to get the value out of the data. The most important thing for the customers we're working with, who are driving decisions around policies that we care about, is: how do we get them value out of the data quickly? I will partner all day long to go do that, to make sure that we're faster in getting answers for customers.

That's different than saying, hey, let's just monetize the fact that we have this data spigot and turn it on for anybody. Because the reality is that we are running an enterprise sales business where we sell a SaaS product to large civil governments and agricultural groups and others that are getting value from that.


Amazon Stock Jumps On Earnings Beat, Cloud AI Product Growth: Key Price Levels to Watch – Investopedia


Amazon (AMZN) shares gained more than 7% in premarket trading Friday after the e-commerce giant late Thursday released a blowout quarterly earnings report, fueled by growth in its retail business and increasing demand for the cloud division's artificial intelligence products.

The one-time online bookseller attributed a 14% year-over-year (YOY) jump in fourth-quarter revenue to a bumper holiday shopping season and the company's October Prime Day event. "This Q4 was a record-breaking holiday shopping season and closed out a robust 2023 for Amazon," CEO Andy Jassy said in a statement accompanying the quarterly results.

During the earnings call, Jassy said the company will take a cautious approach when considering new opportunities but plans to continue investing in areas that resonate with customers while keeping an eye on efficiency. "We're going to continue to invest in new things and new areas and things that are resonating with customers. Where we can find efficiencies and do more with less, we're going to do that as well," he explained.

On the AI front, Jassy said that while the company's generative AI services are a relatively small business, they have the potential to generate tens of billions of dollars over the next few years. Furthermore, CFO Brian Olsavsky told analysts that interest in Amazon Web Services (AWS) generative AI products, such as Amazon Q, an AI chatbot for businesses, had accelerated during the quarter. In September last year, Amazon said it plans to invest up to $4 billion in the startup chatbot-maker Anthropic to take on cloud rivals in the AI arms race.

The AMZN share price has coiled within a rising wedge pattern on declining volume over the past five months, with an earnings-driven breakout likely in today's trading session. If the stock continues to trend higher, it's worth keeping an eye on the $188 level, an area on the chart where the price may encounter overhead resistance from a horizontal trendline connecting the July and November 2021 swing highs. However, a volume-backed breakout to a new all-time high (ATH), coinciding with the 50-day moving average crossing back above the 200-day moving average, could mark the beginning of another leg higher for the e-commerce giant's stock.
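The crossover signal described above (the 50-day average moving back above the 200-day, often called a "golden cross") is purely mechanical and can be computed from a series of daily closes. The sketch below uses short windows and invented prices for brevity; it illustrates the signal, and is not trading advice or Investopedia's methodology.

```python
def moving_average(prices, window):
    """Simple moving average over the trailing `window` observations."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(prices))
    ]

def crossed_above(short_ma, long_ma):
    """True if the short average was at or below the long one yesterday
    and above it today (a 'golden cross')."""
    return short_ma[-2] <= long_ma[-2] and short_ma[-1] > long_ma[-1]

# Invented daily closes: flat, then a jump that lifts the short average first.
prices = [100.0] * 10 + [102.0]

short = moving_average(prices, 3)    # stand-in for the 50-day average
long_ = moving_average(prices, 10)   # stand-in for the 200-day average

# Align the two series on their common (most recent) dates.
n = min(len(short), len(long_))
print(crossed_above(short[-n:], long_[-n:]))  # → True
```

The short window reacts to the rally first, which is exactly why chartists read the crossover as momentum confirmation; with real data the 50/200-day windows simply make the same comparison over longer horizons.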

Amazon shares were up 7.1% at $170.56 about 45 minutes before the opening bell Friday.


As of the date this article was written, the author does not own any of the above securities.


Nvidia Stock Had an Amazing Week — Here is What Artificial Intelligence (AI) Semiconductor Investors Should Know – The Motley Fool

In today's video, I discuss the recent updates affecting Nvidia (NVDA).

*Stock prices used were the market prices of Feb. 2, 2024. The video was published on Feb. 2, 2024.



Fact check: AI-generated images of children in Gaza – DW (English), 02/02/2024

For days, certain pictures of small children lying huddled together on muddy ground or in front of tents have been shared on social media platforms such as TikTok, Instagram and X, formerly known as Twitter. They are often accompanied by a Palestinian flag or comments suggesting the children are located in the Gaza Strip.

That people in Gaza and children in particular are suffering in dire conditions without sufficient access to food, clean water and medical care has been well documented by the United Nations, human rights organizations, international media and the people themselves.

DW contacted several aid organizations with staff in the besieged territory and heard about living conditions for displaced people there. Matt Sugrue, Save the Children's director of program operations in Rafah, said children and families were living in makeshift shelters or struggling to find places to spend the night, and that there was a lack of toilets and clean water.

The UN has estimated that 85% of Gaza's population of 2.2 million people have been displaced by the Israeli military campaign against Hamas, which has been classified as a terrorist organization by Germany and the European Union, along with the United States and many other countries. Many of the displaced people currently live in emergency shelters.

Amid this suffering, unknown parties have chosen to create and circulate fake images about the situation on the ground using artificial intelligence. DW Fact Check examined the following three images and concluded that they had been created with the use of AI.

A picture of two boys wearing identical pajamas and huddled together under a turquoise blanket in a blue tent has been seen millions of times. They lie in mud, surrounded by brown puddles of water. But this image was generated by AI, and this is not clearly indicated in the post seen by DW, or in many similar posts.

DW Fact Check circled the parts where there is evidence that AI was used: The two boys each have a foot with only four toes, which is a characteristic AI mistake. The right foot of the boy on the left also appears quite large.

Furthermore, their interlocked fingers look too uniform, and their wrists are not bent enough and are at the wrong angle. There is also something wrong with the back of the head and neck of the boy on the left: the body parts merge with the canvas of the tent and are pointed toward the sternum.

The lighting in the photo also seems staged. Considering that the lamp lighting up the scene is presumably hanging from the ceiling, it provides a very even light, as seen in the reflections. Photo editing software can help achieve such effects, but the original shot must have good lighting conditions, which would require the use of complex equipment.

The same applies to this picture, which has appeared on X, Instagram, TikTok and other platforms. The reflections on the bottom of the bottle in the bottom-right corner of the picture seem particularly unnatural. Generally, however, the typical AI mistakes often found on the body's extremities are more subtle here.

On close inspection, the second toe of the lower foot of the girl on the right seems very large. And the bottoms of both girls' left feet are unusually straight, as if they were standing on the ground or were extremely flat-footed. Moreover, the girls' skin seems flawless, as is often the case in AI-generated images.

In the third picture that DW examined, the light seems quite natural, and the skin of the girls appears realistic. In this photo, the girls don't seem to resemble each other as much as the children in the other two pictures.

But obvious errors highlight the use of AI: their bodies seem to be fused together, and the girl in front appears to have no legs. This could be the case in reality, particularly after months of bombing by the Israeli military, but the patterns of the fabrics are also blurred in the encircled area. DW has concluded that this could also be an AI-generated mistake.

It's highly questionable to use AI imagery to illustrate real events, such as those happening in the Israel-Hamas war, especially if the pictures aren't labeled as such. Such pictures have appeared not only on social media platforms, but also on certain news sites like The Palestinian Information Center and Nordhessen-Journal, a regional German news outlet.

AI images don't document objective facts; they are computer-generated images created according to parameters set by a person. DW categorizes such images, which are published and disseminated without being labeled as AI-generated, as fake.

This article was originally written in German.



Amazon Introduces Rufus, an AI Shopping Tool, and Reports Earnings – The New York Times

Amazon entered the consumer chatbot fray on Thursday, announcing a new artificial intelligence personal shopping assistant as the company races to catch up with other tech giants.

Customers can ask the tool, Rufus, product questions directly in the search bar of the company's mobile app, Amazon said in a blog post. The A.I. will then provide answers in a conversational tone. The examples provided in the announcement included comparisons of different kinds of coffee makers, gift recommendations and a follow-up question about the durability of running shoes.

Rufus will be available starting on Thursday to a small subset of customers, according to the post, and it will be rolled out to additional customers in the coming weeks. Amazon declined to provide more details about how many people will be part of the tool's initial release.

Amazon allows its employees to bring their dogs to work, and a dog named Rufus was one of the first to roam its offices in the companys early days.

Amazon has been racing to shake off the perception that it is behind on the wave of A.I. tools unleashed more than a year ago, when the start-up OpenAI released its ChatGPT chatbot. If customers find Rufus helpful and popular, Amazon could shake up the business of searching for products and control even more of the experience of shopping online.

"Rufus lets customers discover items in a very different way than they have been able to on e-commerce websites," Andy Jassy, the company's chief executive, said on a call with investors. "It's seamlessly integrated in the Amazon experience that customers are used to and love, to be able to take action," he said.

