Archive for the ‘Artificial General Intelligence’ Category

Companies are losing faith in AI, and AI is losing money – Android Headlines

We just can't stop hearing about AI technology nowadays. That's because it's supposed to be the next step in human achievement. Also, because companies just don't shut up about it! For all of its potential pitfalls and challenges, the tech industry seems to be confident in AI tech. However, that's just how it seems to the public. Behind the scenes, it appears that companies are losing faith in AI.

It's behind closed doors where the real story unfolds. We've seen Sundar Pichai, Sam Altman, and Satya Nadella, among countless others, on stage talking about how they have immense faith in their AI tech. That's all fine and dandy, but do you think they're going to dedicate a keynote to the issues plaguing their AI tech? Of course not! It's their job to make us think that everything is A-OK.

But the thing is that things aren't always A-OK in the world of AI tech. There's a ton of doubt and tension throughout the tech industry regarding AI, and we only know about a fraction of it. We only hear what slips through the cracks: a testimony from an employee at a tech firm here, an exclusive leak there.

The fact of the matter is that the companies propping up this technology (the ones injecting billions of dollars into AI companies) are starting to shy away. They're not as likely to invest so much money in it. Sure, you can't go online without seeing an ad for some new AI service. You can't go on social media without seeing some new AI-generated video that makes you fear for the film industry. But the people making that possible might be stepping back a bit.

It's money that makes the world go around, and it's what makes your chatbot so smart. In case you don't know, AI is an extremely expensive technology to nurture. It costs money to train models, run data centers, secure GPUs, and so on. If you're looking to launch an AI start-up, you'll need some major investors.

Companies like Microsoft, Google, and Amazon, among many others, have been investing billions in AI start-ups to make the dream of AGI (artificial general intelligence) materialize. So, why are these investments slowing down?

The 2024 report from Stanford University's Institute for Human-Centered AI revealed something a bit surprising: investments in AI have been dropping year-over-year. According to the report (via Gizmodo), the peak actually came a year before the big AI boom, with AI investments of about $337 billion in 2021. That fell by more than $100 billion to $234 billion in 2022. This was the year of the AI boom, so you'd expect the numbers to soar the following year. However, that's not the case. In 2023, investments dropped by around another $40 billion.

Even with the potential of generative AI, companies still seem to be wary of the technology. AI has infected just about every tech and creative industry on the planet, so there's a ton of money to be made, right?

"The count of billion-dollar investments has slowed and is all but over," Gartner analyst John-David Lovelock told TechCrunch earlier this year. Companies are still investing in AI start-ups, but the age of $13 billion investments like the one we saw between Microsoft and OpenAI might be gone. Why?

Well, why did these companies invest in AI in the first place? They're pouring money into the technology because it has the potential to be a massive moneymaker. It has POTENTIAL. It shows all of the signs, and companies are hopeful. However, the fact of the matter is that no one really knows what's going to happen with AI technology. We're still in the early stages of generative AI development, even though it's been in production for years. AI employees, companies, and investors are all dreaming of a world where AI is spitting out money like a broken slot machine. Well, guess what: that's a dream.

AI is a gigantic money void. Companies invest a ton of money into it in the hopes that it will turn a profit in the future. However, it seems that the journey to a profit is taking longer than expected. If you've invested $5 billion in a company and it's still not turning a profit, then you're less likely to invest that much again.

Companies are starting to realize that AI isn't going to start making money anytime soon. Several of the AI companies offer their services via monthly subscriptions. That's a model that needs millions, if not hundreds of millions, of customers to see some sort of return, depending on how much money you've invested in a company. Disney+, with more than 200 million users, still struggled to make a profit, and it might not have turned one yet.

Investors don't know when, or if, AI technology will be a cash cow tomorrow; what they know is that they're burning a ton of money today. Companies are losing both faith and money over AI.

There are reasons why you shouldn't trust everything that AI produces. There are people who use AI to spread misinformation, but AI can spread it on its own sometimes. The thing is that AI hallucinations are still a pretty big issue in the AI space, and that's something that companies are looking at. This is another reason why companies are losing faith in AI.

AI hallucinations are when an AI model essentially makes up information, producing responses with no rhyme or reason. It's still one of the main problems holding AI technology back. General users are losing faith in the technology because of this, and major companies are also slowing down their development because of it.

According to a recent study from Lucidworks (via Reuters), manufacturers are starting to get pretty wary of AI technology because of its accuracy issues. Earlier this year, the company surveyed 2,500 leaders who have a say in AI. About 58% of those leaders planned to increase spending on AI. That's a massive drop from 93% last year. Back in 2023, the world was still getting a feel for what AI had to offer, and companies were still trying to get in as early as possible.

Now, companies are starting to see the true cost of AI. Not only that, but they're starting to see just how badly AI can mess up. 44% of the manufacturing respondents expressed some sort of concern over AI accuracy.

So, these companies are holding onto their dollar bills just a little bit tighter.

It's tough to say what this means for the AI industry as a whole. Companies like Google, Microsoft, and OpenAI are going to continue dumping gallons of green into their AI machines. OpenAI has probably the most popular AI tool on the market, Google had been an AI company for years before ChatGPT, and Microsoft is still going crazy over AI. However, it seems that the rest of the industry is starting to lose some of the hype for AI.

At the end of the day, it all comes down to the almighty dollar. It depends on how much money companies are still willing to spend on AI technology.

Maybe the money that companies were investing will follow the same path as Meta's Threads user base. Remember when Threads was new? Its user base shot up to over 100 million within a week. Then, as people started to learn more about the app and what it was missing, its user base dropped. After Meta made improvements and added features, people started to rejoin.

Well, this might be what we see with AI spending. During the initial period when ChatGPT was wowing the world, everyone jumped on board and wrote giant checks to fund this revolutionary new technology. However, after learning a bit more about the associated costs and the AI inaccuracies, they're backing off. As AI technology gets better, who knows if we'll see investments pick up again?

Right now, it's anyone's guess. Companies are losing faith in AI, and that doesn't bode well for it. For all we know, this could be the start of the slow heat death of AI technology.

See the rest here:

Companies are losing faith in AI, and AI is losing money - Android Headlines

AGI isn’t here (yet): How to make informed, strategic decisions in the meantime – VentureBeat


Ever since the launch of ChatGPT in November 2022, the ubiquity of words like inference, reasoning and training data is indicative of how much AI has taken over our consciousness. These words, previously only heard in the halls of computer science labs or in big tech company conference rooms, are now overheard at bars and on the subway.

There has been a lot written (and even more that will be written) on how to make AI agents and copilots better decision makers. Yet we sometimes forget that, at least in the near term, AI will augment human decision-making rather than fully replace it. A nice example is the enterprise data corner of the AI world, with players (as of this article's publication) ranging from ChatGPT to Glean to Perplexity. It's not hard to conjure up a scenario of a product marketing manager asking her text-to-SQL AI tool, "What customer segments have given us the lowest NPS rating?", getting the answer she needs, maybe asking a few follow-up questions like "What if you segment it by geo?", then using that insight to tailor her promotions strategy planning.
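
To make the scenario concrete, here is a minimal sketch of the kind of query such a text-to-SQL tool might generate behind the scenes. The table and column names (nps_responses, segment, score) are hypothetical and exist only for illustration; no real product's schema is implied.

import sqlite3

# Hypothetical schema standing in for the company's data warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE nps_responses (customer_id INTEGER, segment TEXT, score INTEGER);
    INSERT INTO nps_responses VALUES
        (1, 'enterprise', 9), (2, 'enterprise', 8),
        (3, 'smb', 4), (4, 'smb', 6), (5, 'self-serve', 3);
""")

# What the assistant might produce for "What customer segments have given us
# the lowest NPS rating?": average score per segment, lowest first.
generated_sql = """
    SELECT segment, AVG(score) AS avg_nps
    FROM nps_responses
    GROUP BY segment
    ORDER BY avg_nps ASC;
"""

for segment, avg_nps in conn.execute(generated_sql):
    print(f"{segment}: {avg_nps:.1f}")

A follow-up like "What if you segment it by geo?" would simply add a (hypothetical) region column to the GROUP BY.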

This is AI augmenting the human.

Looking even further out, there likely will come a world where a CEO can say, "Design a promotions strategy for me given the existing data, industry-wide best practices on the matter and what we learned from the last launch," and the AI will produce one comparable to the work of a good human product marketing manager. There may even come a world where the AI is self-directed, decides that a promotions strategy would be a good idea and starts to work on it autonomously to share with the CEO; that is, it acts as an autonomous CMO.


Overall, it's safe to say that until artificial general intelligence (AGI) is here, humans will likely be in the loop when it comes to making decisions of significance. While everyone is opining on what AI will change about our professional lives, I wanted to return to what it won't change (anytime soon): good human decision making. Imagine your business intelligence team and its bevy of AI agents putting together a piece of analysis for you on a new promotions strategy. How do you leverage that data to make the best possible decision? Here are a few time- (and lab-) tested ideas that I live by:

Before seeing the data:

While looking at the data:

While making the decision:

At this point, if you're thinking "this sounds like a lot of extra work," you will find that this approach very quickly becomes second nature to your executive team, and any additional time it incurs is high ROI: it ensures all the expertise at your organization is expressed, and it sets guardrails so the decision's downside is limited and you learn from it whether it goes well or poorly.

As long as there are humans in the loop, working with data and analyses generated by human and AI agents will remain a critically valuable skill set, in particular navigating the minefield of cognitive biases while working with data.

Sid Rajgarhia is on the investment team at First Round Capital and has spent the last decade working on data-driven decision making at software companies.


Continue reading here:

AGI isn't here (yet): How to make informed, strategic decisions in the meantime - VentureBeat

Apple’s AI Privacy Measures, Elon Musk’s Robot Prediction, And More: This Week In Artificial Intelligence – Alphabet … – Benzinga

The week was buzzing with tech news as Apple Inc. AAPL unveiled new AI privacy measures and Tesla Inc. TSLA CEO Elon Musk made predictions about the future of humanoid robots. Here's a quick round-up of the top stories.

Apple Eases AI Privacy Concerns: Apple's Craig Federighi announced that users will have the option to opt out of the integration of OpenAI's ChatGPT with Siri, a move aimed at addressing growing AI privacy concerns. The new AI features, part of Apple's in-house AI suite, Apple Intelligence, will be available on Apple devices this fall.

Read the full article here.

Musk Predicts Robot Population Boom: Tesla CEO Elon Musk believes that the number of humanoid robots will one day exceed the human population. Musk shared his thoughts during a game streaming session on X, formerly Twitter.

Read the full article here.


OpenAI Accused of Hindering AI Progress: A software engineer at Alphabet Inc. GOOG GOOGL claimed that OpenAI, the maker of ChatGPT, has set back progress toward Artificial General Intelligence (AGI) by 5 to 10 years. The engineer expressed his concerns during a conversation with podcaster Dwarkesh Patel.

Read the full article here.

Apple Integrates ChatGPT Features: Apple announced a partnership with Microsoft Corp.-backed OpenAI to integrate ChatGPT into iOS 18, iPadOS 18, and macOS 15 Sequoia. The AI feature will be available free of charge and will not log user data.

Read the full article here.

Steve Wozniak Cautions About AI Demos: Apple co-founder Steve Wozniak urged users to look beyond the impressive AI demos and test the new features themselves. In an interview with Bloomberg, Wozniak expressed a mix of excitement and cautious optimism about the new AI features, dubbed "Apple Intelligence."

Read the full article here.

Read Next: Elon Musk Reacts As Apple Offers Opt-Out Option For OpenAI's ChatGPT Integration, Easing Privacy Concerns

Photo courtesy: Shutterstock

This story was generated using Benzinga Neuro and edited by Rounak Jain

See the rest here:

Apple's AI Privacy Measures, Elon Musk's Robot Prediction, And More: This Week In Artificial Intelligence - Alphabet ... - Benzinga

AGI and jumping to the New Inference Market S-Curve – CMSWire


Artificial general intelligence (AGI) has been the Holy Grail of AI for many decades. AGI is an application of strong AI, defined as AI that can perform as well as or better than humans on a wide range of cognitive tasks. There is much debate over when artificial general intelligence may be fully realized, especially with the current evolution of large language models (LLMs). For many people, AGI is something out of a science fiction movie that remains mostly theoretical. Others believe we have already reached AGI with the latest releases of GPT-4o and Gemini Advanced.

Historically, we have used the Turing test as the measurement to determine whether a system has reached artificial general intelligence. Created by Alan Turing in 1950 and originally called the Imitation Game, the test involves three participants: an interrogator who poses questions, the machine or system being evaluated, and a human who answers the same questions alongside the machine for comparison.

The criticism of the test is that it doesn't measure intelligence or any other human qualities. The foundational assumption that an interrogator can determine if a machine is thinking by comparing its behavior with human behavior has a lot of subjectivity and is not necessarily deterministic.

There is also a lack of consensus on whether modern LLMs have actually achieved AGI. In June 2022, a Google engineer claimed LaMDA had passed the test, but critics quickly dismissed this as an advance in fooling people into believing a system has intelligence rather than an advance toward AGI. The reality is that the test has outlived its usefulness.

Ray Kurzweil, a technology futurist, has spent much of his career making predictions on when we will reach AGI. In his recent talk at SXSW, he said he is sticking to his original prediction in 1999 that AI will match/surpass human intelligence by 2029.

But how will we know?

Related Article: The Quest for Achieving Artificial General Intelligence

Horizontal AI products like ChatGPT, Gemini, Midjourney and DALL-E have given millions of users exposure to the power of AI. To many, these AI platforms seem very smart, as they can generate answers, compose songs and write code in seconds.

However, there is a big difference between AI and AGI. Current AI platforms are essentially highly efficient prediction machines: they have been trained on a large corpus of data. But that alone does not enable creativity, logical reasoning or sensory perception.
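
A toy illustration of what "prediction machine" means: the sketch below builds a bigram model from a few made-up sentences and predicts the most likely next word. Real LLMs use neural networks trained on vastly larger corpora, but the core task, predicting the next token from prior context, is the same.

from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the "large corpus of data" LLMs train on.
corpus = (
    "the model predicts the next word . "
    "the model learns from data . "
    "the data trains the model ."
).split()

# Count bigram transitions: which word tends to follow which.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` during training."""
    if word not in transitions:
        return None
    return transitions[word].most_common(1)[0][0]

print(predict_next("the"))  # 'model' -- the most frequent word after 'the' in this corpus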

As we move closer to artificial general intelligence, we need an accepted definition of AGI and a framework that truly measures the critical aspects of intelligence, such as reasoning, creativity and sentience.

One approach is to consider artificial general intelligence as an end-to-end intelligence supply chain encompassing all the capabilities needed to achieve AGI.

We can group the critical components needed for AGI into four major categories as follows:

Today's AI systems are mostly excelling at 1 and 2. For artificial general intelligence to be attained, we will need systems that can accomplish 3 and 4.

Achieving AGI will require further advances in algorithms, computing and data beyond what powers the models of today. Mimicking complex human behavior such as creativity, perception, learning and memory will require embodied cognition, or learning from a multitude of senses and inputs. We also need systems and infrastructure that go beyond training.

Human intelligence is heavily based on logical reasoning. We understand cause and effect, deduce information from existing knowledge and make inferences. Reasoning algorithms let a system traverse knowledge representations, drawing conclusions and finding solutions. This goes beyond basic pattern matching, enabling a more humanlike problem-solving ability. Replicating similar processes is fundamental for an AI to achieve AGI.
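
As a rough illustration of the difference between pattern matching and reasoning over a knowledge representation, the sketch below forward-chains over a handful of made-up facts and one transitivity rule, deriving a conclusion that was never stated directly. It is a toy, not a claim about how any production system works.

# Toy forward-chaining reasoner: apply a rule repeatedly until no new facts appear.
facts = {("socrates", "is_a", "human"), ("human", "subclass_of", "mammal")}

def forward_chain(facts):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        # Rule: if X is_a Y and Y subclass_of Z, then X is_a Z.
        for (x, r1, y) in list(derived):
            for (y2, r2, z) in list(derived):
                if r1 == "is_a" and r2 == "subclass_of" and y == y2:
                    new_fact = (x, "is_a", z)
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(("socrates", "is_a", "mammal") in forward_chain(facts))  # True: inferred, never stated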

The timing of artificial general intelligence remains uncertain, but when it arrives, it's going to impact our lives, businesses and society significantly.

The real power of AI technology is still ahead of us.

Related Article: Can We Fix Artificial Intelligence's Serious PR Problem?

One of the prerequisites for achieving artificial general intelligence is the capability for AI inference, which is when a trained AI model produces predictions or conclusions from new data. Much of the computing power today is focused on model training, the stage when data is fed into a learning algorithm to produce a model. Training is what enables AI models to make accurate predictions when prompted.
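
A minimal sketch of the distinction, using a toy linear model and made-up data: training iteratively adjusts weights against known answers (the expensive part), while inference simply applies the frozen weights to new inputs.

import numpy as np

rng = np.random.default_rng(0)

# Training: fit y = 2x + 1 from noisy examples by gradient descent.
x = rng.uniform(-1, 1, size=200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, size=200)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):                      # many passes over the data
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)  # gradient of mean squared error
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Inference: the weights are now fixed; answering a query is one cheap pass.
def infer(new_x):
    return w * new_x + b

print(round(infer(0.5), 2))  # close to 2 * 0.5 + 1 = 2.0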

AI can be divided into two major market segments: training and inference. Today, many companies are focused on creating high-performance hardware for data center providers to conduct massive AI model training. For instance, Nvidia controls more than 95% of the specialized AI chip market. It sells to major tech companies like Amazon, Meta, and Microsoft, which are believed to make up roughly 40% of its revenue.

However, the market will soon shift its focus to building inferencing infrastructure for generative AI applications. The inferencing market will quickly grow as Fortune 500 companies that are currently testing generative AI applications move into production deployment. New applications will also emerge that will require scale to support workloads across centralized cloud, edge computing and IoT (Internet of Things) devices.

Model training is a very computationally intensive process that takes a lot of time to complete. Inference is usually faster and much less resource-intensive. Inferencing boils down to running AI applications or workloads after models have been trained.

Inference is going to be 100 times bigger than training. Nvidia is really good at training but is not ideal for inference.

A pivot from training to inference may not be easy.

Nvidia was founded in 1993, long before the AI craze we see today. It was not initially focused on supplying AI hardware and software solutions and instead focused on creating graphics cards. As the PC market expanded and new applications such as Windows and gaming became prevalent, dedicated hardware became necessary to handle the complicated tasks of 3D graphics processing. The opportunity to create high-performance processing units to support intensive computational operations in the PC and gaming market does not come along very often.

It turns out Nvidia struck gold with its GPU architectures. GPUs are well suited for AI for three primary reasons: they employ parallel processing; they scale up through high-performance interconnects to create supercomputing capabilities; and the software for managing and tuning the AI stack is broad and deep.

The idea of having separate graphics hardware existed before Nvidia came onto the scene. For instance, the first Atari video game consoles, shipped in the 1970s, had graphics chips inside. And IBM had released the Professional Graphics Controller (PGA), which used an onboard Intel 8088 microprocessor to do video tasks. Silicon Graphics Inc. (SGI) also emerged as a dominant graphics player in the market in the late 1980s.

Things changed rapidly in 1993 with the release of a 3D game called Doom by game developer Id Software. Doom was the first mature, action-packed first-person shooter game on the market. Quake quickly followed and offered brand-new technical breakthroughs such as full real-time 3D rendering and online multiplayer. This paved the way for the dedicated graphics card market.

Nvidia didn't immediately rise to fame. Its first product, the NV1, came in May 1995 and was a multimedia PCI card with graphics, sound, and gamepad support. However, the product flopped, as the NV1 was not compatible with the leading graphics APIs of the time (OpenGL, 3dfx's Glide, etc.). It wasn't until the Riva 128, launched in 1997, that the company saw success. At the time of that launch, Nvidia had less than six weeks of cash left in the bank!

By the early 2000s, the graphics card market had drastically consolidated from over 30 players to just three: Nvidia, ATI, and Intel at the low end. Nvidia coined the phrase graphics processing unit, or GPU, and set its sights on the broader compute market.

The opportunity to create new businesses in adjacent markets, outside your core business, is not something you see frequently. A shining example is Amazon, an online commerce company that created a cloud computing platform, Amazon Web Services (AWS), from the technology components it built to run a massively scalable commerce platform. Uber, a ride-sharing company, leveraged its backend infrastructure to launch a food delivery service called Uber Eats.

In a similar fashion, Nvidia realized that the graphics processing units (GPUs) powering many of the graphics boards in PCs and gaming consoles had another use: accelerating mathematical operations. By investing in making GPUs programmable, the company opened up their parallel processing capabilities to a wider variety of applications. This made high-performance computing more readily accessible and able to run on commodity hardware.

Its first venture into the high-performance computing (HPC) space came with the CUDA parallel computing architecture, which enabled GPUs to be used for general-purpose computing tasks. This capability helped spark early breakthroughs in modern AI. AlexNet, a convolutional neural network (CNN) used to classify images, was unveiled in 2012; it was trained using just two of Nvidia's programmable GPUs.

The big discovery was that GPUs could massively accelerate neural network processing, or model training. As this began to spread among computer and data scientists, demand for Nvidia's GPUs soared. In some ways, the AI revolution found Nvidia.

But that was just the beginning. Nvidia's relentless pursuit of innovation led to a series of breakthrough architectures, starting with the Turing architecture in 2018, which fused real-time ray tracing, AI, simulation, and rasterization to fundamentally change the way graphics processing worked. Turing featured new tensor cores, processors that accelerate deep learning training and inference, providing up to 500 trillion tensor operations per second. Tensor cores are essential building blocks of the Nvidia solution that incorporates hardware, networking, software, libraries and optimized AI models. Tensor cores deliver significantly faster AI training times compared to traditional CUDA cores alone, which are primarily designed for general-purpose processing tasks and excel at parallel computing.
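
Tensor cores are engaged when work is expressed in the lower-precision formats they accelerate. As a rough illustration, the sketch below runs one training step with PyTorch's automatic mixed precision; it assumes a CUDA-capable GPU with tensor cores and is not Nvidia's own example code.

import torch

# On tensor-core hardware, the float16 matrix multiplies inside the autocast
# region are routed to tensor cores rather than plain CUDA cores.
model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()  # keeps float16 gradients numerically stable

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(loss.item())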

Nvidia's rapid rate of innovation continued with subsequent architectures: Ampere, Ada Lovelace, Hopper and now Blackwell. The H100 Tensor Core GPU was the first based on the Hopper architecture, with over 80 billion transistors, a built-in transformer engine, advanced NVLink inter-GPU communications and a second-generation multi-instance GPU (MIG).

The growth of computational power used to be governed by Moore's Law, which predicted a doubling roughly every two years. Nvidia's new Blackwell GPU has shattered expectations, increasing computational speed by over a thousand times in just eight years.

What's good for training may not be good for inference.

There are still a limited number of AI applications in production today. Outside of a few large tech companies, very few corporations have advanced to running large-scale AI models in production. So most of the hardware focus has been on optimizing the hardware platform for training.

As the number of AI applications increases, the amount of compute a company uses to run models in response to end-user requests will increase significantly. This will eventually exceed what it spends on training today. The focus will then shift to optimizing hardware to reduce inference costs.

GPUs are well suited for the computational complexity of training. Training workloads can be split across a few GPUs that are tightly interconnected, which makes distributing the work across fleets of low-end CPUs to reduce latency unrealistic.

However, this is not true for inference. The model weights are fixed and can easily be duplicated across many machines, so no communication is needed. This makes an army of commodity PCs and CPUs very appealing for applications relying on inference.
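
A simplified sketch of that replication pattern: the same frozen weights are copied into several independent worker processes, each of which can serve requests without talking to the others. The "model" here is just a small matrix standing in for real weights.

import numpy as np
from multiprocessing import Pool

# Frozen "model weights"; in practice these would be loaded from disk on each machine.
WEIGHTS = np.random.default_rng(0).normal(size=(8, 8))

def serve_request(request_vector):
    # Each worker holds its own copy of WEIGHTS, so answering a request
    # needs no cross-worker communication.
    return float(WEIGHTS @ request_vector @ np.ones(8))

if __name__ == "__main__":
    requests = [np.random.default_rng(i).normal(size=8) for i in range(16)]
    with Pool(processes=4) as pool:  # an "army" of commodity CPU workers
        results = pool.map(serve_request, requests)
    print(len(results), "requests served independently")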

New companies like Groq are emerging that have the potential to be serious competitors in the AI chip market. This could pose a threat to Nvidia's dominance in the AI world.

Today, all the AI giants heavily rely on Nvidia to supply them with computing cards, mostly for AI training, with smaller demands for inference. The latest product, the H100, is still in high demand, remains costly (about $35,000 each) and only achieves inference speeds of 30-40 tokens per second. Compared to inference, training requires more stringent card specifications, especially in memory capacity, which is approaching 300 GB per card.

Groq's approach to neural network acceleration is radically different from Nvidia's. Its architecture opts for a single large processor with hundreds of functional units, which significantly reduces instruction decoding overhead. This architecture allows superior performance and reduced latencies, ideal for cloud services requiring real-time inference.

Groq's secret sauce is its Language Processing Unit (LPU) inference engine, which is specifically engineered to address the two major bottlenecks faced by large language models (LLMs): compute capacity and memory bandwidth. LPU systems boast comparable, if not superior, compute power to GPUs and have eliminated external memory bandwidth bottlenecks, enabling faster generation of text sequences.
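
A back-of-the-envelope calculation shows why memory bandwidth, not raw compute, often caps text generation: each new token requires streaming essentially all of the model's weights through memory once. The numbers below (a 70B-parameter model at 2 bytes per weight, roughly 3 TB/s of memory bandwidth) are illustrative assumptions, not vendor specifications.

# Rough upper bound on single-request decode speed, assuming every generated
# token reads all weights from memory exactly once.
params = 70e9            # 70B-parameter model (assumed)
bytes_per_param = 2      # fp16/bf16 weights
memory_bandwidth = 3e12  # ~3 TB/s high-bandwidth memory (assumed)

bytes_per_token = params * bytes_per_param
max_tokens_per_second = memory_bandwidth / bytes_per_token
print(f"~{max_tokens_per_second:.0f} tokens/sec upper bound")  # roughly 21 tokens/sec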

The realization that computational power was a bottleneck for AI's potential led to the inception of Groq and the creation of the LPU. Jonathan Ross, who initially began what became the TPU project at Google, started Groq in 2016.

Nvidia remains well entrenched and will likely not be easy to dethrone. However, Groq has demonstrated that its vision of an innovative processor architecture can compete with industry giants.

Tools are also emerging that enable more efficient machine learning inference. Developed by Georgi Gerganov (the "GG" in GGML), GGML is a powerful and versatile tensor library, empowering developers to build and deploy high-performance machine learning applications across a wide spectrum of devices. It is designed to bring large-scale machine learning models to commodity devices.

GGML is a lightweight engine that runs neural networks in C/C++. This is significant because it is fast, has no dependencies (pure C/C++), is multi-platform, and can be easily ported to devices such as mobile phones. It defines a binary format for distributing large language models (LLMs) using quantization, a technique that allows LLMs to run on consumer hardware with efficient CPU inference. It enables these big models to run on the CPU as fast as possible.
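
For a sense of how this looks in practice, here is a minimal sketch using the llama-cpp-python bindings, which grew out of the GGML ecosystem (current model files use the successor GGUF format). The model path and parameter values are placeholders, not a recommendation.

from llama_cpp import Llama

# Load a quantized model file (path is a placeholder) and run it entirely on CPU threads.
llm = Llama(
    model_path="./models/llama-7b-q4.gguf",  # assumed local quantized model file
    n_ctx=2048,                              # context window
    n_threads=8,                             # CPU threads used for inference
)

output = llm("Q: Name one benefit of quantization.\nA:", max_tokens=32)
print(output["choices"][0]["text"])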

The benefit of GGML is that it requires fewer resources to run: typically about 4x less RAM and 4x less RAM bandwidth, and thus faster inference on the CPU.
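
Those savings follow directly from the arithmetic of quantization. A rough sketch, assuming a 7B-parameter model and ignoring the small per-block scaling overhead that real GGML quantization formats add:

# Approximate RAM needed just to hold the weights of a 7B-parameter model.
params = 7e9

fp16_bytes = params * 2    # 16-bit weights
q4_bytes = params * 0.5    # 4-bit quantized weights (overhead ignored)

print(f"fp16:  {fp16_bytes / 1e9:.1f} GB")          # ~14.0 GB
print(f"4-bit: {q4_bytes / 1e9:.1f} GB")            # ~3.5 GB
print(f"reduction: {fp16_bytes / q4_bytes:.0f}x")   # ~4x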

Traditionally, inference is done on centralized servers in the cloud. However, tools like GGML are making it possible to do model inference on commodity devices at the network's edge. That is critical for low-latency use cases like self-driving cars.

GGML is empowering AI developers to harness the full potential of machine learning on everyday hardware. It provides an impressive array of features, is an open standard and has been optimized for Apple Silicon. GGML is poised to play a pivotal role in shaping the future of edge computing.

The future of AI is undoubtedly headed toward inference-centric workloads. While the training of LLMs and other complex AI models gets a lot of current attention, inference makes up the vast majority of actual AI workloads.

Enterprises should begin to understand how inference works and how it will help enable better use of AI to improve their products and services.


Link:

AGI and jumping to the New Inference Market S-Curve - CMSWire

Apple’s big AI announcements were all about AI ‘for the rest of us’Google, Meta, Amazon and, yes, OpenAI should … – Fortune

In the end, Apple's highly anticipated AI announcements were very, well, Apple-y. You could practically feel that bite in the tech giant's fruit logo as the company finally announced Apple Intelligence (how deliciously on-brand to take advantage of the technology's initials), which Apple's Tim Cook touted will be "personal, powerful, and private" and integrated across Apple's app and hardware ecosystem.

Apple, of course, has always been all about being a protective walled garden that provides comprehensive security measures but also plenty of restrictions for users, and Apple Intelligence will be no different. But it is that very personal context of the user within the Apple landscape, combined with the power of generative AI, that makes Apple Intelligence something perhaps only Apple could really do.

Apple has not been first, or anywhere near the cutting edge of generative AI, but it is betting on something else: an AI "for the rest of us," for the billions of users who don't care about models or APIs or datasets or GPUs or devices or the potential for artificial general intelligence (AGI). That is, the "normies," as those in the tech industry like to call them, who simply want AI that is easy, useful, protective of privacy, and just works.

The laundry list of features Apple executives promised to roll out across iPhone, iPad, and macOS devices was long. Siri is getting an upgrade that makes the assistant more natural, more contextually relevant, and more personal. If Siri can't answer a question itself, it will ask the user if it's okay to tap into ChatGPT (thanks to a new deal between Apple and OpenAI), and it will have on-screen awareness that will eventually allow Siri to take more agent-like action on user content across apps.

There will be new systemwide writing tools in iOS 18, iPadOS 18, and macOS Sequoia, as well as new ways for AI to help prioritize everything from messages to notifications. The fun factor is well-represented as well, with on-device AI image creation and the fittingly named Genmojis, which let users create custom emojis on the fly (think a smiley face with cucumbers on the eyes to indicate you're at the spa).

But unlike Google's and Meta's throw-everything-at-the-wall approach to integrating generative AI into their products, Apple is taking a different tack, putting a carefully designed layer of gen AI on top of its operating systems. None of it, at least in Monday's demo, seems bolted on as an afterthought (like Meta's AI-is-everywhere search bar in Instagram, Facebook, and WhatsApp, for example). And none of it, in fact, really uses the word "AI," as in artificial intelligence.

The rebranding of AI as Apple Intelligence takes a technology consumers have heard and read about for more than a year (and which has often sounded frightening, futuristic, and kind of freaky), and serves it up as something that's soothingly safe and secure. It's the tech equivalent of a mild soap for sensitive skin, offering consumers a freshly scrubbed face with no hard-to-pronounce and potentially irritating ingredients.

Of course, Big Tech demos are notorious for big announcements that don't always deliver. And there were few details about important issues like the provenance of the data powering Apple Intelligence features, the terms of Apple's deal with OpenAI for access to ChatGPT, and how Apple plans to deal with the inevitable hallucinations that will result from its AI output. After all, safe and secure does not necessarily mean accurate. When Apple Intelligence is released into the wild, so to speak, things are sure to get interesting, and messier.

The tech world is in a fierce battle to see which company will be able to take AI and turn it into the industry's next game-changer. Whether that is Apple or not remains to be seen, but the elegant simplicity of the Apple Intelligence announcements certainly puts Google, Meta, Amazon, and, yes, OpenAI on notice. AI may be complicated, but as Steve Jobs said, simple can be harder than complex: "You have to work hard to get your thinking clean to make it simple." Perhaps AI companies will finally figure out how to keep it simple and, as Jobs said, "move mountains."

View post:

Apple's big AI announcements were all about AI 'for the rest of us'Google, Meta, Amazon and, yes, OpenAI should ... - Fortune