Archive for the ‘Machine Learning’ Category

Cytokine Storm Debunked: Machine Learning Exposes the True Killer of COVID-19 Patients – SciTechDaily

Scientists at Northwestern University Feinberg School of Medicine have discovered that unresolved secondary bacterial pneumonia is a key driver of death in patients with COVID-19, affecting nearly half of the patients who required mechanical ventilation support. Their findings, published in The Journal of Clinical Investigation, also debunk the theory that COVID-19 causes a cytokine storm leading to death.

Machine learning finds no evidence of cytokine storm in critically ill patients with COVID-19.

Secondary bacterial infection of the lung (pneumonia) was extremely common in patients with COVID-19, affecting almost half of the patients who required support from mechanical ventilation. By applying machine learning to medical record data, scientists at Northwestern University Feinberg School of Medicine have found that secondary bacterial pneumonia that does not resolve was a key driver of death in patients with COVID-19, according to results published in The Journal of Clinical Investigation.

Death rates from bacterial infections may even exceed those from the viral infection itself, according to the findings. The scientists also found evidence that COVID-19 does not cause a cytokine storm, which is so often believed to cause death.

Benjamin Singer, MD, the Lawrence Hicks Professor of Pulmonary Medicine in the Department of Medicine and a Northwestern Medicine pulmonary and critical care physician. Credit: Northwestern Medicine

"Our study highlights the importance of preventing, looking for, and aggressively treating secondary bacterial pneumonia in critically ill patients with severe pneumonia, including those with COVID-19," said senior author Benjamin Singer, MD, the Lawrence Hicks Professor of Pulmonary Medicine in the Department of Medicine and a Northwestern Medicine pulmonary and critical care physician.

The investigators found that nearly half of patients with COVID-19 developed a secondary ventilator-associated bacterial pneumonia.

"Those who were cured of their secondary pneumonia were likely to live, while those whose pneumonia did not resolve were more likely to die," Singer said. "Our data suggested that the mortality related to the virus itself is relatively low, but other things that happen during the ICU stay, like secondary bacterial pneumonia, offset that."

The study findings also negate the cytokine storm theory, said Singer, also a professor of Biochemistry and Molecular Genetics.

"The term 'cytokine storm' means an overwhelming inflammation that drives organ failure in your lungs, your kidneys, your brain and other organs," Singer said. "If that were true, if cytokine storm were underlying the long length of stay we see in patients with COVID-19, we would expect to see frequent transitions to states that are characterized by multi-organ failure. That's not what we saw."

The study analyzed 585 patients in the intensive care unit (ICU) at Northwestern Memorial Hospital with severe pneumonia and respiratory failure, 190 of whom had COVID-19. The scientists developed a new machine learning approach called CarpeDiem, which groups similar ICU patient-days into clinical states based on electronic health record data. This novel approach, which is based on the concept of daily rounds by the ICU team, allowed them to ask how complications like bacterial pneumonia impacted the course of the illness.
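The grouping idea behind an approach like CarpeDiem can be sketched in miniature. In this hypothetical illustration, each row is one ICU patient-day summarized by a few EHR-derived features, and similar days are grouped into discrete "clinical states" by simple k-means clustering; the synthetic data, the feature count, and the use of plain k-means are all assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np

# Synthetic stand-in for EHR data: 80 patient-days, 3 numeric features each
# (imagine oxygenation index, white-cell count, vasopressor dose, rescaled).
rng = np.random.default_rng(0)
mild = rng.normal([0.2, 0.3, 0.1], 0.05, size=(40, 3))
severe = rng.normal([0.8, 0.7, 0.9], 0.05, size=(40, 3))
patient_days = np.vstack([mild, severe])

def kmeans(X, k, iters=50):
    # Deterministic init: spread the initial centers across the data range.
    order = np.argsort(X.sum(axis=1))
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # Assign each day to its nearest center, then recompute centers.
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

states, _ = kmeans(patient_days, k=2)
# Each patient-day now carries a state label; day-to-day transitions between
# states can then be tabulated to study events like secondary pneumonia.
```

Once every patient-day has a state label, questions like "how often do patients transition into a multi-organ-failure state?" become simple counts over the labeled sequence.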

These patients or their surrogates consented to enroll in the Successful Clinical Response to Pneumonia Therapy (SCRIPT) study, an observational trial to identify new biomarkers and therapies for patients with severe pneumonia. As part of SCRIPT, an expert panel of ICU physicians used state-of-the-art analysis of lung samples collected as part of clinical care to diagnose and adjudicate the outcomes of secondary pneumonia events.

"The application of machine learning and artificial intelligence to clinical data can be used to develop better ways to treat diseases like COVID-19 and to assist ICU physicians in managing these patients," said study co-first author Catherine Gao, MD, an instructor in the Department of Medicine, Division of Pulmonary and Critical Care and a Northwestern Medicine physician.

"The importance of bacterial superinfection of the lung as a contributor to death in patients with COVID-19 has been underappreciated, because most centers either have not looked for it or have looked only at outcomes in terms of the presence or absence of bacterial superinfection, not whether treatment is successful," said study co-author Richard Wunderink, MD, who leads the Successful Clinical Response in Pneumonia Therapy Systems Biology Center at Northwestern.

The next step in the research will be to use molecular data from the study samples and integrate it with machine learning approaches to understand why some patients go on to be cured of pneumonia and some don't. Investigators also want to expand the technique to larger datasets and use the model to make predictions that can be brought back to the bedside to improve the care of critically ill patients.

Reference: "Machine learning links unresolving secondary pneumonia to mortality in patients with severe pneumonia, including COVID-19" by Catherine A. Gao, Nikolay S. Markov, Thomas Stoeger, Anna E. Pawlowski, Mengjia Kang, Prasanth Nannapaneni, Rogan A. Grant, Chiagozie Pickens, James M. Walter, Jacqueline M. Kruser, Luke V. Rasmussen, Daniel Schneider, Justin Starren, Helen K. Donnelly, Alvaro Donayre, Yuan Luo, G.R. Scott Budinger, Richard G. Wunderink, Alexander V. Misharin and Benjamin D. Singer, 27 April 2023, The Journal of Clinical Investigation. DOI: 10.1172/JCI170682

Other Northwestern authors on the paper include Nikolay Markov; Thomas Stoeger, PhD; Anna Pawlowski; Mengjia Kang, MS; Prasanth Nannapaneni; Rogan Grant; Chiagozie Pickens, '14 MD, '17 GME, assistant professor of Medicine in the Division of Pulmonary and Critical Care; James Walter, MD, assistant professor of Medicine in the Division of Pulmonary and Critical Care; Jacqueline Kruser, MD; Luke Rasmussen, MS; Daniel Schneider, MS; Justin Starren, MD, PhD, chief of Health and Biomedical Informatics in the Department of Preventive Medicine; Helen Donnelly; Alvaro Donayre; Yuan Luo, PhD, director of the Center for Collaborative AI in Healthcare and associate professor of Preventive Medicine; Scott Budinger, MD, chief of Pulmonary and Critical Care in the Department of Medicine; and Alexander Misharin, MD, PhD, associate professor of Medicine in the Division of Pulmonary and Critical Care.

The study was supported by the Simpson Querrey Lung Institute for Translational Sciences and grant U19AI135964 from the National Institute of Allergy and Infectious Diseases of the National Institutes of Health.


'A race it might be impossible to stop': how worried should we be about AI? – The Guardian


Scientists are warning that machine learning will soon outsmart humans; maybe it's time for us to take note

Last Monday an eminent, elderly British scientist lobbed a grenade into the febrile anthill of researchers and corporations currently obsessed with artificial intelligence or AI (aka, for the most part, a technology called machine learning). The scientist was Geoffrey Hinton, and the bombshell was the news that he was leaving Google, where he had been doing great work on machine learning for the last 10 years, because he wanted to be free to express his fears about where the technology he had played a seminal role in founding was heading.

To say that this was big news would be an epic understatement. The tech industry is a huge, excitable beast that is occasionally prone to outbreaks of irrational exuberance, i.e. madness. One recent bout of it involved cryptocurrencies and a vision of the future of the internet called Web3, which an astute young blogger and critic, Molly White, memorably describes as "an enormous grift that's pouring lighter fluid on our already smoldering planet."

We are currently in the grip of another outbreak of exuberance, triggered by generative AI: chatbots, large language models (LLMs) and other exotic artefacts enabled by the massive deployment of machine learning, which the industry now regards as the future for which it is busily tooling up.

Recently, more than 27,000 people, including many who are knowledgeable about the technology, became so alarmed about the Gadarene rush under way towards a machine-driven dystopia that they issued an open letter calling for a six-month pause in the development of the technology. Advanced AI "could represent a profound change in the history of life on Earth," it said, and should be "planned for and managed with commensurate care and resources."

It was a sweet letter, reminiscent of my morning sermon to our cats that they should be kind to small mammals and garden birds. The tech giants, which have a long history of being indifferent to the needs of society, have sniffed a new opportunity for world domination and are not going to let a group of nervous intellectuals stand in their way.

Which is why Hinton's intervention was so significant. For he is the guy whose research unlocked the technology that is now loose in the world, for good or ill. And that's a pretty compelling reason to sit up and pay attention.

He is a truly remarkable figure. If there is such a thing as an intellectual pedigree, then Hinton is a thoroughbred.

His father, an entomologist, was a fellow of the Royal Society. His great-great-grandfather was George Boole, the 19th-century mathematician who invented the logic that underpins all digital computing.

His great-grandfather was Charles Howard Hinton, the mathematician and writer whose idea of a fourth dimension became a staple of science fiction and wound up in the Marvel superhero movies of the 2010s. And his cousin, the nuclear physicist Joan Hinton, was one of the few women to work on the wartime Manhattan Project in Los Alamos, which produced the first atomic bomb.

Hinton has been obsessed with artificial intelligence for all his adult life, and particularly with the problem of how to build machines that can learn. An early approach was the Perceptron, a machine modelled on the human brain and based on a simplified model of a biological neuron. In 1958 a Cornell professor, Frank Rosenblatt, actually built such a thing, and for a time neural networks were a hot topic in the field.

But in 1969 a devastating critique by two MIT scholars, Marvin Minsky and Seymour Papert, was published, and suddenly neural networks became yesterday's story.

Except that one dogged researcher, Hinton, was convinced that they held the key to machine learning. As New York Times technology reporter Cade Metz puts it, Hinton remained one of the few who believed it would one day fulfil its promise, "delivering machines that could not only recognise objects but identify spoken words, understand natural language, carry on a conversation, and maybe even solve problems humans couldn't solve on their own."

In 1986, he and two of his colleagues at the University of Toronto published a landmark paper showing that they had cracked the problem of enabling a neural network to become a constantly improving learner using a mathematical technique called back propagation. And, in a canny move, Hinton christened this approach "deep learning", a catchy phrase that journalists could latch on to. (They responded by describing him as the "godfather of AI", which is crass even by tabloid standards.)
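The core of back propagation fits in a few lines: run inputs forward through the network, measure the error, then push gradients backwards through the layers via the chain rule and nudge each weight downhill. Here is a minimal numpy sketch, learning XOR with a tiny two-layer network; the network size, learning rate, and iteration count are illustrative choices, not anything from the 1986 paper.

```python
import numpy as np

# XOR: the classic problem a single Perceptron cannot solve,
# but a two-layer network trained with back propagation can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 0.5

for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error gradients flow from output to input (chain rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

# As training proceeds, the outputs approach [0, 1, 1, 0].
mse = ((out - y) ** 2).mean()
```

The "constantly improving learner" in the article's phrase is exactly this loop: each pass reduces the error a little, with no hand-crafted rules anywhere.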

In 2012, Google paid $44m for the fledgling company he had set up with his colleagues, and Hinton went to work for the technology giant, in the process leading and inspiring a group of researchers doing much of the company's subsequent path-breaking work on machine learning in its internal Google Brain group.

During his time at Google, Hinton was fairly non-committal (at least in public) about the danger that the technology could lead us into a dystopian future. "Until very recently," he said, "I thought this existential crisis was a long way off. So, I don't really have any regrets over what I did."

But now that he has become a free man again, as it were, he's clearly more worried. In an interview last week, he started to spell out why. At the core of his concern was the fact that the new machines were much better and faster learners than humans. "Back propagation may be a much better learning algorithm than what we've got. That's scary ... We have digital computers that can learn more things more quickly and they can instantly teach it to each other. It's like if people in the room could instantly transfer into my head what they have in theirs."

What's even more interesting, though, is the hint that what's really worrying him is the fact that this powerful technology is entirely in the hands of a few huge corporations.

Until last year, Hinton told Metz, the Times journalist who has profiled him, Google acted as a "proper steward" for the technology, careful not to release something that might cause harm.

But now that Microsoft has augmented its Bing search engine with a chatbot, challenging Google's core business, Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop.

He's right. We're moving into uncharted territory.

Well, not entirely uncharted. As I read of Hinton's move on Monday, what came instantly to mind was a story Richard Rhodes tells in his monumental history The Making of the Atomic Bomb. On 12 September 1933, the great Hungarian theoretical physicist Leo Szilard was waiting to cross the road at a junction near the British Museum. He had just been reading a report of a speech given the previous day by Ernest Rutherford, in which the great physicist had said that anyone who looked for a source of power in the transformation of the atom was talking "moonshine".

Szilard suddenly had the idea of a nuclear chain reaction and realised that Rutherford was wrong. As he crossed the street, Rhodes writes, "time cracked open before him and he saw a way to the future, death into the world and all our woe, the shape of things to come."

Szilard was the co-author (with Albert Einstein) of the letter to President Roosevelt (about the risk that Hitler might build an atomic bomb) that led to the Manhattan Project, and everything that followed.

John Naughton is an Observer columnist and chairs the advisory board of the Minderoo Centre for Technology and Democracy at Cambridge University.



The 7 Best Websites to Help Kids Learn About AI and Machine Learning – MUO – MakeUseOf

If you have kids or teach kids, you likely want them to learn the latest technologies to help them succeed in school and their future jobs. With rapid tech advancements, artificial intelligence and machine learning are essential skills you can teach young learners today.

Thankfully, you can easily access free and paid online resources to support your kids' and teens' learning journey. Here, we explore some of the best e-learning websites for students to gain experience in AI and ML technology.

Do you want to empower your child's creativity and AI skills? You might want to schedule a demo session with Kubrio. The alternative education website offers remote learning experiences on the latest technologies like ChatGPT.

Students eight to 18 years old learn about diverse subjects at their own pace. At the same time, they get to team up with learners who share their interests.

Kubrio's AI Prompt Engineering Lab teaches your kids to use the best online AI tools for content creation. They'll learn to develop captivating stories, interactive games, professional-quality movies, engaging podcasts, catchy songs, aesthetic designs, and software.

Kubrio also gamifies AI learning in the form of "Quests." Students select their Quest, complete their creative challenge, build a portfolio, and earn points and badges. The Quests program is currently in beta, but you can sign your kids up for the private beta.

Explore the Create&Learn website if you want to introduce your kids to the latest technological advancements at an early age. The e-learning site is packed with classes that help kids discover the fascinating world of robots, artificial intelligence, and machine learning.

Depending on their grade level, your child can join AI classes such as Hello Tech!, AI Explorers, Python for AI, and AI Creators. The classes are live online, interactive, and hands-on. Students from grades two up to 12 learn how AI works and can be applied to the latest technology, such as self-driving cars, face recognition, and games.

Create&Learn's award-winning curriculum was designed by experts from well-known institutions like MIT and Stanford. And if you aren't sure your kids will enjoy the sessions, you can take advantage of a free introductory class (this option is available for select classes only).

One of the best ways for students to learn ML and AI is through hands-on machine learning project ideas for beginners. Machine Learning for Kids gives students hands-on training with machine learning, a subfield of AI that enables computers to learn from data and experience.

Your kids will train a computer to recognize text, pictures, numbers, or sounds. For instance, you can train the model to distinguish between images of a happy person and a sad person using free photos from the internet. We tried this, and then tested the model with a new photo, and it was able to successfully recognize the uploaded image as a happy person.

Afterward, your child will try their hand at the Scratch, Python, or App Inventor coding platform to create projects and build games with their trained machine learning model.

The online platform is free, simple, and user-friendly. You'll get access to worksheets, lesson plans, and tutorials, so you can learn with your kids. Your child will also be guided through the main steps of completing a simple machine learning project.
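The happy-versus-sad exercise described above is classification in its simplest form: learn from labelled examples, then predict a label for a new one. A hypothetical minimal sketch of that idea, using a nearest-centroid classifier over tiny synthetic "images" (brightness vectors) in place of real photos; the data, features, and method are all stand-ins for illustration.

```python
import numpy as np

# Synthetic training data: 20 examples per class, each "image" reduced
# to 16 brightness values. Real tools do far more feature extraction.
rng = np.random.default_rng(7)
happy = rng.normal(0.8, 0.05, size=(20, 16))
sad = rng.normal(0.2, 0.05, size=(20, 16))

# "Training" here is just averaging each class's examples.
centroids = {"happy": happy.mean(axis=0), "sad": sad.mean(axis=0)}

def classify(image):
    # Predict the label whose average training example is closest.
    return min(centroids, key=lambda k: np.linalg.norm(image - centroids[k]))

# Test on a new, unseen example, as in the site's exercise.
new_photo = rng.normal(0.75, 0.05, size=16)
prediction = classify(new_photo)  # "happy"
```

The same train-then-test loop is what kids go through on the platform, just with a friendlier interface and real images or sounds instead of number vectors.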

If you and your kids are curious about how artificial intelligence and machine learning work, go through Experiments with Google. The free website explains machine learning and AI through simple, interactive projects for learners of different ages.

Experiments with Google is a highly engaging platform that will give students hours of fun and learning. Your child will learn to build a DIY sorter using machine learning, create and chat with a fictional character, conduct their own orchestra, use a camera to bring their doodles to life, and more.

Many of the experiments don't require coding. Choose the projects appropriate for your child's level. If you're working with younger kids, try Scroobly; Quick, Draw!; and LipSync with YouTube. Meanwhile, teens can learn how experts build a neural network, or explore other, more complex projects using AI.

Do you want to teach your child how to create amazing things with AI? If yes, then AI World School is an ideal edtech platform for you. The e-learning website offers online and self-learning AI and coding courses for kids and teens seven years old and above.

AI World School courses are designed by a team of educators and technologists. The courses cover AI Novus (an introduction to AI for ages seven to ten), Virtual Driverless Car, Playful AI Explorations Using Scratch, and more.

The website also provides affordable resources for parents and educators who want to empower their students to be future-ready. Just visit the Project Hub to order $1-3 AI projects; you can filter by age group, skill level, and software.

Kids and teens can also try the free games when they click Play AI for Free. Converse with an AI model named Zhorai, teach it about animals, and let it guess where these animals live. Students can also ask an AI bot about the weather in any city, or challenge it to a competitive game of tic-tac-toe.

AIClub is a team of AI and software experts with real-world experience. It was founded by Dr. Nisha Talagala, a computer science Ph.D. graduate from UC Berkeley. After failing to find a fun and easy program to help her 11-year-old daughter learn AI, she went ahead and built her own.

AI Club's progressive curriculum is designed for elementary, middle school, and high school students. Your child will learn to create unique projects using AI and coding. Start them young, and they can flex their own AI portfolio to the world.

You can also opt to enroll your child in the one-on-one class with expert mentors. This personalized online class enables students to research topics they care about on a flexible schedule. They'll also receive feedback and advice from their mentor to improve their research.

What's more, students enrolled in one-on-one classes can enter their research in competitions or present their findings at a conference. According to the AIClub Competition Winners page, several students in the program have already been awarded in national and international competitions.

Have you ever wondered how machines can learn from data and perform tasks that humans can do? Check out Teachable Machine, a website by Google Developers that lets you create your own machine learning models in minutes.

Teachable Machine is a fun way for kids and teens to start learning the concepts and applications of machine learning. You don't need any coding skills or prior knowledge, just your webcam, microphone, or images.

Students can play with images, sounds, poses, text, and more. They'll understand how tweaking the settings and data changes the performance and accuracy of the models.

Teachable Machine is a learning tool and a creative platform that unleashes the imagination. Your child can use their models to create games, art, music, or anything else they can dream of. If they need inspiration, point them to the gallery of projects created by other users.

Artificial intelligence and machine learning are rapidly transforming the world. If you want your kids and teens to learn about these fascinating fields and develop their critical thinking skills and creativity, these websites can help them.

Whether you want to explore Experiments with Google, AI World School, or other sites in this article, you'll find plenty of resources and fun challenges to spark your child's curiosity and imagination. There are also ways to use existing AI tools in school so that they can become more familiar with them.


Google and OpenAI are Walmarts besieged by fruit stands – TechCrunch

Image Credits: Tim Boyle / Getty Images

OpenAI may be synonymous with machine learning now and Google is doing its best to pick itself up off the floor, but both may soon face a new threat: rapidly multiplying open source projects that push the state of the art and leave the deep-pocketed but unwieldy corporations in their dust. This Zerg-like threat may not be an existential one, but it will certainly keep the dominant players on the defensive.

The notion is not new by a long shot; in the fast-moving AI community, it's expected to see this kind of disruption on a weekly basis. But the situation was put in perspective by a widely shared document purported to originate within Google. "We have no moat, and neither does OpenAI," the memo reads.

I won't encumber the reader with a lengthy summary of this perfectly readable and interesting piece, but the gist is that while GPT-4 and other proprietary models have obtained the lion's share of attention and indeed income, the head start they've gained with funding and infrastructure is looking slimmer by the day.

While the pace of OpenAI's releases may seem blistering by the standards of ordinary major software releases, GPT-3, ChatGPT and GPT-4 were certainly hot on each other's heels if you compare them to versions of iOS or Photoshop. But they are still occurring on the scale of months and years.

What the memo points out is that in March, a foundation language model from Meta, called LLaMA, was leaked in fairly rough form. Within weeks, people tinkering around on laptops and penny-a-minute servers had added core features like instruction tuning, multiple modalities and reinforcement learning from human feedback. OpenAI and Google were probably poking around the code, too, but they didn't, and couldn't, replicate the level of collaboration and experimentation occurring in subreddits and Discords.

Could it really be that the titanic computation problem that seemed to pose an insurmountable obstacle, a moat, to challengers is already a relic of a different era of AI development?

Sam Altman has already noted that we should expect diminishing returns when throwing parameters at the problem. Bigger isn't always better, sure, but few would have guessed that smaller was instead.

The business paradigm being pursued by OpenAI and others right now is a direct descendant of the SaaS model. You have some software or service of high value and you offer carefully gated access to it through an API or some such. It's a straightforward and proven approach that makes perfect sense when you've invested hundreds of millions into developing a single monolithic yet versatile product like a large language model.

If GPT-4 generalizes well to answering questions about precedents in contract law, great; never mind that a huge share of its intellect is dedicated to being able to parrot the style of every author who ever published a work in the English language. GPT-4 is like a Walmart. No one actually wants to go there, so the company makes damn sure there's no other option.

But customers are starting to wonder, why am I walking through 50 aisles of junk to buy a few apples? Why am I hiring the services of the largest and most general-purpose AI model ever created if all I want to do is exert some intelligence in matching the language of this contract against a couple hundred other ones? At the risk of torturing the metaphor (to say nothing of the reader), if GPT-4 is the Walmart you go to for apples, what happens when a fruit stand opens in the parking lot?

It didn't take long in the AI world for a large language model to be run, in highly truncated form of course, on (fittingly) a Raspberry Pi. For a business like OpenAI, its jockey Microsoft, Google or anyone else in the AI-as-a-service world, it effectively beggars the entire premise of their business: that these systems are so hard to build and run that they have to do it for you. In fact, it starts to look like these companies picked and engineered a version of AI that fit their existing business model, not vice versa!
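One concrete reason truncated models fit on hardware like a Raspberry Pi is weight quantization: storing each parameter in fewer bits. The sketch below shows the basic 8-bit scheme in numpy, mapping a float32 weight tensor to int8 values plus one scale factor, a 4x memory reduction at a bounded rounding cost. The weight values are random stand-ins, and real LLM quantizers are considerably more sophisticated (per-channel scales, outlier handling, and so on).

```python
import numpy as np

# A random stand-in for one weight tensor of a neural network.
rng = np.random.default_rng(3)
w = rng.normal(scale=0.02, size=1024).astype(np.float32)

# Symmetric 8-bit quantization: one scale maps the tensor onto [-127, 127].
scale = np.abs(w).max() / 127.0
w_q = np.round(w / scale).astype(np.int8)    # stored form: 1 byte per weight
w_dq = w_q.astype(np.float32) * scale        # reconstructed at inference time

print(w.nbytes, w_q.nbytes)                  # 4096 vs 1024 bytes
max_err = np.abs(w - w_dq).max()             # rounding error is at most scale/2
```

Applied across billions of parameters, that 4x (or, with 4-bit schemes, 8x) shrink is the difference between needing a data-center GPU and squeezing onto commodity hardware.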

Once upon a time you had to offload the computation involved in word processing to a mainframe; your terminal was just a display. Of course, that was a different era, and we've long since been able to fit the whole application on a personal computer. That process has occurred many times since, as our devices have repeatedly and exponentially increased their capacity for computation. These days, when something has to be done on a supercomputer, everyone understands that it's just a matter of time and optimization.

For Google and OpenAI, the time came a lot quicker than expected. And they weren't the ones to do the optimizing, and may never be at this rate.

Now, that doesn't mean that they're plain out of luck. Google didn't get where it is by being the best (not for a long time, anyway). Being a Walmart has its benefits. Companies don't want to have to find the bespoke solution that performs the task they want 30% faster if they can get a decent price from their existing vendor and not rock the boat too much. Never underestimate the value of inertia in business!

Sure, people are iterating on LLaMA so fast that they're running out of camelids to name them after. (Incidentally, I'd like to thank the developers for an excuse to just scroll through hundreds of pictures of cute, tawny vicuñas instead of working.) But few enterprise IT departments are going to cobble together an implementation of Stability's open source derivative-in-progress of a quasi-legal leaked Meta model over OpenAI's simple, effective API. They've got a business to run!

But at the same time, I stopped using Photoshop years ago for image editing and creation because the open source options like Gimp and Paint.net have gotten so incredibly good. At this point, the argument goes the other direction. Pay how much for Photoshop? No way, we've got a business to run!

What Google's anonymous authors are clearly worried about is that the distance from the first situation to the second is going to be much shorter than anyone thought, and there doesn't appear to be a damn thing anybody can do about it.

Except, the memo argues: embrace it. Open up, publish, collaborate, share, compromise. As they conclude:

Google should establish itself as a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.


Meta Platforms scoops up AI networking chip team from Graphcore – The Economic Times

Meta Platforms Inc has hired an Oslo-based team that until late last year was building artificial intelligence networking technology at British chip unicorn Graphcore. A Meta spokesperson confirmed the hirings in response to a request for comment, after Reuters identified 10 people whose LinkedIn profiles said they worked at Graphcore until December 2022 or January 2023 and subsequently joined Meta in February or March of this year.

"We recently welcomed a number of highly-specialized engineers in Oslo to our infrastructure team at Meta. They bring deep expertise in the design and development of supercomputing systems to support AI and machine learning at scale in Meta's data centers," said Jon Carvill, the Meta spokesperson.

On top of that, Meta is now rushing to join competitors like Microsoft Corp and Alphabet Inc's Google in releasing generative AI products capable of creating human-like writing, art and other content, which investors see as the next big growth area for tech companies.

Carvill declined to say what they would be working on at Meta.

Meta already has an in-house unit designing several kinds of chips aimed at speeding up and maximizing efficiency for its AI work, including a network chip that performs a sort of air traffic control function for servers, two sources told Reuters.

A new category of network chip has emerged to help keep data moving smoothly within AI computing clusters. Nvidia, AMD and Intel Corp all make such network chips.

Graphcore, one of the UK's most valuable tech startups, once was seen by investors like Microsoft and venture capital firm Sequoia as a promising potential challenger to Nvidia's commanding lead in the market for AI chip systems.

However, it faced a setback in 2020 when Microsoft scrapped an early deal to buy Graphcore's chips for its Azure cloud computing platform, according to a report by UK newspaper The Times. Microsoft instead used Nvidia's GPUs to build the massive infrastructure powering ChatGPT developer OpenAI, which Microsoft also backs.

Sequoia has since written down its investment in Graphcore to zero, although it remains on the company's board, according to a source familiar with the relationship. The write-down was first reported by Insider in October.

A Graphcore spokesperson confirmed the setbacks, but said the company was "perfectly positioned" to take advantage of accelerating commercial adoption of AI.

Graphcore was last valued at $2.8 billion after raising $222 million in its most recent investment round in 2020.
