The Discontents Of Artificial Intelligence In 2022 – Inventiva
Recent years have seen a boom in the use of Artificial Intelligence. This review essay is divided into two parts: Part I introduces contemporary AI, while Part II is dedicated to the widespread and rapid adoption of artificial intelligence and the crises that have resulted.
In recent years, Artificial Intelligence, or AI, has flooded the world with applications far outside the research laboratory. A number of AI techniques are now standard fare: face recognition, keyboard suggestions, Amazon recommendations, follower suggestions on Twitter, image similarity search, and text translation. AI is also being applied in areas far removed from the ordinary user, such as radiological diagnostics, pharmaceutical drug development, and drone navigation. Little wonder, then, that artificial intelligence is the buzzword of the day and is seen as a portal to the future.
The discipline of artificial intelligence is generally traced to 1956, when John McCarthy and others conceived a summer research project aimed at replicating human intelligence. These pioneers worked under the premise that every aspect of learning or intelligence could be described so precisely that a machine could be made to simulate it.
Although the objective was ambitious, board games became the favoured testbed for AI methods for pragmatic reasons: board games have precise rules that are easily encoded in a computational framework, and skilful play has long been regarded as a hallmark of intelligence.
In 2016, a program called AlphaGo created a sensation by defeating the reigning world Go champion, Lee Sedol. The program was developed by DeepMind, a company owned by Google.
The moment recalled the celebrated human-versus-machine encounter of 1997, in which Garry Kasparov, then the world chess champion, was shocked by IBM's Deep Blue. Kasparov's defeat was unnerving because it breached a frontier: chess is traditionally thought of as a cerebral game. Even so, a machine defeating the world champion at Go was long considered an unlikely dream, because the number of possible move sequences in Go is vastly greater than in chess, and Go is played on a much larger board.
Commentators accordingly celebrated AlphaGo's victory as the beginning of a new era in which machines would eventually surpass humans in intelligence.
The reality was quite different. By any measure, AlphaGo was a sophisticated tool, but it could not be considered intelligent: while it could pick the best move at any point in the game, it had no understanding of the reasoning behind its choices.
A key lesson from AI is that machines can be endowed with abilities previously possessed only by humans without being intelligent in the way sentient beings are. Arithmetic computation is a non-AI example. For much of history, multiplying two large numbers was a difficult task; logarithm tables had to be painstakingly produced, at great human effort, to make such calculations feasible.
For many decades now, even the simplest computer has performed such calculations efficiently and reliably. The same can be said of virtually any routine human task that AI can now carry out.
With unprecedented advances in computing power and data availability, today's AI extends this story beyond simple, routine tasks to far more sophisticated ones. Millions of people already use AI tools daily. AI is also beginning to make inroads into science and engineering, where substantial domain knowledge is required.
Healthcare is one area of universal relevance: AI tools can assess a person's health, offer a diagnosis based on clinical data, or analyze large-scale study data. In more esoteric fields, AI has recently been applied to highly complex problems such as protein folding and fluid dynamics. Such advances are expected to have a multitude of practical applications in the real world.
History
Much early AI work centred on symbolic reasoning: laying out a set of propositions and logically deducing their implications. This enterprise soon ran into trouble, however, because enumerating all the operative rules in a given problem context proved impossible.
A competing paradigm is connectionism, which seeks to overcome the difficulty of stating rules explicitly by inferring them implicitly from data. An artificial neural network, loosely modelled on neurons and their connectivity in the brain, encodes what it learns in the strengths (weights) of the connections between its units.
At various points, leading figures have claimed, on the strength of one paradigm or another, that a definitive solution to the problem of computational intelligence was imminent. The challenges proved far more complex, and the hype was typically followed by profound disillusionment and a significant reduction in funding for American academics, a period referred to as an AI winter.
Against this background, DeepMind's recent successes are seen as an endorsement of its approach, one that could help society find answers to some of the world's most pressing and fundamental scientific problems. Readers interested in the key concepts of AI, the history of the field, and its boom-and-bust cycles will find two recently published popular expositions by long-term researchers useful.
These are Melanie Mitchell's Artificial Intelligence: A Guide for Thinking Humans (Pelican Books, 2019) and Michael Wooldridge's The Road to Conscious Machines: The Story of Artificial Intelligence (Pelican Books, 2020).
Since its inception, Artificial Intelligence has been confronted with two issues of profound significance. First, however impressive it is to defeat a world champion at their own game, the real world is far messier than a game board governed by ironclad rules.
For this reason, successful AI methods developed to solve narrowly defined problems do not generalize to situations involving diverse aspects of intelligence.
Although AlphaGo worked out the winning moves, a human representative had to place the stones on the board, a seemingly mundane task. Intelligence is not defined by a single skill such as winning games; it is much more than the sum of such parts. It encompasses, among other things, the ability to interact with one's environment, an essential of embodied behaviour.
One of the most essential skills a child develops effortlessly is using their hands for delicate tasks. Robotics research has yet to replicate this skill.
The second issue, the question of how to define intelligence itself, looms even larger than the technical limitations of AI tools. Researchers often assume that approaches developed for narrowly defined problems, like winning at Go, can be extended to the general problem of intelligence. This rather brash belief has met with scepticism both from within the community and from older disciplines such as philosophy and psychology.
The debate has long centred on whether intelligence can be substantially or entirely captured in a computational paradigm, or whether it is irreducible and ineffable. Hubert Dreyfus's well-known 1965 report, Alchemy and Artificial Intelligence, exemplifies the disdain and hostility that AI's claims provoke in some quarters; Dreyfus's views were in turn called a budget of fallacies by a well-known AI researcher.
At the other extreme lies unbridled optimism: the notion, known as the Singularity, that AI will transcend biological limitations and break all barriers. The futurist Ray Kurzweil claims that machine intelligence will overwhelm human intelligence as the capabilities of AI systems grow exponentially, and he has attracted a fervent following despite his facile argument about exponential growth in technology. The Singularity is best regarded as a kind of technological rapture without serious intellectual foundations.
Stuart Russell, first author of the most widely used textbook on artificial intelligence, is one AI researcher who does not shy away from defining intelligence: humans are intelligent to the extent that their actions can be expected to achieve their objectives, and machine intelligence can be defined in the same way (Russell, Human Compatible, 9). Such an approach does help pin down the elusive notion of intelligence, but, as anyone who has encountered the notion of utility in economics can attest, it merely shifts the burden to an accurate description of our goals.
Russell's style differs markedly from that of Mitchell and Wooldridge: he is terse, expects his readers to stay engaged, and gives no quarter. Human Compatible is a highly thought-provoking book, though its narrative jumps from flowing argument to abstruse hypothesis.
Human Compatible also differs from other AI expositions in examining the dangers of a future AI that surpasses human capabilities. Russell avoids dystopian Hollywood imagery, but he does argue that AI agents could combine to cause harm and accidents. He points to the story of Leo Szilard, who worked out the physics of nuclear chain reactions shortly after Ernest Rutherford had dismissed the idea of atomic power as moonshine, and warns against believing that such an eventuality is highly unlikely or impossible.
Nuclear warfare subsequently unleashed its horrors, and Human Compatible is concerned with guarding against the analogous possibility of AI taking over the world. Wooldridge, however, is not convinced by this argument: decades of AI research suggest that human-level AI, unlike a nuclear chain reaction, cannot be described as a simple mechanism (Wooldridge, The Road to Conscious Machines, 244).
Debates about the nature of intelligence and the fate of humanity are enriching but ultimately undecidable. Most AI researchers focus on specific problems and are indifferent to these larger questions, a reflection of the field's two distinct tracks, cognitive science and engineering. Unfortunately, the objectives and claims of the two are often conflated in public discourse, leading to much confusion.
Relatedly, terms like neurons and learning have precise mathematical meanings within the discipline but are immediately associated with their commonsense connotations, leading to severe misunderstandings about the entire enterprise. A neural network is not a model of the human brain, and learning here denotes a broad set of statistical principles and methods, essentially sophisticated curve-fitting and decision-rule algorithms.
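To make the curve-fitting point concrete, here is a minimal sketch with invented data, in which "learning" a linear relationship amounts to nothing more than ordinary least squares computed in closed form:

```python
# "Learning" the relationship y = 2x + 1 from sample points is
# just curve fitting: closed-form ordinary least squares.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]  # the "training data"

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x
print(slope, intercept)  # recovers 2.0 and 1.0
```

A deep network does the same kind of thing with vastly more parameters and a nonlinear curve, but the principle, fitting a function to data, is unchanged.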
A few decades ago, neural networks that learn from data were considered ineffective. With the development of deep learning, they attracted renewed interest, and in 2012 they brought significant improvements to image and speech recognition. Today's successful AI systems, from AlphaGo and its successors to widely used tools such as Google Translate, employ deep learning, in which the adjective signifies not profundity but the multiple layering of the network.
Deep learning has been sweeping through many disciplines since its introduction over a decade ago and has nearly wholly replaced other methods of machine learning. Three of its pioneers received the Turing Award, the highest honour in computer science, in 2018, anointing the paradigm's dominance.
Success in AI is accompanied by hype and hubris as if by an iron law. In 2016, Geoff Hinton, one of the Turing trio, stated that we should stop training radiologists now, since within five years deep learning would plainly outperform them. The failure of that deliverance, and other problems with the method, did not stop Hinton from declaring in 2020 that deep learning will be able to do everything. Meanwhile, a recent study concluded that none of the hundreds of AI tools developed for detecting Covid was effective.
Our understanding of contemporary learning-based AI tools is enhanced by looking at how they are developed. Consider, as an example, detecting chairs in images. A chair may have various components: legs, a backrest, armrests, cushions, and so on, and countless combinations of these elements are all recognizable as chairs.
Other objects, such as bean bags, defeat any rule we might formulate about what a chair must contain. Methods such as deep learning seek to overcome precisely this limitation of symbolic, rule-based deduction. Instead of trying to define rules that cover every variety, we collect many images of chairs and other objects and feed them into a neural network along with the correct output (the label chair vs non-chair).
In the training phase, a deep learning approach adjusts the weights of the connections in the network to mimic, as closely as possible, the desired input-output relationships. If this is done correctly, the network can then answer whether previously unseen test images contain chairs.
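The training loop just described can be sketched in miniature. The feature names, examples, and the single-unit "network" below are all invented for illustration; real systems learn such features from raw pixels through many stacked layers of weights:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Each "image" is reduced to invented features:
# (has_legs, has_backrest, has_armrests, is_soft); label 1 = chair.
data = [
    ((1, 1, 1, 0), 1),  # armchair
    ((1, 1, 0, 0), 1),  # dining chair
    ((1, 1, 0, 1), 1),  # cushioned chair
    ((1, 0, 0, 0), 0),  # table: legs but no backrest
    ((0, 0, 0, 1), 0),  # bean bag: soft, no legs
    ((0, 0, 0, 0), 0),  # bare wall
]

weights = [0.0, 0.0, 0.0, 0.0]
bias = 0.0
lr = 0.5

# Training phase: nudge the weights to mimic the desired
# input-output relationships (gradient descent on log loss).
for _ in range(2000):
    for x, y in data:
        pred = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
        err = pred - y
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]
        bias -= lr * err

def classify(x):
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias) > 0.5

print(classify((1, 1, 1, 1)))  # unseen cushioned armchair
print(classify((1, 0, 0, 1)))  # unseen padded table
```

The network ends up leaning almost entirely on the backrest feature, because that is what separates the training examples; it has no concept of a chair, only weights that fit the data it was shown.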
A chair-recognizer of this kind needs many images of chairs of different shapes and sizes. Extending the idea, one may consider any number of categories one can imagine: chairs, tables, trees, people, and so on, all of which appear in the world in glorious but maddening variety. Acquiring adequately representative images of objects is therefore essential.
The significant advances of 2012 in automatic image recognition were made possible by relatively cheap and powerful hardware and by the rapid expansion of the internet, which enabled researchers to build ImageNet, a dataset containing millions of images labelled with thousands of categories.
Despite working well, deep learning methods behave unreliably. Tiny changes to an image, invisible to the human eye, can cause an American school bus to be mistaken for an ostrich. It is also recognized that seemingly correct results can arise from spurious and unreliable statistical correlations rather than from any deep understanding.
A boat in an image is correctly recognized when it is surrounded by water; the method has no model or conception of a boat as such. In the past, such limitations of AI were typically academic concerns. The situation is different now that many AI tools have moved from the laboratory into real life, often with grave consequences.
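The boat-and-water failure mode can be reproduced in miniature. In this invented example, a learner that simply picks the single most predictive feature (a decision stump) latches onto the water background, because water happens to be perfectly correlated with the boat label in the training set:

```python
# A decision-stump learner: pick the single most predictive feature.
# Feature names and examples are invented for this sketch.
FEATURES = ["has_hull", "has_sail", "water_background"]

# (features, label): 1 = boat, 0 = not a boat.
# Every boat in the training set happens to be surrounded by water.
train = [
    ((1, 1, 1), 1),  # sailboat at sea
    ((1, 0, 1), 1),  # rowboat on a lake
    ((0, 0, 1), 1),  # boat mostly hidden by waves; water clearly visible
    ((0, 0, 0), 0),  # meadow
    ((1, 0, 0), 0),  # hull-shaped garden planter
]

def stump_accuracy(f):
    # Accuracy of the rule "predict boat iff feature f is present".
    return sum(x[f] == y for x, y in train) / len(train)

best = max(range(len(FEATURES)), key=stump_accuracy)
print(FEATURES[best])  # the water shortcut wins: water_background

def classify(x):
    return x[best] == 1

print(classify((1, 1, 0)))  # boat on a trailer, no water: missed
print(classify((0, 0, 1)))  # empty sea, no boat: false alarm
```

The learner fits its training data perfectly, yet it has learned water, not boats, and it fails exactly when deployment conditions differ from the data it was trained on.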
Driven by a relentless push towards automation, a number of data-driven methods were developed and deployed, including in India, well before deep learning became a fad. Among the most notorious is COMPAS, used by US courts to inform sentencing based on a defendant's assessed risk of recidivism.
A tool of this kind uses statistics from existing criminal records to estimate a defendant's chances of committing a crime in the future. A well-known investigation found that, even though the tool does not use race explicitly, its scores were racially biased. Judges who rely on such predictions in sentencing thus end up discriminating by race.
Fingerprints and face images are even more valuable for biometric identification and authentication. Many law enforcement and other state agencies have adopted face recognition tools for their utility in surveillance and forensics. Dubious techniques for detecting emotion, under the banner of affective computing, have also been used in contexts ranging from employment decisions to more intrusive surveillance.
A number of important studies have shown that many commercially available face recognition programs are deeply flawed and discriminatory. One audit of commercial tools found face recognition error rates for black women as high as 35%, vastly higher than for white men, prompting growing calls to halt their use. In India and China especially, face and emotion recognition are becoming more widespread, with tremendous implications for human rights and welfare; this deserves a far more thorough discussion than can be given here.
Relying on real-world data for decision-making introduces various problems, many of which can be grouped under the heading of bias. Face recognition suffers from the low representation of people of colour in many of the datasets used to develop the tools. Another limitation is the questionable relevance of the past for defining the contours of the society we want to build: an AI algorithm that relies on past records, as in US recidivism modelling, will disparately harm the poor, who have historically experienced higher incarceration rates.
Similarly, if one were to automate hiring for, say, a professional position in India, models based on past hires would automatically produce caste bias, even if caste were never explicitly considered. Cathy O'Neil details a number of such incidents in the American context in her famous book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Penguin Books, 2016).
Artificial intelligence methods do not learn from the world directly but from a dataset that serves as its proxy. Poorly designed data collection and a lack of ethical oversight have long plagued academic AI research. Scholars from a range of disciplines have put great effort into analysing bias in AI tools and datasets and its ramifications in society, particularly for the poor and the traditionally discriminated against.
Beyond bias, many modern AI tools are impossible to interpret or reason about. Since those affected by a decision often have a right to know the reasoning behind it, this problem of explainability has profound implications for transparency.
The computer science community has taken a broad interest in formalizing these problems, leading to academic conferences and an online textbook in preparation. An essential result of this exercise is a theoretical understanding of the impossibility of fairness: multiple reasonable notions of fairness cannot, in general, all be satisfied simultaneously.
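A small numeric sketch, with invented numbers, illustrates why. When two groups have different base rates, a classifier that achieves equal recall and equal precision for both groups is forced to have different false-positive rates, which is one version of the impossibility result:

```python
# Invented numbers: two groups with different base rates of reoffending.
# Impose equal recall (TPR) and equal precision (PPV) for both groups,
# then solve the identity PPV = TPR*p / (TPR*p + FPR*(1-p)) for FPR.
def forced_fpr(base_rate, tpr, ppv):
    return tpr * base_rate * (1 - ppv) / (ppv * (1 - base_rate))

fpr_a = forced_fpr(0.6, tpr=0.8, ppv=0.8)  # group A: base rate 60%
fpr_b = forced_fpr(0.2, tpr=0.8, ppv=0.8)  # group B: base rate 20%
print(round(fpr_a, 3), round(fpr_b, 3))  # 0.3 vs 0.05: unequal by necessity
```

Group B's members who will not reoffend face a far lower false-alarm rate than group A's, even though the score is equally "accurate" for both groups by the other two measures; no threshold can equalize all three at once when base rates differ.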
Research and practice in AI should also consider the trade-offs involved in designing software and their societal implications. As the second part of this essay will show, however, such considerations are seldom adequate: the rapid expansion of contemporary AI from the research lab into daily life has unleashed a wide range of problems.