Hypotheses and Visions for an Intelligent World – Huawei
As we move towards an intelligent world, information sensing, connectivity, and computing are becoming key enablers. The better knowledge and control of matter, phenomena, life, and energy that these technologies make possible is also becoming increasingly important. This makes it critical to rethink our approaches to networks and computing in the coming years.
In terms of networks, about 75 years ago Claude Shannon proposed his theorems based on three hypotheses: discrete memoryless sources, classical electromagnetic fields, and simple propagation environments. But since then, the industry has continued to push the boundaries of his work.
In 1987, Jim Durnin discovered self-healing, non-diffracting beams that can re-form and continue to propagate after encountering an obstruction.
In 1992, L. Allen et al. postulated that the spin and orbital angular momentum of an electromagnetic field have an infinite number of orthogonal quantum states along the same propagation direction, and that each quantum state can carry its own Shannon capacity.
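Stated loosely, and as a hedged illustration rather than anything from the original work: if K orthogonal angular-momentum states are multiplexed along the same direction and each is treated as an independent channel of bandwidth B with its own signal-to-noise ratio, the aggregate capacity becomes a sum of per-mode Shannon capacities:

```latex
C_{\text{total}} = \sum_{k=1}^{K} B \log_2\!\left(1 + \mathrm{SNR}_k\right)
```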
After AlphaGo emerged in 2016, people realized how well foundation models can be used to describe a world with prior knowledge. This means that much information is not discrete or memoryless.
With the large-scale deployment of 5G Massive MIMO in 2018, it has become possible to have multiple independent propagation channels in complex urban environments with tall buildings, boosting communications capacity.
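The capacity effect of having several independent propagation channels can be shown with a short, hedged sketch (illustrative antenna counts and SNR, not any deployed system):

```python
import numpy as np

# Hedged sketch: compare the Shannon capacity of an 8x8 rich-scattering MIMO
# link against a single channel at the same SNR.
rng = np.random.default_rng(1)
n_tx, n_rx, snr = 8, 8, 10.0

# H models independent propagation paths, e.g. reflections off tall buildings
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

# C = log2 det(I + (SNR / n_tx) * H H^H), bits/s/Hz, equal power per transmit antenna
mimo = np.log2(np.linalg.det(np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T).real)
siso = np.log2(1 + snr)

print(round(mimo, 1), round(siso, 1))   # the multi-channel link carries several times more
```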
These new phenomena, knowledge, and environments are helping us break away from the hypotheses that shaped Shannon's theorems. With them, I believe we can achieve a more than 100-fold improvement in network capabilities in the next decade.
In computing, intelligent applications are developing rapidly, and AI models in particular are likely to help solve the fragmentation problems that are currently holding AI application development back. This is driving an exponential growth in model size. Academia and industry have already begun exploring the use of AI in domains like software programming, scientific research, theorem verification, and theorem proving. With more powerful computing models, more abundant computing power, and higher-quality data, AI will be able to better serve social progress.
AI capabilities are improving rapidly, so we need to consider how to ensure that AI develops in a way that benefits all people and that its execution is accurate and efficient. In addition to ethics and governance, AI also faces three big challenges from a theoretical and technical perspective: goal definition, accuracy and adaptability, and efficiency.
The first challenge AI faces is that there is no agreed upon definition of its goals. What kind of intelligence do we need?
Without a clear definition, it is difficult to ensure that the goals of AI and humanity are aligned, or to measure, classify, and compute intelligence in a scientific way. Professor Adrian Bejan of Duke University summarizes more than 20 goals for intelligence in his book The Physics of Life, including understanding and cognitive ability, learning and adaptability, and abstract thinking and problem-solving ability. There are many schools of AI, and they remain poorly integrated. One important reason for this is that there are no commonly agreed upon goals for AI.
The second challenge AI faces is accuracy and adaptability. Learning based on statistical rules extracted from big data often results in non-transparent processes, unstable results, and bias. For example, when recognizing a banana with statistical, correlation-based algorithms, an AI system can be thrown off by background combinations and small amounts of noise: if other pictures are placed next to the banana, it may be recognized as an oven or a slug. People recognize such pictures easily, but AI makes these mistakes, and they are difficult to explain or debug.
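The fragility of purely statistical, correlation-based recognition can be illustrated with a toy sketch (a made-up linear classifier, not any real vision system): a small, structured change to the input is enough to flip the prediction.

```python
import numpy as np

# Toy, illustrative classifier only: a linear score w @ x deciding "banana" vs "not".
rng = np.random.default_rng(0)
w = rng.normal(size=256)                       # weights of the toy classifier
x = rng.normal(size=256) + 0.4 * np.sign(w)    # a clean input the classifier scores positive

def predict(v):
    return "banana" if w @ v > 0 else "not banana"

eps = 0.8
x_adv = x - eps * np.sign(w)                   # small per-feature step against the decision boundary

print(predict(x), "->", predict(x_adv))        # typically "banana" -> "not banana"
print(np.max(np.abs(x_adv - x)))               # perturbation bounded by eps per feature
```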
The third challenge for AI is efficiency. According to the 60th TOP500 list, published in 2022, the fastest supercomputer is Frontier, which can achieve 1,102 PFLOPS while drawing 21 megawatts of power. Human brains, in contrast, can deliver about 30 PFLOPS with just 20 watts. These numbers show that the human brain is about 30,000 to 100,000 times more energy efficient than a supercomputer.
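A quick back-of-the-envelope check of that ratio, using only the figures quoted above:

```python
# FLOPS per watt for Frontier vs. the brain estimate quoted above
frontier = 1102e15 / 21e6      # ~5.2e10 FLOPS/W
brain    = 30e15 / 20          # ~1.5e15 FLOPS/W (an estimate, not a measurement)

print(brain / frontier)        # ~2.9e4, i.e. roughly 30,000x more energy efficient
```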
In addition to energy efficiency, data efficiency is also a major challenge for AI. It is true that we can better understand the world by extracting statistical laws from big data. But can we find logic and generate concepts from small data, and abstract them into principles and rules?
We have come up with several hypotheses to address these three challenges.
Starting from these hypotheses, we can begin to take more practical steps to develop knowledge and intelligence.
At Huawei, our first vision is to combine systems engineering with AI to develop accurate, autonomous, and intelligent systems. In recent years, there has been a lot of research in academia about new AI architectures that go beyond transformers.
We can build upon these thoughts by focusing on three parts: perception and modeling, automatic knowledge generation, and solutions and actions. From there, we can develop more accurate, autonomous, and intelligent systems through multimodal perception fusion and modeling, as well as knowledge and data-driven decision-making.
Perception and modeling are about representations and abstractions of the external environment and ourselves. Automatic knowledge generation means systems will need to integrate the existing experience of humans into strategy models and evaluation functions to increase accuracy. Solutions can be directly deduced based on existing knowledge as well as internal and external information, or through trial-and-error and induction. We hope that these technologies will be incorporated into future autonomous systems, so that they can better support domains like autonomous driving networks, autonomous vehicles, and cloud services.
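As a minimal, hedged sketch of how these three parts might fit together in code (all class and method names are illustrative assumptions, not Huawei's design):

```python
# Illustrative loop: perception and modeling, automatic knowledge generation,
# and solutions and actions, in the simplest possible form.
class AutonomousSystem:
    def __init__(self):
        self.knowledge = {}                      # strategy models / evaluation rules

    def perceive(self, observations: tuple) -> tuple:
        # fuse multimodal observations into an abstract internal state
        return observations

    def generate_knowledge(self, state: tuple, outcome: str) -> None:
        # fold experience (human-provided or from trial-and-error) into reusable rules
        self.knowledge[state] = outcome

    def act(self, state: tuple) -> str:
        # deduce a solution from existing knowledge, or fall back to exploration
        return self.knowledge.get(state, "explore")

system = AutonomousSystem()
state = system.perceive(("camera", "lidar"))
print(system.act(state))                         # "explore" until knowledge exists
system.generate_knowledge(state, "slow_down")
print(system.act(state))                         # now deduced from accumulated knowledge
```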
Our second vision is to create better computing models, architectures, and components to continuously improve the efficiency of intelligent computing. I once spoke with Fields Medalist Professor Laurent Lafforgue about whether invariant object recognition could be made more accurate and efficient by using geometric manifolds for object representation and computing in addition to pixels, which are now commonly used in visual and spatial computing.
In their book Neuronal Dynamics, co-authors Gerstner, Kistler, Naud, and Paninski at École Polytechnique Fédérale de Lausanne (EPFL) explain the concept of functional columns in the cerebral cortex and the six-layer connections between these functional columns. It makes me wonder: Can such a shallow neural network be more efficient than a deep neural network?
A common bottleneck for today's AI computing is the memory wall: reading, writing, and migrating data often takes 100 times longer than the computation itself. So, can we bypass the conventional processors, instruction sets, buses, logic components, and memory components of the von Neumann architecture, and redefine architectures and components based on advanced AI computing models instead?
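A hedged, roofline-style estimate (made-up device numbers, not any particular chip) shows why data movement, rather than arithmetic, often sets the ceiling:

```python
# Illustrative roofline bound: attainable throughput is limited by whichever is
# smaller, peak compute or memory bandwidth times arithmetic intensity.
peak_flops = 300e12          # assumed accelerator peak, FLOP/s
mem_bw     = 1.5e12          # assumed memory bandwidth, bytes/s

def attainable(flops, bytes_moved):
    intensity = flops / bytes_moved              # FLOPs performed per byte moved
    return min(peak_flops, mem_bw * intensity)

# A memory-bound kernel: 2 GFLOP of work but 8 GB of data moved
print(attainable(2e9, 8e9) / peak_flops)         # ~0.1% of peak is actually usable
```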
Huawei has been exploring this idea by looking into the practical uses of AI. First, we have worked on "AI for Industry", which uses industry-specific large models to create more value. Industries face many challenges when it comes to AI application development: they need to invest a huge amount of manpower in labeling samples, find it difficult to maintain models, and lack the necessary capabilities in model generalization. Put simply, most of them do not have the resources to do this.
To address these challenges, Huawei has developed L1 industry-specific large models based on its L0 large foundation models dedicated to computer vision, natural language processing, graph neural networks, and multi-modal interactions. These large models lower the barrier to AI development, improve model generalization, and address application fragmentation. The models are already being used to improve operational efficiency and safety in major industries like electric power, coal mining, transportation, and manufacturing.
Huawei's Aviation & Rail Business Unit, for example, is working with customers and partners in Hohhot, Wuhan, Xi'an, Shenzhen, and Hongkong to explore the digital transformation of urban rail, railways, and airports. This has improved operational safety and efficiency, as well as user experience and satisfaction. The Shenzhen Airport has realized smart stand allocation with the support of cloud, big data, and AI, reducing airside transfer bus passenger flow by 2.6 million every year. The airport has become a global benchmark in digital transformation.
"AI for Science" is another initiative that will be able to greatly empower scientific computing. One example of this in action is the Pangu meteorology model we developed using a new 3D transformer-based coding architecture for geographic information and a hierarchical time-domain aggregation method. With a prior knowledge of global meteorological phenomena, the Pangu model uses more accurate and efficient learning and reasoning to replace time series solutions of hyperscale partial differential equations using traditional scientific computing methods. The Pangu model can produce 1-hour to 7-day weather forecasts in just a few seconds, and its results are 20% more accurate than forecasts from the European Centre for Medium-Range Weather Forecasts.
AI can also support software programming. In addition to using AI for traditional retrieval and recommendation over large amounts of existing code, Huawei is developing new model-driven and formal methods. This is especially important for large-scale parallel processing, where many tasks are intertwined and correlated. Huawei has developed a new approach called Vsync, which automatically verifies and optimizes concurrent code in operating system kernels, improving performance without undermining reliability. The Linux community once discovered a difficult memory barrier bug that took community experts more than two years to fix. With Huawei's Vsync method, the bug could have been discovered and fixed in just 20 minutes.
We have also been studying new computing models for automated theorem proving. Topos theory, for example, can be used to research category proving, congruence reasoning systems, and automated theorem derivation to improve the automation level of theorem provers. In doing this, we want to solve state explosion and automatic model abstraction problems and improve formal verification capabilities.
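As a tiny, hedged illustration of congruence reasoning (written in Lean 4 purely for illustration; it is not Huawei's prover): once an equality is known, a prover can lift it through any function application automatically.

```lean
-- Illustrative only: congruence reasoning lifts the hypothesis a = b
-- through the function f, closing the goal without manual case analysis.
theorem congruence_example (f : Nat → Nat) (a b : Nat) (h : a = b) : f a = f b := by
  rw [h]
```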
Finally, we are also exploring advanced computing components. For example, the remainder theorem can be used to address conversion efficiency and overflow problems in real-world applications. We hope to implement basic addition and multiplication functions based on it in chips and software to improve the efficiency of intelligent computing.
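A hedged sketch of the idea (the moduli below are an arbitrary illustrative choice): numbers are stored as residues modulo coprime bases, so addition and multiplication work residue-by-residue without carries or intermediate overflow, and the Chinese remainder theorem recovers the result.

```python
# Residue-number-system arithmetic sketch (requires Python 3.8+ for modular inverse via pow)
from math import prod

MODULI = (251, 253, 255, 256)           # pairwise coprime, illustrative only
M = prod(MODULI)                        # results are exact as long as they stay below M

def to_rns(x):
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    # multiplication happens independently in each small residue channel
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    # Chinese remainder theorem reconstruction
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse of Mi modulo mi
    return x % M

a, b = 1234, 5678
print(from_rns(rns_mul(to_rns(a), to_rns(b))), a * b)   # both print 7006652
```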
As we move towards the intelligent world, networks and computing are two key cornerstones that underpin our shift from narrow AI towards general-purpose AI and super AI. To get there, we will need to take three key steps. First, we will need to develop AI theories and technologies, as well as related ethics and governance, so that we can deliver ubiquitous intelligent connectivity and drive social progress. Second, we will need to continue pushing our cognitive limits to improve our ability to understand and control intelligence. Finally, we need to define the right goals and use the right approaches to guide AI development in a way that truly helps overcome human limitations, improve lives, create matter, control energy, and transcend time and space. This is how we will succeed in our adventure into the future.