Archive for the ‘Quantum Computer’ Category

The 5 Most Promising AI Hardware Technologies – MUO – MakeUseOf

Artificial Intelligence (AI) has made remarkable advancements since the end of 2022. Increasingly sophisticated AI-based software applications are revolutionizing various sectors by providing inventive solutions. From seamless customer service chatbots to stunning visual generators, AI is enhancing our daily experiences. However, behind the scenes, AI hardware is pivotal in fueling these intelligent systems.

AI hardware refers to specialized computer hardware designed to perform AI-related tasks efficiently. This includes specific chips and integrated circuits that offer faster processing and energy-saving capabilities. In addition, they provide the necessary infrastructure to execute AI algorithms and models effectively.

The role of AI hardware in machine learning is crucial as it aids in the execution of complex programs for deep learning models. Furthermore, compared to conventional computer hardware like central processing units (CPUs), AI hardware can accelerate numerous processes, significantly reducing the time and cost required for algorithm training and execution.

Furthermore, with the growing popularity of AI and machine learning models, there has been an increased demand for acceleration solutions. As a result, companies like Nvidia, the world's leading GPU manufacturer, have witnessed substantial growth. In June 2023, The Washington Post reported that Nvidia's market value had surpassed $1 trillion, exceeding the worth of Tesla and Meta. Nvidia's success highlights the significance of AI hardware in today's technology landscape.

If you're familiar with what edge computing is, you likely have some understanding of edge computing chips. These specialized processors are designed specifically to run AI models at the network's edge. With edge computing chips, users can process data and perform crucial analytical operations directly at the source of the data, eliminating the need for data transmission to centralized systems.

The applications for edge computing chips are diverse and extensive. They find utility in self-driving cars, facial recognition systems, smart cameras, drones, portable medical devices, and other real-time decision-making scenarios.

The advantages of edge computing chips are significant. Firstly, they greatly reduce latency by processing data near its source, enhancing the overall performance of AI ecosystems. Additionally, edge computing enhances security by minimizing the amount of data that needs to be transmitted to the cloud.

Here are some of the leading manufacturers of AI hardware in the field of edge computing chips:

Some might wonder, "What is quantum computing, and is it even real?" Quantum computing is indeed a real and advanced computing system that operates based on the principles of quantum mechanics. While classical computers use bits, quantum computing utilizes quantum bits (qubits) to perform computations. These qubits enable quantum computing systems to process large datasets more efficiently, making them highly suitable for AI, machine learning, and deep learning models.

The applications of quantum hardware have the potential to revolutionize AI algorithms. For example, in drug discovery, quantum hardware can simulate the behavior of molecules, aiding researchers in accurately identifying new drugs. Similarly, in materials science, it can contribute to climate change predictions. The financial sector can benefit from quantum hardware by developing price prediction tools.

Below are the significant benefits of quantum computing for AI:

Application Specific Integrated Circuits (ASICs) are designed for targeted tasks like image processing and speech recognition (though you may have heard about ASICs through cryptocurrency mining). Their purpose is to accelerate AI procedures to meet the specific needs of your business, providing an efficient infrastructure that enhances overall speed within the ecosystem.

ASICs are cost-effective compared to traditional central processing units (CPUs) or graphics processing units (GPUs). This is due to their power efficiency and superior performance on the specific tasks they are built for. As a result, ASICs facilitate AI algorithms across various applications.

These integrated circuits can handle substantial volumes of data, making them instrumental in training artificial intelligence models. Their applications extend to diverse fields, including natural language processing of texts and speech data. Furthermore, they simplify the deployment of complex machine-learning mechanisms.

Neuromorphic hardware represents a significant advancement in computer hardware technology, aiming to mimic the functioning of the human brain. This innovative hardware emulates the human nervous system and adopts a neural network infrastructure, operating with a bottom-up approach. The network comprises interconnected processors, referred to as neurons.

In contrast to traditional computing hardware that processes data sequentially, neuromorphic hardware excels at parallel processing. This parallel processing capability enables the network to simultaneously execute multiple tasks, resulting in improved speed and energy efficiency.
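To make the idea concrete, here is a minimal, illustrative Python sketch of a leaky integrate-and-fire neuron, the kind of spiking unit neuromorphic chips implement in silicon; the function name and all parameter values are assumptions chosen for the example, not a description of any particular chip.

import numpy as np

# Leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward
# rest, integrates its input, and emits a spike when it crosses a threshold.
def simulate_lif(input_current, tau=20.0, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, dt=1.0):
    v = v_rest
    spikes = []
    for i_t in input_current:
        v += (-(v - v_rest) + i_t) * (dt / tau)   # leak plus integration
        if v >= v_thresh:                         # threshold crossed: spike...
            spikes.append(1)
            v = v_reset                           # ...then reset, like a real neuron
        else:
            spikes.append(0)
    return np.array(spikes)

# A stronger constant drive makes the neuron fire more often (rate coding).
strong = simulate_lif(np.full(200, 1.5))
weak = simulate_lif(np.full(200, 1.1))
print(strong.sum(), "spikes vs", weak.sum(), "spikes")

The point of the sketch is only that computation here is event-driven: each unit stays quiet until its potential crosses a threshold, which is what lets large networks of such units run in parallel with little power.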

Furthermore, neuromorphic hardware offers several other compelling advantages. It can be trained with extensive datasets, making it suitable for a wide range of applications, including image detection, speech recognition, and natural language processing. Additionally, the accuracy of neuromorphic hardware is remarkable, as it rapidly learns from vast amounts of data.

Here are some of the most notable neuromorphic computing applications:

A Field Programmable Gate Array (FPGA) is an advanced integrated circuit that offers valuable benefits for implementing AI software. These specialized chips can be customized and programmed to meet the specific requirements of the AI ecosystem, earning them the name "field-programmable."

FPGAs consist of configurable logic blocks (CLBs) that are interconnected and programmable. This inherent flexibility allows for a wide range of applications in the field of AI. In addition, these chips can be programmed to handle operations of varying complexity levels, adapting to the system's specific needs.

Operating like a read-only memory chip but with a higher gate capacity, FPGAs offer the advantage of re-programmability. This means they can be programmed multiple times, allowing for adjustments and scalability as requirements evolve. Furthermore, FPGAs are more efficient than traditional computing hardware, offering a robust and cost-effective architecture for AI applications.
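As a rough software analogy (an assumption for illustration, not how any vendor's toolchain actually works), the lookup tables (LUTs) inside an FPGA's configurable logic blocks can be modelled in a few lines of Python; re-loading the truth table is the software equivalent of re-programming the fabric in the field.

# A 4-input LUT: one stored output bit per input combination. Changing the
# stored truth table changes the logic function with no new hardware.
class LUT4:
    def __init__(self, truth_table):
        assert len(truth_table) == 16
        self.table = list(truth_table)

    def __call__(self, a, b, c, d):
        return self.table[(a << 3) | (b << 2) | (c << 1) | d]

    def reprogram(self, truth_table):   # "field" re-programming
        self.table = list(truth_table)

# Program the LUT as a 4-input AND gate...
lut = LUT4([0] * 15 + [1])
print(lut(1, 1, 1, 1), lut(1, 0, 1, 1))   # 1 0

# ...then re-program the very same block as a parity (XOR) function.
lut.reprogram([bin(i).count("1") % 2 for i in range(16)])
print(lut(1, 0, 1, 1), lut(1, 1, 1, 1))   # 1 0

Real configurable logic blocks add flip-flops, carry chains and routing, but the programmable truth table is the core reason one chip can serve many different AI workloads.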

In addition to their customization and performance advantages, FPGAs also provide enhanced security measures. Their complete architecture ensures robust protection, making them reliable for secure AI implementations.

AI hardware is on the cusp of transformative advancements. Evolving AI applications demand specialized systems to meet computational needs. Innovations in processors, accelerators, and neuromorphic chips prioritize efficiency, speed, energy savings, and parallel computing. Integrating AI hardware into edge and IoT devices enables on-device processing, reduced latency, and enhanced privacy. Convergence with quantum computing and neuromorphic engineering unlocks the potential for exponential power and human-like learning.

The future of AI hardware holds the promise of powerful, efficient, and specialized computing systems that will revolutionize industries and reshape our interactions with intelligent technologies.

Originally posted here:
The 5 Most Promising AI Hardware Technologies - MUO - MakeUseOf

Quantum computing: The five biggest breakthroughs – Engineers Ireland

Quantum computing is a revolutionary technology already making waves in many industries, such as drug discovery, cryptography, finance, and logistics. It works by exploiting quantum mechanical phenomena to perform complex computations in a fraction of the time classical computers require. Two main quantum mechanical phenomena drive quantum computers' speed and computational prowess: superposition and entanglement.

Unlike classical computers, which operate on binary bits (0 and 1), quantum computers operate on quantum bits or qubits. Qubits can exist in a state of superposition. This means that any qubit has some probability of existing simultaneously in the 0 and 1 states, exponentially increasing the computational power of quantum computers.

Another unique property of qubits is their ability to become entangled. This means that two qubits, no matter how physically distant, are correlated such that knowing the state of one immediately tells us something about the state of its companion. This correlation can be harnessed for processing vast amounts of data and solving complex problems that classical computers cannot.
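A minimal numerical sketch of these two ideas, using plain NumPy rather than any quantum SDK, looks like this; the gate matrices and the qubit ordering are standard textbook conventions chosen only for illustration.

import numpy as np

# Computational basis states for one qubit.
zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])

# Superposition: after a Hadamard gate a qubit holds equal amplitudes for
# 0 and 1, so measurement gives either outcome with probability ~0.5.
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
plus = H @ zero
print("P(0), P(1) =", plus[0] ** 2, plus[1] ** 2)   # ~0.5 each

# Entanglement: Hadamard followed by CNOT turns |00> into the Bell state
# (|00> + |11>)/sqrt(2); the two outcomes are perfectly correlated, so
# measuring one qubit immediately fixes the other.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
state = CNOT @ np.kron(plus, zero)
print("P(00), P(01), P(10), P(11) =", state ** 2)   # ~0.5, 0, 0, ~0.5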

Classical computers only have the power to simulate phenomena based on classical physics, making it more difficult or slower to solve problems that rely on quantum phenomena. This is where the true importance of quantum computers lies.

Because quantum computers are based on qubits, they can tackle problems that are intractable for classical computers and revolutionise many industries. For example, quantum computers can rapidly simulate molecules and chemical reactions, helping researchers discover new drugs and materials with exceptional properties.

Although significant breakthroughs have been made in quantum computing, we are still in the nascent stages of its development.

The objective of quantum supremacy is to demonstrate that a quantum computer can solve a problem that no classical computer can solve in any reasonable length of time, regardless of whether the problem itself is useful. Achieving this goal demonstrates the power of a quantum computer over a classical computer in complex problem-solving.

In October 2019, Google confirmed that it had achieved quantum supremacy using its fully programmable 54-qubit processor called Sycamore. The team solved a sampling problem in 200 seconds that would take a supercomputer nearly 10,000 years. This marked a significant achievement in the development of quantum computing.

Richard Feynman first theorised the idea of using quantum mechanics to perform calculations impossible for classical computers. Image: Unknown/Wikimedia Commons

Since then, many researchers have demonstrated quantum supremacy by solving various sampling problems. The impact of achieving quantum supremacy cannot be overstated. It validates the potential of quantum computing to solve problems beyond the capabilities of classical computers, as first theorised by Richard Feynman in the 1980s.

Apart from sampling problems, other applications have been proposed for demonstrating quantum supremacy, such as Shor's algorithm for factoring integers, which is extremely important in encryption. However, implementing Shor's algorithm for large numbers is not feasible with existing technology, so it is not preferred over sampling algorithms for demonstrating supremacy.

The most pressing concern with quantum computers is their sensitivity to errors induced by environmental noise and imperfect control. This hinders their practical usability, as data stored on a quantum computer can become corrupted.

Classical error correction relies on redundancy, i.e., repetition. However, quantum information cannot be cloned or copied due to the no-cloning theorem (which states that it is impossible to create an independent and identical copy of an arbitrary unknown quantum state). Therefore, a new error correction method is required for quantum computing systems.

QEC for a single qubit. Image: Self/Wikimedia Commons

Quantum error correction (QEC) is a way to mitigate these errors and ensure that the data stored on a quantum computer is error-free, thus improving the reliability and accuracy of quantum computers.

The principle of QEC is to encode the data stored on a quantum computer such that the errors can be detected and corrected without disrupting the computation being performed on it.

This is done using quantum error-correction codes (QECCs). QECCs work by encoding the information onto a larger state space. They can then detect and correct errors without measuring the encoded quantum state directly, thereby preventing its collapse.
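The simplest textbook QECC, the three-qubit bit-flip code, shows the principle. The sketch below is a classical toy simulation (it tracks bit strings rather than superpositions), but it illustrates the key point: the syndrome reveals where an error occurred without revealing the encoded value.

import random

# Three-qubit bit-flip code: a logical bit is encoded redundantly as 000 or
# 111, and two parity checks (the syndrome) locate a single flipped qubit
# without ever reading out the logical value itself. That last property is
# what lets real QEC avoid collapsing the encoded quantum state.
def encode(logical_bit):
    return [logical_bit] * 3

def syndrome(block):
    # Parities of qubits (0,1) and (1,2): they reveal where an error sits,
    # but not whether the logical bit is 0 or 1.
    return (block[0] ^ block[1], block[1] ^ block[2])

def correct(block):
    flip_at = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(block))
    if flip_at is not None:
        block[flip_at] ^= 1
    return block

block = encode(1)
block[random.randrange(3)] ^= 1       # a single bit-flip error strikes
print("syndrome:", syndrome(block))   # locates the error
print("decoded:", correct(block)[0])  # still 1 after correction

Real QECCs, such as surface codes, apply the same idea to both bit-flip and phase-flip errors and extract the syndrome with extra ancilla qubits.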

The first experimental demonstration of QEC was done in 1998 with nuclear magnetic resonance qubits. Since then, several experiments to demonstrate QEC have been performed using, for example, linear optics and trapped ions, among others.

A significant breakthrough came in 2016, when researchers extended the lifespan of a quantum bit using QEC. Their research showed the advantage of using hardware-efficient qubit encoding over traditional QEC methods for improving the lifetime of a qubit.

The detection and elimination of errors is critical to developing realistic quantum computers. QEC handles errors in the stored quantum information, but what about the errors after performing operations? Is there a way to correct those errors and ensure that the computations are not useless?

Fault-tolerant quantum computing is a method to ensure that these errors are detected and corrected using a combination of QECCs and fault-tolerant gates. This ensures that errors arising during the computations don't accumulate and render them worthless.

Quantum computing features. Image: Akash Sain/iStock

The biggest challenge in achieving fault-tolerant quantum computing is the need for many qubits. QECCs themselves require a lot of qubits to detect and correct errors.

Fault-tolerant gates also require a large number of qubits. However, two independent theoretical studies, published in 1998 and 2008, proved that fault-tolerant quantum computers can be built. This result has come to be known as the threshold theorem, which states that if the physical error rates of a quantum computer are below a certain threshold, the logical error rate can be suppressed to arbitrarily low values.
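A back-of-the-envelope sketch of what the threshold theorem promises, using the commonly quoted scaling form for a distance-d code with entirely illustrative constants, is shown below.

# Rough illustration of the threshold theorem: once the physical error rate
# p is below the threshold p_th, a larger code distance d (i.e. more qubits)
# suppresses the logical error rate exponentially. The scaling form and the
# numbers are illustrative assumptions, not measurements of any real device.
def logical_error_rate(p, p_th=0.01, d=3, a=0.1):
    return a * (p / p_th) ** ((d + 1) // 2)

for d in (3, 5, 7, 9):
    print(d, logical_error_rate(p=0.001, d=d))
# At p = 0.001 (ten times below the assumed threshold), each step up in
# distance buys roughly another factor of ten in suppression; above the
# threshold, adding qubits would make things worse instead.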

No experimental findings have proven fault-tolerant quantum computing due to the high number of qubits needed. The closest we've come to an experimental realisation is a 2022 study published in Nature, demonstrating fault-tolerant universal quantum gate operations.

We have seen teleportation one too many times in science fiction movies and TV shows. But are any researchers close to making it a reality? Well, yes and no. Quantum teleportation allows for transferring one quantum state from one physical location to another without physically moving the quantum state itself. It has a wide range of applications, from secure quantum communication to distributed quantum computing.

Quantum teleportation was first investigated in 1993 by scientists who were using it as a way to send and receive quantum information. It was experimentally realised only four years later, in 1997, by two independent research groups. The basic principle behind quantum teleportation is entanglement (when two particles remain connected even when separated by vast distances).
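For readers who want to see the protocol end to end, here is a small NumPy state-vector simulation of standard teleportation; the qubit ordering, the helper functions, and the example amplitudes (0.6, 0.8) are choices made for this sketch, not taken from any of the experiments described here.

import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)

def kron(*ms):
    out = np.array([[1.0]])
    for m in ms:
        out = np.kron(out, m)
    return out

def cnot(n, control, target):
    # n-qubit CNOT; qubit 0 is the most significant bit of the basis index.
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for col in range(dim):
        bits = [(col >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        row = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[row, col] = 1.0
    return U

# The (normally unknown) state to teleport, held by Alice on qubit 0.
psi = np.array([0.6, 0.8])

# Qubits 1 and 2 start in |00>; a Hadamard plus CNOT entangles them into the
# Bell pair shared by Alice (qubit 1) and Bob (qubit 2).
state = np.kron(psi, np.kron(np.array([1., 0.]), np.array([1., 0.])))
state = cnot(3, 1, 2) @ kron(I2, H, I2) @ state

# Alice interacts her qubit with her half of the pair and measures both.
state = kron(H, I2, I2) @ cnot(3, 0, 1) @ state
probs = state ** 2                                 # amplitudes are real here
outcome = np.random.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1    # the two classical bits sent to Bob

# Bob's qubit, conditioned on Alice's result, plus his X/Z corrections.
bob = state[[4 * m0 + 2 * m1, 4 * m0 + 2 * m1 + 1]]
bob = bob / np.linalg.norm(bob)
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob
print(np.allclose(np.abs(bob), np.abs(psi)))       # True: the state arrived intact

Running the sketch repeatedly picks different measurement outcomes, but the corrections always reconstruct the original amplitudes: only two classical bits travel, yet the full quantum state is recovered at the other end.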

Since 1997, many research groups have demonstrated the quantum teleportation of photons, atoms, and other quantum particles. It is the only real form of teleportation that exists.

In fact, the 2022 Nobel Prize in Physics was awarded to three scientists, Alain Aspect, John Clauser and Anton Zeilinger, for experiments with entangled photons. Their work demonstrated quantum entanglement and showed that it could be used to teleport quantum information from one photon to another.

Quantum teleportation is the cornerstone for building a quantum internet. This is because it enables the distribution of entanglement over long distances.

Another important application of quantum teleportation is enabling remote quantum operations, meaning that a quantum computation can be performed on a distant processor without transmitting the qubits. This could be useful for secure communication and for performing quantum computations in inaccessible or hostile environments.

Topology is a branch of mathematics concerned with studying the properties of shapes and spaces preserved when deformed. But what does it have to do with quantum computing?

In essence, topological quantum computing is a theoretical model that uses quasiparticles called anyons, which exist only in two-dimensional space, for encoding and manipulating qubits.

The method is founded on the topological properties of matter, and in the case of anyons, the world lines (the path that an object traces in four-dimensional spacetime) of these particles form braids. These braids then make up the logic gates which are the building blocks of computers.

No experimental studies demonstrate topological quantum computing. Image: FMNLab/Wikimedia Commons

Topological qubits are protected against local perturbations and can be manipulated with high precision, making them less susceptible to decoherence. Additionally, topological quantum computing is more resistant to errors due to its inherent redundancy and topological protection, making it a promising candidate for fault-tolerant quantum computing.

Most topological quantum computing research is theoretical; currently, no studies provide substantial experimental support for it. But developments in this area of research are vital for building practical and scalable quantum computers.

With a mix of theoretical and experimental demonstrations, quantum computing is still in the early stages of research and development. These developments can potentially revolutionise several industries and academic disciplines, including financial services, materials science, cryptography, and artificial intelligence.

Even though more study is needed, the outlook for quantum computing's future is promising. We may anticipate further developments and innovations in the years to come.

Continued here:
Quantum computing: The five biggest breakthroughs - Engineers Ireland

Accelerating the Accelerator: Scientist Speeds CERN’s HPC With … – Nvidia

Editor's note: This is part of a series profiling researchers advancing science with high performance computing.

Maria Girone is expanding the world's largest network of scientific computers with accelerated computing and AI.

Since 2002, the Ph.D. in particle physics has worked on a grid of systems across 170 sites in more than 40 countries that support CERN's Large Hadron Collider (LHC), itself poised for a major upgrade.

A high-luminosity version of the giant accelerator (HL-LHC) will produce 10x more proton collisions, spawning exabytes of data a year. That's an order of magnitude more than it generated in 2012, when two of its experiments uncovered the Higgs boson, a subatomic particle that validated scientists' understanding of the universe.

Girone loved science from her earliest days growing up in Southern Italy.

"In college, I wanted to learn about the fundamental forces that govern the universe, so I focused on physics," she said. "I was drawn to CERN because it's where people from different parts of the world work together with a common passion for science."

Tucked between Lake Geneva and the Jura mountains, the European Organization for Nuclear Research is a nexus for more than 12,000 physicists.

Its 27-kilometer ring is sometimes called the world's fastest racetrack because protons careen around it at 99.9999991% the speed of light. Its superconducting magnets operate near absolute zero, creating collisions that are briefly millions of times hotter than the sun.

In 2016, Girone was named CTO of CERN openlab, a group that gathers academic and industry researchers to accelerate innovation and tackle future computing challenges. It works closely with NVIDIA through its collaboration with E4 Computer Engineering, a specialist in HPC and AI based in Italy.

In one of her initial acts, Girone organized CERN openlab's first workshop on AI.

Industry participation was strong and enthusiastic about the technology. In their presentations, physicists explained the challenges ahead.

"By the end of the day we realized we were from two different worlds, but people were listening to each other, and enthusiastically coming up with proposals for what to do next," she said.

Today, the number of publications on applying AI across the whole data processing chain in high-energy physics is rising, Girone reports. The work attracts young researchers who see opportunities to solve complex problems with AI, she said.

Meanwhile, researchers are also porting physics software to GPU accelerators and using existing AI programs that run on GPUs.

"This wouldn't have happened so quickly without the support of NVIDIA working with our researchers to solve problems, answer questions and write articles," she said. "It's been extremely important to have people at NVIDIA who appreciate how science needs to evolve in tandem with technology, and how we can make use of acceleration with GPUs."

Energy efficiency is another priority for Girone's team.

"We're working on experiments on a number of projects like porting to lower power architectures, and we look forward to evaluating the next generation of lower power processors," she said.

To prepare for the HL-LHC, Girone, named head of CERN openlab in March, seeks new ways to accelerate science with machine learning and accelerated computing. Other tools are on the near and far horizons, too.

The group recently won funding to prototype an engine for building digital twins. It will provide services for physicists, as well as researchers in fields from astronomy to environmental science.

CERN also launched a collaboration among academic and industry researchers in quantum computing. The technology could advance science and lead to better quantum systems, too.

In another act of community-making, Girone was among four co-founders of a Swiss chapter of the Women in HPC group. It will help define specific actions to support women in every phase of their careers.

"I'm passionate about creating diverse teams where everyone feels they contribute and belong; it's not just a checkbox about numbers, you want to realize a feeling of belonging," she said.

Girone was among thousands of physicists who captured some of that spirit the day CERN announced the Higgs boson discovery.

She recalls getting up at 4 a.m. to queue for a seat in the main auditorium. It couldn't hold all the researchers and guests who arrived that day, but the joy of accomplishment followed her and others watching the event from a nearby hall.

"I knew the contribution I made," she said. "I was proud being among the many authors of the paper, and my parents and my kids felt proud, too."

Check out other profiles in this series:

More:
Accelerating the Accelerator: Scientist Speeds CERN's HPC With ... - Nvidia

Daily briefing: Quantum computers are all ‘terrible’ but researchers aren’t worried – Nature.com

Hello Nature readers, would you like to get this Briefing in your inbox free every day? Sign up here.

LIGO can detect gravitational waves that are generated when two black holes collide. Credit: The SXS Project

The Laser Interferometer Gravitational-Wave Observatory (LIGO) is back after a three-year hiatus and a multimillion-dollar upgrade. The first detection of gravitational waves (ripples in spacetime from colliding black holes and other cosmic cataclysms) was made at LIGO in 2015. Improvements to the detectors' sensitivity mean that LIGO could pick up signals of colliding black holes every few days, compared with once a week during its previous run. Scientists hope to detect the gravitational signal of a collapsing star before it manifests as a supernova explosion, as well as the continuous gravitational waves produced by a pulsar.

Nature | 6 min read

A wireless connection between the brain and the spinal cord allows a paralysed man to walk using his thoughts. Gert-Jan Oskam, whose legs were paralysed after a cycling accident, received a spinal implant in 2018 that generated robotic movement through pre-programmed electrical stimulation. He has now received head implants that detect brain activity and transmit the signal to a backpack computer, which decodes the information and activates the spinal pulse generator. This brain-spine interface gives Oskam full control over the stimulation, so he can walk and climb stairs. "The stimulation before was controlling me and now I am controlling stimulation by my thought," he says.

Nature | 4 min read

Reference: Nature paper

A component in their mothers' milk triggers a diet switch in baby mice's heart cells. Mouse embryos' heart-muscle cells burn sugar and lactic acid, but within 24 hours of birth, they shift to fatty acids as their fuel. After seven years of experiments, some of which involved milking mice by hand, researchers have now zeroed in on γ-linolenic acid as a key compound that drives the switch, and identified the receptor and genes involved. Human breast milk also contains γ-linolenic acid, and a precursor is found in baby formula, although it's unclear whether it has the same role in humans.

Nature | 4 min read

Go deeper with an analysis by heart development experts in the Nature News & Views article (6 min read, Nature paywall)

Reference: Nature paper

China's new data restrictions have strengthened privacy but are concerning researchers globally. "The signal has been very clear that China does not want its scientists to collaborate as freely as they used to with foreigners," says sociologist Joy Zhang. China's largest academic database has partially suspended foreign access, and institutions that send, for example, clinical-trial data abroad must now undergo a security assessment. Unlike the European Union's data protection regulation, the law has no exemption for scientists. The Chinese government has also proposed adding CRISPR gene editing, crop breeding and photovoltaics techniques to its list of technologies whose export is prohibited or restricted.

Nature | 7 min read

Japan's government is drawing fresh ire from researchers over plans to privatize the country's influential science council (SCJ). The government has already backed away from plans to reform the council's constitution and its process for appointing members. Observers predict that the council will ultimately be forced to forge a new relationship with the government: "I think the SCJ will have to find a way of existing as an organ within the government, while being independent," says policy researcher Hiroshi Nagano.

Nature | 5 min read

Even the scientists who have made quantum computers their life's work say they can't do anything useful yet. "They're all terrible," says physicist Winfried Hensinger of the five he owns (he's working on a new large-scale, modular type). But enthusiasts aren't concerned, and researchers say development is proceeding better than expected. The devices have the potential to accelerate drug discovery, crack encryption, speed up decision-making in financial transactions, improve machine learning, develop revolutionary materials and even address climate change, and that barely scratches the surface, researchers say. "The short-term hype is a bit high," says computational mathematician Steve Brierley, a founder of a quantum-computing firm. "But the long-term hype is nowhere near enough."

Nature | 10 min read

Across Africa, 43% of people still do not have electricity, and one of the causes is that highly indebted countries can't invest in research. Many countries find it impossible to pay off debts and protect public spending, which excludes them from expanding their scientific capabilities. Creditors should consider a debt-for-science swap, argues a Nature editorial: agree to waive some debt for countries that spend more on research.

Nature | 5 min read

The research system still tends to put power in the hands of just a handful of people. Universities should look to industry to learn how to better reflect how research is done today, argues a Nature editorial. (5 min read)

More:
Daily briefing: Quantum computers are all 'terrible' but researchers aren't worried - Nature.com

From self-driving cars to military surveillance: quantum computing can help secure the future of AI systems – The Conversation

Artificial intelligence algorithms are quickly becoming a part of everyday life. Many systems that require strong security are either already underpinned by machine learning or soon will be. These systems include facial recognition, banking, military targeting applications, and robots and autonomous vehicles, to name a few.

This raises an important question: how secure are these machine learning algorithms against malicious attacks?

In an article published today in Nature Machine Intelligence, my colleagues at the University of Melbourne and I discuss a potential solution to the vulnerability of machine learning models.

We propose that the integration of quantum computing in these models could yield new algorithms with strong resilience against adversarial attacks.

Machine learning algorithms can be remarkably accurate and efficient for many tasks. They are particularly useful for classifying and identifying image features. However, they're also highly vulnerable to data manipulation attacks, which can pose serious security risks.

Data manipulation attacks, which involve very subtle manipulation of image data, can be launched in several ways. An attack may be launched by mixing corrupt data into a training dataset used to train an algorithm, leading it to learn things it shouldn't.

Manipulated data can also be injected during the testing phase (after training is complete), in cases where the AI system continues to train the underlying algorithms while in use.

People can even carry out such attacks from the physical world. Someone could put a sticker on a stop sign to fool a self-driving car's AI into identifying it as a speed-limit sign. Or, on the front lines, troops might wear uniforms that can fool AI-based drones into identifying them as landscape features.
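To see how little manipulation is needed, here is a toy NumPy sketch of a gradient-based evasion attack in the spirit of the fast gradient sign method, run against a hand-made logistic-regression "classifier"; the weights, the input and the step size are invented for the illustration and are not the models studied in the article.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=64)      # weights of a toy classifier for an 8x8 "image"
b = 0.0
x = 0.1 * w                  # an input built so the model scores it as class 1
y = 1.0

def predict(x):
    return sigmoid(w @ x + b)

# Gradient of the cross-entropy loss with respect to the input pixels.
grad_x = (predict(x) - y) * w

# FGSM-style perturbation: one small step in the sign of that gradient.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean score:", float(predict(x)))             # close to 1
print("adversarial score:", float(predict(x_adv)))   # pushed toward 0
print("largest pixel change:", float(np.max(np.abs(x_adv - x))))  # just 0.25

The numbers differ from any real system, but the pattern carries over: a perturbation bounded to a small fraction of each pixel's range is enough to flip the model's decision, even though the change is barely visible.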

Read more: AI to Z: all the terms you need to know to keep up in the AI hype age

Either way, the consequences of data manipulation attacks can be severe. For example, if a self-driving car uses a machine learning algorithm that has been compromised, it may incorrectly predict there are no humans on the road when there are.

In our article, we describe how integrating quantum computing with machine learning could give rise to secure algorithms called quantum machine learning models.

These algorithms are carefully designed to exploit special quantum properties that would allow them to find specific patterns in image data that aren't easily manipulated. The result would be resilient algorithms that are safe against even powerful attacks. They also wouldn't require the expensive adversarial training currently used to teach algorithms how to resist such attacks.

Beyond this, quantum machine learning could allow for faster algorithmic training and more accuracy in learning features.

Today's classical computers work by storing and processing information as bits, or binary digits, the smallest unit of data a computer can process. In classical computers, which follow the laws of classical physics, bits are represented as binary numbers, specifically 0s and 1s.

Quantum computing, on the other hand, follows principles used in quantum physics. Information in quantum computers is stored and processed as qubits (quantum bits) which can exist as 0, 1, or a combination of both at once. A quantum system that exists in multiple states at once is said to be in a superposition state. Quantum computers can be used to design clever algorithms that exploit this property.

However, while there are significant potential benefits in using quantum computing to secure machine learning models, it could also be a double-edged sword.

On one hand, quantum machine learning models will provide critical security for many sensitive applications. On the other, quantum computers could be used to generate powerful adversarial attacks, capable of easily deceiving even state-of-the-art conventional machine learning models.

Moving forward, we'll need to seriously consider the best ways to protect our systems; an adversary with access to early quantum computers would pose a significant security threat.

The current evidence suggests we're still some years away from quantum machine learning becoming a reality, due to limitations in the current generation of quantum processors.

Today's quantum computers are relatively small (with fewer than 500 qubits) and their error rates are high. Errors may arise for several reasons, including imperfect fabrication of qubits, errors in the control circuitry, or loss of information (called quantum decoherence) through interaction with the environment.

Still, we've seen enormous progress in quantum hardware and software over the past few years. According to recent quantum hardware roadmaps, it's anticipated that quantum devices made in the coming years will have hundreds to thousands of qubits.

These devices should be able to run powerful quantum machine learning models to help protect a large range of industries that rely on machine learning and AI tools.

Worldwide, governments and private sectors alike are increasing their investment in quantum technologies.

This month the Australian government launched the National Quantum Strategy, aimed at growing the nation's quantum industry and commercialising quantum technologies. According to the CSIRO, Australia's quantum industry could be worth about A$2.2 billion by 2030.

Read more: Australia has a National Quantum Strategy. What does that mean?

Read more:
From self-driving cars to military surveillance: quantum computing can help secure the future of AI systems - The Conversation