Archive for the ‘Quantum Computer’ Category

Push-Button Entanglement: Scientists Achieve Reliable Quantum Entanglement Between Resting and Flying Qubits – The Quantum Insider

Insider Brief

PRESS RELEASE: Entanglement, Einstein's "spooky action at a distance," is today THE tool of quantum information science. It is the essential resource for quantum computers and is used to transmit quantum information in a future quantum network. But it is highly sensitive, and it is an enormous challenge to entangle resting quantum bits (qubits) with flying qubits in the form of photons at the push of a button.

However, a team led by Gerhard Rempe, Director at the Max Planck Institute of Quantum Optics in Garching, Germany, has now succeeded in doing exactly that with atoms connected in parallel. The atoms are sandwiched between two almost perfect mirrors. This setup guarantees reliable interaction with photons as flying qubits, a technique pioneered by Gerhard Rempe. Using optical tweezers, the team was able to individually control up to six atoms and entangle each with a photon.

Using a multiplexing technique, the scientists demonstrated atom-photon entanglement generation with almost 100 percent efficiency, a groundbreaking achievement for distributing entanglement over a quantum network. The work is published today in the journal Science.

Interfaces between resting qubits and flying qubits come into play whenever quantum information needs to be transmitted over long distances.

"One aspect is the communication of quantum information over long distances in a future quantum internet," explains Emanuele Distante, who supervised the experiment as a postdoctoral researcher and is now a researcher at ICFO in Barcelona. "The second aspect is the goal of connecting many qubits in a distributed network to form a more powerful quantum computer. Both applications require efficient interfaces between qubits at rest and qubits in motion." This is why many groups around the world are feverishly researching quantum mechanical light-matter interfaces.

Several different technical approaches are being pursued.

Gerhard Rempe and his team in Garching have been working for many years on a method that uses ultracold rubidium atoms trapped between two almost perfect mirrors as an optical resonator.

The focus is on a future quantum internet.

This approach has an inherent advantage because it allows a trapped atom to interact highly efficiently with a photon, which bounces back and forth between the two mirrors about twenty thousand times like a ping-pong ball. What's more, because one of the two mirrors is slightly more transparent than the other, the photon leaves in a precisely predetermined direction. This means that it is not lost, but can be reliably coupled into an optical fiber. If this photon is entangled with the atom using a specific protocol of laser pulses, this entanglement is maintained as the photon travels.

Multiplexing to overcome transmission losses

In 2012, the Garching team succeeded in entangling an atom in one resonator with a second atom in another resonator via photon radio through a 60-metre-long glass fiber. With the help of the transmitted photon, they formed an extended entangled quantum object from the two atoms. However, the photon must not get lost in the glass fiber along the way, and this is precisely the problem with a longer journey. The solution, at least for medium distances of a few kilometers, is called multiplexing. Multiplexing is a standard method used in classical information technology to make transmission more robust. Think of it as a radio link through a noisy area: If you send the radio signal along several parallel channels, the probability that it will reach the receiver via at least one channel increases.
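The benefit of parallel channels is easy to quantify: if a single photon survives the fiber with probability p, then with N independently entangled atom-photon pairs the chance that at least one photon arrives is 1 - (1 - p)^N. The short Python sketch below illustrates this with assumed, purely illustrative numbers; the actual loss figures of the Garching experiment are not quoted here.

```python
# Toy estimate of the multiplexing gain: probability that at least one of N
# parallel atom-photon channels delivers its photon, assuming each photon
# independently survives the fiber with probability p (illustrative numbers only).
def at_least_one_arrives(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

p = 0.10  # assumed single-channel survival probability
for n in (1, 2, 6, 20):
    print(f"N = {n:2d} channels -> success probability {at_least_one_arrives(p, n):.3f}")
```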

"Without multiplexing, even our current Internet would not work," explains Distante. "But transferring this method to quantum information systems is a particular challenge."

Multiplexing is not only interesting for more secure transmission over longer distances in a future quantum internet, but also for a local quantum network. One example is the distributed quantum computer, which consists of several smaller processors that are connected via short optical fibers. Its resting qubits could be entangled more reliably by multiplexing with flying qubits to form a distributed, more powerful quantum computer.

Laser tweezers for handling atoms

The challenge for the Garching team was to load several atoms into a resonator as resting qubits and to address them individually. Only if the position of the atoms is known can they be entangled in parallel with one photon each in order to achieve multiplexing. Hence, the team developed a technique for inserting optical tweezers into the narrow resonator.

"The mirrors are only about half a millimeter apart," explains Lukas Hartung, PhD student and first author of the paper in Science.

The optical tweezers consist of fine laser beams that are strong enough to capture an atom in their focus and move it precisely to the desired position. Using up to six such tweezers, the team was able to arrange a corresponding number of floating rubidium atoms in the cavity to form a neat qubit lattice. Since the atoms can easily remain in the trap for a minute, a little eternity in quantum physics, they could readily be entangled with one photon each. "This works almost one hundred percent of the time," says Distante, emphasizing the key advantage of this technique: the entanglement distribution works almost deterministically, i.e., at the push of a button.

Scalable to considerably more qubits

To achieve this, the team used a microscope objective positioned above the resonator with micrometer precision to focus the individual beams of the light tweezers into the narrow mirror cabinet. The tweezer beams are generated via so-called acousto-optical deflectors and can therefore be controlled individually. Precise adjustment of the laser tweezers in the optics requires a great deal of dexterity. "Mastering this challenge was the cornerstone for the success of the experiment," summarizes Stephan Welte, who helped develop the technology as part of the team and is now a researcher at ETH Zurich.

The current experiment gives hope that the method can be scaled up to considerably more qubits without losses: the team estimates that up to 200 atoms could be controlled in such a resonator. As these quantum bits can be controlled very well in the resonator, this would be a huge step forward. And since the interface even feeds one hundred percent of the entangled photons into the optical fiber, a network of many resonators, each with 200 atoms as resting qubits, would be conceivable. This would result in a powerful quantum computer. It is still a dream of the future. But with the laser tweezers, the Garching team now has a considerable part of this future firmly under control.

See more here:
Push-Button Entanglement: Scientists Achieve Reliable Quantum Entanglement Between Resting and Flying Qubits - The Quantum Insider

New quantum chip ‘can be produced at scale in standard fab’ – evertiq.com

The UK firm is a specialist in trapped-ion technology. It says that this approach is best suited to building a stable, high-performance quantum computer. However, until now, trapped ions have been difficult to scale as they are typically controlled by lasers.

Oxford Ionics claims it has now developed a way to eliminate the use of lasers with a technique that integrates everything needed to control trapped ions into a silicon chip. It says this technique has set industry records in both two-qubit gate and single-qubit gate performance (fidelity), without needing error correction.

The company will now build a scalable 256-qubit chip that can be manufactured on existing semiconductor production lines.

Dr Tom Harty, co-founder and CTO at Oxford Ionics, said in an official release: "When you build a quantum computer, performance is as important as size: increasing the number of qubits means nothing if they do not produce accurate results. We have now proven that our approach has delivered the highest level of performance in quantum computing to date, and is now at the level required to start unlocking the commercial impact of quantum computing. This is an incredibly exciting moment for our team, and for the positive impact that quantum computing will have on society at large."

Continue reading here:
New quantum chip 'can be produced at scale in standard fab' - evertiq.com

World's highest performing quantum chip unveiled by Oxford Ionics – Interesting Engineering

A new high-performance quantum chip built by Oxford Ionics, a spinoff from the University of Oxford, has broken previous records in the quantum computing domain. The achievement is commendable since error correction was not used during the process, and the chip can also be manufactured at existing semiconductor fabs. The company expects a useful quantum computer to be available to the world in the next three years.

Quantum computing is the next frontier of computing, where computers will be able to rapidly compute results from information that would take today's fastest supercomputers years to process.

Research institutes and private enterprises are now locked in a race to build the world's first usable quantum computer. However, the basic unit of data storage, the quantum bit (qubit), can only be worked with in highly specialized conditions. Researchers need to find simpler ways to process qubits to make the technology more mainstream.

Founded in 2019 by eminent Oxford scientists, Oxford Ionics uses a trapped ion approach to quantum computing. Compared to other approaches, trapped ions can help in precise measurements while staying in superposition for longer durations.

Controlling trapped ions for computation is typically achieved with lasers. However, Oxford Ionics has eliminated the use of lasers and developed an electronic way to achieve the same effect. They call it Electronic Qubit Control.

The team at Oxford Ionics has integrated everything needed to control the trapped ions onto a silicon chip. This chip can be manufactured at any existing semiconductor fabrication facility, making it possible to scale trapped-ion-based quantum computers.

In a press release sent to Interesting Engineering, Oxford Ionics confirmed that it achieved industry records in two-qubit and single-qubit gate performance.

"The industry's biggest players have taken different paths towards the goal of making quantum computing a reality," said Chris Ballance, co-founder and CEO of Oxford Ionics, in the statement.

"From the outset, we have taken a rocket ship approach, focusing on building robust technology by solving the really difficult challenges first. This has meant using novel physics and smart engineering to develop scalable, high-performance qubit chips that do not need error correction to get to useful applications and can be controlled on a classic semiconductor chip," Ballance added.

A major challenge in adopting quantum computers is how easily the system accumulates errors, given its fast computing rates. Researchers, therefore, use large numbers of qubits to build logical qubits that give more coherent answers and deploy error correction to the computations.

Oxford Ionics says its high-performance qubits eliminate the need for error correction, allowing commercial applications without the associated costs of error correction. The company is confident that, thanks to the scalability of its Electronic Qubit Control system, it can build a 256-qubit chip in the next few years.

"When you build a quantum computer, performance is as important as size: increasing the number of qubits means nothing if they do not produce accurate results," said Tom Harty, CTO at Oxford Ionics. "We have now proven that our approach has delivered the highest level of performance in quantum computing to date, and is now at the level required to start unlocking the commercial impact of quantum computing."

"This is an incredibly exciting moment for our team, and for the positive impact that quantum computing will have on society at large," Harty concluded.


Continue reading here:
World's highest performing quantum chip unveiled by Oxford Ionics - Interesting Engineering

Quantum Computing Accelerates Drug Discovery from Years to Weeks – The Quantum Insider

"We're extracting valuable, actionable results, decreasing our cost and time in delivering value for our customers," stated Bill Shipman, CTO and co-founder of POLARISqb, setting the stage for a revolutionary approach to drug discovery using quantum computing at the D-Wave Qubits 2024 event.

POLARISqb, a biotech company founded in 2020, is leveraging D-Wave's quantum annealing technology to dramatically speed up the drug discovery process. Their innovative approach compresses what traditionally takes years into mere weeks, potentially transforming the pharmaceutical industry.

Shipman explained the current challenges: "Drug discovery and drug design, as I think we're all aware from the media, is a long and expensive process." He noted that the conventional method takes on average three years and $4 million and is limited by the chemicals available in labs or catalogs.

POLARISqb's quantum-powered solution, however, opens up vast new possibilities.

"We're able to look at billions of molecules, in contrast to the current industry process, which is looking at thousands of molecules," Shipman revealed. This exponential increase in the chemical space explored could lead to more effective drugs being discovered faster.

Maurice Benson, Principal Software Engineer at POLARISqb, delved into the technical aspects of their approach. They use fragment-based drug design, breaking molecules into pieces and recombining them in novel ways.

"We've turned this into a constraint satisfaction problem," Benson explained, which they then map onto D-Wave's quantum annealer.
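POLARISqb has not published its exact formulation, but the general pattern of mapping a fragment-selection constraint problem onto an annealer can be sketched with D-Wave's open-source dimod library: binary variables mark which fragment occupies which site, a quadratic penalty enforces "exactly one fragment per site," and linear biases reward higher-scoring fragments. The fragment names and scores below are invented for illustration.

```python
# Hypothetical fragment-selection QUBO, sketched with D-Wave's open-source
# dimod library.  Variables (site, frag) = 1 mean "use this fragment at this
# site"; a quadratic penalty enforces exactly one fragment per site, and
# linear biases reward higher (made-up) pharmacophore scores.
import dimod

scores = {                      # illustrative scores only
    ("site1", "fragA"): 0.9, ("site1", "fragB"): 0.4,
    ("site2", "fragC"): 0.7, ("site2", "fragD"): 0.8,
}
penalty = 2.0                    # strength of the one-fragment-per-site constraint

bqm = dimod.BinaryQuadraticModel("BINARY")
for variable, score in scores.items():
    bqm.add_variable(variable, -score)        # maximizing score -> negative bias

for site in {s for s, _ in scores}:
    frags = [v for v in scores if v[0] == site]
    # (sum_i x_i - 1)^2 penalty, expanded into linear + quadratic terms
    for v in frags:
        bqm.add_linear(v, -penalty)
    for i, u in enumerate(frags):
        for v in frags[i + 1:]:
            bqm.add_quadratic(u, v, 2 * penalty)

# Small enough to solve exactly here; on hardware one would use a D-Wave sampler.
best = dimod.ExactSolver().sample(bqm).first
print("selected fragments:", [v for v, val in best.sample.items() if val == 1])
```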

The quantum advantage becomes clear when Benson compares their results to classical methods: "What we found is that, even given the same amount of time, D-Wave's annealer constantly gave us molecules that had a higher pharmacophore score." This indicates that the quantum-derived molecules better fit the desired constraints and optimization criteria.

Moreover, the quantum approach yields more diverse results.

"The D-Wave annealer used fifty more fragments than the classical search," said Benson. This diversity allows researchers to explore a wider chemical space, potentially uncovering unexpected and valuable drug candidates.

Shipman emphasized the real-world impact of their technology: "We are producing results that are relevant for our customers and leading to repeat customers." In an industry where physical validation is crucial, this repeat business signals that POLARISqb's quantum-derived molecules are showing promise in actual laboratory tests.

The potential of this quantum-powered approach extends beyond just speed. It could enable the exploration of previously inaccessible chemical spaces, leading to entirely new classes of drugs. As computing power increases, the advantages may become even more pronounced.

Shipman concluded with a call to action for the industry: "Quantum utility is available today. Please don't get lost in the definitions; go with technology that drives measurable outcomes." With POLARISqb demonstrating tangible results, it's clear that quantum computing is no longer a future promise for drug discovery; it's making an impact now.

Read this article:
Quantum Computing Accelerates Drug Discovery from Years to Weeks - The Quantum Insider

Simulating the universe's most extreme environments with utility-scale quantum computation – IBM

The Standard Model of Particle Physics encapsulates nearly everything we know about the tiny quantum-scale particles that make up our everyday world. It is a remarkable achievement, but it's also incomplete, rife with unanswered questions. To fill the gaps in our knowledge, and discover new laws of physics beyond the Standard Model, we must study the exotic phenomena and states of matter that don't exist in our everyday world. These include the high-energy collisions of particles and nuclei that take place in the fiery heart of stars, in cosmic ray events occurring all across Earth's upper atmosphere, and in particle accelerators like the Large Hadron Collider (LHC) at CERN or the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.

Computer simulations of fundamental physics processes play an essential role in this research, but many important questions require simulations that are much too complex for even the most powerful classical supercomputers. Now that utility-scale quantum computers have demonstrated the ability to simulate quantum systems at a scale beyond exact or brute force classical methods, researchers are exploring how these devices might help us run simulations and answer scientific questions that are inaccessible to classical computation. In two recent papers, published in PRX Quantum (PRX) [1] and Physical Review D (PRD) [2], our research group did just that, developing scalable techniques for simulating the real-time dynamics of quantum-scale particles using the IBM fleet of utility-scale, superconducting quantum computers.

The techniques we've developed could very well serve as the building blocks for future quantum computer simulations that are completely inaccessible to both exact and even approximate classical methods: simulations that would demonstrate what we call quantum advantage over all known classical techniques. Our results provide clear evidence that such simulations are potentially within reach of the quantum hardware we have today.

We are a team of researchers from the University of Washington and Lawrence Berkeley National Laboratory who have spent years investigating the use of quantum hardware for simulations of quantum chromodynamics (QCD).

This work was supported, in part, by the U.S. Department of Energy grant DE-FG02-97ER-41014 (Farrell), by U.S. Department of Energy, Office of Science, Office of Nuclear Physics, InQubator for Quantum Simulation (IQuS) under Award Number DOE (NP) Award DE-SC0020970 via the program on Quantum Horizons: QIS Research and Innovation for Nuclear Science (Anthony Ciavarella, Roland Farrell, Martin Savage), the Quantum Science Center (QSC) which is a National Quantum Information Science Research Center of the U.S. Department of Energy (DOE) (Marc Illa), and by the U.S. Department of Energy (DOE), Office of Science under contract DE-AC02-05CH11231, through Quantum Information Science Enabled Discovery (QuantISED) for High Energy Physics (KA2401032) (Anthony Ciavarella).

This work is also supported, in part, through the Department of Physics and the College of Arts and Sciences at the University of Washington.

This research used resources of the Oak Ridge Leadership Computing Facility (OLCF), which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

We acknowledge the use of IBM Quantum services for this work.

This work was enabled, in part, by the use of advanced computational, storage and networking infrastructure provided by the Hyak supercomputer system at the University of Washington.

This research was done using services provided by the OSG Consortium, which is supported by the National Science Foundation awards #2030508 and #1836650.

One prominent example of these challenges comes from the field of collider physics. Physicists use colliders like the LHC to smash beams of particles and atomic nuclei into each other at extraordinarily high energies, recreating the kinds of collisions that take place in stars and cosmic ray events. Collider experiments give physicists the ability to observe how matter behaves in the universes most extreme environments. The data we collect from these experiments help us tighten the constraints of the Standard Model and can also help us discover new physics beyond the Standard Model.

Let's say we want to use the data from collider experiments to identify new physics theories. To do this, we must be able to accurately predict the way known physics theories like QCD contribute to the exotic physics processes that occur in collider runs, and we must be able to quantify the uncertainties of the corresponding theoretical calculations. Performing these tasks requires detailed simulations of systems of fundamental particles. These simulations are impossible to achieve with classical computation alone, but should be well within reach for a sufficiently capable quantum computer.

Quantum computing hardware is making rapid progress toward the day when it will be capable of simulating complex systems of fundamental particles, but we can't just sit back and wait for quantum technology to reach maturity. When that day comes, we'll need to be ready with scalable techniques for executing each step of the simulation process.

The research community is already beginning to make significant progress in this field, with most efforts today focused on simulations of simplified, low-dimensional models of QCD and other fundamental physics theories. This is exactly what our research group has been working on, with our experiments primarily centering on simulations of the widely used Schwinger model, a one-dimensional model of QCD that describes how electrons and positrons behave and interact through the exchange of photons.

In a paper submitted to arXiv in 2023, and published in PRX Quantum this past April, we used the Schwinger model to demonstrate the first essential step in building future simulations of high-energy collisions of matter: preparing a simulation of the quantum vacuum state in which particle collisions would occur. Our follow-up to that paper, published in PRD in June, shows techniques for performing the next step in this process: preparing a beam of particles in the quantum vacuum.

More specifically, that follow-up paper shows how to prepare hadron wavepackets in a 1-dimensional quantum simulation and evolve them forward in time. In this context, you can think of a hadron as a composite particle made up of a positron and an electron bound together by something analogous to the strong force that binds neutrons and protons together in nuclei.

Due to the uncertainty principle, it is impossible to precisely know both the position and momentum of a particle. The best you can do is to create a wavepacket, a region of space over which a particle will appear with some probability and with a range of different momenta. The uncertainty in momentum causes the wavepacket to spread out or propagate across some area of space.
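As a generic illustration of this spreading (ordinary single-particle quantum mechanics with hbar = m = 1, not the lattice hadron of the paper), a Gaussian wavepacket evolved as a free particle broadens over time because its different momentum components travel at different speeds:

```python
# Free-particle Gaussian wavepacket (hbar = m = 1): evolve in momentum space,
# where each plane wave just picks up a phase exp(-i k^2 t / 2), then measure
# how the position-space width grows.  Generic quantum mechanics, for illustration.
import numpy as np

x = np.linspace(-50, 50, 2048)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

sigma0 = 1.0
psi0 = np.exp(-x**2 / (4 * sigma0**2))
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalize

def width_at(t):
    psi_k = np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2)   # free evolution
    psi = np.fft.ifft(psi_k)
    prob = np.abs(psi)**2 * dx
    mean = np.sum(x * prob)
    return np.sqrt(np.sum((x - mean)**2 * prob))

for t in (0.0, 2.0, 5.0, 10.0):
    print(f"t = {t:4.1f}   position spread = {width_at(t):.2f}")   # spread grows with t
```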

By evolving our hadron wavepacket forward in time, we effectively create a simulation of pulses or beams of hadrons moving in this 1-dimensional system, just like the beams of particles we smash into each other in particle colliders. The wavepacket we create has an equal probability of propagating in any direction. However, since we're working in 1-dimensional space, essentially a straight line, it's more accurate to say the particle is equally likely to propagate to the left or to the right.

We've established that our primary goal is to simulate the dynamics of a composite hadron particle moving through the quantum vacuum in one-dimensional space. To achieve this, we'll need to prepare an initial state with the hadron situated on a simplified model of space made up of discrete points, also known as a lattice. Then, we'll have to perform what we call time evolution so we can see the hadron move around and study its dynamics.

Our first step is to determine the quantum circuits we'll need to run on the quantum computer to prepare this initial state. To do this, we developed a new state preparation algorithm, Scalable Circuits ADAPT-VQE. This algorithm uses the popular ADAPT-VQE algorithm as a subroutine, and is able to find circuits for preparing the state with the lowest energy (i.e., the ground state) as well as a hadron wavepacket state. A key feature of this technique is the use of classical computers to determine circuit blocks for preparing a desired state on a small lattice that can be systematically scaled up to prepare the desired state on a much larger lattice. These scaled circuits cannot be executed exactly on a classical computer and are instead executed on a quantum computer.
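The ADAPT-VQE subroutine mentioned above grows an ansatz one operator at a time, always appending the pool operator with the largest energy gradient and then re-optimizing all angles. A toy NumPy version of that inner loop, run on a stand-in two-qubit Hamiltonian rather than the Schwinger model (both the Hamiltonian and the operator pool below are illustrative assumptions), might look like this:

```python
# Toy ADAPT-VQE loop in NumPy: grow the ansatz one operator at a time,
# choosing the pool operator with the largest energy gradient, then
# re-optimize all angles.  Stand-in 2-qubit Hamiltonian, not the Schwinger model.
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

# Stand-in Hamiltonian: 2-site transverse-field Ising model
H = -np.kron(Z, Z) - 0.5 * (np.kron(X, I2) + np.kron(I2, X))

# Pool of anti-Hermitian generators (i times Pauli strings)
pool = [1j * np.kron(Y, I2), 1j * np.kron(I2, Y),
        1j * np.kron(Y, X), 1j * np.kron(X, Y),
        1j * np.kron(Y, Z), 1j * np.kron(Z, Y)]

ref = np.zeros(4, dtype=complex)
ref[0] = 1.0                      # |00> reference state

def prepare(thetas, ops):
    psi = ref.copy()
    for theta, A in zip(thetas, ops):
        psi = expm(theta * A) @ psi
    return psi

def energy(thetas, ops):
    psi = prepare(thetas, ops)
    return float(np.real(psi.conj() @ H @ psi))

ops, thetas = [], []
for step in range(6):
    psi = prepare(thetas, ops)
    # Gradient for appending exp(theta*A) at theta = 0 is <psi|[H, A]|psi>
    grads = [abs(np.real(psi.conj() @ (H @ A - A @ H) @ psi)) for A in pool]
    if max(grads) < 1e-6:
        break                      # all gradients vanish: converged
    ops.append(pool[int(np.argmax(grads))])
    thetas.append(0.0)
    thetas = list(minimize(lambda t: energy(t, ops), thetas).x)

print(f"ADAPT-VQE energy:   {energy(thetas, ops):.6f}")
print(f"exact ground state: {np.linalg.eigvalsh(H)[0]:.6f}")
```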

Once we have the initial state, our next step is to apply the time evolution operator. This is a mathematical tool that allows us to take a quantum state as it exists at one point in time and evolve it into the state that corresponds to some future point in time. In our experiment, we use conventional Trotterized time evolution, where you split up the different mathematical terms of the Hamiltonian, the energy equation that describes the quantum system, and convert each term into quantum gates in your circuit.
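In first-order Trotterization, exp(-iHt) for H = H1 + H2 + ... is approximated by alternating short evolutions under each term, with the error shrinking as the number of steps grows. A minimal NumPy check of that idea on two small non-commuting terms (purely illustrative, not the paper's Hamiltonian):

```python
# First-order Trotterization: approximate exp(-i(H1+H2)t) by n repetitions of
# exp(-i H1 t/n) exp(-i H2 t/n), and watch the error shrink as n grows.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

H1 = np.kron(Z, Z)                 # two non-commuting terms of a toy Hamiltonian
H2 = 0.7 * (np.kron(X, np.eye(2)) + np.kron(np.eye(2), X))
t = 1.0

exact = expm(-1j * (H1 + H2) * t)
for n in (1, 4, 16, 64):
    step = expm(-1j * H1 * t / n) @ expm(-1j * H2 * t / n)
    trotter = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(trotter - exact, 2)
    print(f"{n:3d} Trotter steps -> operator-norm error {err:.2e}")
```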

This, however, is where we run into a problem. Even the simplified Schwinger model states that interactions between individual matter particles in our system are all-to-all. In other words, every matter particle in the system must interact with every other particle in the system, meaning every qubit in our circuit needs to interact with every other qubit.

This poses a few challenges. For one thing, an all-to-all interaction causes the number of quantum gates required for time evolution to scale quadratically with the simulation volume, making these circuits much too large to run on current quantum hardware. Another key challenge is that, as of today, even the most advanced IBM Quantum processor allows only for native interactions between neighboring qubits: so, for example, the fifth qubit in an IBM Quantum Heron processor can technically interact only with qubits 4 and 6. While there are special techniques that let us get around this linear connectivity and simulate longer-range interactions, doing this in an all-to-all setting would make the required two-qubit gate depth also scale quadratically in the simulation volume.

To get around this problem, we used the emergent phenomenon of confinement, one of the features that the Schwinger model shares with QCD. Confinement tells us that interactions are significant only over distances around the size of the hadron. This motivated our use of approximate interactions, where each qubit needs to interact only with at most its next-to-next-to-nearest neighbor qubits; e.g., qubit 5 needs to interact only with qubits 2, 3, 4, 6, and 7. We established a formalism for constructing a systematically improvable interaction and turned that interaction into a sequence of gates that allowed us to perform the time evolution.
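To see how much that truncation buys, one can simply count two-qubit coupling terms: an all-to-all interaction on N qubits has N(N-1)/2 pairs, while keeping only pairs separated by at most three sites leaves roughly 3N. A tiny counting sketch for the 112 qubits used in the experiment (the counting itself is generic):

```python
# Count two-qubit interaction terms for an all-to-all coupling versus a
# coupling truncated at next-to-next-to-nearest neighbors (|i - j| <= 3),
# as motivated by confinement.  Illustrative counting only.
def coupling_pairs(n_sites, max_range=None):
    return [(i, j) for i in range(n_sites) for j in range(i + 1, n_sites)
            if max_range is None or j - i <= max_range]

n = 112  # number of qubits used in the experiment
print("all-to-all pairs:", len(coupling_pairs(n)))               # n(n-1)/2 = 6216
print("range <= 3 pairs:", len(coupling_pairs(n, max_range=3)))  # roughly 3n = 330
```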

Once the time evolution is complete, all we need to do is measure some observable in our final state. In particular, we wanted to see the way our simulated hadron particle propagates on the lattice, so we measured the particle density. At the beginning of the simulation (t=0), the hadron is localized in a specific area. As it evolves forward in time, it propagates with a spread that is bounded by the speed of light (a 45° angle).

This figure depicts the results of our simulation of hadron dynamics. The time direction is charted on the left-hand Y-axis, and the points on the lattice (qubits 0 to 111) are charted on the X-axis. The colors correspond to the particle density, with higher values (lighter colors) corresponding to a higher probability of finding a particle at that location. The left half of this figure shows the results of error-free approximate classical simulation methods, while the right half shows the results obtained from performing simulations on real quantum hardware (specifically, the ibm_torino system). In an error-free simulation, the left and right halves would be mirror images of each other; deviations from this are due to device errors.

Keeping in mind that this is a simplified simulation in one spatial dimension, we can say this behavior mimics what we would expect to see from a hadron propagating through the vacuum, such as the hadrons produced by a device like the Large Hadron Collider.

Utility-scale IBM quantum hardware played an essential role in enabling our research. Our experiment used 112 qubits on the IBM Quantum Heron processor ibm_torino to run circuits that are impossible to simulate with brute force classical methods. However, equally important was the Qiskit software stack, which provided a number of convenient and powerful tools that were absolutely critical in our simulation experiments.

Quantum hardware is extremely susceptible to errors caused by noise in the surrounding environment. In the future, IBM hopes to develop quantum error correction, a capability that allows quantum computers to correct errors as they appear during quantum computations. For now, however, that capability remains out of reach.

Instead, we rely on quantum error suppression methods to anticipate and avoid the effects of noise, and we use quantum error mitigation post-processing techniques to analyze the quantum computer's noisy outputs and deduce estimates of the noise-free results.

In the past, leveraging these techniques for quantum computation could be enormously difficult, often requiring researchers to hand-code error suppression and error mitigation solutions specifically tailored to both the experiments they wanted to run and the device they wanted to use. Fortunately, the recent advent of software tools like the Qiskit Runtime primitives has made it much easier to get meaningful results out of quantum hardware while taking advantage of built-in error handling capabilities.

In particular, we relied heavily on the Qiskit Runtime Sampler primitive, which calculates the probabilities or quasi-probabilities of bitstrings being output by quantum circuits, and makes it easy to compute physical observables like the particle density.
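In practice, an observable like the particle density reduces to per-qubit statistics of the measured bitstrings. A generic sketch of that post-processing step, starting from a dictionary of measured bitstring counts (the counts below are invented for illustration; depending on the Sampler version one may instead get quasi-probability distributions, which can be handled the same way with weights in place of counts):

```python
# Estimate a per-qubit occupation from bitstring counts, the kind of
# post-processing used to turn measured bitstrings into a "particle density".
# The counts dictionary here is invented for illustration.
def occupation_per_qubit(counts):
    total = sum(counts.values())
    n_qubits = len(next(iter(counts)))
    occ = [0.0] * n_qubits
    for bitstring, c in counts.items():
        for q, bit in enumerate(reversed(bitstring)):   # qubit 0 = rightmost bit
            occ[q] += int(bit) * c / total
    return occ

counts = {"0000": 700, "0010": 200, "0110": 100}
print(occupation_per_qubit(counts))   # -> [0.0, 0.3, 0.1, 0.0]
```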

Sampler not only simplified the process of collecting these outputs, but also improved their fidelity by automatically inserting an error suppression technique known as dynamical decoupling into our circuits and by automatically applying quantum readout error mitigation to our results.

Obtaining accurate, error-mitigated results required running many variants of our circuits. In total, our experiment involved roughly 154 million "shots" on quantum hardware, and we couldn't have achieved this by running our circuits one by one. Instead, we used Qiskit execution modes, particularly Session mode, to submit circuits to quantum hardware in efficient multi-job workloads. The sequential execution of many circuits meant that the calibration and noise on the device were correlated between runs, facilitating our error mitigation methods.

Sending circuits to IBM Quantum hardware while taking advantage of the Sampler primitive and Session mode required just a few lines of code, truly as simple as:
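A minimal sketch of that pattern, using the qiskit-ibm-runtime Session and Sampler interfaces (exact constructor arguments vary between library versions, and the circuits list is assumed to hold circuits already transpiled for the backend):

```python
# Representative sketch of running circuits through the Sampler primitive inside
# a Session.  Exact arguments vary across qiskit-ibm-runtime versions; `circuits`
# is assumed to be a list of QuantumCircuits already transpiled for the backend.
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Sampler

service = QiskitRuntimeService()

with Session(service=service, backend="ibm_torino") as session:
    sampler = Sampler(session=session)
    job = sampler.run(circuits)
    result = job.result()
```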

Our team did several runs both with and without Qiskit Runtime's built-in error mitigation, and found that the methods offered natively via the Sampler primitive significantly improved the quality and accuracy of our results. In addition, the flexibility of Session and Sampler allowed us to add additional, custom layers of error mitigation like Pauli twirling and operator decoherence renormalization. The combination of all these error mitigation techniques enabled us to successfully perform a quantum simulation with 13,858 CNOTs and a CNOT depth of 370!
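Pauli twirling, one of those custom layers, can be illustrated in a few lines: sandwiching a CNOT between a random two-qubit Pauli and its image under CNOT conjugation leaves the ideal gate unchanged while averaging coherent errors into stochastic Pauli noise. The NumPy sketch below is purely illustrative, not the experiment's implementation:

```python
# Minimal NumPy illustration of Pauli twirling a CNOT: sandwich the gate
# between a random two-qubit Pauli P and its conjugate Q = CNOT P CNOT, so the
# ideal operation is unchanged while coherent errors average into Pauli noise.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

rng = np.random.default_rng(0)
for _ in range(5):
    a, b = rng.integers(0, 4, size=2)
    P = np.kron(paulis[a], paulis[b])   # random Pauli applied before the CNOT
    Q = CNOT @ P @ CNOT                 # its image under CNOT conjugation (a signed Pauli)
    twirled = Q @ CNOT @ P              # circuit: P, then CNOT, then Q
    print("equals the bare CNOT:", np.allclose(twirled, CNOT))
```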

What is CNOT depth? CNOT depth is an important measure of the complexity of quantum circuits. A CNOT gate, or controlled NOT gate, is a quantum logic gate that takes two qubits as input, and performs a NOT operation that flips the value of the second (target) qubit depending on the value of the first (control) qubit. CNOT gates are an important building block in many quantum algorithms and are the noisiest gate on current quantum computers. CNOT depth of a quantum simulation refers to the number of layers of CNOT gates across the whole device that have to be executed (each layer can have multiple CNOT gates acting on different qubits, but they can be applied at the same time, i.e., in parallel). Without the use of quantum error handling techniques like those offered by the Qiskit software stack, reaching a CNOT depth of 370 would be impossible.
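For a concrete sense of how such numbers are obtained, Qiskit can report a circuit's CNOT count and CNOT depth directly. The sketch below uses a toy four-qubit circuit, not the experiment's circuits, and assumes a recent Qiskit version in which depth() accepts a filter function:

```python
# Count total CNOTs and CNOT depth (layers of CNOTs that cannot run in
# parallel) for a toy circuit.  Assumes a Qiskit version whose depth()
# accepts a filter function; the circuit itself is illustrative.
from qiskit import QuantumCircuit

qc = QuantumCircuit(4)
qc.h(0)
qc.cx(0, 1)
qc.cx(2, 3)        # acts on different qubits, so same CNOT layer as cx(0, 1)
qc.cx(1, 2)        # must wait for both previous CNOTs -> a second layer

total_cnots = qc.count_ops().get("cx", 0)
cnot_depth = qc.depth(lambda instr: instr.operation.name == "cx")
print(f"{total_cnots} CNOTs, CNOT depth {cnot_depth}")   # -> 3 CNOTs, CNOT depth 2
```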

Over the course of two research papers, we have demonstrated techniques for using utility-scale quantum hardware to simulate the quantum vacuum, and to simulate the dynamics of a beam of particles on top of that vacuum. Our research group is already hard at work on the logical next step in this progression: simulating collisions between two particle beams.

If we can simulate these collisions at high enough energy, we believe we can demonstrate the long-sought goal of quantum computational advantage. Today, no classical computing method is capable of accurately simulating the collision of two particles at the energies we've set our sights on, even using simplified physics theories like the Schwinger model. However, our research so far indicates that this task could be within reach for near-term utility-scale quantum hardware. This means that, even without achieving full quantum error correction, we may soon be able to use quantum hardware to build simulations of systems of fundamental particles that were previously impossible, and use those simulations to seek answers to some of the most enduring mysteries in all of physics.

At the same time, IBM hasn't given up hope for quantum error correction, and neither have we. Indeed, we've poured tremendous effort into ensuring that the techniques we've developed in our research are scalable, such that we can transition them from the noisy, utility-scale processors we have today to the hypothetical error-corrected processors of the future. If achieved, the ability to perform error correction in quantum computations will make quantum computers considerably more powerful, and open the door to rich, three-dimensional simulations of incredibly complex physics processes. With those capabilities at our fingertips, who knows what we'll discover?

More:
Simulating the universe's most extreme environments with utility-scale quantum computation - IBM