Archive for the ‘Quantum Computer’ Category

New quantum computer smashes ‘quantum supremacy’ record by a factor of 100 and it consumes 30,000 times less power – Livescience.com

A new quantum computer has broken a world record in "quantum supremacy," topping the benchmark performance set by Google's Sycamore machine by a factor of 100.

Using the new 56-qubit H2-1 computer, scientists at quantum computing company Quantinuum ran various experiments to benchmark the machine's performance levels and the quality of the qubits used. They published their results June 4 in a study uploaded to the preprint database arXiv. The study has not been peer-reviewed yet.

To demonstrate the potential of the quantum computer, the scientists at Quantinuum used a well-known algorithm to measure how noisy, or error-prone, qubits were.

Quantum computers can perform many calculations in parallel thanks to the laws of quantum mechanics and to entanglement between qubits, which links the states of different qubits to one another. Classical computers, by contrast, work only in sequence.

Adding more qubits to a system also scales up a machine's power exponentially; scientists predict that quantum computers will one day perform, in seconds, complex calculations that a classical supercomputer would take thousands of years to solve.

The point where quantum computers overtake classical ones is known as "quantum supremacy," but achieving this milestone in a practical way would need a quantum computer with millions of qubits. The largest machine today has only about 1,000 qubits.

The reason so many qubits would be needed for "quantum supremacy" is that qubits are inherently error-prone, and large numbers of them are required to correct those errors. That's why many researchers are now focusing on building more reliable qubits rather than simply adding more qubits to machines.

The team tested the fidelity of H2-1's output using what's known as the linear cross entropy benchmark (XEB). XEB spits out results between 0 (none of the output is error-free) and 1 (completely error-free), Quantinuum representatives said in a statement.
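For intuition, the linear XEB estimator can be sketched in a few lines: the device's sampled bitstrings are scored against the probabilities an ideal (noiseless, classically simulated) circuit assigns them, via F = 2^n · ⟨p_ideal(x)⟩ − 1. The distribution and samples below are invented for illustration; this is not Quantinuum's or Google's benchmarking code.

```python
import numpy as np

def linear_xeb(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmark: F = 2**n * <p_ideal(x)> - 1.

    `ideal_probs` maps each bitstring to its ideal (noiseless) probability;
    `samples` are the bitstrings actually measured on the device.
    Uniformly random output scores ~0; low-noise output scores ~1.
    """
    mean_p = np.mean([ideal_probs[x] for x in samples])
    return 2 ** n_qubits * mean_p - 1

# Toy 2-qubit example with an invented ideal distribution.
ideal = {"00": 0.7, "01": 0.1, "10": 0.1, "11": 0.1}

noisy = ["00", "01", "10", "11"]        # device output is pure noise
good = ["00"] * 7 + ["01", "10", "11"]  # device reproduces the ideal statistics
print(linear_xeb(ideal, noisy, 2))  # ~0.0
print(linear_xeb(ideal, good, 2))   # exceeds 1 for this toy distribution;
                                    # averages ~1 for the Porter-Thomas
                                    # statistics of real random circuits
```

Noise pushes the score toward 0 because errors make the device sample bitstrings the ideal circuit considers unlikely.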

Scientists at Google first tested the company's Sycamore quantum computer using XEB in 2019, demonstrating that it could complete a calculation in 200 seconds that would have taken the most powerful supercomputer at the time 10,000 years to finish. They registered an XEB result of approximately 0.002 with the 53 superconducting qubits built into Sycamore.

But in the new study, Quantinuum scientists, in partnership with JPMorgan, Caltech and Argonne National Laboratory, achieved an XEB score of approximately 0.35. This means the H2 quantum computer can produce error-free results 35% of the time.

"We are entirely focused on the path to universal fault tolerant quantum computers," Ilyas Khan, chief product officer at Quantinuum and founder of Cambridge Quantum Computing, said in the statement. "This objective has not changed, but what has changed in the past few months is clear evidence of the advances that have been made possible due to the work and the investment that has been made over many, many years."

Quantinuum previously collaborated with Microsoft to demonstrate "logical qubits" that had an error rate 800 times lower than physical qubits.

In the study, published in April, scientists demonstrated they could run experiments with the logical qubits at an error rate of just 1 in 100,000, far better than the 1-in-100 error rate of physical qubits, Microsoft representatives said.

"These results show that whilst the full benefits of fault tolerant quantum computers have not changed in nature, they may be reachable earlier than was originally expected," added Khan.

Quantum Computing's Next Frontier, A Conversation with Jeremy O'Brien – The Quantum Insider

Jeremy O'Brien, CEO of PsiQuantum, is developing the world's first utility-scale, fault-tolerant quantum computer. At the Third Annual Commercialising Quantum Global event hosted by The Economist, O'Brien discussed quantum computing, detailing the path PsiQuantum is taking and the exciting potential of their technology.

O'Brien explained that fault-tolerant quantum computers are essential because errors are inevitable in quantum systems.

"Things go wrong in a regular computer as well, but they go wrong at a rate that's so low that we typically don't have to worry about error correction," O'Brien said.

In quantum computing, however, the error rates are higher, necessitating robust error correction methods to ensure useful computations. PsiQuantum's approach diverges from many in the field by focusing on building a large-scale, fault-tolerant system from the outset.

O'Brien underlined this: "We took that approach because it was our belief that all of the utility, all of the commercial value, would come with those large-scale systems with error correction." He added: "There will be no utility in those small noisy systems that we have back then and indeed today."

PsiQuantum is leveraging photonics and the existing semiconductor manufacturing industry to achieve their ambitious goals. O'Brien described their strategy: "We spent 20 years in the university research environment trying to figure out if there was a path whereby we could use the semiconductor industry and the computer systems industry in full to make a quantum computer." He noted that their conviction is based on the extraordinary manufacturing capabilities developed over decades, which produce a trillion chips a year, each containing billions of components.

The company's first major project is the development of a fault-tolerant quantum computer in Brisbane, scheduled for completion in 2027. O'Brien detailed the setup: "It's a system with of order 100 cabinets, each filled with hundreds of silicon chips, half of them photonic, half of them electronic, all wired up electrically as well as optically using conventional telecommunication fibers." This system, when operational, is expected to address significant problems across various industries, particularly in sustainability.

O'Brien highlighted the potential impact on battery technology.

"Although everyone as far as I can tell has a lithium-ion battery in their hand right now, we don't understand how those things work," he said, while explaining that understanding and simulating the chemistry of these batteries is beyond the capability of conventional computers. Quantum computers, however, could unlock new insights, leading to the design of better batteries and other advanced materials.

The pharmaceutical industry is another area poised to benefit.

"We have drugs that we consume which we don't know how they work," O'Brien said, pointing out the limitations of current simulation capabilities. Quantum computers could revolutionize drug development by accurately simulating molecular interactions, significantly speeding up the discovery process and improving drug efficacy.

PsiQuantum's use of photonics on silicon chips is a key factor in their accelerated timeline. O'Brien explained: "Photonics is an approach that enables you to scale in large part because of the leverage of the manufacturing but also the connectivity and the cooling and control electronics." This innovative approach allows for rapid development and deployment of their quantum systems.

As PsiQuantum moves toward their 2027 goal, O'Brien is already looking ahead.

"We have plans for the next generation of systems that will be bigger and more capable," he said, indicating a future of continuous improvement and expansion in quantum computing capabilities.

IQM Quantum Computers Advances Quantum Processor Quality with New Benchmarks – HPCwire

ESPOO, Finland, July 15, 2024 – IQM Quantum Computers, a global leader in building quantum computers, has demonstrated improvements in two key metrics characterizing the quality of quantum computers.

A record-low error rate for two-qubit operations was achieved by demonstrating a CZ gate between two qubits with (99.91 ± 0.02)% fidelity, validated by interleaved randomized benchmarking. High two-qubit gate fidelity is the most fundamental and hardest-to-achieve characteristic of a quantum processor, essential for generating entangled states between qubits and executing quantum algorithms.
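To see why such fidelities matter, note that under the idealized assumption of independent, uncorrelated gate errors, a circuit containing N two-qubit gates succeeds with probability of roughly f^N. A back-of-envelope illustration (not from the announcement):

```python
# Rough success probability of a circuit with N two-qubit gates of fidelity f,
# assuming independent, uncorrelated errors (an idealization).
f = 0.9991  # reported CZ gate fidelity
for n_gates in (100, 1000, 5000):
    print(n_gates, round(f ** n_gates, 3))
```

Even at 99.91% fidelity, circuits with thousands of two-qubit gates mostly fail, which is why further fidelity gains (and ultimately error correction) remain essential.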

Furthermore, a qubit relaxation time T1 of 0.964 ± 0.092 milliseconds and a dephasing time T2 echo of 1.155 ± 0.188 milliseconds were demonstrated on a planar transmon qubit on a silicon chip fabricated in IQM's own fabrication facilities. The coherence times, characterized by the relaxation time T1 and the dephasing time T2 echo, are among the key metrics for assessing the performance of a single qubit, as they indicate how long quantum information can be stored in a physical qubit.
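For context, T1 is typically extracted by fitting an exponential decay to the measured excited-state population at increasing delay times. A minimal sketch on synthetic, noise-free data; the target value mirrors the reported T1, but the procedure is generic, not IQM's:

```python
import numpy as np

# Energy relaxation: excited-state population decays as P(t) = exp(-t / T1).
T1_true_ms = 0.964                    # value reported in the announcement
t = np.linspace(0.0, 3.0, 50)         # delay times, in milliseconds
population = np.exp(-t / T1_true_ms)  # synthetic, noise-free measurement data

# Log-linear least squares: log P(t) = -t / T1, so the fitted slope is -1/T1.
slope, _intercept = np.polyfit(t, np.log(population), 1)
T1_est_ms = -1.0 / slope
print(f"estimated T1 = {T1_est_ms:.3f} ms")
```

Real data would include measurement noise, so the fit would carry an uncertainty, which is what the ±0.092 ms above reflects.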

These major results show that IQM's fabrication technology has matured and is ready to support the next generation of IQM's high-performance quantum processors. The results follow IQM's recent benchmark announcements and indicate significant potential for further advancements in the gate fidelities essential for fault-tolerant quantum computing and in processors with higher qubit counts.

The improvements in these two characteristics, two-qubit gate fidelity and coherence time, allow quantum computers to be developed for more complex use cases. The results are significant because only a very few organizations have achieved comparable performance numbers before.

The results were achieved through innovations in materials and fabrication technology and required top-notch performance across all components of the quantum computer, including QPU design, control optimization, and system engineering.

"This achievement cements our tech leadership in the industry. Our quantum processor quality is world-class, and these results show that we have a good opportunity of going beyond that," said Dr. Juha Hassel, Vice President of Engineering at IQM Quantum Computers.

Hassel explained that the company is on track with its technology roadmap and is actively exploring potential use cases in machine learning, cybersecurity, route optimization, quantum sensor simulation, chemistry, and pharmaceutical research.

This announcement comes on the heels of the launch of Germany's first hybrid quantum computer at the Leibniz Supercomputing Centre in Munich, for which IQM led the integration with its 20-qubit quantum processing unit, and the opening of the IQM quantum data center in Munich.

About IQM Quantum Computers

IQM is a global leader in designing, building, and selling superconducting quantum computers. IQM provides both on-premises full-stack quantum computers and a cloud platform for accessing its computers anywhere in the world. IQM customers include leading supercomputing centres, enterprises, and research labs, which have full access to IQM's software and hardware. IQM has over 300 employees, with offices in Espoo, Munich, Paris, Warsaw, Madrid, Singapore, and Palo Alto.

Source: IQM Quantum Computers

Northeastern professor achieves major breakthrough in the manufacture of quantum computing components – Northeastern University

Quantum computers have to be kept cold to function: very cold. "These machines generally run at just a few degrees above absolute zero," says Yoseob Yoon, assistant professor of mechanical and industrial engineering at Northeastern University. "It's colder than outer space."

Yoon's research focuses on controlling material properties using lasers, he says.

In other words, he shoots light at atomically thin materials to get them moving in novel ways.

One of his principal materials is something called graphene, a two-dimensional surface whose discoverers received the Nobel Prize in Physics in 2010, Yoon says.

Yoon produces graphene through what he calls the Scotch Tape method. "We use a few-millimeter-wide and -thick bulk materials of, for example, graphite," he says, the same carbon derivative found in pencils. "We basically use Scotch Tape literally to peel off ultra-thin samples from the bulk material."

These samples are the thickness of a single atom and smoother than most other materials, he says.

"There already existed a field studying thermal transport using thin metallic films," Yoon says. By firing lasers at very thin metals, researchers can induce controlled oscillations, like acoustic waves in drums.

"However, this has been limited to gigahertz regimes, because these metals are very heavy, and they cannot be controlled down to monolayer thickness."

"And then there is another field, basically a 2D-material field," he continues. "They exfoliate these atomically thin layers."

Yoon's breakthrough came in combining these two fields. By aligning atomically thin structures with the study of laser-based thermal transport, "there's a new regime that we couldn't achieve before."

Now, in a new paper published in Nature, Yoon and his collaborators have identified novel van der Waals heterostructures (created by combining layers of these atomically thin materials, including graphene and other varieties) that allow control at terahertz frequencies.

Here's what that means. Yoon notes that temperature is really just atoms in motion. The faster the atoms move, the higher the temperature. In a quantum computer, this motion translates to random noise, inhibiting the computer's function. Cooling a quantum computer, therefore, increases controllability.

Current transducers in quantum computers are limited to the gigahertz range. "That limits the range of temperatures that can be operated," Yoon says. "They can operate only at low temperatures." Colder than outer space, remember.

"Because of this frequency limit," he continues, "increasing the range of these transducers into terahertz frequencies, an increase by a factor of a thousand, we will be able to run [quantum computers] at room temperatures."

In other words, a machine that runs close to negative 460 degrees Fahrenheit can suddenly be run at room temperature.

"At least this particular component," Yoon is quick to point out. "There are some disadvantages of going to higher temperatures, [for instance,] quantum signals will decay much faster."

So this isn't the ultimate solution for room-temperature quantum computing, but it is one major step toward that goal. "Additionally, this discovery also allows for the design of more efficient heat management components in classical computers," he wrote in a follow-up.

What comes next? "We've pushed in terms of frequency bandwidth, and how high the frequency can be," he says. "But we didn't push to the amplitude limits."

"We want to push the limit."

Realization of higher-order topological lattices on a quantum computer – Nature.com

Mapping higher-dimensional lattices to 1D quantum chains

While small quasi-1D and 2D systems have been simulated on digital quantum computers [27,28], the explicit simulation of higher-dimensional lattices remains elusive. Directly simulating a d-dimensional lattice of width L along each dimension requires ~L^d qubits. For large dimensionality d or lattice size L, this quickly becomes infeasible on NISQ devices, which are significantly limited by the number of usable qubits, qubit connectivity, gate errors, and decoherence times.

To overcome these hardware limitations, we devise an approach to exploit the exponentially large many-body Hilbert space of an interacting qubit chain. The key inspiration is that most local lattice models only access a small portion of the full Hilbert space (particularly non-interacting models and models with symmetries), and an L^d-site lattice can be consistently represented with far fewer than L^d qubits. To do so, we introduce an exact mapping that reduces d-dimensional lattices to 1D chains hosting d-particle interactions, which is naturally simulable on a quantum computer that accesses and operates on the many-body Hilbert space of a register of qubits.

At a general level, we consider a generic d-dimensional n-band model $\mathcal{H} = \sum_{\mathbf{k}} \mathbf{c}_{\mathbf{k}}^{\dagger}\, \mathcal{H}(\mathbf{k})\, \mathbf{c}_{\mathbf{k}}$ on an arbitrary lattice. In real space,

$$\mathcal{H} = \sum_{\mathbf{r}\mathbf{r}'} \sum_{\gamma\gamma'} h_{\mathbf{r}\mathbf{r}'}^{\gamma\gamma'}\, c_{\mathbf{r}\gamma}^{\dagger} c_{\mathbf{r}'\gamma'},$$

(1)

where we have associated the band degrees of freedom with a sublattice structure $\gamma$, and $h_{\mathbf{r}\mathbf{r}'}^{\gamma\gamma'} = 0$ for $|\mathbf{r} - \mathbf{r}'|$ outside the coupling range of the model, i.e., adjacent sites for a nearest-neighbor (NN) model, next-adjacent for next-NN, etc. The operator $c_{\mathbf{r}\gamma}$ annihilates particle excitations on sublattice $\gamma$ of site $\mathbf{r}$.

To take advantage of the degrees of freedom in the many-body Hilbert space, our mapping is defined such that the hopping of a single particle on the original d-dimensional lattice from $(\mathbf{r}', \gamma')$ to $(\mathbf{r}, \gamma)$ becomes the simultaneous hopping of d particles, each of a distinct species, from locations $(r_1', \ldots, r_d')$ to $(r_1, \ldots, r_d)$ and sublattice $\gamma'$ to $\gamma$ on a 1D interacting chain. Explicitly, this map is given by

$$c_{\mathbf{r}\gamma}^{\dagger} \,\mapsto\, \prod_{\alpha=1}^{d} \left[\omega_{r_{\alpha}\gamma}^{\alpha}\right]^{\dagger}, \qquad c_{\mathbf{r}\gamma} \,\mapsto\, \prod_{\alpha=1}^{d} \omega_{r_{\alpha}\gamma}^{\alpha},$$

(2)

where $r_\alpha$ is the $\alpha$-th component of $\mathbf{r}$, and $\{\omega_{\ell\gamma}^{\alpha}\}_{\alpha=1}^{d}$ represents the d excitation species hosted on sublattice $\gamma$ of site $\ell$ on the interacting chain, yielding

$$\mathcal{H} \mapsto \mathcal{H}_{\mathrm{1D}} = \sum_{\mathbf{r}\mathbf{r}'} \sum_{\gamma\gamma'} h_{\mathbf{r}\mathbf{r}'}^{\gamma\gamma'} \prod_{\alpha=1}^{d} \left[\omega_{r_{\alpha}\gamma}^{\alpha}\right]^{\dagger} \omega_{r_{\alpha}'\gamma'}^{\alpha}.$$

(3)

In the single-particle context, exchange statistics are unimportant, and the $\{\omega\}$ can be taken to be commuting. This mapping framework accommodates any lattice dimension and geometry, and any number of bands or sublattice degrees of freedom. As the mapping is performed at the second-quantized level, any one-body Hamiltonian expressed in second-quantized form can be treated, which encompasses a wide variety of single-body topological phenomena of interest. We refer readers to Supplementary Note 1 for a more expansive technical discussion. With slight modifications, this mapping can also be extended to admit interaction terms in the original d-dimensional lattice Hamiltonian, although we do not explore them further in this work.

For concreteness, we specialize our Hamiltonian to HOT systems henceforth and shall detail how our mapping enables them to be encoded on quantum processors. The simplest square lattice with HOT corner modes [21] may be constructed from the paradigmatic 1D Su-Schrieffer-Heeger (SSH) model [29]. To allow for sufficient degrees of freedom for topological localization, we minimally require a 2D mesh of two different types of SSH chains in each direction, arranged in an alternating fashion:

$$\mathcal{H}_{\mathrm{lattice}}^{\mathrm{2D}} = \sum_{(x,y)\in[1,L]^{2}} \left[ u_{xy}^{x}\, c_{(x+1)y}^{\dagger} + u_{yx}^{y}\, c_{x(y+1)}^{\dagger} \right] c_{xy} + \mathrm{h.c.},$$

(4)

where $c_{xy}$ is the annihilation operator acting on site (x, y) of the lattice and $u_{r_1 r_2}^{\alpha}$ takes values of either $v_{r_1 r_2}^{\alpha}$ for intra-cell hopping (odd $r_2$) or $w_{r_1 r_2}^{\alpha}$ for inter-cell hopping (even $r_2$), with $\alpha \in \{x, y\}$. Conceptually, we recognize that the 2D lattice momentum space can be equivalently interpreted as the joint configuration momentum space of two particles, specifically, the (1+1)-body sector of a corresponding 1D interacting chain. We map $c_{xy} \mapsto \mu_x \nu_y$, where $\mu_\ell$ and $\nu_\ell$ annihilate hardcore bosons of two different species at site $\ell$ on the chain. In the notation of Eq. (2), we identify $\omega_\ell^1 = \omega_\ell^x = \mu_\ell$ and $\omega_\ell^2 = \omega_\ell^y = \nu_\ell$, and the sublattice structure has been absorbed into the (parity of) spatial coordinates. This yields an effective 1D, two-boson chain described by

$$\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}} = \sum_{x=1}^{L} \sum_{y=1}^{L} \left[ u_{xy}^{x}\, \mu_{x+1}^{\dagger}\mu_{x}\, n_{y}^{\nu} + u_{yx}^{y}\, \nu_{y+1}^{\dagger}\nu_{y}\, n_{x}^{\mu} \right] + \mathrm{h.c.},$$

(5)

where $n_\ell^\omega$ is the number operator for species $\omega$ at site $\ell$ of the chain. As written, each term in $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ represents an effective SSH model for one particular species $\mu$ or $\nu$, with the other species not participating in hopping but merely present (hence its number operator). These two-body interactions arising in $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ appear convoluted, but can be readily accommodated on a quantum computer, taking advantage of the quantum nature of the platform. To realize $\mathcal{H}_{\mathrm{chain}}^{\mathrm{2D}}$ on a quantum computer, we utilize 2 qubits to represent each site of the chain, associating the unoccupied, $\mu$-occupied, $\nu$-occupied and doubly ($\mu$, $\nu$)-occupied boson states with the qubit states $\vert 00\rangle$, $\vert 01\rangle$, $\vert 10\rangle$, and $\vert 11\rangle$ respectively. Thus 2L qubits are needed for the simulation, a significant reduction from the L^2 qubits required without the mapping, especially for large lattice sizes. We present simulation results on IBM quantum computers for lattice size $L \sim \mathcal{O}(10)$ in the "Two-dimensional HOT square lattice" section.
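The occupancy encoding can be made concrete with a short sketch: a single particle at lattice site (x, y) becomes one μ-boson at chain site x plus one ν-boson at chain site y, stored in two qubits per chain site. The qubit ordering and bit convention below are our own illustrative choices, not necessarily those used in the paper.

```python
def encode_site(x, y, L):
    """Bit string over 2L qubits for one particle at lattice site (x, y).

    Chain site s (1-indexed) owns qubits 2(s-1) and 2(s-1)+1, holding the
    mu and nu occupancies respectively (illustrative convention).
    """
    bits = ["0"] * (2 * L)
    bits[2 * (x - 1)] = "1"      # mu species records the x coordinate
    bits[2 * (y - 1) + 1] = "1"  # nu species records the y coordinate
    return "".join(bits)

L = 4
print(encode_site(2, 3, L))  # one mu at chain site 2, one nu at chain site 3
# Direct simulation of this lattice would need L**2 = 16 qubits;
# the mapped chain needs only 2 * L = 8.
```

Every single-particle lattice state thus corresponds to a two-excitation Fock state of the chain, which is what makes the compression exact for this sector.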

Our methodology naturally generalizes to higher dimensions. Specifically, a d-dimensional HOT lattice maps onto a d-species interacting 1D chain, and d qubits are employed to represent each site of the chain, providing sufficient many-body degrees of freedom to encode the 2^d occupancy basis states of each site. We write

$$\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}} = \sum_{\mathbf{r}\in[1,L]^{d}} \sum_{\alpha=1}^{d} u_{\mathbf{r}}^{\alpha}\, c_{\mathbf{r}+\hat{\mathbf{e}}_{\alpha}}^{\dagger} c_{\mathbf{r}} + \mathrm{h.c.},$$

(6)

where $\alpha$ enumerates the directions along which hoppings occur and $\hat{\mathbf{e}}_{\alpha}$ is the unit vector along $\alpha$. As before, the hopping coefficients alternate between inter- and intra-cell values that can be different in each direction. Compactly, $u_{\mathbf{r}}^{\alpha} = [1 - \pi(r_\alpha)]\, v_{\boldsymbol{\pi}(\mathbf{r}_\alpha)}^{\alpha} + \pi(r_\alpha)\, w_{\boldsymbol{\pi}(\mathbf{r}_\alpha)}^{\alpha}$ for parity function $\pi$, intra- and inter-cell hopping coefficients $v_{\boldsymbol{\pi}(\mathbf{r}_\alpha)}^{\alpha}$ and $w_{\boldsymbol{\pi}(\mathbf{r}_\alpha)}^{\alpha}$, where $\mathbf{r}_\alpha$ denotes the spatial coordinates in the non-$\alpha$ directions; see Supplementary Table 1 for details of the hopping parameter values used in this work. Using d hardcore boson species $\{\omega^\alpha\}$ to represent the d dimensions, we map onto an interacting chain via $c_{\mathbf{r}} \mapsto \prod_{\alpha=1}^{d} \omega_{r_\alpha}^{\alpha}$, giving

$$\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}} = \sum_{\mathbf{r}\in[1,L]^{d}} \sum_{\alpha=1}^{d} u_{\mathbf{r}}^{\alpha} \left[ \left(\omega_{r_{\alpha}+1}^{\alpha}\right)^{\dagger} \omega_{r_{\alpha}}^{\alpha} \prod_{\substack{\beta=1 \\ \beta\neq\alpha}}^{d} n_{r_{\beta}}^{\beta} \right] + \mathrm{h.c.},$$

(7)

where $\omega_\ell^\alpha$ annihilates a hardcore boson of species $\alpha$ at site $\ell$ of the chain and $n_\ell^\alpha$ is the number operator of species $\alpha$. In the d = 2 square lattice above, we had $\mathbf{r} = (x, y)$ and $\{\omega^\alpha\} = \{\mu, \nu\}$. The highest-dimensional HOT lattice we shall examine is the d = 4 tesseract, for which $\mathbf{r} = (x, y, z, w)$ and $\{\omega^\alpha\}$ comprises four boson species, one per dimension. In total, a d-dimensional HOT lattice Hamiltonian has $d \cdot 2^d$ distinct hopping coefficients, since there are d different lattice directions and $2^{d-1}$ distinct edges along each direction, each comprising two distinct hopping amplitudes for inter- and intra-cell hopping. Appropriately tuning these coefficients allows the manifestation of robust HOT modes along the boundaries (corners, edges, etc.) of the lattices; schematics of the various lattice configurations investigated in our experiments are shown in later sections.
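The d · 2^d bookkeeping can be checked by direct enumeration: each of the d hopping directions carries 2^(d-1) transverse parity sectors, each with one intra-cell (v) and one inter-cell (w) amplitude. A sketch:

```python
from itertools import product

def count_couplings(d):
    """Enumerate (kind, direction, transverse-parity) coupling labels."""
    labels = [(kind, alpha, parities)
              for alpha in range(d)                          # hopping direction
              for parities in product((0, 1), repeat=d - 1)  # transverse parity sector
              for kind in ("v", "w")]                        # intra- vs inter-cell
    return len(labels)

print(count_couplings(2))  # 8, the couplings of the 2D square lattice
print(count_couplings(4))  # 64 = d * 2**d for the d = 4 tesseract
```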

Accordingly, the equivalent interacting 1D chain requires dL qubits to realize, an overwhelming reduction from the L^d otherwise needed in a direct simulation of $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ without the mapping. We remark that such a significant compression is possible because HOT is inherently a single-particle phenomenon. See Methods for further details and optimizations of our mapping scheme on the HOT lattices considered, and Supplementary Note 1 for an extended general discussion, including examples of other lattices and models.

With our mapping, a d-dimensional HOT lattice $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ with L^d sites is mapped onto an interacting 1D chain $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ requiring dL qubits, which can be feasibly realized on existing NISQ devices for $L \sim \mathcal{O}(10)$ and $d \le 4$. While the resultant interactions in $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ are inevitably complicated, below we describe how $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ can be viably simulated on quantum hardware.
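The resource savings are easy to tabulate, a simple illustration of the counts quoted above:

```python
# Qubits needed to simulate an L^d-site HOT lattice at L = 10:
# one qubit per site directly, versus d species x L chain sites after mapping.
L = 10
for d in (2, 3, 4):
    direct = L ** d  # direct encoding: one qubit per lattice site
    mapped = d * L   # mapped 1D interacting chain
    print(f"d={d}: direct {direct} qubits -> mapped {mapped} qubits")
```

At d = 4 the mapping turns a hopeless 10,000-qubit requirement into a 40-qubit one.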

A high-level overview of our general framework for simulating HOT time-evolution is illustrated in Fig. 2. To evolve an initial state $\vert \psi_0 \rangle$, it is necessary to implement the unitary propagator $U(t) = \exp(-i \mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}} t)$ as a quantum circuit, such that the circuit yields $\vert \psi(t) \rangle = U(t)\vert \psi_0 \rangle$ and desired observables can be measured upon termination. A standard method to implement U(t) is Trotterization, which decomposes $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ in the spin-1/2 basis and splits time-evolution into small steps (see Methods for details). However, while straightforward, such an approach yields deep circuits unsuitable for present-generation NISQ hardware. To compress the circuits, we utilize a tensor network-aided recompilation technique [30-33]. We exploit the number-conserving symmetries of $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ in each boson species, arising from $\mathcal{H}_{\mathrm{lattice}}^{d\mathrm{D}}$ and the nature of our mapping (see Methods), to enhance circuit construction performance and quality at large circuit breadths (up to 32 qubits). Moreover, to improve data quality amidst hardware noise, we employ a suite of error mitigation techniques: readout error mitigation (RO), which approximately corrects bit-flip errors during measurement [34]; a post-selection (PS) technique that discards results in unphysical Fock-space sectors [30,35]; and averaging across machines and qubit chains (see Methods).

Fig. 2: a, b Mapping of a higher-dimensional lattice to a 1D interacting chain to facilitate quantum simulation on near-term devices. Concretely, a two-dimensional single-particle lattice can be represented by a two-species interacting chain; a three-dimensional lattice can be represented by a three-species chain with three-body interactions. c Overview of quantum simulation methodology: higher-dimensional lattices are first mapped onto interacting chains, then onto qubits; various techniques, such as d Trotterization and e ansatz-based recompilation, enable the construction of quantum circuits for dynamical time-evolution, or IQPE for probing the spectrum. The quantum circuits are executed on the quantum processor, and results are post-processed with RO and PS error mitigations to reduce the effects of hardware noise. See Methods for elaborations on the mapping procedure, and quantum circuit construction and optimization.

After acting on $\vert \psi_0 \rangle$ with the quantum circuit that effects U(t), terminal computational-basis measurements are performed on the simulation qubits. We retrieve the site-resolved occupancy densities $\rho(\mathbf{r}) = \langle c_{\mathbf{r}}^{\dagger} c_{\mathbf{r}} \rangle = \langle \prod_{\alpha=1}^{d} n_{r_\alpha}^{\alpha} \rangle$ on the d-dimensional lattice, and the extent of evolution of $\vert \psi(t) \rangle$ away from $\vert \psi_0 \rangle$, whose occupancy densities are $\rho_0(\mathbf{r})$, is assessed via the occupancy fidelity

$$0 \le \mathcal{F}_{\rho} = \frac{\left[\sum_{\mathbf{r}} \rho(\mathbf{r})\, \rho_{0}(\mathbf{r})\right]^{2}}{\left[\sum_{\mathbf{r}} \rho(\mathbf{r})^{2}\right] \left[\sum_{\mathbf{r}} \rho_{0}(\mathbf{r})^{2}\right]} \le 1.$$

(8)

Compared to the state fidelity $\mathcal{F} = \vert \langle \psi_0 \vert \psi \rangle \vert^2$, the occupancy fidelity $\mathcal{F}_\rho$ is considerably more resource-efficient to measure on quantum hardware.
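The occupancy fidelity of Eq. (8) is a direct transcription into NumPy (an illustrative sketch, not the authors' code):

```python
import numpy as np

def occupancy_fidelity(rho, rho0):
    """Occupancy fidelity of Eq. (8): normalized overlap of two density profiles."""
    rho = np.asarray(rho, dtype=float).ravel()
    rho0 = np.asarray(rho0, dtype=float).ravel()
    return np.dot(rho, rho0) ** 2 / (np.dot(rho, rho) * np.dot(rho0, rho0))

corner = np.array([1.0, 0.0, 0.0, 0.0])    # density pinned to one site
spread = np.full(4, 0.25)                  # fully delocalized density
print(occupancy_fidelity(corner, corner))  # 1.0: no evolution away from rho0
print(occupancy_fidelity(spread, corner))  # 0.25: substantial spreading
```

By the Cauchy-Schwarz inequality the value always lies in [0, 1], matching the bounds in Eq. (8), and it requires only site-resolved density measurements rather than full state tomography.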

In addition to time evolution, we can also directly probe the energy spectrum of our simulated Hamiltonian $\mathcal{H}_{\mathrm{chain}}^{d\mathrm{D}}$ through iterative quantum phase estimation (IQPE) [36]; see Methods. Specifically, to characterize the topology of HOT systems, we use IQPE to probe the existence of midgap HOT modes at exponentially suppressed (effectively zero for $L \gg 1$) energies. In contrast to quantum phase estimation [37,38], IQPE circuits are shallower and require fewer qubits, and are thus preferable for implementation on NISQ hardware. As our interest is in HOT modes, we initiate IQPE with maximally localized boundary states that are easily constructed a priori, which exhibit good overlap (>80% state fidelity) with HOT eigenstates, and examine whether IQPE converges consistently toward zero energy. These states are listed in Supplementary Table 2.

As the lowest-dimensional incarnation of HOT lattices, the d = 2 staggered square lattice harbors only one type of HOT mode: zero-dimensional corner modes (Fig. 1a). Previously, such HOT corner modes on 2D lattices have been realized in various metamaterials [39,40] and photonic waveguides [41], but not in a purely quantum setting to date. Our equivalent 1D hardcore boson chain can be interpreted as possessing interaction-induced topology that manifests in the joint configuration space of the d bosons hosted on the many-body chain. Here, the topological localization is mediated not by physical SSH-like couplings or band polarization but by the combined exclusion effects from all its interaction terms. We emphasize that our physically realized 1D chain contains highly non-trivial interaction terms involving multiple sites; the illustrative example in Fig. 3f for an L = 6 chain already contains a multitude of interactions, even though it is much smaller than the L = 10 and L = 16 systems we simulated on quantum hardware. As evident, the $d \cdot 2^d = 8$ unique types of interactions, corresponding to the 8 different couplings on the lattice, are mostly non-local; but this does not prohibit their implementation on quantum circuits. Indeed, the versatility of digital quantum simulators in realizing effectively arbitrary interactions allows the implementation of complex interacting Hamiltonian terms, and is critical in enabling our quantum device simulations.

a Ordered eigenenergies on a 10×10 lattice for the topologically trivial C0 and nontrivial C2 and C4 configurations. They correspond to 0, 2, and 4 midgap zero modes (red diamonds), as measured via IQPE on a 20-qubit quantum chain plus an additional ancillary qubit; the shaded red band indicates the IQPE energy resolution. The corner state profiles (right insets) and other eigenenergies (black and gray dots) are numerically obtained via ED. Time-evolution of four initial states on a 16×16 lattice mapped onto a 32-qubit chain: b, c localized at corners to highlight topological distinction, d localized along an edge, and e delocalized in the vicinity of a corner. Left plots show occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density ρ(x, y) of the initial states (darker shading represents higher density). The right grid shows occupancy density measured on hardware at two later times. States with good overlap with robust corners exhibit minimal evolution. Error bars represent standard deviation across repetitions on different qubit chains and devices. In general, heavy overlap between an initial state and a HOT eigenstate confers topological robustness, resulting in significantly slowed decay. f Schematic of the interacting chain Hamiltonian, mapped from the parent 2D lattice, illustrated for a smaller 6×6 square lattice. The physical sites of the interacting boson chain are colored black, with their many-body interactions represented by colored vertices. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted \(v_{\boldsymbol{\pi}}^{\alpha}\) and \(w_{\boldsymbol{\pi}}^{\alpha}\) for axes \(\alpha\in\{x, y\}\) and parities \({\boldsymbol{\pi}}\in{\mathbb{Z}}_{2}^{1}\).

In our experiments, we consider three different scenarios: C0, having no topological corner modes; C2, having two corner modes, at corners (x, y) = (1, 1) and (L, 1); and C4, having corner modes on all four corners. These scenarios can be obtained by appropriately tuning the eight coupling parameters in the Hamiltonian (Eq. (4)); see Supplementary Table 1 for parameter values42.

We first show that the correct degeneracy of midgap HOT modes can be measured for each of the configurations C0, C2, and C4 on IBM transmon-based quantum computers, as presented in Fig. 3a. For a start, we used a 20-qubit chain, which logically encodes a 10×10 HOT lattice, with an additional ancillary qubit for IQPE readout. The number of topological corner modes in each case is accurately obtained through the degeneracy of midgap states of exponentially suppressed energy (red), as measured through IQPE executed on quantum hardware; see Methods for details. That these midgap modes are indeed corner-localized is verified via numerical (classical) diagonalization, as in the insets of Fig. 3a.

Next, we demonstrate highly accurate dynamical state evolution on larger 32-qubit chains on quantum hardware. We time-evolve various initial states on 16×16 HOT lattices in the C0, C2, and C4 configurations and measure their site-resolved occupancy densities ρ(x, y), up to a final time t = 0.8 when fidelity trends become unambiguous. The resultant occupancy fidelity plots (Fig. 3b–e) conform to the expectation that states localized on topological corners survive the longest, and are also in excellent agreement with reference data from ED. For instance, a localized state at the corner (x0, y0) = (1, 1) is robust on the C2 and C4 lattice configurations (Fig. 3b), whereas one localized on the (x0, y0) = (1, L) corner is robust only on the C4 configuration (Fig. 3c). These fidelity decay trends are corroborated by the measured site-resolved occupancy density ρ(x, y): low occupancy fidelity is always accompanied by a ρ(x, y) diffused away from the initial state, whereas strongly localized states have high occupancy fidelity. In general, heavy overlap between an initial state and a HOT eigenstate confers topological robustness, resulting in significantly slowed decay; this is apparent from the occupancy fidelities, which remain near unity over time. In comparison, states that do not enjoy topological protection, such as the (1, L)-localized state on the C2 configuration and all initial states on the C0 configuration, rapidly delocalize and decay quickly.
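
The qualitative fidelity behavior can be reproduced in a minimal single-particle toy model. Since the HOT lattices here are built from meshes of alternating SSH chains, an end-localized state on a finite SSH chain decays slowly in the topological dimerization (weak first bond) and quickly in the trivial one. The sketch below uses plain exact diagonalization and the state fidelity \(|\langle\psi_0|e^{-iHt}|\psi_0\rangle|^2\) as a stand-in for the occupancy fidelity measured on hardware; the parameters are illustrative choices, not those of the paper.

```python
import numpy as np

def ssh_hamiltonian(n_sites, v, w):
    """Single-particle SSH chain with alternating hoppings:
    v on intra-cell bonds, w on inter-cell bonds."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):
        t = v if i % 2 == 0 else w
        H[i, i + 1] = H[i + 1, i] = t
    return H

def end_state_fidelity(v, w, t, n_sites=20):
    """|<psi0| e^{-iHt} |psi0>|^2 for a state localized on the first site,
    computed by exact diagonalization."""
    E, U = np.linalg.eigh(ssh_hamiltonian(n_sites, v, w))
    psi0 = np.zeros(n_sites)
    psi0[0] = 1.0
    psi_t = U @ (np.exp(-1j * E * t) * (U.conj().T @ psi0))
    return abs(np.vdot(psi0, psi_t)) ** 2

# topological dimerization (|v| < |w|): the end state barely decays;
# trivial dimerization (|v| > |w|): the end state delocalizes
print(end_state_fidelity(v=0.2, w=1.0, t=10.0),
      end_state_fidelity(v=1.0, w=0.2, t=10.0))
```

The slow decay in the topological case traces back to the large overlap of the end-site state with the near-zero-energy boundary mode, mirroring the role of the HOT corner modes in the hardware data.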

Our experimental runs remain accurate even for initial states that are situated away from the lattice corners, such that they cannot enjoy full topological protection. In Fig. 3d, the initial state at (x0, y0) = (2, 1), which neighbors the corner (1, 1), loses its fidelity much sooner than the corner initial state of Fig. 3b, even for the C2 and C4 topological corner configurations. That said, its fidelity evolution still agrees well with ED reference data. In a similar vein, an initial state that is somewhat delocalized at a corner (Fig. 3e) is still conferred a degree of stability when the corner is topological.

Next, we extend our investigation to the staggered cubic lattice in 3D, which hosts third-order HOT corner modes (Fig. 1a). These elusive corner modes have to date only been realized in classical platforms43 or in synthetic electronic lattices44. Compared to the 2D case, the implementation of the 3D HOT lattice (Eq. (6)) as a 1D interacting chain (Eq. (7)) on quantum hardware is more sophisticated. The larger dimensionality of the staggered cubic lattice, in comparison to the square lattice, is reflected in a larger density of multi-site interaction terms on the interacting chain. This is illustrated in Fig. 4b for the minimal 4×4×4 lattice, where the combination of the various d = 3-body interactions gives rise to emergent corner robustness (which appears as up to 3-body boundary clustering as seen on the 1D chain).

a The header row displays energy spectra for the topologically trivial C0 and the inequivalent nontrivial C4a, C4b, and C8 configurations, which host 0, 4, 4, and 8 midgap zero modes (red diamonds), as measured via IQPE on an 18-qubit chain plus an ancillary qubit; the shaded red band indicates the IQPE energy resolution. Schematics illustrating the locations of topologically robust corners are shown on the right. Subsequent rows depict the time-evolution of five initial states on a 6×6×6 lattice mapped onto an 18-qubit chain: localized at a corner, on an edge, on a face, and in the bulk of the cube, and delocalized in the vicinity of a corner. The leftmost column plots occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density ρ(x, y, z) of the initial state (darker shading represents higher density). The central grid shows occupancy density measured on hardware at a later time (t = 0.6), for the corresponding initial state (row) and lattice configuration (column). Error bars represent standard deviation across repetitions on different qubit chains and devices. Again, initial states localized close to topological corners exhibit higher occupancy fidelity. b Hamiltonian schematic of the interacting chain realizing a minimal 4×4×4 cubic lattice. Sites on the chain are colored black; colored vertices connecting to multiple sites on the chain denote interaction terms. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted \(v_{\boldsymbol{\pi}}^{\alpha}\) and \(w_{\boldsymbol{\pi}}^{\alpha}\) for axes \(\alpha\in\{x, y, z\}\) and parities \({\boldsymbol{\pi}}\in{\mathbb{Z}}_{2}^{2}\).

On quantum hardware, we implemented 18-qubit chains representing 6×6×6 cubic lattices in four configurations: the trivial lattice (C0), two geometrically inequivalent configurations hosting four topological corners (C4a, C4b), and a configuration with all \(2^{3}=8\) topological corners (C8). As for the 2D HOT lattice, we first present the degeneracy of zero-energy topological modes (header row of Fig. 4a), with low-energy spectral data (red diamonds) accurately obtained via IQPE.

From the first row of Fig. 4a, it is apparent that initial states localized on topological corners enjoy significant robustness. Namely, the measured site-resolved occupancy densities ρ(x, y, z) (four right columns) indicate that the localization of (x0, y0, z0) = (1, 1, 1) corner initial states on the C4a, C4b, and C8 configurations is maintained, and measured occupancy fidelities remain near unity. In comparison, an initial corner-localized state on the C0 configuration, which hosts no topological corner modes, delocalizes quickly. Moving away from the corners, an edge-localized state adjacent to a topological corner is conferred slight, but nonetheless present, stability (second row of Fig. 4a), as observed from the slower decay of the (x0, y0, z0) = (2, 1, 1) state on the C4a, C4b, and C8 configurations in comparison to the topologically trivial C0 lattice. This conferred robustness is diminished for states localized further from topological corners, for instance surface-localized states (third row), and is virtually unnoticeable for states localized in the bulk (fourth row), which decay rapidly for all topological configurations. Initial states that are slightly delocalized near a corner enjoy some protection when the corner is topological, but are unstable when the corner is trivial (fifth row of Fig. 4a). We again highlight the quantitative agreement of our quantum hardware simulation results with theoretical ED predictions.

We now turn to our key results: the NISQ quantum hardware simulation of four-dimensional staggered tesseract HOT lattices. A true 4D lattice is difficult to simulate on most experimental platforms, and with a few exceptions45, most works to date have relied on synthetic dimensions18,46. In comparison, utilizing our exact mapping (Eqs. (6) and (7)), which exploits the exponentially large many-body Hilbert space accessible to a quantum computer, a tesseract lattice can be directly simulated on a physical 1D spin (qubit) chain, with the number of spatial dimensions limited only by the number of qubits. The tesseract unit cell can be visualized as two interlinked three-dimensional cubes (spanned by the x, y, z axes) living in adjacent w-slices (Fig. 5). The full tesseract lattice of side length L is then represented as successive cubes with different w coordinates, stacked successively from inside out, with the inner and outer wireframe cubes being the w = 1 and w = L slices. Being more sophisticated, the 4D HOT lattice features various types of HOT corner, edge, and surface modes (Fig. 1a); we presently focus on the fourth-order (hexadecapolar) HOT corner modes, as well as the third-order (octupolar) HOT edge modes.

An L = 6 tesseract lattice is illustrated as six cube slices indexed by w and highlighted on a color map. The header row displays energy spectra computed numerically for the topologically trivial C0 and nontrivial C4, C8, and C16 configurations, which host 0, 4, 8, and 16 midgap zero modes (black circles). Schematics on the right illustrate the locations of the topologically robust corners. Subsequent rows depict the time-evolution of three initial states on a 6×6×6×6 lattice mapped onto a 24-qubit chain, localized on a a corner, b an edge, and c a face. The leftmost column plots occupancy fidelity for the various lattice configurations, obtained from ED and quantum hardware (labeled HW), with insets showing the site-resolved occupancy density ρ(x, y, z, w) of the initial state. The central grid shows occupancy density measured on hardware at the final simulation time (t = 0.6), for the corresponding initial state (row) and lattice configuration (column). The color of individual sites (spheres) denotes their w-coordinate and the color saturation denotes the occupancy of the site; unoccupied sites are translucent. Error bars represent standard deviation across repetitions on different qubit chains and devices. Initial states with less overlap with topological corners exhibit slightly lower stability than their lower-dimensional counterparts, as these states diffuse into the more spacious 4D configuration space. d Hamiltonian schematic of the interacting chain realizing a minimal 4×4×4×4 tesseract lattice. Sites on the chain are colored black; colored vertices connecting to multiple sites on the chain denote interaction terms. Intra- and inter-cell hoppings, mapped onto interactions, are respectively denoted \(v_{\boldsymbol{\pi}}^{\alpha}\) and \(w_{\boldsymbol{\pi}}^{\alpha}\) for axes \(\alpha\in\{x, y, z, w\}\) and parities \({\boldsymbol{\pi}}\in{\mathbb{Z}}_{2}^{3}\). To limit visual clutter, only the \(v_{\boldsymbol{\pi}}^{\alpha}\) intra-cell couplings are shown; a corresponding set of \(w_{\boldsymbol{\pi}}^{\alpha}\) inter-cell couplings is present in the Hamiltonian but has been omitted from the diagram.

To start, we realized a dL = 4 × 6 = 24-qubit chain on the quantum processor, which encodes a 6×6×6×6 HOT tesseract. The 4-body (8-operator) interactions now come in \(d\cdot 2^{d}=64\) types; half of them are illustrated in Fig. 5d, which depicts only the minimal L = 4 case. As discussed in the 'Mapping higher-dimensional lattices to 1D quantum chains' section, these interactions are each a product of d − 1 density terms and a hopping process, the latter acting on the particle species that encodes the coupling direction on the HOT tesseract. In generic models with non-axially-aligned hopping, these interactions could be a product of up to d hopping processes. As we shortly illustrate, despite the complexity of the interactions, the signal-to-noise ratio in our hardware simulations (Fig. 5a) remains reasonably good.
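
The structure "product of d − 1 density terms and one hopping process" can be illustrated directly as a matrix on a small chain. The sketch below builds a density-conditioned hopping term \((\prod_s n_s)\,(c_{j+1}^{\dagger}c_{j}+\mathrm{h.c.})\) for hardcore bosons via Kronecker products; the 4-site chain and the choice of conditioning site are arbitrary illustrative choices, not the paper's Hamiltonian.

```python
import numpy as np

I2 = np.eye(2)
N  = np.diag([0.0, 1.0])                 # hardcore-boson number operator |1><1|
SP = np.array([[0.0, 0.0], [1.0, 0.0]])  # creation |1><0|
SM = SP.T                                # annihilation |0><1|

def embed(ops_by_site, n):
    """Kronecker-embed single-site operators on an n-site chain
    (site 0 is the most significant tensor factor)."""
    out = np.array([[1.0]])
    for s in range(n):
        out = np.kron(out, ops_by_site.get(s, I2))
    return out

def conditioned_hop(n, j, density_sites):
    """Hopping j <-> j+1, active only when every site in density_sites is
    occupied: (prod_s n_s) (c†_{j+1} c_j + c†_j c_{j+1}). The density sites
    are assumed disjoint from the hopping bond, so the factors commute."""
    dens = embed({s: N for s in density_sites}, n)
    hop = embed({j: SM, j + 1: SP}, n) + embed({j: SP, j + 1: SM}, n)
    return dens @ hop

# on a 4-site chain: |1010> hops to |0110> because site 2 is occupied,
# while |1000> is annihilated because the density condition fails
H = conditioned_hop(4, 0, density_sites=[2])
```

Terms of exactly this shape, with d − 1 density factors, are what the \(d\cdot 2^{d}\) coupling types of the tesseract chain decompose into.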

In Fig. 5, we consider the configurations C0, C4, C8, and C16, which correspond respectively to the topologically trivial scenario and to lattice configurations hosting four, eight, and all sixteen HOT corner modes, as schematically sketched in the header row. As for the 2D and 3D HOT lattices, the site-resolved occupancy density ρ(x, y, z, w) and occupancy fidelities measured on quantum hardware reveal strong robustness for initial states localized at topological corners, as illustrated by the strongly localized final states in the C4, C8, and C16 cases (Fig. 5a). However, their stability is now slightly lower, partly due to the more spacious 4D configuration space into which the state can diffuse, as seen from the colored clouds of partly occupied sites after time evolution. Evidently, the stability diminishes as we proceed to the edge- and surface-localized initial states (Fig. 5b, c).

Next, we investigate a lattice configuration that supports HOT edge modes (commonly referred to as topological hinge states in the literature22). So far we have seen topological robustness only from topological corner sites (Fig. 5); but with appropriate parameter tuning (see Supplementary Table 1), topological modes can be made to lie along entire edges. This is illustrated in the header row of Fig. 6, where topological modes lie along the y-edges. As our HOT lattices are constructed from a mesh of alternating SSH chains, we expect the topological edges to have wavefunction support (nonzero occupancy) only on alternate sites, consistent with the cumulative occupancy densities of the midgap zero-energy modes. This is corroborated by the site-resolved occupancy densities and occupancy fidelities measured on quantum hardware, which demonstrate that initial states localized on sites with topological wavefunction support are significantly more robust (Fig. 6a, b); i.e., (x0, y0, z0, w0) = (1, 3, 1, L) overlaps with the topological mode on the (1, y, 1, L), y ∈ {1, 3, 5} sites and is hence robust, but (1, 2, 1, L) is not. The stability of the initial state is reduced as we move farther from the corner, as can be seen, for instance, by comparing the occupancy fidelities and the size of the final occupancy cloud for (1, 1, 1, L) and (1, 3, 1, L) in Fig. 6a, b; this is expected from the decaying y-profile of the topological edge mode. Finally, our measurements verify that surface-localized states do not enjoy topological protection (Fig. 6c), as they are localized far away from the topological edges. It is noteworthy that such measurements into the interior of the 4D lattice can be made without additional difficulty on our 1D qubit chain, whereas they can present significant challenges on other platforms, even electrical (topolectrical) circuits.

Our mapping facilitates the realization of any desired HOT modes, beyond the aforementioned corner mode examples. The header row on the left displays the energy spectrum for a configuration of the tesseract harboring topologically non-trivial edges (midgap mode energies in black). The accompanying schematic highlights alternating sites with topological edge wavefunction support. Subsequent columns present the site-resolved occupancy density ρ(x, y, z, w) for a 6×6×6×6 lattice mapped onto a 24-qubit chain, measured on quantum hardware at t = 0 (first row) and at the final simulation time t = 0.6 (second row), for three different experiments. a A corner-localized state along a topological edge is robust, compared to one along a non-topological edge. b On a topologically non-trivial edge, a state localized on a site with topological wavefunction support is robust, compared to one localized on a site without support. c A surface-localized state far away from the topological edges diffuses into a large occupancy cloud. The bottom-left panel summarizes occupancy fidelities for the various initial states, obtained from ED and hardware (labeled HW). Error bars represent standard deviation across repetitions on different qubit chains and devices.

Our approach of mapping a d-dimensional HOT lattice onto an interacting 1D chain enabled a drastic reduction in the number of qubits required for simulation, and served a pivotal role in enabling the hardware realizations presented in this work. Here, we further illustrate that employing this mapping for simulation on quantum computers can provide a resource advantage over ED on classical computers, particularly at large lattice dimensionality d or linear size L. For this discussion, we largely leave aside tensor network methods, as their advantage over ED is unclear in the generic setting of lattice dimensionality d>1, with arbitrary initial states and evolution time (which may generate large entanglement).

To be concrete, we consider simulation tasks of the following broad type: given an initial state \(\vert\psi_{0}\rangle\), we wish to perform time-evolution to \(\vert\psi(t)\rangle\) and extract the expectation value of an observable O that is local, that is, O depends on \({\mathcal{O}}(l^{d})\) sites of the lattice, for a fixed neighborhood of radius l independent of L. State preparation or initialization resources for \(\vert\psi_{0}\rangle\) are excluded from our considerations, as there can be significant variations in cost depending on the choice of specification of the state for both classical and quantum methods. Measurement costs for computing O, however, are considered. To ensure a meaningful comparison, we assume first-order Pauli-basis Trotterization for the construction of quantum circuits, such that circuit preparation is algorithmically straightforward given a lattice Hamiltonian. As a baseline, classical ED of a d-dimensional, length-L system with a single particle generally requires \({\mathcal{O}}(L^{3d})\) run-time and \({\mathcal{O}}(L^{2d})\) dense classical storage to complete a task of this type47.
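
For readers unfamiliar with the assumed circuit construction: first-order Trotterization approximates \(e^{-i(A+B)t}\) by \((e^{-iAt/n}e^{-iBt/n})^{n}\), with error shrinking as 1/n. A minimal two-level illustration follows, using a toy Hamiltonian X + Z rather than the HOT chain Hamiltonian:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expmh(H, t):
    """e^{-iHt} for a Hermitian matrix H, via eigendecomposition."""
    E, U = np.linalg.eigh(H)
    return U @ np.diag(np.exp(-1j * E * t)) @ U.conj().T

def trotter1(t, n):
    """First-order Trotter approximation of e^{-i(X+Z)t} with n steps."""
    step = expmh(X, t / n) @ expmh(Z, t / n)
    return np.linalg.matrix_power(step, n)

exact = expmh(X + Z, 1.0)
for n in (1, 10, 100):
    # spectral-norm error shrinks roughly as 1/n for a first-order formula
    print(n, np.linalg.norm(trotter1(1.0, n) - exact, 2))
```

On hardware, each Trotter step is compiled into the Pauli-rotation gates counted in the scaling arguments below, so the target precision \(\epsilon\) fixes the step count and hence the circuit depth.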

A direct implementation of a generic Hamiltonian using our mapping gives \({\mathcal{O}}(dL^{d}\cdot 2^{d})\) Pauli strings per Trotter step (see Methods), where the hoppings along each edge of the lattice, extensive in number, are allowed to be independently tuned. However, physically relevant lattices typically host only a systematic subset of hopping processes, described by a sub-extensive number of parameters. In particular, in the HOT lattices we considered, the hopping amplitude \(u_{\mathbf{r}}^{\alpha}\) along each axis depends only on \(\alpha\) and the parities of the coordinates r. Exploiting this sub-extensive number of distinct hoppings, the lattice Hamiltonian can be written in a more favorable factorized form, yielding \({\mathcal{O}}(dL\cdot 2^{2d})\) Pauli strings per Trotter step (see Methods). Decomposing into a hardware gate set, the total number of gates in a time-evolution circuit scales as \({\mathcal{O}}(d^{2}L^{2}\cdot 2^{2d}/\epsilon)\) in the worst case for simulation precision \(\epsilon\), assuming all-to-all connectivity between qubits. Imposing linear nearest-neighbor connectivity on the qubit chain does not alter this bound. Crucially, there is no term scaling as \(L^{d}\) (exponential in d), unlike classical ED.
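
The two quoted Pauli-string counts are easy to tabulate for the lattice sizes simulated in this work. The constant prefactors are dropped here (an assumption), so only the relative scalings are meaningful:

```python
def direct_terms(d, L):
    """Leading-order Pauli-string count per Trotter step for the direct
    mapping with independently tuned edge hoppings: d * L^d * 2^d."""
    return d * L**d * 2**d

def factored_terms(d, L):
    """Count for the factorized form exploiting the parity structure
    of the hopping amplitudes: d * L * 2^(2d)."""
    return d * L * 2**(2 * d)

# (d, L) pairs matching the 2D, 3D, and 4D hardware experiments in the text
for d, L in [(2, 10), (3, 6), (4, 6)]:
    print(d, L, direct_terms(d, L), factored_terms(d, L))
```

Already at the 4D tesseract (d = 4, L = 6), the factorized form cuts the per-step term count by more than an order of magnitude, which is what makes the L^d-free gate scaling possible.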

For large L and d, the circuit preparation and execution time can be lower than the \({\mathcal{O}}(L^{3d})\) run-time of classical ED. We illustrate this in Fig. 7, which shows a qualitative comparison of run-time scaling between the quantum simulation approach and ED. We have assumed the execution time on hardware to scale as the number of gates in the circuit, \({\mathcal{O}}(d^{2}L^{2}\cdot 2^{2d}/\epsilon)\), which neglects speed-ups afforded by parallelization of single- or two-qubit gates acting on disjoint qubits48. The difference in asymptotic complexities implies a crossover at large L or d beyond which quantum simulation exhibits a growing advantage. The exact crossover boundary is sensitive to platform-specific details such as gate times and control capabilities; given the large spread in gate timescales (three orders of magnitude) across present-day platforms49,50, and uncertain overheads from quantum error correction or mitigation, we avoid giving definite numerical promises on break-even L and d values. Classical memory usage is similarly bounded during circuit construction, straightforwardly reducible to \({\mathcal{O}}(dL)\) by constructing and executing gates in a streaming fashion51, and worst-case \({\mathcal{O}}(2^{ld})\) during readout to compute O, reducible to a constant supposing basis changes, mapping components of O onto the computational basis of a fixed number of measured qubits, can be implemented on the quantum circuits52.
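
Purely as a scaling illustration, one can locate where the two asymptotic expressions cross if all constant prefactors (and \(\epsilon\)) are set to one. This is emphatically not a hardware break-even estimate, for the platform-dependent reasons just given; it only shows that the polynomial-vs-\(L^{3d}\) gap opens up at very modest lattice sizes once constants are ignored:

```python
def ed_cost(d, L):
    """Classical ED run-time scaling, constants dropped: L^(3d)."""
    return L ** (3 * d)

def qc_cost(d, L):
    """Trotter-circuit gate-count scaling with epsilon absorbed: d^2 L^2 4^d."""
    return d**2 * L**2 * 4**d

def crossover_L(d, L_max=100):
    """Smallest L at which the (unit-constant) circuit cost dips below ED."""
    for L in range(2, L_max + 1):
        if qc_cost(d, L) < ed_cost(d, L):
            return L
    return None

for d in (2, 3, 4):
    print(d, crossover_L(d))
```

Real break-even points shift by the omitted gate-time and error-mitigation constants, which is precisely why the text refrains from quoting numbers.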

Comparison of the asymptotic computational time required for the dynamical simulation of d-dimensional, size-L lattice Hamiltonians of similar complexity to our HOT lattices. a With fixed lattice dimension d and increasing lattice size L, the time taken with our approach on a quantum computer (labeled QC) scales as \(L^{2}\), rather than the much higher power \(L^{3d}\) of classical ED. b For fixed L and varying d, our approach scales promisingly, like \(4^{d}\) instead of \((L^{3})^{d}\) for ED. We assume conventional Trotterization for circuit construction; at large L and d, our mapping and quantum simulation approach can provide a resource advantage over classical numerical methods (e.g., ED).

The favorable resource scaling (run-time and memory), in combination with the modest dL qubits required, suggests promising scalability of our mapped quantum simulation approach, especially in realizing larger and higher-dimensional HOT lattices. We reiterate, however, that Trotterized circuits without additional optimization remain largely too deep for present-generation NISQ hardware to execute feasibly. The use of qudit hardware architectures in place of qubits can allow shallower circuits53; in particular, using a qudit of local Hilbert space dimension \(2^{d}\) instead of a group of d qubits avoids, to a degree, the decomposition of long-range multi-site gates, assuming the ability to efficiently and accurately perform single- and two-qudit operations54. Nonetheless, for the quantum simulation of sophisticated topological lattices as described here to achieve its full potential, fault-tolerant quantum computation, or at the least quantum devices with vastly improved error characteristics and decoherence times, will likely be needed.

Read the rest here:
Realization of higher-order topological lattices on a quantum computer - Nature.com