Archive for the ‘Quantum Computer’ Category

UNI’s Begeman Lecture to explore how quantum computing is revolutionizing our world – Cedar Valley Daily Times

Quantum computing, and how it's revolutionizing our world, is the focus of this year's Begeman Lecture in Physics at the University of Northern Iowa.

The lecture, titled "Building a Quantum Computer, One Atom at a Time," will be presented by UNI Department of Physics alum Justin Bohnet on Wednesday, April 3 at 7 p.m. in the Lang Hall Auditorium. The event is free and open to the public.

"Justin is in the vanguard of efforts to develop quantum computers for widespread use," said Paul Shand, head of the UNI Department of Physics. "We're excited for him to share more about quantum computers and how they will turbocharge computing in the future."

Bohnet is the research & development manager at Quantinuum, a quantum computing company whose mission is to accelerate quantum computing and use its power to achieve unprecedented breakthroughs in drug discovery, health care, materials science, cybersecurity, energy transformation and climate change.

In this lecture, Bohnet will share his personal journey from a student at UNI to building the world's most powerful quantum computers, powered by control over single atoms. Along the way, you'll get a crash course on quantum computers: what they are, how they work and why we're standing on the brink of a technological revolution that will let us explore uncharted territories of science and technology.

If you need a reasonable accommodation in order to participate in this event, please contact the UNI Department of Physics by calling 319-273-2420 or by emailing physics@uni.edu prior to the event.


Waseda U. Researchers Report New Quantum Algorithm for Speeding Optimization – HPCwire

Optimization problems cover a wide range of applications and are often cited as good candidates for quantum computing. However, the execution time for constrained combinatorial optimization applications on quantum devices can be problematic. Researchers from Waseda University report developing a new algorithm, the post-processing variationally scheduled quantum algorithm (pVSQA), that speeds performance.

There's a brief account of the work posted today on the Waseda University website. Constrained combinatorial optimization problems (COPs) are common in logistics, supply chain management, machine learning, material design, and drug discovery. The researchers report that the novelty of their algorithm is its use of a post-processing technique combined with variational scheduling to achieve high-quality solutions to COPs in a short time.

"The two main methods for solving COPs with quantum devices are variational scheduling and post-processing. Our algorithm combines variational scheduling with a post-processing method that transforms infeasible solutions into feasible ones, allowing us to achieve near-optimal solutions for constrained COPs on both quantum annealers and gate-based quantum computers," said Tatsuhiko Shirai, a leader of the work, which was published in IEEE Transactions on Quantum Engineering this month.

Here's a brief excerpt from the article:

The innovative pVSQA algorithm uses a quantum device to first generate a variational quantum state via quantum computation. This is then used to generate a probability distribution function which consists of all the feasible and infeasible solutions that are within the constraints of the COP. Next, the post-processing method transforms the infeasible solutions into feasible ones, leaving the probability distribution with only feasible solutions. A classical computer is then used to calculate an energy expectation value of the cost function using this new probability distribution. Repeating this calculation results in a near-optimal solution.
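The loop described in the excerpt (sample candidate solutions, repair the infeasible ones, then average the cost on a classical computer) can be sketched in a few lines. This is a minimal illustration on a toy knapsack constraint, not Waseda's implementation: `greedy_repair`, the item data, and the plain value-sum objective are hypothetical stand-ins for the paper's greedy post-processing step and Ising cost function.

```python
def greedy_repair(bits, weights, capacity):
    """Greedily drop the heaviest selected items until the knapsack
    constraint is satisfied (a stand-in for the paper's post-processing)."""
    bits = list(bits)
    for i in sorted((i for i, b in enumerate(bits) if b),
                    key=lambda i: -weights[i]):
        if sum(w for w, b in zip(weights, bits) if b) <= capacity:
            break
        bits[i] = 0
    return tuple(bits)

def expected_value(samples, values, weights, capacity):
    """Repair each sampled bitstring, then average the objective over the
    (now feasible-only) distribution on a classical computer."""
    return sum(p * sum(v for v, b in zip(values,
                                         greedy_repair(bits, weights, capacity)) if b)
               for bits, p in samples.items())
```

In pVSQA the `samples` distribution would come from measuring the variational quantum state; here it would simply be a dictionary mapping bitstrings to probabilities.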

The researchers analyzed the performance of this algorithm using both a simulator and real quantum devices such as a quantum annealer and a gate-type quantum device. The experiments revealed that pVSQA achieves a near-optimal performance within a predetermined time on the simulator and outperforms conventional quantum algorithms without post-processing on real quantum devices.

Given the limits of current quantum devices (adiabatic annealers and gate-based systems), the researchers suggest the new algorithm is a significant step forward, particularly given the wide applicability of constrained combinatorial optimization.

They note in the paper's abstract:

COPs are typically transformed into ground-state search problems of the Ising model on a quantum annealer or gate-based quantum device. Variational methods are used to find an optimal schedule function that leads to high-quality solutions in a short amount of time. Post-processing techniques convert the output solutions of the quantum devices to satisfy the constraints of the COPs.

pVSQA combines the variational methods and the post-processing technique. We obtain a sufficient condition for constrained COPs to apply pVSQA based on a greedy post-processing algorithm. We apply the proposed method to two constrained NP-hard COPs: the graph partitioning problem and the quadratic knapsack problem. pVSQA on a simulator shows that a small number of variational parameters is sufficient to achieve a (near-)optimal performance within a predetermined operation time. Then building upon the simulator results, we implement pVSQA on a quantum annealer and a gate-based quantum device. The experimental results demonstrate the effectiveness of our proposed method.

Link to Waseda University article, https://www.waseda.jp/top/en/news/80146

Link to IEEE paper, https://ieeexplore.ieee.org/document/10472069


Exploring the potential of quantum reservoir computing in forecasting the intensity of tropical cyclones – Moody’s Analytics

What is the problem?

Accurately predicting the intensity of tropical cyclones, defined as the maximum sustained windspeed over a period of time, is a critical yet challenging task. Rapid intensification (RI) events are still a daunting problem for operational intensity forecasting.

Better forecasts and simulation of tropical cyclone (TC) intensities and tracks can significantly improve the quality of Moody's RMS tropical cyclone modeling suite. RMS has helped clients manage their risk during TC events in the North Atlantic for almost 20 years. Real-time TCs can significantly impact a company's financial and operational state, and its overall solvency. Moody's RMS HWind product helps (re)insurers, brokers, and capital markets understand the range of potential losses across multiple forecast scenarios, capturing the uncertainty in how track and intensity will evolve.

With the advances in Numerical Weather Prediction (NWP) and new meteorological observations, forecasts of TC movement have progressively improved in global and regional models. However, the model accuracy in forecasting the intensities of TCs remains challenging for operational weather forecasting and consequential assessment of weather impacts such as high winds, storm surges, and heavy rainfall.

Since the current spatial resolution of NWP models is insufficient for resolving convective-scale processes and the inner-core dynamics of a cyclone, forecast intensities of TCs from operational models are mostly underestimated (low-biased). Yet accurate TC intensity guidance is crucial not only for assessing the impact of the TC, but also for generating realistic projections of storms and their associated hazards. This is essential for effective risk evaluation. Conventional TC intensity forecasting mainly relies on three approaches: statistical, dynamical, and statistical-dynamical methods.

Dynamical models, also known as numerical models, are the most complex and use high-performance computing (HPC) to solve the physical equations of motion governing the atmosphere. Statistical models, by contrast, do not explicitly consider the physics of the atmosphere; they are based on historical relationships between storm behavior and storm-specific details such as location and intensity.

The rise of Machine Learning (ML) and Deep Learning (DL) has led to attempts to create breakthroughs in climate modeling and weather forecasting. Recent advances in computational capabilities and the availability of extensive reanalysis of observational or numerical datasets have reignited interest in developing various ML methods for predicting and understanding the dynamics of complex systems.

One of our key objectives is to build a quantum reservoir computing-based model, capable of processing climate model outputs and storm environment parameters, that provides more accurate forecasts and will improve short-term and real-time TC risk analysis.

Official modeling centers use consensus or ensemble-based dynamical models and represent the state of the art in tropical cyclone forecasting. However, these physics-based models may be subject to bias derived from high wind shear, low sea surface temperatures, or the storm's location in the basin. By learning from past forecasting errors, we may be able to identify and correct past model biases, thereby greatly enhancing the quality of future forecasting and risk modeling products. The long-term aim is to integrate ML-based elements into coarse global climate models to improve their resolution and include natural dynamical processes currently absent in these models.

Reservoir Computing (RC) is a novel machine-learning algorithm particularly suited to quantum computers and has shown promising results in early non-linear time series prediction tests. In a classical setting, RC is stable and computationally simple. It works by mapping input time series signals into a higher dimensional computational space through the dynamics of a fixed, non-linear system known as a reservoir. This method is efficient, trainable, and has a low computational cost, making it a valuable tool for large-scale climate modeling.
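The classical idea can be made concrete with a minimal echo-state-network sketch. This is an illustrative toy, not Moody's or QuEra's code: the network sizes, the spectral-radius value, and the sine-wave task are arbitrary choices. A fixed random non-linear reservoir maps the input series into a high-dimensional state trajectory, and only a linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9):
    """Fixed random input and recurrent weights: the reservoir itself is never trained."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # keep the dynamics stable
    return W_in, W

def run_reservoir(u, W_in, W):
    """Map a 1-D input series into a trajectory of high-dimensional reservoir states."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x)
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Train only a linear readout, by ridge regression."""
    return np.linalg.solve(states.T @ states + ridge * np.eye(states.shape[1]),
                           states.T @ targets)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
W_in, W = make_reservoir(1, 50)
states = run_reservoir(u[:-1], W_in, W)
w_out = train_readout(states[50:], u[51:])   # discard a warm-up transient
prediction = states[50:] @ w_out
```

A quantum reservoir replaces the fixed random recurrent map with the natural dynamics of a quantum system, while the cheap linear-readout training stays the same.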

While quantum machine learning has been considered a promising application for near-term quantum computers, current quantum machine learning methods require large quantum resources and suffer from gradient vanishing issues. Quantum Reservoir Computing (QRC) has the potential to combine the efficient machine learning of classical RC with the computing power of complex and high-dimensional quantum dynamics. QRC takes RC a step further by leveraging the unique capabilities of quantum processing units (QPUs) and their exponentially large state space, resulting in rich dynamics that cannot be simulated on a conventional computer. In particular, the flexible atom arrangements and tunability of optical controls within QuEra's neutral atom QPU enable the realization of a rich class of Hamiltonians acting as the reservoir.

Recent studies on quantum computing simulators and hardware suggest that certain quantum model architectures used for learning on classical data can achieve results similar to that of classical machine learning models while using significantly fewer parameters. Overall, QRC offers a promising approach to resource-efficient, noise-resilient, and scalable quantum machine learning.

In this project, we are collaborating with QuEra Computing, the leading provider of quantum computers based on neutral atoms, to explore the benefits of using quantum reservoir computing in climate science and to investigate the potential advantages that the quantum layer from QuEra can bring. QuEra's neutral atom QPU and the types of quantum simulations it can perform give rise to different quantum reservoirs. This unique capability can potentially enhance the modeling of tropical cyclone intensity forecasts and data.

This collaboration involves multiple stakeholders and partners, including QuEra Computing Inc., Moody's RMS technical team, and Moody's Quantum Taskforce. The work is supported by a DARPA grant award, underscoring its significance and potential impact in tropical cyclone modeling and forecasting.

In summary, combining quantum machine learning methods, reservoir computing, and the quantum capabilities of QuEra's technology offers a promising approach to addressing the challenges in predicting tropical cyclone intensity. This collaboration aims to enhance the quality and efficiency of tropical cyclone modeling, ultimately aiding in better risk assessment and decision making in the face of these natural disasters.


Demonstration of hypergraph-state quantum information processing – Nature.com

Silicon-photonic quantum chip

The chip is fabricated by standard complementary metal-oxide-semiconductor (CMOS) processes. The waveguide circuit patterns are defined on an 8-inch silicon-on-insulator (SOI) wafer through 248 nm deep-ultraviolet (DUV) photolithography and inductively coupled plasma (ICP) etching. Once the waveguide layer is fabricated, a 1 μm thick layer of silicon dioxide (SiO2) is deposited by plasma-enhanced chemical vapor deposition (PECVD). Finally, thermo-optic phase-shifters are patterned in a 50-nm-thick layer of titanium nitride (TiN) deposited on top of the waveguides. Single photons were generated and guided in silicon waveguides with a cross-section of 450 nm × 220 nm. The photon-pair sources were designed with a length of 1.2 cm. Multimode interferometers (MMIs) with a width of 2.8 μm and a length of 27 μm were used as balanced beamsplitters. The chip was wire-bonded on a PCB, and each phase-shifter was individually controlled by an electronic driver. An optical microscopy image of the chip is shown in Fig. 2a.

In our experiment, we used a tunable continuous-wave (CW) laser at a wavelength of 1550.12 nm to pump the nonlinear sources, amplified to 100 mW using an erbium-doped fiber amplifier (EDFA). Photon pairs of different frequencies were generated in the integrated sources by the spontaneous four-wave mixing (SFWM) process, and then spatially separated by on-chip asymmetric Mach-Zehnder interferometers (MZIs). The signal photon was chosen at a wavelength of 1545.32 nm and the idler photon at 1554.94 nm. Single photons were routed off-chip for detection by an array of fiber-coupled superconducting nanowire single-photon detectors (SNSPDs) with an average efficiency of 85%, and photon coincidence counts were recorded by a multichannel time interval analyzer (TIA). The photon rate depends on the choice of projective measurement bases. In a typical setting of our experiments, for example, when the state is projected onto the eigenbasis, the two-photon coincidence rate was measured to be ~kHz, and the integration time of each projective measurement was 5 s.

Our quantum photonic chip, shown in Fig. 2a, integrates more than 400 photonic components, allowing arbitrary on-chip preparation, operation, and measurement of four-qubit hypergraph states. A key capability is the set of multiqubit-controlled unitary operations $C^{m}U$, where U represents an arbitrary unitary operation (e.g., U = Z in our experiment) and m is the number of control qubits. The realization of the multiqubit $C^{m}U$ gates relies on the transformation from entanglement sources to entangling operations, using the process of "entanglement generation, space expansion, local operation, and coherent compression"28.

Firstly, the four-dimensional Bell state is created by coherently exciting an array of four spontaneous four-wave mixing (SFWM) sources. A pair of photons with different frequencies are then separated by on-chip asymmetric Mach-Zehnder interferometers and routed to different paths, resulting in the four-dimensional Bell state29:

$$\left\vert \mathrm{Bell}\right\rangle_{4}=\frac{\left\vert 0\right\rangle_{\mathrm{qudit}}^{s}\left\vert 0\right\rangle_{\mathrm{qudit}}^{i}+\left\vert 1\right\rangle_{\mathrm{qudit}}^{s}\left\vert 1\right\rangle_{\mathrm{qudit}}^{i}+\left\vert 2\right\rangle_{\mathrm{qudit}}^{s}\left\vert 2\right\rangle_{\mathrm{qudit}}^{i}+\left\vert 3\right\rangle_{\mathrm{qudit}}^{s}\left\vert 3\right\rangle_{\mathrm{qudit}}^{i}}{2},$$

(3)

where $\left\vert k\right\rangle$ (k = 0, 1, 2, 3) represents the logical bases of the qudits, and the superscripts s, i denote the signal and idler single photons, respectively. The two-qubit states are mapped to the four-dimensional qudit state in both the signal and idler single photons as follows:

$$\left\{\begin{array}{l}\left\vert 00\right\rangle_{\mathrm{qubit}}\to \left\vert 0\right\rangle_{\mathrm{qudit}}\\ \left\vert 01\right\rangle_{\mathrm{qubit}}\to \left\vert 1\right\rangle_{\mathrm{qudit}}\\ \left\vert 10\right\rangle_{\mathrm{qubit}}\to \left\vert 2\right\rangle_{\mathrm{qudit}}\\ \left\vert 11\right\rangle_{\mathrm{qubit}}\to \left\vert 3\right\rangle_{\mathrm{qudit}}\end{array}\right.$$

(4)

This results in the four-qubit state:

$$\left\vert \Phi \right\rangle=\frac{\left\vert 00\right\rangle_{\mathrm{qubit}}^{s}\left\vert 00\right\rangle_{\mathrm{qubit}}^{i}+\left\vert 01\right\rangle_{\mathrm{qubit}}^{s}\left\vert 01\right\rangle_{\mathrm{qubit}}^{i}+\left\vert 10\right\rangle_{\mathrm{qubit}}^{s}\left\vert 10\right\rangle_{\mathrm{qubit}}^{i}+\left\vert 11\right\rangle_{\mathrm{qubit}}^{s}\left\vert 11\right\rangle_{\mathrm{qubit}}^{i}}{2},$$

(5)

where $\left\vert k\right\rangle$ (k = 0, 1) represents the logical bases of the qubits. For clarity, we omit the subscript "qubit" in the following.

Secondly, we expand the Hilbert space of the idler-photonic qubits into a four-dimensional space. After the space-expansion process, we add two ancillary qubits $\left\vert \phi \right\rangle^{i}$ (the third ququart) to the state:

$$\left\vert \Phi \right\rangle_{1}=\frac{\left\vert 00\right\rangle^{s}\left\vert 00\right\rangle^{i}\left\vert \phi \right\rangle^{i}+\left\vert 01\right\rangle^{s}\left\vert 01\right\rangle^{i}\left\vert \phi \right\rangle^{i}+\left\vert 10\right\rangle^{s}\left\vert 10\right\rangle^{i}\left\vert \phi \right\rangle^{i}+\left\vert 11\right\rangle^{s}\left\vert 11\right\rangle^{i}\left\vert \phi \right\rangle^{i}}{2}.$$

(6)

Thirdly, the two ancillary qubits $\left\vert \phi \right\rangle^{i}$ are locally operated on by arbitrary two-qubit unitary gates $U_{ij}$. We apply different unitary operations $U_{00}$, $U_{01}$, $U_{10}$, and $U_{11}$ to $\left\vert \phi \right\rangle^{i}$ (marked by different colors in Fig. 2a). This returns the state:

$$\left\vert \Phi \right\rangle_{2}=\frac{\left\vert 00\right\rangle^{s}\left\vert 00\right\rangle^{i}\left\vert \phi_{R}\right\rangle^{i}+\left\vert 01\right\rangle^{s}\left\vert 01\right\rangle^{i}\left\vert \phi_{Y}\right\rangle^{i}+\left\vert 10\right\rangle^{s}\left\vert 10\right\rangle^{i}\left\vert \phi_{G}\right\rangle^{i}+\left\vert 11\right\rangle^{s}\left\vert 11\right\rangle^{i}\left\vert \phi_{B}\right\rangle^{i}}{2},$$

(7)

where the subscripts {R(ed), Y(ellow), G(reen), B(lue)} denote the states after the corresponding $U_{ij}$. The $U_{ij}$ are realized by universal linear-optical circuits30.

Finally, to preserve quantum coherence, the which-process information is erased in the coherent compression process. This swaps the state information of the idler qubits as:

$$\left\vert \Phi \right\rangle_{3}=\frac{\left\vert 00\right\rangle^{s}\left\vert \phi_{R}\right\rangle^{i}\left\vert 00\right\rangle^{i}+\left\vert 01\right\rangle^{s}\left\vert \phi_{Y}\right\rangle^{i}\left\vert 01\right\rangle^{i}+\left\vert 10\right\rangle^{s}\left\vert \phi_{G}\right\rangle^{i}\left\vert 10\right\rangle^{i}+\left\vert 11\right\rangle^{s}\left\vert \phi_{B}\right\rangle^{i}\left\vert 11\right\rangle^{i}}{2}.$$

(8)

Through the post-selection procedure of projecting the last two qubits onto the superposition state $(\left\vert 00\right\rangle+\left\vert 01\right\rangle+\left\vert 10\right\rangle+\left\vert 11\right\rangle)/2$, we coherently compress the 16-dimensional space back into the 4-dimensional space with a success probability of 1/4, obtaining:

$$\left\vert \Phi \right\rangle_{4}=\frac{\left\vert 00\right\rangle^{s}\left\vert \phi_{R}\right\rangle^{i}+\left\vert 01\right\rangle^{s}\left\vert \phi_{Y}\right\rangle^{i}+\left\vert 10\right\rangle^{s}\left\vert \phi_{G}\right\rangle^{i}+\left\vert 11\right\rangle^{s}\left\vert \phi_{B}\right\rangle^{i}}{2}.$$

(9)

In short, the process of "entanglement generation, space expansion, local operation, and coherent compression" results in the multiqubit entangling gate:

$$\left\vert 00\right\rangle \left\langle 00\right\vert \otimes U_{00}+\left\vert 01\right\rangle \left\langle 01\right\vert \otimes U_{01}+\left\vert 10\right\rangle \left\langle 10\right\vert \otimes U_{10}+\left\vert 11\right\rangle \left\langle 11\right\vert \otimes U_{11}.$$

(10)

By reprogramming the linear-optical circuits for the local unitary operations $U_{ij}$, we can realize different multiqubit controlled unitary gates such as $C^{m}Z$, $m\le 3$. For example, the triple-controlled CCCZ gate can be obtained by setting the configuration $U_{00}=U_{01}=U_{10}=I\otimes I$ and $U_{11}=CZ$. The quantum chip thus enables the generation, operation and measurement of arbitrary four-qubit hypergraph states.
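The controlled gate in Eq. (10) is block-diagonal, which is easy to check numerically. The sketch below is an illustrative NumPy check, not the photonic implementation: it builds the 16 × 16 operator from the four control projectors and verifies that the CCCZ configuration gives the expected diagonal gate.

```python
import numpy as np

I4 = np.eye(4)                       # identity on the two target-side qubits
CZ = np.diag([1.0, 1.0, 1.0, -1.0])  # two-qubit controlled-Z

def controlled_blocks(U00, U01, U10, U11):
    """Eq. (10): sum over |ab><ab| tensored with U_ab, a block-diagonal 16x16 operator."""
    G = np.zeros((16, 16))
    for k, U in enumerate([U00, U01, U10, U11]):
        P = np.zeros((4, 4))
        P[k, k] = 1.0                # projector |ab><ab| on the two control qubits
        G += np.kron(P, U)
    return G

# CCCZ configuration: U00 = U01 = U10 = identity, U11 = CZ.
CCCZ = controlled_blocks(I4, I4, I4, CZ)
```

The result is the diagonal gate that flips the sign of $\left\vert 1111\right\rangle$ only, as a CCCZ should.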

We here adopt the method proposed in ref. 31 to characterize the CCCZ gate. Since the CCCZ gate is invariant under permutation of the control and target qubits, we can characterize the gate by measuring the input-output truth tables for four complementary product bases. In these bases, three of the qubits are prepared and measured in the computational basis states {$\left\vert 0\right\rangle,\left\vert 1\right\rangle$}, while the fourth qubit is prepared and measured in the Hadamard basis states {$\left\vert +\right\rangle,\left\vert -\right\rangle$}. Inputting the product state $\vert \psi_{i,j}\rangle$ returns a product state $\vert \psi_{i,j}^{\mathrm{(out)}}\rangle=U_{CCCZ}\vert \psi_{i,j}\rangle$. The measured truth tables are shown in Fig. 2. We define the average statistical classical state fidelity as $\mathrm{F}_{\mathrm{c}(j)}=\sum_{i=1,k=1}^{16}p_{ik}q_{ik}/16$, where $p_{ik}$ and $q_{ik}$ are the theoretical and measured distributions. According to the Choi-Jamiolkowski isomorphism, we define the Choi matrix of an ideal CCCZ gate as $\chi_{0}$ and the experimental Choi matrix as $\chi$, from which the quantum process fidelity for the CCCZ gate can be written as $\mathrm{F}_{\chi}=\mathrm{Tr}[\chi \chi_{0}]/(\mathrm{Tr}[\chi_{0}]\mathrm{Tr}[\chi])$, where $\mathrm{Tr}[\chi_{0}]=16$ accounts for the normalization. We obtain the generalized Hofmann bound of fidelity31 (the lower bound on the process fidelity) for the CCCZ gate, which can be estimated from the four averaged state fidelities above as $\mathrm{F}\ge \mathrm{F}_{\mathrm{c}1}+\mathrm{F}_{\mathrm{c}2}+\mathrm{F}_{\mathrm{c}3}+\mathrm{F}_{\mathrm{c}4}-3$.

In this part, we show the rule of LU transformation when applying local Pauli operations on hypergraph states $\left\vert \mathrm{HG}\right\rangle=\left(\prod_{e\in E}C_{e}\right)\left\vert+\right\rangle^{\otimes n}$9, where e is a hyperedge connecting the vertices $\{i_{1},i_{2},\ldots,i_{m}\}$ and $C_{e}=I-2\left(\left\vert 1\right\rangle_{i_{1}}\left\vert 1\right\rangle_{i_{2}}\cdots \left\vert 1\right\rangle_{i_{m}}\right)\cdot \left(\left\langle 1\right\vert_{i_{1}}\left\langle 1\right\vert_{i_{2}}\cdots \left\langle 1\right\vert_{i_{m}}\right)$ is the corresponding multiqubit controlled-Z gate. To show the LU transformation, as an example, we consider applying the Pauli X operation on the kth qubit. The state can be written as:

$$\begin{aligned} X_{k}\left\vert \mathrm{HG}\right\rangle &= X_{k}\Big(\prod_{e\in E}C_{e}\Big)\left\vert+\right\rangle^{\otimes n}\\ &= \Big(\prod_{e\in E,\,e\not\ni k}C_{e}\Big)\,X_{k}\Big(\prod_{e\in E,\,e\ni k}C_{e}\Big)\left\vert+\right\rangle^{\otimes n}\\ &= \Big(\prod_{e\in E,\,e\not\ni k}C_{e}\Big)\cdot \Big[X_{k}\Big(\prod_{e\in E,\,e\ni k}C_{e}\Big)X_{k}\Big]\left\vert+\right\rangle^{\otimes n}\\ &= \Big(\prod_{e\in E,\,e\not\ni k}C_{e}\Big)\cdot \Big(\prod_{e\in E,\,e\ni k}X_{k}C_{e}X_{k}\Big)\left\vert+\right\rangle^{\otimes n}. \end{aligned}$$

(11)

Now we focus on the single operator $X_{k}C_{e}X_{k}$. Assume the edge e connects the vertices $\{1,2,\ldots,m\}$; for simplicity we take $k=1$ to be the first vertex (without loss of generality). Under this assumption, we can write the operator explicitly as:

$$X_{k}C_{e}X_{k}=X_{k}\left(I-2\left\vert 11\cdots 1\right\rangle \left\langle 11\cdots 1\right\vert \right)X_{k}=I-2\left\vert 01\cdots 1\right\rangle \left\langle 01\cdots 1\right\vert.$$

(12)

In the next step, we separate $C_{e}$ out on the left side. Notice that $I=C_{e}^{2}$ and

$$\left\vert 01\cdots 1\right\rangle \left\langle 01\cdots 1\right\vert=\left(I-2\left\vert 11\cdots 1\right\rangle \left\langle 11\cdots 1\right\vert \right)\cdot \left\vert 01\cdots 1\right\rangle \left\langle 01\cdots 1\right\vert=C_{e}\left\vert 01\cdots 1\right\rangle \left\langle 01\cdots 1\right\vert,$$

(13)

Therefore, we have

$$\begin{aligned} X_{k}C_{e}X_{k} &= I-2\left\vert 01\cdots 1\right\rangle \left\langle 01\cdots 1\right\vert\\ &= C_{e}\cdot \left(C_{e}-2\left\vert 01\cdots 1\right\rangle \left\langle 01\cdots 1\right\vert \right)\\ &= C_{e}\cdot \left(I-2\left\vert 11\cdots 1\right\rangle \left\langle 11\cdots 1\right\vert-2\left\vert 01\cdots 1\right\rangle \left\langle 01\cdots 1\right\vert \right)\\ &= C_{e}\cdot \Big(I-2\,\underbrace{\left(\left\vert 1\right\rangle \left\langle 1\right\vert+\left\vert 0\right\rangle \left\langle 0\right\vert \right)}_{I_{k}}\otimes \underbrace{\left\vert 1\cdots 1\right\rangle \left\langle 1\cdots 1\right\vert}_{m-1}\Big)\\ &= C_{e}\left(I_{k}\otimes C_{e/\{k\}}\right), \end{aligned}$$

(14)

where $C_{e/\{k\}}$ represents the multiqubit controlled gate corresponding to the new hyperedge $\{1,2,\ldots,k-1,k+1,\ldots,m\}$.

Finally, we complete the proof by substituting the above formula into Eq.(11), which leads to

$$\begin{aligned} X_{k}\left\vert \mathrm{HG}\right\rangle &= \Big(\prod_{e\in E,\,e\not\ni k}C_{e}\Big)\cdot \Big(\prod_{e\in E,\,e\ni k}X_{k}C_{e}X_{k}\Big)\left\vert+\right\rangle^{\otimes n}\\ &= \Big(\prod_{e\in E,\,e\not\ni k}C_{e}\Big)\cdot \Big(\prod_{e\in E,\,e\ni k}C_{e}\left(I_{k}\otimes C_{e/\{k\}}\right)\Big)\left\vert+\right\rangle^{\otimes n}\\ &= \Big(\prod_{e\in E}C_{e}\Big)\cdot \Big(\prod_{e\in E,\,e\ni k}C_{e/\{k\}}\Big)\left\vert+\right\rangle^{\otimes n}. \end{aligned}$$

(15)

Equation (15) shows the LU transformation rule: applying a local Pauli X gate on a qubit is equivalent to applying a series of multiqubit controlled-Z gates that connect the other qubits sharing an edge with it.

We take an example to illustrate the local unitary transformation, as shown in Fig. 1c. The initial state is

$$\begin{aligned}\left\vert \psi \right\rangle=&\left\vert 0000\right\rangle+\left\vert 0001\right\rangle+\left\vert 0010\right\rangle+\left\vert 0011\right\rangle\\ &+\left\vert 0100\right\rangle-\left\vert 0101\right\rangle+\left\vert 0110\right\rangle+\left\vert 0111\right\rangle\\ &+\left\vert 1000\right\rangle+\left\vert 1001\right\rangle+\left\vert 1010\right\rangle+\left\vert 1011\right\rangle\\ &-\left\vert 1100\right\rangle-\left\vert 1101\right\rangle-\left\vert 1110\right\rangle-\left\vert 1111\right\rangle\end{aligned}$$

(16)

After applying $X_{3}$, which flips the third qubit, the state becomes

$$\begin{aligned}\left\vert \psi \right\rangle=&\left\vert 0000\right\rangle+\left\vert 0001\right\rangle+\left\vert 0010\right\rangle+\left\vert 0011\right\rangle\\ &+\left\vert 0100\right\rangle+\left\vert 0101\right\rangle+\left\vert 0110\right\rangle-\left\vert 0111\right\rangle\\ &+\left\vert 1000\right\rangle+\left\vert 1001\right\rangle+\left\vert 1010\right\rangle+\left\vert 1011\right\rangle\\ &-\left\vert 1100\right\rangle-\left\vert 1101\right\rangle-\left\vert 1110\right\rangle-\left\vert 1111\right\rangle\end{aligned}$$

(17)

which can be quickly verified as the expression for the second hypergraph state in Fig. 1c. Following a similar procedure, the hypergraph can be simplified to only two edges, as shown in Fig. 1c. The rule of LU transformation can be described graphically: the $X^{(k)}$ operation on qubit k removes or adds the hyperedges in $E^{(k)}$, depending on whether they already exist, where $E^{(k)}$ denotes all hyperedges that contain qubit k, with qubit k itself removed. The $Z^{(k)}$ operation on qubit k removes the one-edge on qubit k.
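Representing a hypergraph state by its set of hyperedges, the toggling rule above can be sketched in a few lines. This is an illustrative helper, not code from the paper; vertices are labeled by integers and hyperedges stored as frozensets.

```python
def apply_x(edges, k):
    """LU rule from Eq. (15): a Pauli X on qubit k toggles, for every
    hyperedge e containing k, the reduced edge e minus {k}:
    the reduced edge is added if absent and removed if already present."""
    edges = set(edges)
    for e in [e for e in edges if k in e]:
        edges ^= {frozenset(e - {k})}   # symmetric difference toggles the edge
    return edges
```

Applying the same X twice toggles each reduced edge back, returning the original hypergraph, consistent with $X_{k}^{2}=I$.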

We here derive the bases used for the evaluation of the Mermin-Klyshko (MK) polynomials $M_{4}$ and $M_{4}'$. The general form of $M_{n}$ is given as37:

$$M_{n}=\frac{1}{2}M_{n-1}\left(a_{n}+a_{n}'\right)+\frac{1}{2}M_{n-1}'\left(a_{n}-a_{n}'\right),$$

(18)

where $a_{n}$ and $a_{n}'$ are single-qubit operators and $M_{1}=a_{1}$. $M_{n}'$ can be obtained by interchanging the terms with and without the prime. In particular, for the four-qubit state, we then have $M_{4}$ and $M_{4}'$:

$$\left\{\begin{array}{l}M_{4}=\frac{1}{2}M_{3}\left(a_{4}+a_{4}'\right)+\frac{1}{2}M_{3}'\left(a_{4}-a_{4}'\right)\\ M_{4}'=\frac{1}{2}M_{3}'\left(a_{4}+a_{4}'\right)-\frac{1}{2}M_{3}\left(a_{4}-a_{4}'\right).\end{array}\right.$$

(19)

Similarly, $\{M_{3},M_{2}\}$ and $\{M_{3}',M_{2}'\}$ can be obtained. We instead use an alternative approach, dividing the original 4-qubit operators into two 2-qubit parts, because of the qubit-qudit mapping implemented in our device. This leads to the construction of the MK polynomials $M_{4}$ and $M_{4}'$ from $M_{2}$ and $M_{2}'$:

$$\left\{\begin{array}{l}M_{4}=\frac{1}{2}\left[M_{2}\left(a_{3}a_{4}'+a_{3}'a_{4}\right)+M_{2}'\left(a_{3}a_{4}-a_{3}'a_{4}'\right)\right]\\ M_{4}'=\frac{1}{2}\left[M_{2}'\left(a_{3}a_{4}'+a_{3}'a_{4}\right)-M_{2}\left(a_{3}a_{4}-a_{3}'a_{4}'\right)\right].\end{array}\right.$$

(20)

In the experiment, we first measured $M_{2}$, $M_{2}'$, $(a_{3}a_{4}'+a_{3}'a_{4})$ and $(a_{3}a_{4}-a_{3}'a_{4}')$, and then estimated the MK polynomials $M_{4}$ and $M_{4}'$. A total of 64 measurement bases are required for $M_{4}$ and $M_{4}'$, each determined by the choice of the corresponding $a_{i}$ and $a_{i}'$.
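The equivalence between the qubit-by-qubit recursion of Eq. (18) and the pairwise construction of Eq. (20) can be checked numerically. The sketch below is illustrative (the measurement angles are arbitrary choices, not the experimental settings): it builds $M_{4}$ both ways as 16 × 16 matrices and compares them.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def setting(theta):
    """A single-qubit measurement operator in the X-Y plane."""
    return np.cos(theta) * X + np.sin(theta) * Y

def mk_step(m, mp, a, ap):
    """One step of the MK recursion, Eqs. (18)-(19), via tensor products."""
    return (0.5 * (np.kron(m, a + ap) + np.kron(mp, a - ap)),
            0.5 * (np.kron(mp, a + ap) - np.kron(m, a - ap)))

thetas = [0.1, 0.7, 1.3, 2.1]   # arbitrary angles for the four qubits
ops = [(setting(t), setting(t + np.pi / 2)) for t in thetas]

# Qubit by qubit: M1 = a1, then apply the recursion three times.
M, Mp = ops[0]
for a, ap in ops[1:]:
    M, Mp = mk_step(M, Mp, a, ap)

# Pairwise, Eq. (20): from M2, M2' and two-qubit blocks on qubits 3 and 4.
M2, M2p = mk_step(*ops[0], *ops[1])
(a3, a3p), (a4, a4p) = ops[2], ops[3]
plus = np.kron(a3, a4p) + np.kron(a3p, a4)
minus = np.kron(a3, a4) - np.kron(a3p, a4p)
M4 = 0.5 * (np.kron(M2, plus) + np.kron(M2p, minus))
M4p = 0.5 * (np.kron(M2p, plus) - np.kron(M2, minus))
```

Both constructions yield the same Hermitian 16 × 16 operator, confirming that measuring the four two-qubit blocks suffices.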

In blind quantum computation, clients use the expensive resource states shared by the server to perform their measurements. In such a scenario, the average fidelity of the states generated by the server has to be verified before computation. Ideally, the clients are capable of estimating a lower bound on the state fidelity and verifying genuine entanglement without much cost. We here use a protocol of color-encoding stabilizers41. To verify that the fidelity is larger than $1-\epsilon_{0}$, the number of states required is given by

$$N=\left\lceil \frac{\ln \delta}{\ln \left(1-\epsilon_{0}/s\right)}\right\rceil,$$

(21)

where s is the minimum number of colors in the hypergraph state, $\delta$ is the significance level, and $\epsilon_{0}$ denotes the infidelity threshold. This formula can be better understood in the following form

$$\delta \ge \left(1-\epsilon_{0}/s\right)^{N},$$

(22)

where the right-hand side represents the total probability of passing all N tests for a state with infidelity $\epsilon_{0}$. When this probability is smaller than the chosen significance level $\delta$ and a passing event occurs on the client side, we can conclude that the real infidelity of the state generated by the server satisfies $\epsilon < \epsilon_{0}$ with significance level $\delta$.

A simple transformation of Eq. (21) gives

$$\bar{F}\ge s\cdot \delta^{1/N}-(s-1).$$

(23)

In the ideal case, if the generated state is exactly the target hypergraph state, i.e., F = 1, the probability of passing the test is always 100%, while increasing the number of tests results in a tighter bound (smaller $\epsilon_{0}$). In reality, for experimental states with non-unit fidelity, the total passing probability decreases exponentially with the number of tests N. If we define the single-test passing probability as $\bar{P}$, the total passing probability takes the form $\bar{P}^{N}$, which should be kept above the significance level $\delta$. Therefore, for a selected significance level, the maximum number of tests, which corresponds to the tightest bound on fidelity, should satisfy $\bar{P}^{N}=\delta$. Replacing $\delta$ by $\bar{P}^{N}$ in Eq. (23) thus returns

$$\bar{F}\ge s\cdot \bar{P}-(s-1).$$

(24)
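The interplay between Eqs. (21) and (23) can be sketched numerically. The short Python sketch below computes the required number of tests and the resulting fidelity bound; the parameter values (5% significance level, 5% target infidelity, a 2-colorable hypergraph state) are illustrative assumptions, not values taken from the experiment.

```python
import math

def num_tests(delta, eps0, s):
    """Eq. (21): number of single-copy tests so that a state with
    infidelity >= eps0 passes all of them with probability < delta."""
    return math.ceil(math.log(delta) / math.log(1 - eps0 / s))

def fidelity_bound(delta, N, s):
    """Eq. (23): lower bound on the average fidelity certified by
    N passed tests at significance level delta."""
    return s * delta ** (1 / N) - (s - 1)

# Illustrative parameters: 2-colorable hypergraph state,
# 5% significance level, 5% target infidelity.
delta, eps0, s = 0.05, 0.05, 2
N = num_tests(delta, eps0, s)          # N = 119 for these parameters
print(N, fidelity_bound(delta, N, s))  # certified bound is roughly 0.95
```

Note how the bound tightens only logarithmically in the significance level: halving \(\delta\) adds a fixed number of tests rather than doubling them.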

Continue reading here:
Demonstration of hypergraph-state quantum information processing - Nature.com

Revolutionizing Quantum Computing with Magnetic Waves – yTech

Summary: A team at Helmholtz-Zentrum Dresden-Rossendorf has introduced a groundbreaking quantum computing technique using magnons to manage qubits. Their research, broadening the horizons of quantum technology, might vastly improve computers' capabilities and make them more scalable.

In a significant stride toward advanced quantum computing, researchers from the Helmholtz-Zentrum Dresden-Rossendorf have devised a novel approach to control and manage quantum bits, or qubits, the fundamental units of quantum computers. This technique diverges from traditional electromagnetic methods and instead harnesses magnons, collective spin excitations in a magnetic material, to interact with qubits through a material known as silicon carbide.

The innovation sets itself apart by using the magnetic interactions in a nickel-iron alloy magnetic disk to manipulate qubits, sidestepping the limitations of current microwave antenna technologies. By employing magnons' shorter wavelengths, the promise of denser and more powerful quantum computer architectures comes within reach. The results of this burgeoning research were published in Science Advances, detailing how magnons could serve as a new quantum bus, interfacing directly with the spin qubits that store quantum information.

While research is still underway to test the practical application of this method in quantum computing, the implications are vast. The potential for controlling numerous qubits and enabling their entanglement could revolutionize industries by providing more efficient cryptographic techniques and accelerating drug discovery processes.

With quantum computing still in its nascency, overcoming challenges such as error correction and the creation of stable qubit networks remains paramount. However, the Helmholtz-Zentrum Dresden-Rossendorf's breakthrough hints at an alternative pathway that mitigates some of these fundamental issues.

The progress made with magnons marks a crucial development toward viable, large-scale quantum computing, an essential leap forward in technology that could reshape how we tackle the world's most complex computational challenges.

Industry watchers point to agencies like the U.S. National Institute of Standards and Technology and The European Quantum Flagship initiative for up-to-date research and progress reports in this rapidly innovative field. These efforts underscore the increasing importance and potential impact of quantum computing on multiple sectors, from security to healthcare.

The discovery by the team at Helmholtz-Zentrum Dresden-Rossendorf of using magnons to manipulate qubits represents a potential paradigm shift for the quantum computing industry, an industry that is still very much in its experimental and developmental stages but holds huge potential for transformative change across numerous fields.

Market Forecasts and Industry Growth

Market forecasts for quantum computing are robust, with predictions of significant growth over the coming decades as the technology matures and becomes more commercially available. Analysts at companies like Gartner and MarketsandMarkets have projected that the quantum computing market could be worth billions of dollars in the ensuing decade. This optimism is based on advancements in quantum technologies and the increasing interest from governments and private sector participants in harnessing the power of quantum computers.

The quantum computing industry seeks to leverage the principles of quantum mechanics to perform calculations at speeds unattainable by traditional computers. This capability has the potential to transform industries by solving complex problems in fields such as cryptography, financial modeling, drug discovery, and logistics. Given its nascent stage, quantum computing attracts significant investments both from venture capitalists and public sector funds aimed at achieving strategic technological advantages.

Issues and Challenges

Despite its promising outlook, the quantum computing industry faces numerous challenges that need to be addressed. Creating stable and scalable qubit systems, error correction, and developing a skilled workforce proficient in quantum technologies are among the hurdles the industry is grappling with. Furthermore, quantum computing is not immune to ethical and security concerns, especially considering the implications it has for breaking current encryption schemes used to protect data.

The development of new techniques like the one involving magnons presents a potential solution to some of these problems, especially related to the scalability and control of qubits. Nonetheless, the transition from groundbreaking research to practical application involves a significant amount of work and collaboration across various disciplines.

For those interested in keeping track of the latest advancements and industry trends, visiting the official websites of leading organizations and research institutions is advisable. You can refer to prominent agencies such as The U.S. National Institute of Standards and Technology or European research initiatives such as The European Quantum Flagship to obtain recent information and progress reports on quantum computing and quantum technologies.

The integration of magnons into quantum computing architectures is still a developing story, but it highlights the innovative spirit and continued evolution of this cutting-edge field. With ongoing research and development, quantum computing is poised to become a cornerstone of next-generation computing technology with the power to redefine our approach to solving the worlds most complex problems.


More:
Revolutionizing Quantum Computing with Magnetic Waves - yTech