Archive for the ‘Quantum Computer’ Category

Fujitsu SD-WAN and ISS are first users of quantum security – Capacity Media

07 December 2021 | Alan Burkitt-Gray

Fujitsu and a company working with the International Space Station have been named as among the first users of Quantum Origin, what's claimed to be the world's first commercial product built using quantum computers.

Cambridge Quantum, which is now part of the US-UK group Quantinuum, says it can fit quantum-level security to existing networks, including software-defined wide area networks (SD-WANs) from Fujitsu, which has incorporated the technology into its products.

Duncan Jones, head of cyber security at Cambridge Quantum, said last night: "We are kick-starting the quantum cyber security industry." He said the company will "start to distribute [quantum] keys into cloud platforms".

Houtan Houshmand, principal architect at Fujitsu, said his company was planning to incorporate the technology into its SD-WAN products.

David Zuniga, business development manager at Axiom Space, said the technology has been tested on the International Space Station (ISS) and would lead to space tourism with "researchers and scientists [who] could do their work in space with total security".

Cambridge Quantum founder and Quantinuum CEO Ilyas Khan said: "This product could be used by anyone."

He said it should be used by organisations worried about the threat from people sequestering data: storing encrypted information now, against the time when quantum computers will be available to decrypt it.

"You cannot afford to be asleep at the wheel," said Khan. "When should we be worried? Of course, now." He said existing classical systems could be protected by a quantum computer.

Jones said that the typical Quantum Origin end point might be a hardware security module that could be added to existing infrastructure. "For large enterprises to add this might be a year or two," he said. Smaller businesses were "slightly further out".

On prices, he said that a typical key using existing technology costs about US$1 a month. He implied that a Quantum Origin key would be cheaper but did not go into details.

Fujitsu's Houshmand was also asked about pricing. "I can't provide a cost," he said, noting that what Fujitsu has done so far is just a proof of concept.

Jones said that Quantinuum, which is a joint venture of Cambridge Quantum and Honeywell, is forming a number of partnerships, naming military supplier Thales and public key infrastructure (PKI) specialist Keyfactor. "This is how the technology will diffuse into the market."

He said: "We want to make this product broadly available," but accepted that there were global security considerations. "There are export control laws. We have to do a lot of due diligence."

Zuniga at Axiom Space, which is training its own crew for the ISS and is planning its own private space station, said that the US operating segment of the ISS, where Quantum Origin is to be used, has a firewall "to keep our data secure from the Russian sector. If we can't secure our data, it hurts a really expensive asset that's floating in space."

Khan, asked about possible exports to China and Russia, said: "We are answerable to the regulators. We are an American and a British company. We're not actually able to sell to adversaries."

Houshmand at Fujitsu agreed: "We have to stay rigidly compliant."

Elaborating on the technology, Jones said: "Quantum Origin is a cloud-based platform that uses a quantum computer from Quantinuum to produce cryptographic keys."

He was asked whether companies had five years, as is often suggested, to install quantum-level protection for their data. "They're wrong by about five years," he said.

Jones said Quantum Origin keys are the strongest that have ever been created or could ever be created, because they use quantum physics to produce truly random numbers.

Khan noted that the beta version of Quantum Origin has been tested on an IBM quantum network.

Quantinuum and Cambridge Quantum have a number of clients that have tested the technology, but they are operating under a non-disclosure agreement (NDA), said Khan.

"We have been working for a number of years now on a method to efficiently and effectively use the unique features of quantum computers in order to provide our customers with a defence against adversaries and criminals, now and in the future once quantum computers are prevalent," he said.

He added: "Quantum Origin gives us the ability to be safe from the most sophisticated and powerful threats today, as well as threats from quantum computers in the future."

Jones said: "When we talk about protecting systems using quantum-powered technologies, we're not just talking about protecting them from future threats. From large-scale takedowns of organisations, to nation-state hackers and the worrying potential of 'hack now, decrypt later' attacks, the threats are very real today, and very much here to stay. Responsible enterprises need to deploy every defence possible to ensure maximum protection at the encryption level, today and tomorrow."

A quantum of disruption: Capacity's feature about quantum technology, its threat to data security and what it is also doing to protect security, is here

Go here to read the rest:
Fujitsu SD-WAN and ISS are first users of quantum security - Capacity Media

Quantum deep reinforcement learning for clinical decision support in oncology: application to adaptive radiotherapy | Scientific Reports – Nature.com

Quantum deep reinforcement learning

Quantum deep reinforcement learning is a novel action-value-based decision-making framework derived from QRL [23] and the deep Q-learning [10] framework. Like conventional RL [9,31], our qDRL-based CDSS framework comprises five main elements: a clinical AI agent, the ARTE, a radiation dose decision-making policy, a reward, and a q-value function. Here, the AI agent is a clinical decision-maker that learns to make dose decisions that achieve clinically desirable outcomes within the ARTE. Learning takes place through agent-environment interaction, which proceeds sequentially: the AI decides on a dose and executes it, and in response a patient (part of the ARTE) transits from one state to the next. Each transition provides the AI with feedback on its decision in terms of the RT outcome and an associated reward value. The goal of RL is for the AI to learn a decision-making policy that maximizes the reward in the long run, defined in terms of a specified q-value function that assigns to every state-dose-decision pair a value obtained from the accumulation of rewards over time (returns).
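The interaction loop just described can be summarized in a few lines of Python. This is a schematic sketch, not the authors' code: `env` and `agent` are hypothetical stand-ins for the ARTE models and the clinical AI agent.

```python
def run_episode(env, agent):
    """One agent-environment interaction cycle as described above.

    `env.reset()` / `env.step(dose)` stand in for the ARTE transition and
    outcome models; `agent` stands in for the clinical decision-maker.
    """
    state = env.reset()                    # initial patient state
    done, total_reward = False, 0.0
    while not done:
        dose = agent.select_dose(state)    # AI decides on a dose and executes it
        next_state, reward, done = env.step(dose)  # patient transitions; outcome scored
        agent.update(state, dose, reward, next_state)  # learn from the feedback
        state, total_reward = next_state, total_reward + reward
    return total_reward
```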

Assuming the Markov property (i.e., the environment's response at time $t+1$ depends only on the state and dose decision at time $t$), the qDRL task can be mathematically described as a 5-tuple $(S, |D\rangle, TF, P, R)$, where $S$ is a finite set of patient states, $|D\rangle$ is a superposed quantum state representing the finite set of eigen-dose decisions, $TF: S \times D \to S'$ is the transition function that maps the patient's state $s_t$ and eigen-dose $|d\rangle_t$ to the next state $s_{t+1}$, $P_{LC|RP2}: S' \to [0,1]$ is the RT outcome estimator that assigns probability values $p_{LC}$ and $p_{RP2}$ to the state $s_{t+1}$, and $R: [0,1] \times [0,1] \to \mathbb{R}$ is the reward function that assigns a reward $r_{t+1}$ to the state-decision pair $(s_t, |d\rangle_t)$ based on the outcome probability estimates.

An eigen-dose $|d\rangle$ is a physically performable decision that is selected via quantum methods from the superposed quantum state $|D\rangle$, which simultaneously represents all possible eigen-doses at once. In simple words, $|D\rangle$ is the collection of all possible dose options and $|d\rangle$ is the option that remains after a decision is made. Selecting the dose decision $|d\rangle$ is carried out in two steps: (1) amplifying the optimal eigen-dose $|d\rangle^*$ within the superposed state $|D\rangle$ (i.e., $|D'\rangle = \widehat{Amp}_{|d\rangle^*}|D\rangle$) and (2) measuring the amplified state (i.e., $|d\rangle = \widehat{Measure}(|D'\rangle)$).

The optimal eigen-dose $|d\rangle^*$ is obtained from the deep Q-net, which serves as the AI's memory. The deep Q-net, $DQN: S \to \mathbb{R}^d$, is a neural network that takes the patient's state as input and outputs a q-value for each eigen-dose, $\{q_{|d\rangle}\}$. The optimal dose is then selected following a greedy policy, where the dose with the maximum q-value is chosen, i.e., $|d\rangle^* = \operatorname{argmax}_{|d\rangle} q_{|d\rangle}$. We applied a double Q-learning [32] algorithm in training the deep Q-net. The schematic of a training cycle is presented in Fig. 2 and additional technical details are presented in the Supplementary Material.
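A minimal PyTorch sketch of the greedy selection and the double-Q target may help fix ideas. The network names, the assumed 1-D state and q-value tensors, and the discount factor are illustrative assumptions, not the paper's settings.

```python
import torch

def select_dose(dqn: torch.nn.Module, state: torch.Tensor) -> int:
    """Greedy policy: pick the eigen-dose index with the highest q-value."""
    with torch.no_grad():
        q_values = dqn(state)       # DQN: S -> R^d, one q-value per eigen-dose
    return int(q_values.argmax())

def double_q_target(dqn, target_dqn, reward: float,
                    next_state: torch.Tensor, gamma: float = 0.99):
    """Double Q-learning target: the online net chooses the next dose,
    the target net evaluates it, reducing DQN's overestimation bias."""
    with torch.no_grad():
        best = dqn(next_state).argmax()
        return reward + gamma * target_dqn(next_state)[best]
```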

We initially employed Grover's amplification procedure [33,34] for the decision selection mechanism. While Grover's procedure works on a quantum simulator, it fails to work correctly on a quantum computer: the quantum circuit depth of Grover's procedure (for 4 or more qubits) is much greater than the coherence length of current quantum processors [35]. Whenever the quantum circuit length exceeds the coherence length, the quantum state becomes significantly affected by system noise and loses vital information. Therefore, we designed a quantum controller circuit that is shorter than the coherence length and suitable for the task of decision selection. The merit of our design is its fixed depth; since the depth is fixed for any number of qubits, the circuit is suitable for higher-qubit systems, as far as the circuit width permits. Technical details regarding its implementation on a quantum processor are presented in the Supplementary Materials.

An example of a controller circuit is given in Fig. 5. Controller circuits use twice the number of qubits ($2n$), divided into a control register and a main register. The optimal eigen-state obtained from the deep Q-net is created in the control register by selecting the appropriate pre-control gates. The control register is then entangled with the qubits of the main register via controlled-NOT (CNOT) gates, each connected between a control qubit from the control register and a target qubit from the main register. A CNOT gate flips the target qubit (here, from $|1\rangle$ to $|0\rangle$) only when its control qubit is in the $|1\rangle$ state, and performs no operation otherwise. Because all the main qubits are prepared in the $|0\rangle$ state, we introduce reverse gates ($n$ X-gates in parallel) to flip them to $|1\rangle$; an X-gate flips $|0\rangle$ to $|1\rangle$, and vice versa. The CNOTs flip all the main qubits whose controls are in the $|1\rangle$ state, creating a state that is element-wise opposite to the marked state. Finally, another set of reverse gates is applied to the main register before measurement, recovering the marked state.

Quantum controller circuit for a 5-qubit (32-state) system. (a) Quantum controller circuit for the selection of the state $|10101\rangle$. The probability distributions correspond to (b) the failed Grover amplification procedure for one iteration, run on the 5-qubit IBMQ Santiago quantum processor, and (c) the successful quantum controller selection, run on the 15-qubit IBMQ Melbourne quantum processor.
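The selection scheme described above and in Fig. 5(a) can be sketched in a few lines of Qiskit. This is our illustrative reconstruction, not the authors' code; the function name and register layout are our own, and on ideal hardware measuring the main register returns the marked bitstring with certainty.

```python
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

def controller_circuit(marked: str) -> QuantumCircuit:
    """Fixed-depth controller circuit selecting the bitstring `marked`."""
    n = len(marked)
    control = QuantumRegister(n, "control")
    main = QuantumRegister(n, "main")
    meas = ClassicalRegister(n, "m")
    qc = QuantumCircuit(control, main, meas)
    # Pre-control gates: encode the marked eigen-state in the control register
    # (reversed because Qiskit orders qubits little-endian).
    for i, bit in enumerate(reversed(marked)):
        if bit == "1":
            qc.x(control[i])
    # Reverse gates: flip every main qubit from |0> to |1>.
    qc.x(main)
    # CNOTs flip each main qubit whose control is |1>, leaving the main
    # register as the element-wise complement of the marked state.
    for i in range(n):
        qc.cx(control[i], main[i])
    # A second set of reverse gates restores the marked state on main.
    qc.x(main)
    qc.measure(main, meas)
    return qc

qc = controller_circuit("10101")  # the example state of Fig. 5(a)
```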

Another advantage of the controller circuit is a controllable uncertainty level. The controller circuit has additional degrees of freedom that can control the level of uncertainty that might be needed to model a highly ambiguous clinical situation. By replacing the CNOT gate with a more general $CU3(\theta, \phi, \lambda)$ gate, we can control the level of additional stochasticity through the rotation angles $\theta$, $\phi$, and $\lambda$, which correspond to angles on the Bloch sphere. The angles can either be fixed or, for additional control, varied with the training episode.
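In Qiskit, for instance, the substitution might look like the sketch below. The angle values are illustrative assumptions; $CU3(\pi, 0, \pi)$ reproduces the deterministic CNOT.

```python
from math import pi
from qiskit import QuantumCircuit
from qiskit.circuit.library import CU3Gate

# Hypothetical variant: replace a CNOT with a controlled-U3 rotation so the
# flip is no longer deterministic; small angles inject mild stochasticity.
theta, phi, lam = pi / 8, 0.0, 0.0
qc = QuantumCircuit(2)
qc.append(CU3Gate(theta, phi, lam), [0, 1])  # CU3(pi, 0, pi) would act as CNOT
```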

The patient's state in the ARTE is defined by 5 biological features: a cytokine (IP10), a PET imaging feature (GLSZM-ZSV), radiation doses (tumor gEUD and lung gEUD), and genetics (cxcr1-Rs2234671). Their descriptions are presented in Table 2. These 5 variables were selected from a multi-objective Bayesian network study [13], which considered over 297 biological features and identified the best features for predicting the joint LC and RP2 RT outcomes.

The training data analyzed in this study were obtained from the University of Michigan study UMCC 2007.123 (NCI clinical trial NCT01190527) and the validation data were obtained from the RTOG-0617 study (NCI clinical trial NCT00533949). Both trials were conducted in accordance with relevant guidelines and regulations, and informed consent was obtained from all subjects and/or legal guardians. Details on the training and validation datasets, and the model imputation needed to accommodate differences between the datasets, are presented in the Supplementary Materials.

Deep neural networks (DNNs) were applied as transition functions for the IP10 and GLSZM-ZSV features. They were trained on a longitudinal (time-series) dataset, with the pre-irradiation patient state and corresponding radiation dose as input features and the post-irradiation state as output. For lung and tumor gEUD, we utilized prior knowledge and applied a monotonic transition function, since gEUD should increase with increasing radiation dose. We assumed that the change in gEUD is proportional to the dose fractionation and tissue radiosensitivity,

$$\frac{g(t_n) - g(t_{n-1})}{t_n - t_{n-1}} \propto d_n\left(1 + \frac{d_n}{\alpha/\beta}\right). \tag{1}$$

Here $g(t_n)$ is the gEUD at time point $t_n$, $d_n$ is the radiation dose fraction delivered during the $n$th time period, and the $\alpha/\beta$ ratio is the radiosensitivity parameter, which differs between tissue types. Note that we first applied constrained training [42] to maintain monotonicity with a DNN model; however, the resulting gEUD-over-time trend was flatter than anticipated, so we opted for a process-driven approach in the final implementation. Technical details on the NNs and their training are presented in the Supplementary Material.
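Under the assumptions of Eq. (1), a process-driven update can be coded directly. The proportionality constant `k` below is a hypothetical placeholder absorbed into the units; this is a sketch, not the paper's implementation.

```python
def geud_step(g_prev: float, dose_fraction: float, dt: float,
              alpha_beta: float, k: float = 1.0) -> float:
    """Advance gEUD by one time step of length dt under dose fraction d_n,
    following Eq. (1): rate proportional to d_n * (1 + d_n / (alpha/beta))."""
    rate = k * dose_fraction * (1.0 + dose_fraction / alpha_beta)
    return g_prev + rate * dt
```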

DNN classifiers were applied as the RT outcome estimators for the LC and RP2 treatment outcomes. They were trained with post-irradiation patient states as input and binary LC and RP2 outcomes as labels.

The RT outcome estimator must also satisfy a monotone condition: increasing radiation dose should increase both the probability of local control and the probability of radiation-induced pneumonitis. To maintain this monotonic relationship, we used a generic logistic function,

$$p_{LC|RP2} = \frac{1}{1 + \exp\left(\frac{g(t_6) - \mu}{T}\right)}, \tag{2}$$

where $g(t_6)$ is the gEUD at week 6, and $\mu$ and $T$ are two patient-specific parameters learned during DNN training. Here, $\mu$ and $T$ are the outputs of two neural networks that feed into the logistic function; the two networks are tuned one after the other, each while the other is held fixed. The training details are presented in the Supplementary Materials.
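Eq. (2) transcribes directly into code; in this sketch, `mu` and `T` stand for the patient-specific outputs of the two neural networks.

```python
import math

def outcome_probability(g_t6: float, mu: float, T: float) -> float:
    """Eq. (2): logistic map from the week-6 gEUD to an outcome probability;
    mu and T are the patient-specific parameters produced by the two DNNs."""
    return 1.0 / (1.0 + math.exp((g_t6 - mu) / T))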

The task of the agent is to determine the optimal dose that maximizes $p_{LC}$ while minimizing $p_{RP2}$. Accordingly, we built a reward function on the base function $P^{+} = P_{LC}(1 - P_{RP2})$, as shown in Fig. 6. The algebraic form is as follows,

$$R = \begin{cases} P^{+} + 10 & \text{if } 70\% < p_{LC} < 100\% \text{ and } 0\% < p_{RP2} < 17.2\% \\ P^{+} + 5 & \text{if } 50\% < p_{LC} < 70\% \text{ and } 17.2\% < p_{RP2} < 50\% \\ P^{+} - 1 & \text{if } 0\% < p_{LC} < 50\% \text{ and } 50\% < p_{RP2} < 100\% \end{cases} \tag{3}$$

Reward function for reinforcement learning. Contour plot of the reward function as a function of the probability of local control ($P_{LC}$) and of radiation-induced pneumonitis of grade 2 or higher ($P_{RP2}$). The area enclosed by the blue line corresponds to the clinically desirable outcome, i.e., $P_{LC} > 70\%$ and $P_{RP2} < 17.2\%$. Similarly, the area enclosed by the green lines corresponds to the computationally desirable outcome, i.e., $P_{LC} > 50\%$ and $P_{RP2} < 50\%$. Along with $P_{LC} \times (1 - P_{RP2})$, the AI agent receives a +10 reward for achieving a clinically desirable outcome, +5 for achieving a computationally desirable outcome, and $-1$ when unable to achieve a desirable outcome.

Here the AI agent receives an additional 10 points for achieving a clinically desirable outcome (i.e., $p_{LC} > 70\%$ and $p_{RP2} < 17.2\%$), 5 points for achieving a computationally desirable outcome (i.e., $p_{LC} > 50\%$ and $p_{RP2} < 50\%$), and $-1$ point for failing to achieve a desirable outcome altogether. The negative point motivates the AI agent to find the optimal dose as quickly as possible.
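A direct transcription of this tiered scheme in Python might look as follows. It encodes the nested clinical/computational regions described above (and in the Fig. 6 caption), with probabilities as fractions rather than percentages; this is our sketch, not the authors' code.

```python
def reward(p_lc: float, p_rp2: float) -> float:
    """Eq. (3): base score P+ = p_LC * (1 - p_RP2) plus a tiered bonus/penalty."""
    p_plus = p_lc * (1.0 - p_rp2)
    if p_lc > 0.70 and p_rp2 < 0.172:   # clinically desirable outcome
        return p_plus + 10.0
    if p_lc > 0.50 and p_rp2 < 0.50:    # computationally desirable outcome
        return p_plus + 5.0
    return p_plus - 1.0                 # no desirable outcome achieved
```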

To compensate for the low number of data points, we employed WGAN-GP [43], which learns the underlying data distribution and generates additional data points. We generated 4000 additional data points for training the qDRL models. A larger training dataset helps the reinforcement learning algorithm represent the state space accurately. The training details are presented in the Supplementary Material.
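The paper defers the WGAN-GP details to the Supplementary Material. As a rough sketch of the method's defining ingredient, the gradient penalty (Gulrajani et al., 2017) pushes the critic's gradient norm toward 1 at points interpolated between real and generated samples; `critic` here is any PyTorch module mapping a batch of feature vectors to scalar scores, and the 2-D (batch, features) shape is an assumption matching tabular patient data.

```python
import torch

def gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor):
    """WGAN-GP term: penalize deviation of the critic's gradient norm from 1,
    evaluated at random interpolates between real and generated samples."""
    eps = torch.rand(real.size(0), 1, device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=interp,
                                 create_graph=True)
    return ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
```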

See the rest here:
Quantum deep reinforcement learning for clinical decision support in oncology: application to adaptive radiotherapy | Scientific Reports - Nature.com

The future of scientific research is quantum – The Next Web

Over the past few years, the capabilities of quantum computers have reached the stage where they can be used to pursue research with widespread technological impact. Through their research, the Q4Q team at the University of Southern California, the University of North Texas, and Central Michigan University explores how software and algorithms designed for the latest quantum computing technologies can be adapted to suit the needs of the applied sciences. In a collaborative project, the Q4Q team sets out a roadmap for bringing accessible, user-friendly quantum computing into fields ranging from materials science to pharmaceutical drug development.

Since it first emerged in the 1980s, the field of quantum computing has promised to transform the ways in which we process information. The technology is centered on the fact that quantum particles, such as electrons, exist in superpositions of states. Quantum mechanics also dictates that particles will only collapse into a single measurable state when observed by a user. By harnessing these unique properties, physicists discovered that batches of quantum particles can act as more advanced counterparts to conventional binary bits, which only exist in one of two possible states (on or off) at a given time.

On classical computers, we write and process information in a binary form. Namely, the basic unit of information is a bit, which takes on the logical binary values 0 or 1. Similarly, quantum bits (also known as qubits) are the native information carriers on quantum computers. Much like bits, we read binary outcomes of qubits, that is 0 or 1 for each qubit.

However, in stark contrast to bits, we can encode information on a qubit in the form of a superposition of the logical values 0 and 1. This means that we can encode much more information in a qubit than in a bit. In addition, when we have a collection of qubits, the principle of superposition leads to computational states that can encode correlations among the qubits that are stronger than any correlations achievable within a collection of bits. Superposition and strong quantum correlations are, arguably, the foundations on which quantum computers rely to provide faster processing speeds than their classical counterparts.
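For readers who want to see this concretely, the short Qiskit snippet below (our illustrative example, not part of the Q4Q project) puts one qubit into an equal superposition of 0 and 1, then correlates a second qubit with it, producing the kind of correlation unattainable with classical bits.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)        # Hadamard: equal superposition of |0> and |1>
qc.cx(0, 1)    # CNOT: entangle the second qubit with the first
print(Statevector(qc))  # nonzero amplitudes only on |00> and |11>
```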

To realize computations, qubit states can be used in quantum logic gates, which perform operations on qubits, transforming the input state according to a programmed algorithm. This is a paradigm for quantum computation analogous to that of conventional computers. In 1998, both qubits and quantum logic gates were realized experimentally for the first time, bringing the previously theoretical concept of quantum computing into the real world.

From this basis, researchers then began to develop new software and algorithms, specially designed for operations using qubits. At the time, however, the widespread adoption of these techniques in everyday applications still seemed a long way off. The heart of the issue lay in the errors that are inevitably introduced to quantum systems by their surrounding environments. If uncorrected, these errors can cause qubits to lose their quantum information, rendering computations completely useless. Many studies at the time aimed to develop ways to correct these errors, but the processes they came up with were invariably costly and time-consuming.

Unfortunately, the risk of introducing errors to quantum computations increases drastically as more qubits are added to a system. For over a decade after the initial experimental realization of qubits and quantum logic gates, this meant that quantum computers showed little promise in rivalling the capabilities of their conventional counterparts.

In addition, quantum computing was largely limited to specialized research labs, meaning that many research groups that could have benefited from the technology were unable to access it.

While error correction remains a hurdle, the technology has since moved beyond specialized research labs, becoming accessible to more users. This occurred for the first time in 2011, when the first quantum annealer was commercialized. With this event, feasible routes emerged towards reliable quantum processors containing thousands of qubits capable of useful computations.

Quantum annealing is an advanced technique for obtaining optimal solutions to complex mathematical problems. It is a quantum computation paradigm alternative to operating on qubits with quantum logic gates.

The availability of commercial quantum annealers spurred a new surge of interest in quantum computing, with consequent technological progress, especially fueled by industrial capital. In 2016, this culminated in the development of a new cloud system based on quantum logic gates, which enabled owners and users of quantum computers around the world to pool their resources together, expanding the use of the devices outside of specialized research labs. Before long, the widespread use of quantum software and algorithms for specific research scenarios began to look increasingly realistic.

At the time, however, the technology still required high levels of expertise to operate. Without specific knowledge of the quantum processes involved, researchers in fields such as biology, chemistry, materials science, and drug development could not make full use of them. Further progress would be needed before the advantages of quantum computing could be widely applied outside the field of quantum mechanics itself.

Now, the Q4Q team aims to build on these previous advances using user-friendly quantum algorithms and software packages to realize quantum simulations of physical systems. Where the deeply complex properties of these systems are incredibly difficult to recreate within conventional computers, there is now hope that this could be achieved using large systems of qubits.

To recreate the technologies that could realistically become widely available in the near future, the team's experiments will incorporate noisy intermediate-scale quantum (NISQ) devices, which contain relatively large numbers of qubits and are by themselves prone to environmental errors.

In their projects, the Q4Q team identifies three particular aspects of molecules and solid materials that could be better explored through the techniques they aim to develop. The first of these concerns the band structures of solids, which describe the range of energy levels that electrons can occupy within a solid, as well as the energies they are forbidden from possessing.

Secondly, they aim to describe the vibrations and electronic properties of individual molecules, each of which can heavily influence their physical properties. Finally, the researchers will explore how certain aspects of quantum annealing can be exploited to realize machine-learning algorithms, which automatically improve through their experience of processing data.

As they apply these techniques, the Q4Q team predicts that their findings will lead to a better knowledge of the quantum properties of both molecules and solid materials. In particular, they hope to provide better descriptions of periodic solids, whose constituent atoms are arranged in reliably repeating patterns.

Previously, researchers struggled to reproduce the wavefunctions of interacting quantum particles within these materials, which relate to the probability of finding the particles in particular positions when observed by a user. Through their techniques, the Q4Q team aims to reduce the number of qubits required to capture these wavefunctions, leading to more realistic quantum simulations of the solid materials.

Elsewhere, the Q4Q team will account for the often deeply complex quantum properties of individual molecules made up of large groups of atoms. During chemical reactions, any changes taking place within these molecules will be strongly driven by quantum processes, which are still poorly understood. By developing plugins to existing quantum software, the team hopes to accurately recreate this quantum chemistry in simulated reactions.

If they are successful in reaching these goals, the results of their work could open up many new avenues of research within a diverse array of fields especially where the effects of quantum mechanics have not yet been widely considered. In particular, they will also contribute to identifying bottlenecks of current quantum processing units, which will aid the design of better quantum computers.

Perhaps most generally, the Q4Q team hopes that their techniques will enable researchers to better understand how matter responds to external perturbations, such as lasers and other light sources.

Elsewhere, widely accessible quantum software could become immensely useful in the design of new pharmaceutical drugs, as well as new fertilizers. By ascertaining how reactions between organic and biological molecules unfold within simulations, researchers could engineer molecular structures that are specifically tailored to treating certain medical conditions.

The ability to simulate these reactions could also lead to new advances in the field of biology as a whole, where processes involving large, deeply complex molecules, including proteins and nucleic acids, are critical to the function of every living organism.

Finally, a better knowledge of the vibrational and electronic properties of periodic solids could transform the field of materials physics. By precisely engineering structures to display certain physical properties on macroscopic scales, researchers could tailor new materials with a vast array of desirable characteristics, including durability, advanced interaction with light, and environmental sustainability.

If the impacts of the team's proposed research goals are as transformative as they hope, researchers in many different fields of technological endeavor could soon be working with quantum technologies.

Such a clear shift away from traditional research practices could in turn create many new jobs, with required skillsets including the use of cutting-edge quantum software and algorithms. Therefore, a key element of the team's activity is to develop new strategies for training future generations of researchers. Members of the Q4Q team believe that this will present some of the clearest routes yet towards the widespread application of quantum computing in our everyday lives.

This article was authored by the Q4Q team, consisting of lead investigator Rosa Di Felice, Anna Krylov, Marco Fornari, Marco Buongiorno Nardelli, Itay Hen and Amir Kalev, in Scientia. Learn more about the team, and find the original article here.

See the article here:
The future of scientific research is quantum - The Next Web

Pushing the Limits of Quantum Sensing with Variational Quantum Circuits – Physics

December 6, 2021 • Physics 14, 172

Variational quantum algorithms could help researchers improve the performance of optical atomic clocks and of other quantum-metrology schemes.

D. Vasilyev/University of Innsbruck

Since it was first introduced in 1949, Ramsey interferometry has had an exciting history. The method was at the center of a series of beautiful experiments performed by Serge Haroche's group that were recognized by the 2012 Nobel Prize in Physics [1, 2]. The prize was given for methods that enable the measurement and manipulation of individual quantum systems. Haroche's group used individual atoms to sense the properties of photons inside an optical cavity. Building on these ideas, researchers have reported a new theoretical study that points to a promising way to push the limits of quantum sensing. Raphael Kaubruegger at the University of Innsbruck, Austria, and his colleagues employ so-called variational quantum circuits to optimize the sensitivity of an atomic sensor based on entangled atoms [4]. The result is a sensor that, with surprisingly modest quantum resources, should outperform those based on standard Ramsey interferometry.

We often think of photons as probes to study atoms, but Ramsey interferometry flips the script and uses atoms to study photons. This type of interferometry first puts an atom in a superposition of electronic energy levels and then passes the atom through an optical cavity. As a result, the quantum superposition accumulates a measurable phase shift that depends on the properties of the photons in the cavity. The experiments by Haroche's group involved passing atoms through an optical cavity one at a time in order to nondestructively detect the number of photons. More photons in the cavity lead to a larger phase shift in the atomic wave function. In such experiments, each atom can be regarded as an individual entity. In other words, each atom is prepared in an uncorrelated product state: a state that can be described independently of every other atom's state.

Kaubruegger and colleagues propose to go a step further by entangling 64 atoms and using them to make an even better sensor for Ramsey interferometry. They demonstrate the effectiveness of their approach by considering an optical atomic clock, in which Ramsey-interferometry measurements of the atomic ensemble's phase are used to correct the clock's laser frequency (Fig. 1). Like Haroche's group, the researchers manipulate a single quantum system, but one made of 64 atoms. Rather than using atoms in a product state, they propose to prepare these atoms in an entangled state, in which each atom's state cannot be fully described independently of the other atoms. They show that performing Ramsey interferometry using entangled states gives a big boost to the sensitivity of the phase sensor, beating the standard quantum limit that applies when sensing with uncorrelated atoms.

Their proposal harnesses a key innovation to prepare the entangled state. Entangled atomic sensors have been employed before, and a standard approach involves using so-called Greenberger-Horne-Zeilinger (GHZ) states. Kaubruegger and colleagues note that these states are only optimal for sensing under certain assumptions regarding prior knowledge of the phase-shift value. This limitation opened the door for the researchers to improve upon and outperform GHZ states by taking advantage of one of today's hottest concepts in quantum computing: variational quantum circuits. These circuits, which have a set of free parameters, replace the fixed quantum circuits used to implement quantum algorithms such as Shor's algorithm for factoring or the Harrow-Hassidim-Lloyd algorithm for solving linear systems. Variational quantum circuits have internal parameters (such as rotation angles about certain Bloch-sphere axes) that one optimizes to perform a given task. Kaubruegger and colleagues propose to use two variational quantum circuits: one to prepare the entangled state for sensing and one to measure the parameter that they want to sense (that is, the optical phase). They call these the entangling and decoding circuits, respectively (Fig. 2).
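As a toy illustration of the variational principle at work (a single-qubit stand-in, not the 64-atom protocol of the paper), the sketch below tunes the one free rotation angle of a Ramsey-like sequence Ry(θ) → phase(φ) → Ry(θ), for which the excited-state population is P(1) = sin²θ (1 + cos φ)/2, so as to maximize the sensitivity of the measurement to the phase φ; the optimization recovers the standard π/2 Ramsey pulse.

```python
import numpy as np

def p_one(theta: float, phi: float) -> float:
    """Population of |1> after Ry(theta), phase(phi), Ry(theta) on |0>."""
    return np.sin(theta) ** 2 * (1 + np.cos(phi)) / 2

def sensitivity(theta: float, phi: float = np.pi / 2, eps: float = 1e-5) -> float:
    """|dP/dphi| by central finite difference: how strongly the signal
    responds to a small change in the phase being sensed."""
    return abs(p_one(theta, phi + eps) - p_one(theta, phi - eps)) / (2 * eps)

thetas = np.linspace(0.0, np.pi, 501)
best = thetas[np.argmax([sensitivity(t) for t in thetas])]
print(best)  # ~ pi/2: the variational search recovers the Ramsey pulse
```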

Achieving good performance with variational quantum circuits is challenging, since the parameters can be hard to optimize and one does not know ahead of time how deep a circuit one needs, that is, how many quantum gates are required. Kaubruegger and colleagues find that excellent performance can be achieved with shallow circuits composed from the quantum resources inherently available in Ramsey interferometry and atomic-clock platforms. With only a few layers of their quantum circuits, they not only beat the standard quantum limit (which applies to measurements made using uncorrelated atoms) but also get very close to the Heisenberg limit: the ultimate limit on the sensitivity that one can achieve with a quantum system and, therefore, the ultimate limit of a quantum sensor. Here, a layer refers to the building block of the variational quantum circuit: more layers are needed to perform a more comprehensive search over the Hilbert space, whereas fewer layers can only search a smaller subspace. The fact that good performance requires only a few layers suggests that states that are beneficial to quantum metrology are relatively easy to find. This is an exciting possibility that should stimulate further investigation.

This new work is important because it brings together two different communities: the quantum sensing community and the variational quantum algorithm community. While variational quantum algorithms are getting major attention for quantum computing applications, it is rare for them to appear in an atomic experimental setting or in a sensing setting. The beautiful observation that variational algorithms could work in a realistic sensing application should inspire many experimentalists to think about optimizing their setups with variational quantum circuits, regardless of whether they involve atoms, light, spins, or superconductors. We need cross-fertilization between quantum experimentalists and quantum computer scientists, and this work gives an inspiring guide for how such cross-fertilization can be brought about.

Patrick Coles is a staff scientist at Los Alamos National Laboratory (LANL), New Mexico. He leads the near-term quantum computing research efforts at LANL, focusing on variational quantum algorithms and quantum machine learning. He also co-organizes LANL's quantum computing summer school. He has switched fields many times: He received his master's degree in biochemistry from the University of Cambridge, UK, as a Churchill Scholar and then did his Ph.D. in chemical engineering at the University of California, Berkeley. In contrast, his three postdocs (at Carnegie Mellon University, Pennsylvania; the National University of Singapore; and the University of Waterloo, Canada) were focused on all things quantum, including quantum foundations, quantum optics, quantum information theory, quantum cryptography, and (his current field) quantum computing.

Read the original:
Pushing the Limits of Quantum Sensing with Variational Quantum Circuits - Physics

Quantum Engineering | Electrical and Computer Engineering

Quantum mechanics famously allows objects to be in two places at the same time. The same principle can be applied to information, represented by bits: quantum bits can be both zero and one at the same time. The field of quantum information science seeks to engineer real-world devices that can store and process quantum states of information. It is believed that computers operating according to such principles will be capable of solving problems exponentially faster than existing computers, while quantum networks have provable security guarantees. The same concepts can be applied to making more precise sensors and measurement devices. Constructing such systems is a significant challenge, because quantum effects are typically confined to the atomic scale. However, through careful engineering, several physical platforms have been identified for quantum computing, including superconducting circuits, laser-cooled atoms and ions, and electron spins in semiconductors.

Research at Princeton focuses on several aspects of this problem, ranging from fundamental studies of materials and devices to quantum computer architecture and algorithms. Our research groups have close-knit collaborations across several departments, including chemistry, computer science, and physics, as well as with industry.

See the article here:
Quantum Engineering | Electrical and Computer Engineering