Archive for the ‘Quantum Computer’ Category

Trapped ion quantum computer – Wikipedia

Proposed quantum computer implementation

A trapped ion quantum computer is one proposed approach to a large-scale quantum computer. Ions, or charged atomic particles, can be confined and suspended in free space using electromagnetic fields. Qubits are stored in stable electronic states of each ion, and quantum information can be transferred through the collective quantized motion of the ions in a shared trap (interacting through the Coulomb force). Lasers are applied to induce coupling between the qubit states (for single qubit operations) or coupling between the internal qubit states and the external motional states (for entanglement between qubits).[1]

The fundamental operations of a quantum computer have been demonstrated experimentally with the currently highest accuracy in trapped ion systems. Promising schemes in development to scale the system to arbitrarily large numbers of qubits include transporting ions to spatially distinct locations in an array of ion traps, building large entangled states via photonically connected networks of remotely entangled ion chains, and combinations of these two ideas. This makes the trapped ion quantum computer system one of the most promising architectures for a scalable, universal quantum computer. As of April 2018, the largest number of particles to be controllably entangled is 20 trapped ions.[2][3][4]

The first implementation scheme for a controlled-NOT quantum gate was proposed by Ignacio Cirac and Peter Zoller in 1995,[5] specifically for the trapped ion system. The same year, a key step in the controlled-NOT gate was experimentally realized at NIST Ion Storage Group, and research in quantum computing began to take off worldwide.[citation needed]

In 2021, researchers from the University of Innsbruck presented a quantum computing demonstrator that fits inside two 19-inch server racks, the first compact trapped ion quantum computer to meet industry quality standards.[6][7]

The electrodynamic ion trap currently used in trapped ion quantum computing research was invented in the 1950s by Wolfgang Paul (who received the Nobel Prize for his work in 1989[8]). Charged particles cannot be trapped in 3D by just electrostatic forces because of Earnshaw's theorem. Instead, an electric field oscillating at radio frequency (RF) is applied, forming a potential with the shape of a saddle spinning at the RF frequency. If the RF field has the right parameters (oscillation frequency and field strength), the charged particle becomes effectively trapped at the saddle point by a restoring force, with the motion described by a set of Mathieu equations.[1]

This saddle point is the point of minimized energy magnitude, |E(x)|, for the ions in the potential field.[9] The Paul trap is often described as a harmonic potential well that traps ions in two dimensions (assume x̂ and ŷ without loss of generality) and does not trap ions in the ẑ direction. When multiple ions are at the saddle point and the system is at equilibrium, the ions are only free to move in ẑ. Therefore, the ions will repel each other and create a vertical configuration in ẑ, the simplest case being a linear strand of only a few ions.[10] Coulomb interactions of increasing complexity will create a more intricate ion configuration if many ions are initialized in the same trap.[1] Furthermore, the additional vibrations of the added ions greatly complicate the quantum system, which makes initialization and computation more difficult.[10]

Once trapped, the ions should be cooled such that k_B T ≪ ħω_z (see Lamb-Dicke regime). This can be achieved by a combination of Doppler cooling and resolved sideband cooling. At this very low temperature, vibrational energy in the ion trap is quantized into phonons by the energy eigenstates of the ion strand, which are called the center-of-mass vibrational modes. A single phonon's energy is given by the relation E = ħω_z. These quantum states occur when the trapped ions vibrate together and are completely isolated from the external environment. If the ions are not properly isolated, noise can result from ions interacting with external electromagnetic fields, which creates random movement and destroys the quantized energy states.[1]
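The cooling condition above sets a concrete temperature scale. As a rough sketch, assuming a representative (hypothetical) axial trap frequency of ω_z = 2π × 1 MHz, the ion temperature must sit well below ħω_z/k_B:

```python
import math

# Physical constants (CODATA values)
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
KB = 1.380_649e-23        # Boltzmann constant, J/K

# Assumed axial trap frequency: omega_z = 2*pi * 1 MHz (illustrative, not from the text)
omega_z = 2 * math.pi * 1e6  # rad/s

# Energy of a single axial phonon, E = hbar * omega_z
phonon_energy = HBAR * omega_z  # joules

# Temperature scale below which k_B * T << hbar * omega_z holds
T_scale = phonon_energy / KB  # kelvin

print(f"phonon energy: {phonon_energy:.3e} J")
print(f"T must be well below {T_scale * 1e6:.0f} microkelvin")
```

For these parameters the scale comes out near 50 microkelvin, which is why Doppler cooling alone is not enough and resolved sideband cooling is needed.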

The full requirements for a functional quantum computer are not entirely known, but there are many generally accepted requirements. David DiVincenzo outlined several of these criteria for quantum computing.[1]

Any two-level quantum system can form a qubit, and there are two predominant ways to form a qubit using the electronic states of an ion:

Hyperfine qubits are extremely long-lived (decay time of the order of thousands to millions of years) and phase/frequency stable (traditionally used for atomic frequency standards).[10] Optical qubits are also relatively long-lived (with a decay time of the order of a second), compared to the logic gate operation time (which is of the order of microseconds). The use of each type of qubit poses its own distinct challenges in the laboratory.

Ionic qubit states can be prepared in a specific qubit state using a process called optical pumping. In this process, a laser couples the ion to some excited states which eventually decay to one state which is not coupled to the laser. Once the ion reaches that state, it has no excited levels to couple to in the presence of that laser and, therefore, remains in that state. If the ion decays to one of the other states, the laser will continue to excite the ion until it decays to the state that does not interact with the laser. This initialization process is standard in many physics experiments and can be performed with extremely high fidelity (>99.9%).[11]

The system's initial state for quantum computation can therefore be described by the ions in their hyperfine and motional ground states, resulting in an initial center-of-mass phonon state of |0⟩ (zero phonons).[1]

Measuring the state of the qubit stored in an ion is quite simple. Typically, a laser is applied to the ion that couples only one of the qubit states. When the ion collapses into this state during the measurement process, the laser will excite it, resulting in a photon being released when the ion decays from the excited state. After decay, the ion is continually excited by the laser and repeatedly emits photons. These photons can be collected by a photomultiplier tube (PMT) or a charge-coupled device (CCD) camera. If the ion collapses into the other qubit state, then it does not interact with the laser and no photon is emitted. By counting the number of collected photons, the state of the ion may be determined with a very high accuracy (>99.9%).[citation needed]
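The photon-counting readout described above can be sketched as a simple threshold test on noisy counts. The mean counts below (20 photons per detection window for the laser-coupled "bright" state, 0.5 background counts for the "dark" state) are illustrative assumptions, not measured values:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Assumed mean photon counts per detection window (illustrative only)
bright_mean, dark_mean = 20.0, 0.5

bright_counts = rng.poisson(bright_mean, n_trials)  # ion in the laser-coupled state
dark_counts = rng.poisson(dark_mean, n_trials)      # ion in the uncoupled state

# Discriminate the two states with a simple count threshold
threshold = 5
fidelity_bright = np.mean(bright_counts > threshold)
fidelity_dark = np.mean(dark_counts <= threshold)

print(f"bright-state readout fidelity: {fidelity_bright:.4f}")
print(f"dark-state readout fidelity:  {fidelity_dark:.4f}")
```

With well-separated count distributions, both discrimination fidelities exceed 99.9%, consistent with the accuracy quoted in the text.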

One of the requirements of universal quantum computing is to coherently change the state of a single qubit. For example, this can transform a qubit starting out in |0⟩ into any arbitrary superposition of |0⟩ and |1⟩ defined by the user. In a trapped ion system, this is often done using magnetic dipole transitions or stimulated Raman transitions for hyperfine qubits and electric quadrupole transitions for optical qubits. The term "rotation" alludes to the Bloch sphere representation of a qubit pure state. Gate fidelity can be greater than 99%.

The rotation operators R_x(θ) and R_y(θ) can be applied to individual ions by manipulating the frequency of an external electromagnetic field and exposing the ions to the field for specific amounts of time. These controls create a Hamiltonian of the form H_I^i = (ħΩ/2)(S₊e^{iφ} + S₋e^{-iφ}). Here, S₊ and S₋ are the raising and lowering operators of spin (see Ladder operator). These rotations are the universal building blocks for single-qubit gates in quantum computing.[1]
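As a sketch, the rotations R_x(θ) = exp(-iθX/2) and R_y(θ) = exp(-iθY/2) can be written as 2×2 matrices and applied to a qubit state vector directly:

```python
import numpy as np

def rx(theta):
    """Single-qubit rotation about x: R_x(theta) = exp(-i*theta*X/2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    """Single-qubit rotation about y: R_y(theta) = exp(-i*theta*Y/2)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

ket0 = np.array([1, 0], dtype=complex)

# R_x(pi) maps |0> to -i|1>: a bit flip up to a global phase
flipped = rx(np.pi) @ ket0

# R_y(pi/2) puts |0> into an equal superposition of |0> and |1>
superpos = ry(np.pi / 2) @ ket0
print(np.round(np.abs(superpos) ** 2, 6))  # measurement probabilities [0.5, 0.5]
```

Composing these two rotations with appropriate angles reaches any point on the Bloch sphere, which is what makes them universal for single-qubit gates.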

To obtain the Hamiltonian for the ion-laser interaction, apply the Jaynes-Cummings model. Once the Hamiltonian is found, the formula for the unitary operation performed on the qubit can be derived using the principles of quantum time evolution. Although this model utilizes the rotating wave approximation, it proves to be effective for the purposes of trapped-ion quantum computing.[1]

Besides the controlled-NOT gate proposed by Cirac and Zoller in 1995, many equivalent, but more robust, schemes have been proposed and implemented experimentally since. Recent theoretical work by J.J. Garcia-Ripoll, Cirac, and Zoller has shown that there are no fundamental limitations to the speed of entangling gates, but gates in this impulsive regime (faster than 1 microsecond) have not yet been demonstrated experimentally. The fidelity of these implementations has been greater than 99%.[12]

Quantum computers must be capable of initializing, storing, and manipulating many qubits at once in order to solve difficult computational problems. However, as previously discussed, only a limited number of qubits can be stored in each trap while still maintaining their computational abilities. It is therefore necessary to design interconnected ion traps that are capable of transferring information from one trap to another. Ions can be separated from the same interaction region to individual storage regions and brought back together without losing the quantum information stored in their internal states. Ions can also be made to turn corners at a "T" junction, allowing a two dimensional trap array design. Semiconductor fabrication techniques have also been employed to manufacture the new generation of traps, making the 'ion trap on a chip' a reality. An example is the quantum charge-coupled device (QCCD) designed by D. Kielpinski, C. Monroe, and D.J. Wineland.[13] QCCDs resemble mazes of electrodes with designated areas for storing and manipulating qubits.

The variable electric potential created by the electrodes can both trap ions in specific regions and move them through the transport channels, which negates the necessity of containing all ions in a single trap. Ions in the QCCD's memory region are isolated from any operations and therefore the information contained in their states is kept for later use. Gates, including those that entangle two ion states, are applied to qubits in the interaction region by the method already described in this article.[13]

When an ion is being transported between regions in an interconnected trap and is subjected to a nonuniform magnetic field, decoherence can occur in the form of the equation below (see Zeeman effect).[13] This effectively changes the relative phase of the quantum state. The up and down arrows correspond to a general superposition qubit state, in this case the ground and excited states of the ion.

|↑⟩ + |↓⟩ ⟶ e^{iα}|↑⟩ + |↓⟩

Additional relative phases could arise from physical movements of the trap or the presence of unintended electric fields. If the user could determine the parameter α, accounting for this decoherence would be relatively simple, as known quantum information processes exist for correcting a relative phase.[1] However, since the α acquired from the interaction with the magnetic field is path-dependent, the problem is highly complex. Considering the multiple ways that decoherence of a relative phase can be introduced in an ion trap, reimagining the ion state in a new basis that minimizes decoherence could be a way to eliminate the issue.

One way to combat decoherence is to represent the quantum state in a new basis called the decoherence-free subspace, or DFS, with basis states |↑↓⟩ and |↓↑⟩. The DFS is actually the subspace of two ion states, such that if both ions acquire the same relative phase, the total quantum state in the DFS will be unaffected.[13]
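The invariance of the DFS under a common phase can be checked directly: if both ions acquire the same phase e^{iα} on their |↑⟩ component, any state built from |↑↓⟩ and |↓↑⟩ changes only by a global phase. A minimal sketch, with an arbitrary choice of α:

```python
import numpy as np

alpha = 0.7  # arbitrary phase each ion acquires on its |up> component (assumed equal)

# Single-ion dephasing: |up> -> exp(i*alpha)|up>, |down> -> |down>
phase = np.diag([np.exp(1j * alpha), 1.0])
two_ion_phase = np.kron(phase, phase)  # both ions see the same field

# DFS basis states |up,down> and |down,up> in the ordering (uu, ud, du, dd)
up_down = np.array([0, 1, 0, 0], dtype=complex)
down_up = np.array([0, 0, 1, 0], dtype=complex)

# An encoded logical state inside the DFS
psi = (up_down + down_up) / np.sqrt(2)
psi_after = two_ion_phase @ psi

# The state only changes by the global phase exp(i*alpha): overlap magnitude stays 1
overlap = abs(np.vdot(psi, psi_after))
print(f"|<psi|psi_after>| = {overlap:.6f}")
```

Both DFS basis states pick up the same factor e^{iα}, so relative phases inside the encoded subspace are untouched, which is exactly why the encoding protects against this kind of collective dephasing.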

Trapped ion quantum computers theoretically meet all of DiVincenzo's criteria for quantum computing, but implementation of the system can be quite difficult. The main challenges facing trapped ion quantum computing are the initialization of the ion's motional states, and the relatively brief lifetimes of the phonon states.[1] Decoherence also proves to be challenging to eliminate, and is caused when the qubits interact with the external environment undesirably.[5]

The controlled NOT gate is a crucial component for quantum computing, as any quantum gate can be created by a combination of CNOT gates and single-qubit rotations.[10] It is therefore important that a trapped-ion quantum computer can perform this operation by meeting the following three requirements.

First, the trapped ion quantum computer must be able to perform arbitrary rotations on qubits, as discussed above.

The next component of a CNOT gate is the controlled phase-flip gate, or the controlled-Z gate (see quantum logic gate). In a trapped ion quantum computer, the state of the center-of-mass phonon functions as the control qubit, and the internal atomic spin state of the ion is the working qubit. The phase of the working qubit will therefore be flipped if the phonon qubit is in the state |1⟩.

Lastly, a SWAP gate must be implemented, acting on both the ion state and the phonon state.[1]
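The role of the controlled-Z gate in building a CNOT can be verified numerically using the standard circuit identity CNOT = (I ⊗ H) · CZ · (I ⊗ H). Note this is the textbook identity relating the two gates, not the specific phonon-mediated pulse sequence used in the trap:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
I2 = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])           # controlled-Z: phase flip on |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Hadamards on the target qubit turn the controlled phase flip into a bit flip
built = np.kron(I2, H) @ CZ @ np.kron(I2, H)
print(np.allclose(built, CNOT))  # True
```

This is why the controlled phase-flip gate, together with single-qubit rotations, is all that is needed to complete the CNOT.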

Two alternate schemes to represent the CNOT gates are presented in Michael Nielsen and Isaac Chuang's Quantum Computation and Quantum Information and Cirac and Zoller's Quantum Computation with Cold Trapped Ions.[1][5]

Go here to read the rest:
Trapped ion quantum computer - Wikipedia

Yale Quantum Institute

Listen to the segment on the Science Friday website here. The computer chips that are delivering these words to you work on a simple, binary, on/off...

Sound Artist and Composer Spencer Topel, our 2019 Yale Quantum Institute Artist-in-Residence, performed a live set of Quantum Sound: A Live...

YQI Alumni: Wolfgang Pfaff's lab at UIUC currently has an opening for a postdoc. This position is for a project focusing on modular quantum...

The Department of Applied Physics at Yale University invites applications for a faculty appointment in the area of experimental optics and photonics....

The Department of Electrical and Computer Engineering at The University of Texas at Austin has multiple faculty openings with a start date of Fall 2022...

The College of Science at Northeastern University has launched a junior faculty (tenure track) search across all departments. The priority for the...

Argonne National Laboratory seeks multiple postdoctoral candidates to participate in projects of strategic national importance in quantum...

Yale will lead a new project to simulate the dynamics of complex chemical reactions using quantum computing technology. The new Center for Quantum...

The Department of Physics and Astronomy at Rice University invites applications for tenure-track faculty positions in the broad area of experimental...

The quantum information theory group (Shruti Puri and Steven Girvin) in the Yale Quantum Institute seeks outstanding applicants for a postdoctoral...

See the original post:
Yale Quantum Institute

Creating the Heart of a Quantum Computer: Developing Qubits – SciTechDaily

By Shannon Brescher Shea, U.S. Department of Energy, January 3, 2022

A computer is suspended from the ceiling. Delicate lines and loops of silvery wires and tubes connect gold-colored platforms. It seems to belong in a science-fiction movie, perhaps a steam-punk cousin of HAL in 2001: A Space Odyssey. But as the makers of that 1968 movie imagined computers the size of a spaceship, this technology would have never crossed their minds: a quantum computer.

Quantum computers have the potential to solve problems that conventional computers can't. Conventional computer chips can only process so much information at one time, and we're coming very close to reaching their physical limits. In contrast, the unique properties of materials for quantum computing have the potential to process more information much faster.

These advances could revolutionize certain areas of scientific research. Identifying materials with specific characteristics, understanding photosynthesis, and discovering new medicines all require massive amounts of calculations. In theory, quantum computing could solve these problems faster and more efficiently. Quantum computing could also open up possibilities we never even considered. It's like a microwave oven versus a conventional oven: different technologies with different purposes.

But we're not there yet. So far, one company has claimed its quantum computer can complete a specific calculation faster than the world's fastest conventional supercomputers. Scientists routinely using quantum computers to answer scientific questions is a long way off.

To use quantum computers on a large scale, we need to improve the technology at their heart: qubits. Qubits are the quantum version of conventional computers' most basic form of information, bits. The DOE's Office of Science is supporting research into developing the ingredients and recipes to build these challenging qubits.

DOE's Lawrence Berkeley National Laboratory is using a sophisticated cooling system to keep qubits, the heart of quantum computers, cold enough for scientists to study them for use in quantum computers. Credit: Image courtesy of Lawrence Berkeley National Laboratory

At the atomic scale, physics gets very weird. Electrons, atoms, and other quantum particles interact with each other differently than ordinary objects. In certain materials, we can harness these strange behaviors. Several of these properties, particularly superposition and entanglement, can be extremely useful in computing technology.

The principle of superposition is the idea that a qubit can be in multiple states at once. With traditional bits, you only have two options: 1 or 0. These binary numbers describe all of the information in any computer. Qubits are more complicated.

Imagine a pot with water in it. When you have water in a pot with a top on it, you don't know if it's boiling or not. Real water is either boiling or not; looking at it doesn't change its state. But if the pot was in the quantum realm, the water (representing a quantum particle) could both be boiling and not boiling at the same time, or any linear superposition of these two states. If you took the lid off of that quantum pot, the water would immediately be one state or the other. The measurement forces the quantum particle (or water) into a specific observable state.

Entanglement is when qubits have a relationship to each other that prevents them from acting independently. It happens when a quantum particle has a state (such as spin or electric charge) that's linked to another quantum particle's state. This relationship persists even when the particles are physically far apart, even far beyond atomic distances.

These properties allow quantum computers to process more information than conventional bits that can only be in a single state and only act independently from each other.

But to get any of these great properties, you need to have fine control over a material's electrons or other quantum particles. In some ways, this isn't so different from conventional computers. Whether or not electrons move through a conventional transistor determines the bit's value, making it either 1 or 0.

Rather than simply switching electron flow on or off, qubits require control over tricky things like electron spin. To create a qubit, scientists have to find a spot in a material where they can access and control these quantum properties. Once they access them, they can then use light or magnetic fields to create superposition, entanglement, and other properties.

In many materials, scientists do this by manipulating the spin of individual electrons. Electron spin is similar to the spin of a top; it has a direction, angle, and momentum. Each electron's spin is either up or down. But as a quantum mechanical property, spin can also exist in a combination of up and down. To influence electron spin, scientists apply microwaves (similar to the ones in your microwave oven) and magnets. The magnets and microwaves together allow scientists to control the qubit.

Since the 1990s, scientists have been able to gain better and better control over electron spin. That's allowed them to access quantum states and manipulate quantum information more than ever before.

"To see where that's gone today, it's remarkable," said David Awschalom, a quantum physicist at DOE's Argonne National Laboratory and the University of Chicago as well as Director of the Chicago Quantum Exchange.

Whether they use electron spin or another approach, all qubits face major challenges before we can scale them up. Two of the biggest ones are coherence time and error correction.

When you run a computer, you need to be able to create and store a piece of information, leave it alone, and then come back later to retrieve it. However, if the system that holds the information changes on its own, it's useless for computing. Unfortunately, qubits are sensitive to the environment around them and don't maintain their state for very long.

Right now, quantum systems are subject to a lot of noise, things that cause them to have a low coherence time (the time they can maintain their condition) or produce errors. "Making sure that you get the right answer all of the time is one of the biggest hurdles in quantum computing," said Danna Freedman, an associate professor in chemistry at Northwestern University.

Even if you can reduce that noise, there will still be errors. "We will have to build technology that is able to do error correction before we are able to make a big difference with quantum computing," said Giulia Galli, a quantum chemist and physicist at DOE's Argonne National Laboratory and the University of Chicago.

The more qubits you have in play, the more these problems multiply. While today's most powerful quantum computers have about 50 qubits, it's likely that they will need hundreds or thousands to solve the problems that we want them to.

The jury is still out on which qubit technology will be the best. "No real winner has been identified," said Galli. "[Different ones] may have their place for different applications." In addition to computing, different quantum materials may be useful for quantum sensing or networked quantum communications.

To help move qubits forward, DOE's Office of Science is supporting research on a number of different technologies. "To realize quantum computing's enormous scientific potential, we need to reimagine quantum R&D by simultaneously exploring a range of possible solutions," said Irfan Siddiqi, a quantum physicist at the DOE's Lawrence Berkeley National Laboratory and the University of California, Berkeley.

Superconducting Qubits

Superconducting qubits are currently the most advanced qubit technology. Most existing quantum computers use superconducting qubits, including the one that beat the world's fastest supercomputer. They use metal-insulator-metal sandwiches called Josephson junctions. To turn these materials into superconductors, materials that electricity can run through with no loss, scientists lower them to extremely cold temperatures. Among other things, pairs of electrons coherently move through the material as if they're single particles. This movement makes the quantum states more long-lived than in conventional materials.

To scale up superconducting qubits, Siddiqi and his colleagues are studying how to build them even better with support from the Office of Science. His team has examined how to make improvements to a Josephson junction, a thin insulating barrier between two superconductors in the qubit. By affecting how electrons flow, this barrier makes it possible to control electrons' energy levels. Making this junction as consistent and small as possible can increase the qubit's coherence time. In one paper on these junctions, Siddiqi's team provides a recipe to build an eight-qubit quantum processor, complete with experimental ingredients and step-by-step instructions.

Qubits Using Defects

Defects are spaces where atoms are missing or misplaced in a material's structure. These spaces change how electrons move in the materials. In certain quantum materials, these spaces trap electrons, allowing researchers to access and control their spins. Unlike superconductors, these qubits don't always need to be at ultra-low temperatures. They have the potential to have long coherence times and be manufactured at scale.

While diamonds are usually valued for their lack of imperfections, their defects are actually quite useful for qubits. Adding a nitrogen atom to a place where there would normally be a carbon atom in diamonds creates what's called a nitrogen-vacancy center. Researchers using the Center for Functional Nanomaterials, a DOE Office of Science user facility, found a way to create a stencil just two nanometers long to create these defect patterns. This spacing helped increase these qubits' coherence time and made it easier to entangle them.

But useful defects aren't limited to diamonds. Diamonds are expensive, small, and hard to control. Aluminum nitride and silicon carbide are cheaper, easier to use, and already common in everyday electronics. Galli and her team used theory to predict how to physically strain aluminum nitride in just the right way to create electron states for qubits. As nitrogen vacancies occur naturally in aluminum nitride, scientists should be able to control electron spin in it just as they do in diamonds. Another option, silicon carbide, is already used in LED lights, high-powered electronics, and electronic displays. Awschalom's team found that certain defects in silicon carbide have coherence times comparable to or longer than those in nitrogen-vacancy centers in diamonds. In complementary work, Galli's group developed theoretical models explaining the longer coherence times.

"Based on theoretical work, we began to examine these materials at the atomic scale. We found that the quantum states were always there, but no one had looked for them," said Awschalom. "Their presence and robust behavior in these materials were unexpected. We imagined that their quantum properties would be short-lived due to interactions with nearby nuclear spins." Since then, his team has embedded these qubits in commercial electronic wafers and found that they do surprisingly well. This can allow them to connect the qubits with electronics.

Materials by Design

While some scientists are investigating how to use existing materials, others are taking a different tack: designing materials from scratch. This approach builds custom materials molecule by molecule. By customizing metals, the molecules or ions bound to metals, and the surrounding environment, scientists can potentially control quantum states at the level of a single particle.

"When you're talking about both understanding and optimizing the properties of a qubit, knowing that every atom in a quantum system is exactly where you want it is very important," said Freedman.

With this approach, scientists can limit the amount of nuclear spin (the spin of the nucleus of an atom) in the qubit's environment. A lot of atoms that contain nuclear spin cause magnetic noise that makes it hard to maintain and control electron spin. That reduces the qubit's coherence time. Freedman and her team developed an environment that had very little nuclear spin. By testing different combinations of solvents, temperatures, and ions/molecules attached to the metal, they achieved a 1 millisecond coherence time in a molecule that contains the metal vanadium. That was a much longer coherence time than anyone had achieved in a molecule before. While previous molecular qubits had coherence times that were five times shorter than those of diamond nitrogen-vacancy centers, this matched coherence times in diamonds.

"That was genuinely shocking to me because I thought molecules would necessarily be the underdogs in this game," said Freedman. "[It] opens up a gigantic space for us to play in."

The surprises in quantum just keep coming. Awschalom compared our present-day situation to the 1950s when scientists were exploring the potential of transistors. At the time, transistors were less than half an inch long. Now laptops have billions of them. Quantum computing stands in a similar place.

"The overall idea that we could completely transform the way that computation is done and the way nature is studied by doing quantum simulation is really very exciting," said Galli. "Our fundamental way of looking at materials, based on quantum simulations, can finally be useful to develop technologically relevant devices and materials."

Read the rest here:
Creating the Heart of a Quantum Computer: Developing Qubits - SciTechDaily

Grover’s algorithm – Wikipedia

Quantum search algorithm

In quantum computing, Grover's algorithm, also known as the quantum search algorithm, refers to a quantum algorithm for unstructured search that finds with high probability the unique input to a black box function that produces a particular output value, using just O(√N) evaluations of the function, where N is the size of the function's domain. It was devised by Lov Grover in 1996.[1]

The analogous problem in classical computation cannot be solved in fewer than O(N) evaluations (because, on average, one has to check half of the domain to get a 50% chance of finding the right input). At roughly the same time that Grover published his algorithm, Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani proved that any quantum solution to the problem needs to evaluate the function Ω(√N) times, so Grover's algorithm is asymptotically optimal.[2] Since researchers generally believe that NP-complete problems are difficult because their search spaces have essentially no structure, the optimality of Grover's algorithm for unstructured search suggests (but does not prove) that quantum computers cannot solve NP-complete problems in polynomial time.[3]

Unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts, Grover's algorithm provides only a quadratic speedup. However, even quadratic speedup is considerable when N is large, and Grover's algorithm can be applied to speed up broad classes of algorithms.[3] Grover's algorithm could brute-force a 128-bit symmetric cryptographic key in roughly 2^64 iterations, or a 256-bit key in roughly 2^128 iterations. As a result, it is sometimes suggested[4] that symmetric key lengths be doubled to protect against future quantum attacks.
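The iteration counts quoted above follow from the fact that Grover's algorithm needs roughly (π/4)√N oracle calls to find one marked item among N. A quick check for a 128-bit key space:

```python
import math

def grover_iterations(n_items):
    """Optimal number of Grover iterations for one marked item: ~ (pi/4)*sqrt(N)."""
    return math.floor((math.pi / 4) * math.sqrt(n_items))

# Brute-forcing a 128-bit symmetric key: the search space is N = 2**128
n = 2 ** 128
iters = grover_iterations(n)
print(f"~2^{math.log2(iters):.1f} iterations")  # just under 2^64
```

Doubling the key length to 256 bits pushes the Grover cost back up to about 2^128 iterations, which is the reasoning behind the key-doubling recommendation.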

Grover's algorithm, along with variants like amplitude amplification, can be used to speed up a broad range of algorithms.[5][6][7] In particular, algorithms for NP-complete problems generally contain exhaustive search as a subroutine, which can be sped up by Grover's algorithm.[6] The current best algorithm for 3SAT is one such example. Generic constraint satisfaction problems also see quadratic speedups with Grover.[8] These algorithms do not require that the input be given in the form of an oracle, since Grover's algorithm is being applied with an explicit function, e.g. the function checking that a set of bits satisfies a 3SAT instance.

Grover's algorithm can also give provable speedups for black-box problems in quantum query complexity, including element distinctness[9] and the collision problem[10] (solved with the Brassard-Høyer-Tapp algorithm). In these types of problems, one treats the oracle function f as a database, and the goal is to use the quantum query to this function as few times as possible.

Grover's algorithm essentially solves the task of function inversion. Roughly speaking, if we have a function y = f(x) that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate x when given y. Consequently, Grover's algorithm gives broad asymptotic speed-ups to many kinds of brute-force attacks on symmetric-key cryptography, including collision attacks and pre-image attacks.[11] However, this may not necessarily be the most efficient algorithm since, for example, the parallel rho algorithm is able to find a collision in SHA-2 more efficiently than Grover's algorithm.[12]

Grover's original paper described the algorithm as a database search algorithm, and this description is still common. The database in this analogy is a table of all of the function's outputs, indexed by the corresponding input. However, this database is not represented explicitly. Instead, an oracle is invoked to evaluate an item by its index. Reading a full database item by item and converting it into such a representation may take a lot longer than Grover's search. To account for such effects, Grover's algorithm can be viewed as solving an equation or satisfying a constraint. In such applications, the oracle is a way to check the constraint and is not related to the search algorithm. This separation usually prevents algorithmic optimizations, whereas conventional search algorithms often rely on such optimizations and avoid exhaustive search.[13]

The major barrier to instantiating a speedup from Grover's algorithm is that the quadratic speedup achieved is too modest to overcome the large overhead of near-term quantum computers.[14] However, later generations of fault-tolerant quantum computers with better hardware performance may be able to realize these speedups for practical instances of data.

As input for Grover's algorithm, suppose we have a function f : {0, 1, ..., N − 1} → {0, 1}. In the "unstructured database" analogy, the domain represents indices to a database, and f(x) = 1 if and only if the data that x points to satisfies the search criterion. We additionally assume that only one index satisfies f(x) = 1, and we call this index ω. Our goal is to identify ω.

We can access f with a subroutine (sometimes called an oracle) in the form of a unitary operator U_ω that acts as follows:

U_ω |x⟩ = −|x⟩ for x = ω, that is, f(x) = 1,
U_ω |x⟩ = |x⟩ for x ≠ ω, that is, f(x) = 0.

This uses the N-dimensional state space H, which is supplied by a register with n = ⌈log₂ N⌉ qubits. This is often written as

U_ω |x⟩ = (−1)^{f(x)} |x⟩.

Grover's algorithm outputs ω with probability at least 1/2 using O(√N) applications of U_ω. This probability can be made arbitrarily large by running Grover's algorithm multiple times. If one runs Grover's algorithm until ω is found, the expected number of applications is still O(√N), since it will only be run twice on average.

This section compares the above oracle U_ω with an oracle U_f.

U_ω is different from the standard quantum oracle for a function f. This standard oracle, denoted here as U_f, uses an ancillary qubit system. The operation then represents an inversion (NOT gate) conditioned by the value of f(x) on the main system:

U_f |x⟩ |y⟩ = |x⟩ |¬y⟩ for x = ω, that is, f(x) = 1,
U_f |x⟩ |y⟩ = |x⟩ |y⟩ for x ≠ ω, that is, f(x) = 0,

or briefly,

U_f |x⟩ |y⟩ = |x⟩ |y ⊕ f(x)⟩.

These oracles are typically realized using uncomputation.

If we are given U_f as our oracle, then we can also implement U_ω, since U_ω is U_f when the ancillary qubit is in the state |−⟩ = (|0⟩ − |1⟩)/√2 = H|1⟩:

U_f (|x⟩ ⊗ |−⟩) = (U_ω |x⟩) ⊗ |−⟩.

So, Grover's algorithm can be run regardless of which oracle is given.[3] If U_f is given, then we must maintain an additional qubit in the state |−⟩ and apply U_f in place of U_ω.
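This phase-kickback identity can be checked numerically on a toy instance (a sketch; the size N, marked index ω, and basis ordering are all illustrative choices):

```python
import numpy as np

N, omega = 4, 2
f = lambda x: int(x == omega)

# Standard oracle U_f on |x>|y> (basis index 2*x + y): flips the ancilla y
# exactly when f(x) = 1.
Uf = np.zeros((2 * N, 2 * N))
for x in range(N):
    for y in range(2):
        Uf[2 * x + (y ^ f(x)), 2 * x + y] = 1.0

# Feed in |x>|-> and read off the sign acquired by the main register.
minus = np.array([1.0, -1.0]) / np.sqrt(2)
for x in range(N):
    state = np.zeros(2 * N)
    state[2 * x : 2 * x + 2] = minus
    out = Uf @ state
    # The ancilla stays in |->; the main register picks up (-1)^f(x).
    assert np.allclose(out[2 * x : 2 * x + 2], (-1) ** f(x) * minus)
```

The loop passes silently, confirming that U_f with a |−⟩ ancilla acts as the phase oracle U_ω on the main register.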

The steps of Grover's algorithm are given as follows:

1. Initialize the system to the uniform superposition over all states, |s⟩ = (1/√N) Σ_x |x⟩.
2. Perform the following "Grover iteration" r(N) times:
   a. Apply the operator U_ω.
   b. Apply the Grover diffusion operator U_s = 2|s⟩⟨s| − I.
3. Measure the resulting quantum state in the computational basis.

For the correctly chosen value of r, the output will be |ω⟩ with probability approaching 1 for N ≫ 1. Analysis shows that this eventual value for r(N) satisfies r(N) ≤ ⌈(π/4)√N⌉.

Implementing the steps for this algorithm can be done using a number of gates linear in the number of qubits.[3] Thus, the gate complexity of this algorithm is O(log(N) r(N)), or O(log N) per iteration.
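The iteration (prepare |s⟩, repeatedly apply U_ω then U_s, measure) can be checked with a small state-vector simulation (a sketch in NumPy; the problem size and marked index are arbitrary):

```python
import numpy as np

def grover_search(N: int, omega: int) -> int:
    """Simulate Grover's algorithm on N basis states with one marked index omega."""
    s = np.full(N, 1 / np.sqrt(N))             # uniform superposition |s>
    state = s.copy()
    r = int(np.floor(np.pi / 4 * np.sqrt(N)))  # number of Grover iterations
    for _ in range(r):
        state[omega] *= -1                     # U_omega: phase-flip the marked entry
        state = 2 * s * (s @ state) - state    # U_s = 2|s><s| - I: inversion about the mean
    return int(np.argmax(state ** 2))          # most probable measurement outcome

print(grover_search(64, 13))  # 13, found after only 6 iterations vs. ~32 classical checks on average
```

Note that the diffusion step is applied here as a rank-one update rather than an explicit N×N matrix, mirroring how the operator costs only O(log N) gates on a real device.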

There is a geometric interpretation of Grover's algorithm, following from the observation that the quantum state of Grover's algorithm stays in a two-dimensional subspace after each step. Consider the plane spanned by |s⟩ and |ω⟩; equivalently, the plane spanned by |ω⟩ and the perpendicular ket |s′⟩ = (1/√(N − 1)) Σ_{x≠ω} |x⟩.

Grover's algorithm begins with the initial ket |s⟩, which lies in the subspace. The operator U_ω is a reflection at the hyperplane orthogonal to |ω⟩ for vectors in the plane spanned by |s′⟩ and |ω⟩, i.e. it acts as a reflection across |s′⟩. This can be seen by writing U_ω in the form of a Householder reflection:

U_ω = I − 2|ω⟩⟨ω|.

The operator U_s = 2|s⟩⟨s| − I is a reflection through |s⟩. Both operators U_s and U_ω take states in the plane spanned by |s′⟩ and |ω⟩ to states in the plane. Therefore, Grover's algorithm stays in this plane for the entire algorithm.

It is straightforward to check that the operator U_s U_ω of each Grover iteration step rotates the state vector by an angle of θ = 2 arcsin(1/√N). So, with enough iterations, one can rotate from the initial state |s⟩ to the desired output state |ω⟩. The initial ket is close to the state orthogonal to |ω⟩:

⟨s′|s⟩ = √((N − 1)/N).

In geometric terms, the angle θ/2 between |s⟩ and |s′⟩ is given by

sin(θ/2) = 1/√N.

We need to stop when the state vector passes close to |ω⟩; after this, subsequent iterations rotate the state vector away from |ω⟩, reducing the probability of obtaining the correct answer. The exact probability of measuring the correct answer is

sin²((r + 1/2) θ),

where r is the (integer) number of Grover iterations. The earliest time that we get a near-optimal measurement is therefore r ≈ π√N/4.

To complete the algebraic analysis, we need to find out what happens when we repeatedly apply U_s U_ω. A natural way to do this is by eigenvalue analysis of a matrix. Notice that during the entire computation, the state of the algorithm is a linear combination of |s⟩ and |ω⟩. We can write the action of U_s and U_ω in the space spanned by {|s⟩, |ω⟩} as:

U_s : |s⟩ ↦ |s⟩;  |ω⟩ ↦ (2/√N) |s⟩ − |ω⟩
U_ω : |s⟩ ↦ |s⟩ − (2/√N) |ω⟩;  |ω⟩ ↦ −|ω⟩

So in the basis {|ω⟩, |s⟩} (which is neither orthogonal nor a basis of the whole space) the action U_s U_ω of applying U_ω followed by U_s is given by the matrix

U_s U_ω = (  1        2/√N
            −2/√N    1 − 4/N ),

where the first column gives the image of |ω⟩ and the second the image of |s⟩.

This matrix happens to have a very convenient form. If we define t = arcsin(1/√N), it has unit determinant and trace 2 − 4/N = 2 cos(2t), so its eigenvalues are e^{2it} and e^{−2it}: each Grover iteration acts as a rotation by angle 2t.

It follows that the r-th power of the matrix (corresponding to r iterations) is a rotation by angle 2rt, with eigenvalues e^{2irt} and e^{−2irt}.

Using this form, we can use trigonometric identities to compute the probability of observing ω after r iterations mentioned in the previous section,

P(ω) = sin²((2r + 1) t).
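This closed form can be confirmed numerically against the 2×2 matrix of U_s U_ω in the non-orthogonal {|ω⟩, |s⟩} coordinates, using ⟨ω|s⟩ = 1/√N to extract the amplitude on |ω⟩ (a sketch; N and r are arbitrary):

```python
import numpy as np

N, r = 64, 5
t = np.arcsin(1 / np.sqrt(N))

# U_s U_omega in {|omega>, |s>} coordinates: columns are the images of
# |omega> and |s> respectively.
M = np.array([[1.0,             2 / np.sqrt(N)],
              [-2 / np.sqrt(N), 1 - 4 / N]])

# Start in |s> (coordinates (0, 1)) and apply r Grover iterations.
a, b = np.linalg.matrix_power(M, r) @ np.array([0.0, 1.0])

# Amplitude on |omega> of a|omega> + b|s> is a + b/sqrt(N), since <omega|s> = 1/sqrt(N).
amp_omega = a + b / np.sqrt(N)
assert np.isclose(amp_omega ** 2, np.sin((2 * r + 1) * t) ** 2)
```

The assertion passes for any N and r, matching the trigonometric derivation.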

Alternatively, one might reasonably imagine that a near-optimal time to distinguish would be when the angles 2rt and −2rt are as far apart as possible, which corresponds to 2rt ≈ π/2, or r = π/(4t) = π/(4 arcsin(1/√N)) ≈ π√N/4. Then the system is in a state with nearly all of its amplitude on |ω⟩.

A short calculation now shows that the observation yields the correct answer with error O(1/N).

If, instead of 1 matching entry, there are k matching entries, the same algorithm works, but the number of iterations must be (π/4)(N/k)^{1/2} instead of (π/4)N^{1/2}.

There are several ways to handle the case if k is unknown.[15] A simple solution performs optimally up to a constant factor: run Grover's algorithm repeatedly for increasingly small guesses of k, e.g. taking k = N, N/2, N/4, ..., i.e. taking k = N/2^t on iteration t, until a matching entry is found.

With sufficiently high probability, a marked entry will be found by iteration t = log₂(N/k) + c for some constant c. Thus, the total number of iterations taken is at most

(π/4)(1 + √2 + √4 + ... + √((N/k) · 2^c)) = O(√(N/k)),

since the geometric sum is dominated by its largest term.
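The geometric-sum bound can be checked directly (a sketch; the constant c and problem sizes are illustrative):

```python
import math

def total_iterations(N: int, k: int, c: int = 2) -> float:
    """Iterations used by the unknown-k strategy: on round t we guess k = N / 2^t
    and run about pi/4 * sqrt(N / (N / 2^t)) = pi/4 * sqrt(2^t) Grover iterations."""
    T = int(math.log2(N / k)) + c   # rounds until a marked entry is found
    return sum(math.pi / 4 * math.sqrt(2 ** t) for t in range(T + 1))

# The total stays within a constant factor of a single run with k known, O(sqrt(N/k)).
N, k = 2 ** 20, 4
assert total_iterations(N, k) < 8 * (math.pi / 4) * math.sqrt(N / k)
```

The constant factor depends on c; here the ratio to a single known-k run is about 7.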

A version of this algorithm is used in order to solve the collision problem.[16][17]

A modification of Grover's algorithm called quantum partial search was described by Grover and Radhakrishnan in 2004.[18] In partial search, one is not interested in finding the exact address of the target item, only the first few digits of the address. Equivalently, we can think of "chunking" the search space into blocks, and then asking "in which block is the target item?". In many applications, such a search yields enough information if the target address contains the information wanted. For instance, to use the example given by L. K. Grover, if one has a list of students organized by class rank, we may only be interested in whether a student is in the lower 25%, the 25-50%, 50-75% or 75-100% percentile.

To describe partial search, we consider a database separated into K blocks, each of size b = N/K. The partial search problem is easier. Consider the approach we would take classically: we pick one block at random, and then perform a normal search through the rest of the blocks (in set theory language, the complement). If we don't find the target, then we know it's in the block we didn't search. The average number of iterations drops from N/2 to (N − b)/2.

Grover's algorithm requires (π/4)√N iterations. Partial search will be faster by a numerical factor that depends on the number of blocks K. Partial search uses n₁ global iterations and n₂ local iterations. The global Grover operator is designated G₁ and the local Grover operator is designated G₂.

The global Grover operator acts on the blocks. Essentially, it is given as follows:

The optimal values of n₁ and n₂ are discussed in the paper by Grover and Radhakrishnan. One might also wonder what happens if one applies successive partial searches at different levels of "resolution". This idea was studied in detail by Vladimir Korepin and Xu, who called it binary quantum search. They proved that it is not in fact any faster than performing a single partial search.

Grover's algorithm is optimal up to sub-constant factors. That is, any algorithm that accesses the database only by using the operator U_ω must apply U_ω at least a 1 − o(1) fraction as many times as Grover's algorithm.[19] The extension of Grover's algorithm to k matching entries, π(N/k)^{1/2}/4 iterations, is also optimal.[16] This result is important in understanding the limits of quantum computation.

If Grover's search problem were solvable with log^c N applications of U_ω, that would imply that NP is contained in BQP, by transforming problems in NP into Grover-type search problems. The optimality of Grover's algorithm thus suggests (but does not prove) that NP is not contained in BQP, i.e. that quantum computers cannot solve NP-complete problems in polynomial time.

It has been shown that a class of non-local hidden variable quantum computers could implement a search of an N-item database in at most O(∛N) steps. This is faster than the O(√N) steps taken by Grover's algorithm.[20]

See the rest here:
Grover's algorithm - Wikipedia

Time crystal in a quantum computer | Stanford News

There is a huge global effort to engineer a computer capable of harnessing the power of quantum physics to carry out computations of unprecedented complexity. While formidable technological obstacles still stand in the way of creating such a quantum computer, today's early prototypes are still capable of remarkable feats.

The Google Sycamore chip used in the creation of a time crystal. (Image credit: Google Quantum AI)

For example, the creation of a new phase of matter called a time crystal. Just as a crystal's structure repeats in space, a time crystal repeats in time and, importantly, does so infinitely and without any further input of energy, like a clock that runs forever without any batteries. The quest to realize this phase of matter has been a longstanding challenge in theory and experiment, one that has now finally come to fruition.

In research published Nov. 30 in Nature, a team of scientists from Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University detail their creation of a time crystal using Googles Sycamore quantum computing hardware.

"The big picture is that we are taking the devices that are meant to be the quantum computers of the future and thinking of them as complex quantum systems in their own right," said Matteo Ippoliti, a postdoctoral scholar at Stanford and co-lead author of the work. "Instead of computation, we're putting the computer to work as a new experimental platform to realize and detect new phases of matter."

For the team, the excitement of their achievement lies not only in creating a new phase of matter but in opening up opportunities to explore new regimes in their field of condensed matter physics, which studies the novel phenomena and properties brought about by the collective interactions of many objects in a system. (Such interactions can be far richer than the properties of the individual objects.)

"Time-crystals are a striking example of a new type of non-equilibrium quantum phase of matter," said Vedika Khemani, assistant professor of physics at Stanford and a senior author of the paper. "While much of our understanding of condensed matter physics is based on equilibrium systems, these new quantum devices are providing us a fascinating window into new non-equilibrium regimes in many-body physics."

The basic ingredients to make this time crystal are as follows: the physics equivalent of a fruit fly and something to give it a kick. The fruit fly of physics is the Ising model, a longstanding tool for understanding various physical phenomena including phase transitions and magnetism, which consists of a lattice where each site is occupied by a particle that can be in two states, represented as a spin up or down.

During her graduate school years, Khemani, her doctoral advisor Shivaji Sondhi, then at Princeton University, and Achilleas Lazarides and Roderich Moessner at the Max Planck Institute for Physics of Complex Systems stumbled upon this recipe for making time crystals unintentionally. They were studying non-equilibrium many-body localized systems, systems where the particles get stuck in the state in which they started and can never relax to an equilibrium state. They were interested in exploring phases that might develop in such systems when they are periodically kicked by a laser. Not only did they manage to find stable non-equilibrium phases, they found one where the spins of the particles flipped between patterns that repeat in time forever, at a period twice that of the driving period of the laser, thus making a time crystal.

The periodic kick of the laser establishes a specific rhythm to the dynamics. Normally the dance of the spins should sync up with this rhythm, but in a time crystal it doesn't. Instead, the spins flip between two states, completing a cycle only after being kicked by the laser twice. This means that the system's time translation symmetry is broken. Symmetries play a fundamental role in physics, and they are often broken, explaining the origins of regular crystals, magnets and many other phenomena; however, time translation symmetry stands out because unlike other symmetries, it can't be broken in equilibrium. The periodic kick is a loophole that makes time crystals possible.

The doubling of the oscillation period is unusual, but not unprecedented. And long-lived oscillations are also very common in the quantum dynamics of few-particle systems. What makes a time crystal unique is that it's a system of millions of things that are showing this kind of concerted behavior without any energy coming in or leaking out.

"It's a completely robust phase of matter, where you're not fine-tuning parameters or states but your system is still quantum," said Sondhi, professor of physics at Oxford and co-author of the paper. "There's no feed of energy, there's no drain of energy, and it keeps going forever and it involves many strongly interacting particles."

While this may sound suspiciously close to a perpetual motion machine, a closer look reveals that time crystals don't break any laws of physics. Entropy, a measure of disorder in the system, remains stationary over time, marginally satisfying the second law of thermodynamics by not decreasing.

Between the development of this plan for a time crystal and the quantum computer experiment that brought it to reality, many experiments by many different teams of researchers achieved various almost-time-crystal milestones. However, providing all the ingredients in the recipe for many-body localization (the phenomenon that enables an infinitely stable time crystal) had remained an outstanding challenge.

For Khemani and her collaborators, the final step to time crystal success was working with a team at Google Quantum AI. Together, this group used Google's Sycamore quantum computing hardware to program 20 spins using the quantum version of a classical computer's bits of information, known as qubits.

Revealing just how intense the interest in time crystals currently is, another time-crystal experiment was published in Science this month. That crystal was created using qubits within a diamond by researchers at Delft University of Technology in the Netherlands.

The researchers were able to confirm their claim of a true time crystal thanks to special capabilities of the quantum computer. Although the finite size and coherence time of the (imperfect) quantum device meant that their experiment was limited in size and duration, so that the time crystal oscillations could only be observed for a few hundred cycles rather than indefinitely, the researchers devised various protocols for assessing the stability of their creation. These included running the simulation forward and backward in time and scaling its size.

A view of the Google dilution refrigerator, which houses the Sycamore chip. (Image credit: Google Quantum AI)

"We managed to use the versatility of the quantum computer to help us analyze its own limitations," said Moessner, co-author of the paper and director at the Max Planck Institute for Physics of Complex Systems. "It essentially told us how to correct for its own errors, so that the fingerprint of ideal time-crystalline behavior could be ascertained from finite time observations."

A key signature of an ideal time crystal is that it shows indefinite oscillations from all states. Verifying this robustness to choice of states was a key experimental challenge, and the researchers devised a protocol to probe over a million states of their time crystal in just a single run of the machine, requiring mere milliseconds of runtime. This is like viewing a physical crystal from many angles to verify its repetitive structure.

"A unique feature of our quantum processor is its ability to create highly complex quantum states," said Xiao Mi, a researcher at Google and co-lead author of the paper. "These states allow the phase structures of matter to be effectively verified without needing to investigate the entire computational space, an otherwise intractable task."

Creating a new phase of matter is unquestionably exciting on a fundamental level. In addition, the fact that these researchers were able to do so points to the increasing usefulness of quantum computers for applications other than computing. "I am optimistic that with more and better qubits, our approach can become a main method in studying non-equilibrium dynamics," said Pedram Roushan, researcher at Google and senior author of the paper.

"We think that the most exciting use for quantum computers right now is as platforms for fundamental quantum physics," said Ippoliti. "With the unique capabilities of these systems, there's hope that you might discover some new phenomenon that you hadn't predicted."

This work was led by Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University. The full author list is available in the Nature paper.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), a Google Research Award, the Sloan Foundation, the Gordon and Betty Moore Foundation and the Deutsche Forschungsgemeinschaft.


Read more from the original source:
Time crystal in a quantum computer | Stanford News