Archive for the ‘Quantum Computer’ Category

Yale Quantum Institute

Listen to the segment on the Science Friday website here. The computer chips that are delivering these words to you work on a simple, binary, on/off...

Sound Artist and Composer Spencer Topel, our 2019 Yale Quantum Institute Artist-in-Residence, performed a live set of Quantum Sound: A Live...

YQI Alumni: Wolfgang Pfaff's lab at UIUC currently has an opening for a postdoc. This position is for a project focusing on modular quantum...

The Department of Applied Physics at Yale University invites applications for a faculty appointment in the area of experimental optics and photonics....

The Department of Electrical and Computer Engineering at The University of Texas at Austin has multiple faculty openings with a start date of Fall 2022...

The College of Science at Northeastern University has launched a junior faculty (tenure track) search across all departments. The priority for the...

Argonne National Laboratory seeks multiple postdoctoral candidates to participate in projects of strategic national importance in quantum...

Yale will lead a new project to simulate the dynamics of complex chemical reactions using quantum computing technology. The new Center for Quantum...

The Department of Physics and Astronomy at Rice University invites applications for tenure-track faculty positions in the broad area of experimental...

The quantum information theory group (Shruti Puri and Steven Girvin) in the Yale Quantum Institute seeks outstanding applicants for a postdoctoral...

See the original post:
Yale Quantum Institute

Creating the Heart of a Quantum Computer: Developing Qubits – SciTechDaily

By Shannon Brescher Shea, U.S. Department of Energy | January 3, 2022

A computer is suspended from the ceiling. Delicate lines and loops of silvery wires and tubes connect gold-colored platforms. It seems to belong in a science-fiction movie, perhaps a steam-punk cousin of HAL in 2001: A Space Odyssey. But while the makers of that 1968 movie imagined computers the size of a spaceship, this technology would never have crossed their minds: a quantum computer.

Quantum computers have the potential to solve problems that conventional computers can't. Conventional computer chips can only process so much information at one time, and we're coming very close to reaching their physical limits. In contrast, the unique properties of materials for quantum computing have the potential to process more information much faster.

These advances could revolutionize certain areas of scientific research. Identifying materials with specific characteristics, understanding photosynthesis, and discovering new medicines all require massive amounts of calculations. In theory, quantum computing could solve these problems faster and more efficiently. Quantum computing could also open up possibilities we never even considered. It's like a microwave oven versus a conventional oven: different technologies with different purposes.

But we're not there yet. So far, one company has claimed its quantum computer can complete a specific calculation faster than the world's fastest conventional supercomputers. Scientists routinely using quantum computers to answer scientific questions is a long way off.

To use quantum computers on a large scale, we need to improve the technology at their heart: qubits. Qubits are the quantum version of conventional computers' most basic form of information, bits. The DOE's Office of Science is supporting research into developing the ingredients and recipes to build these challenging qubits.

DOE's Lawrence Berkeley National Laboratory is using a sophisticated cooling system to keep qubits, the heart of quantum computers, cold enough for scientists to study them for use in quantum computers. Credit: Image courtesy of Lawrence Berkeley National Laboratory

At the atomic scale, physics gets very weird. Electrons, atoms, and other quantum particles interact with each other differently than ordinary objects. In certain materials, we can harness these strange behaviors. Several of these properties, particularly superposition and entanglement, can be extremely useful in computing technology.

The principle of superposition is the idea that a qubit can be in multiple states at once. With traditional bits, you only have two options: 1 or 0. These binary numbers describe all of the information in any computer. Qubits are more complicated.

Imagine a pot with water in it. When you have water in a pot with the lid on, you don't know if it's boiling or not. Real water is either boiling or not; looking at it doesn't change its state. But if the pot were in the quantum realm, the water (representing a quantum particle) could be both boiling and not boiling at the same time, or in any linear superposition of those two states. If you took the lid off of that quantum pot, the water would immediately be in one state or the other. The measurement forces the quantum particle (or water) into a specific observable state.

Entanglement is when qubits have a relationship to each other that prevents them from acting independently. It happens when a quantum particle has a state (such as spin or electric charge) that's linked to another quantum particle's state. This relationship persists even when the particles are physically far apart, even far beyond atomic distances.

These properties allow quantum computers to process more information than conventional bits, which can only be in a single state at a time and only act independently of each other.
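To make superposition and entanglement a bit more concrete, here is a minimal NumPy sketch (not from the article; the states and sample sizes are illustrative). It writes down a single-qubit superposition, samples measurement outcomes from it, and then does the same for an entangled two-qubit Bell pair.

```python
# A minimal sketch of the two ideas above, using plain state vectors in NumPy.
import numpy as np

rng = np.random.default_rng(0)

# --- Superposition: an equal mix of |0> and |1> ---
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)          # the state (|0> + |1>)/sqrt(2)

# "Taking the lid off the pot": measuring collapses the state to 0 or 1,
# each with probability |amplitude|^2 = 0.5.
probs = np.abs(plus) ** 2
samples = rng.choice([0, 1], size=10, p=probs)
print("measurement outcomes of the superposition:", samples)

# --- Entanglement: a two-qubit Bell state (|00> + |11>)/sqrt(2) ---
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
probs2 = np.abs(bell) ** 2                 # probabilities over |00>, |01>, |10>, |11>
outcomes = rng.choice(["00", "01", "10", "11"], size=10, p=probs2)
print("joint outcomes of the Bell pair:   ", outcomes)
# Only "00" and "11" ever appear: the two qubits are perfectly correlated,
# which is the signature of entanglement described in the text.
```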

But to get any of these great properties, you need to have fine control over a material's electrons or other quantum particles. In some ways, this isn't so different from conventional computers. Whether or not electrons move through a conventional transistor determines the bit's value, making it either 1 or 0.

Rather than simply switching electron flow on or off, qubits require control over tricky things like electron spin. To create a qubit, scientists have to find a spot in a material where they can access and control these quantum properties. Once they access them, they can then use light or magnetic fields to create superposition, entanglement, and other properties.

In many materials, scientists do this by manipulating the spin of individual electrons. Electron spin is similar to the spin of a top; it has a direction, angle, and momentum. Each electron's spin is either up or down. But as a quantum mechanical property, spin can also exist in a combination of up and down. To influence electron spin, scientists apply microwaves (similar to the ones in your microwave oven) and magnets. The magnets and microwaves together allow scientists to control the qubit.
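The following is a minimal sketch of that idea, assuming an idealized, resonantly driven spin and an illustrative 1 MHz Rabi frequency (not any lab's actual control code): the probability of finding the spin flipped oscillates as the microwave pulse gets longer, so the pulse length sets the spin's state.

```python
# A toy model: a single spin driven on resonance by a microwave field.
# In the rotating frame the drive looks like H = (Omega/2) * sigma_x, and the
# spin cycles between "down" and "up" at the (assumed) Rabi frequency Omega.
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
identity = np.eye(2, dtype=complex)

def evolve(state, omega, t):
    """Apply U = exp(-i (omega/2) sigma_x t) to a spin state."""
    u = np.cos(omega * t / 2) * identity - 1j * np.sin(omega * t / 2) * sigma_x
    return u @ state

omega = 2 * np.pi * 1e6                   # assumed 1 MHz Rabi frequency
down = np.array([1, 0], dtype=complex)    # spin "down" = |0>

for t in np.linspace(0, 1e-6, 5):         # one full Rabi cycle
    state = evolve(down, omega, t)
    p_up = abs(state[1]) ** 2
    print(f"t = {t*1e9:6.1f} ns   P(up) = {p_up:.2f}")
# Output goes 0.00 -> 0.50 -> 1.00 -> 0.50 -> 0.00: pulsing the microwaves for
# half a cycle flips the spin, while a quarter cycle leaves it in a superposition.
```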

Since the 1990s, scientists have been able to gain better and better control over electron spin. That's allowed them to access quantum states and manipulate quantum information more than ever before.

"To see where that's gone today, it's remarkable," said David Awschalom, a quantum physicist at DOE's Argonne National Laboratory and the University of Chicago as well as Director of the Chicago Quantum Exchange.

Whether they use electron spin or another approach, all qubits face major challenges before we can scale them up. Two of the biggest ones are coherence time and error correction.

When you run a computer, you need to be able to create and store a piece of information, leave it alone, and then come back later to retrieve it. However, if the system that holds the information changes on its own, it's useless for computing. Unfortunately, qubits are sensitive to the environment around them and don't maintain their state for very long.

Right now, quantum systems are subject to a lot of noise, things that cause them to have a low coherence time (the time they can maintain their condition) or produce errors. "Making sure that you get the right answer all of the time is one of the biggest hurdles in quantum computing," said Danna Freedman, an associate professor in chemistry at Northwestern University.

Even if you can reduce that noise, there will still be errors. "We will have to build technology that is able to do error correction before we are able to make a big difference with quantum computing," said Giulia Galli, a quantum chemist and physicist at DOE's Argonne National Laboratory and the University of Chicago.

The more qubits you have in play, the more these problems multiply. While today's most powerful quantum computers have about 50 qubits, it's likely that they will need hundreds or thousands to solve the problems that we want them to.

The jury is still out on which qubit technology will be the best. "No real winner has been identified," said Galli. "[Different ones] may have their place for different applications." In addition to computing, different quantum materials may be useful for quantum sensing or networked quantum communications.

To help move qubits forward, DOE's Office of Science is supporting research on a number of different technologies. "To realize quantum computing's enormous scientific potential, we need to reimagine quantum R&D by simultaneously exploring a range of possible solutions," said Irfan Siddiqi, a quantum physicist at DOE's Lawrence Berkeley National Laboratory and the University of California, Berkeley.

Superconducting Qubits

Superconducting qubits are currently the most advanced qubit technology. Most existing quantum computers use superconducting qubits, including the one that beat the world's fastest supercomputer. They use metal-insulator-metal sandwiches called Josephson junctions. To turn these materials into superconductors, materials that electricity can run through with no loss, scientists lower them to extremely cold temperatures. Among other things, pairs of electrons coherently move through the material as if they're single particles. This movement makes the quantum states more long-lived than in conventional materials.

To scale up superconducting qubits, Siddiqi and his colleagues are studying how to build them even better with support from the Office of Science. His team has examined how to make improvements to a Josephson junction, a thin insulating barrier between two superconductors in the qubit. By affecting how electrons flow, this barrier makes it possible to control electrons' energy levels. Making this junction as consistent and small as possible can increase the qubit's coherence time. In one paper on these junctions, Siddiqi's team provides a recipe to build an eight-qubit quantum processor, complete with experimental ingredients and step-by-step instructions.

Qubits Using Defects

Defects are spaces where atoms are missing or misplaced in a material's structure. These spaces change how electrons move in the materials. In certain quantum materials, these spaces trap electrons, allowing researchers to access and control their spins. Unlike superconductors, these qubits don't always need to be at ultra-low temperatures. They have the potential to have long coherence times and be manufactured at scale.

While diamonds are usually valued for their lack of imperfections, their defects are actually quite useful for qubits. Adding a nitrogen atom to a place where there would normally be a carbon atom in a diamond creates what's called a nitrogen-vacancy center. Researchers using the Center for Functional Nanomaterials, a DOE Office of Science user facility, found a way to create a stencil just two nanometers long to create these defect patterns. This spacing helped increase these qubits' coherence time and made it easier to entangle them.

But useful defects aren't limited to diamonds. Diamonds are expensive, small, and hard to control. Aluminum nitride and silicon carbide are cheaper, easier to use, and already common in everyday electronics. Galli and her team used theory to predict how to physically strain aluminum nitride in just the right way to create electron states for qubits. As nitrogen vacancies occur naturally in aluminum nitride, scientists should be able to control electron spin in it just as they do in diamonds. Another option, silicon carbide, is already used in LED lights, high-powered electronics, and electronic displays. Awschalom's team found that certain defects in silicon carbide have coherence times comparable to or longer than those in nitrogen-vacancy centers in diamonds. In complementary work, Galli's group developed theoretical models explaining the longer coherence times.

"Based on theoretical work, we began to examine these materials at the atomic scale. We found that the quantum states were always there, but no one had looked for them," said Awschalom. "Their presence and robust behavior in these materials were unexpected. We imagined that their quantum properties would be short-lived due to interactions with nearby nuclear spins." Since then, his team has embedded these qubits in commercial electronic wafers and found that they do surprisingly well. This can allow them to connect the qubits with electronics.

Materials by Design

While some scientists are investigating how to use existing materials, others are taking a different tack: designing materials from scratch. This approach builds custom materials molecule by molecule. By customizing metals, the molecules or ions bound to metals, and the surrounding environment, scientists can potentially control quantum states at the level of a single particle.

"When you're talking about both understanding and optimizing the properties of a qubit, knowing that every atom in a quantum system is exactly where you want it is very important," said Freedman.

With this approach, scientists can limit the amount of nuclear spin (the spin of the nucleus of an atom) in the qubit's environment. A lot of atoms that contain nuclear spin cause magnetic noise that makes it hard to maintain and control electron spin. That reduces the qubit's coherence time. Freedman and her team developed an environment that had very little nuclear spin. By testing different combinations of solvents, temperatures, and ions/molecules attached to the metal, they achieved a 1 millisecond coherence time in a molecule that contains the metal vanadium. That was a much longer coherence time than anyone had achieved in a molecule before. While previous molecular qubits had coherence times that were five times shorter than diamond nitrogen-vacancy centers' times, this matched coherence times in diamonds.

"That was genuinely shocking to me because I thought molecules would necessarily be the underdogs in this game," said Freedman. "[It] opens up a gigantic space for us to play in."

The surprises in quantum just keep coming. Awschalom compared our present-day situation to the 1950s when scientists were exploring the potential of transistors. At the time, transistors were less than half an inch long. Now laptops have billions of them. Quantum computing stands in a similar place.

"The overall idea that we could completely transform the way that computation is done and the way nature is studied by doing quantum simulation is really very exciting," said Galli. "Our fundamental way of looking at materials, based on quantum simulations, can finally be useful to develop technologically relevant devices and materials."

Read the rest here:
Creating the Heart of a Quantum Computer: Developing Qubits - SciTechDaily

Grover’s algorithm – Wikipedia

Quantum search algorithm

In quantum computing, Grover's algorithm, also known as the quantum search algorithm, refers to a quantum algorithm for unstructured search that finds with high probability the unique input to a black box function that produces a particular output value, using just O(√N) evaluations of the function, where N is the size of the function's domain. It was devised by Lov Grover in 1996.[1]

The analogous problem in classical computation cannot be solved in fewer than O(N) evaluations (because, on average, one has to check half of the domain to get a 50% chance of finding the right input). At roughly the same time that Grover published his algorithm, Charles H. Bennett, Ethan Bernstein, Gilles Brassard, and Umesh Vazirani proved that any quantum solution to the problem needs to evaluate the function Ω(√N) times, so Grover's algorithm is asymptotically optimal.[2] Since researchers generally believe that NP-complete problems are difficult because their search spaces have essentially no structure, the optimality of Grover's algorithm for unstructured search suggests (but does not prove) that quantum computers cannot solve NP-complete problems in polynomial time.[3]

Unlike other quantum algorithms, which may provide exponential speedup over their classical counterparts, Grover's algorithm provides only a quadratic speedup. However, even quadratic speedup is considerable when N is large, and Grover's algorithm can be applied to speed up broad classes of algorithms.[3] Grover's algorithm could brute-force a 128-bit symmetric cryptographic key in roughly 2^64 iterations, or a 256-bit key in roughly 2^128 iterations. As a result, it is sometimes suggested[4] that symmetric key lengths be doubled to protect against future quantum attacks.
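As a quick sanity check on those figures (a worked identity, not additional material from the article):

```latex
% A k-bit key has 2^k possibilities; Grover search needs on the order of the square root:
\sqrt{2^{128}} = 2^{128/2} = 2^{64}, \qquad \sqrt{2^{256}} = 2^{128}.
% Doubling the key length from 128 to 256 bits therefore restores a 2^{128} work factor
% even against an attacker running Grover's algorithm.
```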

Grover's algorithm, along with variants like amplitude amplification, can be used to speed up a broad range of algorithms.[5][6][7] In particular, algorithms for NP-complete problems generally contain exhaustive search as a subroutine, which can be sped up by Grover's algorithm.[6] The current best algorithm for 3SAT is one such example. Generic constraint satisfaction problems also see quadratic speedups with Grover.[8] These algorithms do not require that the input be given in the form of an oracle, since Grover's algorithm is being applied with an explicit function, e.g. the function checking that a set of bits satisfies a 3SAT instance.

Grover's algorithm can also give provable speedups for black-box problems in quantum query complexity, including element distinctness[9] and the collision problem[10] (solved with the Brassard–Høyer–Tapp algorithm). In these types of problems, one treats the oracle function f as a database, and the goal is to use the quantum query to this function as few times as possible.

Grover's algorithm essentially solves the task of function inversion. Roughly speaking, if we have a function y = f(x) that can be evaluated on a quantum computer, Grover's algorithm allows us to calculate x when given y. Consequently, Grover's algorithm gives broad asymptotic speed-ups to many kinds of brute-force attacks on symmetric-key cryptography, including collision attacks and pre-image attacks.[11] However, this may not necessarily be the most efficient algorithm since, for example, the parallel rho algorithm is able to find a collision in SHA-2 more efficiently than Grover's algorithm.[12]

Grover's original paper described the algorithm as a database search algorithm, and this description is still common. The database in this analogy is a table of all of the function's outputs, indexed by the corresponding input. However, this database is not represented explicitly. Instead, an oracle is invoked to evaluate an item by its index. Reading a full database item by item and converting it into such a representation may take a lot longer than Grover's search. To account for such effects, Grover's algorithm can be viewed as solving an equation or satisfying a constraint. In such applications, the oracle is a way to check the constraint and is not related to the search algorithm. This separation usually prevents algorithmic optimizations, whereas conventional search algorithms often rely on such optimizations and avoid exhaustive search.[13]
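As a toy illustration of an oracle that checks a constraint rather than looking anything up, here is a sketch with a small, made-up 3-SAT instance (the clauses are arbitrary); Grover's algorithm would query a quantum version of this predicate rather than scan a table of answers.

```python
# The "oracle" is just a function that checks a constraint.
from itertools import product

# Clauses as lists of literals (variable index, negated?); each clause is an OR of 3 literals.
clauses = [
    [(0, False), (1, False), (2, True)],   # x0 OR x1 OR NOT x2
    [(0, True),  (1, False), (2, False)],  # NOT x0 OR x1 OR x2
    [(0, False), (1, True),  (2, True)],   # x0 OR NOT x1 OR NOT x2
]

def f(bits):
    """Oracle predicate: True iff every clause is satisfied by the assignment."""
    return all(
        any(bits[i] != neg for i, neg in clause)   # the literal evaluates to true
        for clause in clauses
    )

# Classical exhaustive search checks up to 2^3 = 8 assignments;
# Grover's algorithm would need on the order of sqrt(8) ~ 3 oracle calls.
satisfying = [bits for bits in product([0, 1], repeat=3) if f(bits)]
print("satisfying assignments:", satisfying)
```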

The major barrier to instantiating a speedup from Grover's algorithm is that the quadratic speedup achieved is too modest to overcome the large overhead of near-term quantum computers.[14] However, later generations of fault-tolerant quantum computers with better hardware performance may be able to realize these speedups for practical instances of data.

As input for Grover's algorithm, suppose we have a function f : {0, 1, ..., N−1} → {0, 1}. In the "unstructured database" analogy, the domain represents indices to a database, and f(x) = 1 if and only if the data that x points to satisfies the search criterion. We additionally assume that only one index satisfies f(x) = 1, and we call this index ω. Our goal is to identify ω.

We can access f with a subroutine (sometimes called an oracle) in the form of a unitary operator Uω that acts as follows: Uω|x⟩ = −|x⟩ if x = ω, and Uω|x⟩ = |x⟩ if x ≠ ω.

This uses the N-dimensional state space H, which is supplied by a register with n = ⌈log₂ N⌉ qubits. This is often written as Uω|x⟩ = (−1)^f(x) |x⟩.

Grover's algorithm outputs ω with probability at least 1/2 using O(√N) applications of Uω. This probability can be made arbitrarily large by running Grover's algorithm multiple times. If one runs Grover's algorithm until ω is found, the expected number of applications is still O(√N), since it will only be run twice on average.

This section compares the above oracle Uω with an oracle Uf.

Uω is different from the standard quantum oracle for a function f. This standard oracle, denoted here as Uf, uses an ancillary qubit system. The operation then represents an inversion (NOT gate) conditioned by the value of f(x) on the main system: Uf|x⟩|y⟩ = |x⟩|y ⊕ f(x)⟩.

These oracles are typically realized using uncomputation.

If we are given Uf as our oracle, then we can also implement Uω, since Uω is Uf when the ancillary qubit is in the state |−⟩ = (1/√2)(|0⟩ − |1⟩) = H|1⟩: applying Uf to |x⟩|−⟩ gives (−1)^f(x) |x⟩|−⟩, which is exactly the action of Uω on the main register.
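A small NumPy sketch of that claim (assumed two-qubit register and marked index, for illustration only) builds the standard oracle Uf explicitly and checks that, with the ancilla in |−⟩, it reproduces the ±1 phases of Uω:

```python
# Phase kickback: U_f with the ancilla in |-> acts like the phase oracle U_omega.
import numpy as np

n = 2                      # main register of 2 qubits, so x in {0, 1, 2, 3}
N = 2 ** n
omega = 3                  # assumed marked item

def f(x):
    return 1 if x == omega else 0

# U_f on (main tensor ancilla): |x>|y>  ->  |x>|y XOR f(x)>
U_f = np.zeros((2 * N, 2 * N))
for x in range(N):
    for y in range(2):
        U_f[2 * x + (y ^ f(x)), 2 * x + y] = 1

minus = np.array([1, -1]) / np.sqrt(2)     # |-> = (|0> - |1>)/sqrt(2)

for x in range(N):
    ket_x = np.zeros(N); ket_x[x] = 1
    out = U_f @ np.kron(ket_x, minus)
    phase = out @ np.kron(ket_x, minus)    # overlap with the input state
    print(f"x = {x}: U_f |x>|->  =  {phase:+.0f} |x>|->")
# Prints +1 for the unmarked x and -1 for x = omega, exactly the action of U_omega.
```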

So, Grover's algorithm can be run regardless of which oracle is given.[3] If Uf is given, then we must maintain an additional qubit in the state |−⟩ and apply Uf in place of Uω.

The steps of Grover's algorithm are given as follows: initialize the register to the uniform superposition |s⟩ over all N indices; repeat the Grover iteration, applying the oracle Uω followed by the diffusion operator Us = 2|s⟩⟨s| − I, a total of r(N) times; then measure the register.

For the correctly chosen value of r, the output will be |ω⟩ with probability approaching 1 for N ≫ 1. Analysis shows that this eventual value for r(N) satisfies r(N) ≤ ⌈(π/4)√N⌉.

Implementing the steps for this algorithm can be done using a number of gates linear in the number of qubits.[3] Thus, the gate complexity of this algorithm is O(log(N) r(N)), or O(log N) per iteration.
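Here is a minimal statevector sketch of those steps in NumPy (illustrative sizes; a real implementation would act with gates on qubits rather than on the full amplitude vector): start in the uniform superposition |s⟩, repeat the Grover iteration about (π/4)√N times, then read off the probability of the marked index.

```python
# Statevector sketch of Grover's algorithm on an assumed 8-qubit "database".
import numpy as np

n = 8                          # qubits
N = 2 ** n                     # database size 256
omega = 123                    # assumed marked index

state = np.full(N, 1 / np.sqrt(N))          # |s>: uniform superposition
r = int(np.floor(np.pi / 4 * np.sqrt(N)))   # number of Grover iterations

for _ in range(r):
    # Oracle U_omega: flip the sign of the amplitude of the marked item.
    state[omega] *= -1
    # Diffusion U_s = 2|s><s| - I: reflect every amplitude about the mean.
    state = 2 * state.mean() - state

print(f"iterations: {r}")                              # 12 for N = 256
print(f"P(measure omega) = {state[omega]**2:.4f}")     # ~0.9999
# A classical search would need ~N/2 = 128 lookups on average; Grover uses r = 12.
```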

There is a geometric interpretation of Grover's algorithm, following from the observation that the quantum state of Grover's algorithm stays in a two-dimensional subspace after each step. Consider the plane spanned by |s⟩ and |ω⟩; equivalently, the plane spanned by |ω⟩ and the perpendicular ket |s′⟩ = (1/√(N−1)) Σ_{x≠ω} |x⟩.

Grover's algorithm begins with the initial ket |s⟩, which lies in the subspace. The operator Uω is a reflection at the hyperplane orthogonal to |ω⟩ for vectors in the plane spanned by |s′⟩ and |ω⟩, i.e. it acts as a reflection across |s′⟩. This can be seen by writing Uω in the form of a Householder reflection: Uω = I − 2|ω⟩⟨ω|.

The operator Us = 2|s⟩⟨s| − I is a reflection through |s⟩. Both operators Us and Uω take states in the plane spanned by |s′⟩ and |ω⟩ to states in the plane. Therefore, Grover's algorithm stays in this plane for the entire algorithm.

It is straightforward to check that the operator UsUω of each Grover iteration step rotates the state vector by an angle of θ = 2 arcsin(1/√N). So, with enough iterations, one can rotate from the initial state |s⟩ to the desired output state |ω⟩. The initial ket is close to the state orthogonal to |ω⟩: |s⟩ = (1/√N)|ω⟩ + √((N−1)/N)|s′⟩.

In geometric terms, the angle θ/2 between |s⟩ and |s′⟩ is given by sin(θ/2) = 1/√N.

We need to stop when the state vector passes close to |ω⟩; after this, subsequent iterations rotate the state vector away from |ω⟩, reducing the probability of obtaining the correct answer. The exact probability of measuring the correct answer is sin²((2r + 1)·θ/2),

where r is the (integer) number of Grover iterations. The earliest time that we get a near-optimal measurement is therefore r ≈ π√N/4.
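To make the stopping rule concrete, here is a small worked example with an assumed database size (illustrative numbers, not from the article):

```latex
% Assumed example: N = 2^{20} = 1{,}048{,}576 entries, so \sqrt{N} = 1024.
\theta = 2\arcsin\tfrac{1}{\sqrt{N}} \approx \tfrac{2}{1024}, \qquad
r \approx \tfrac{\pi}{4}\sqrt{N} = \tfrac{\pi}{4}\cdot 1024 \approx 804, \qquad
P(\omega) = \sin^{2}\!\bigl((2r+1)\tfrac{\theta}{2}\bigr) \approx 1 .
% For comparison, a classical search needs about N/2 = 524{,}288 evaluations on average.
```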

To complete the algebraic analysis, we need to find out what happens when we repeatedly apply UsUω. A natural way to do this is by eigenvalue analysis of a matrix. Notice that during the entire computation, the state of the algorithm is a linear combination of |s⟩ and |ω⟩. We can write the action of Us and Uω in the space spanned by {|s⟩, |ω⟩} as: Uω sends |s⟩ ↦ |s⟩ − (2/√N)|ω⟩ and |ω⟩ ↦ −|ω⟩, while Us sends |s⟩ ↦ |s⟩ and |ω⟩ ↦ (2/√N)|s⟩ − |ω⟩.

So in the basis {|ω⟩, |s⟩} (which is neither orthogonal nor a basis of the whole space) the action UsUω of applying Uω followed by Us is given by the matrix [[1, 2/√N], [−2/√N, 1 − 4/N]], listing the ω-components in the first row and the s-components in the second.

This matrix happens to have a very convenient Jordan form. If we define t = arcsin(1/√N), it is a rotation by the angle 2t in a suitable basis, with eigenvalues e^{2it} and e^{−2it}.

It follows that the r-th power of the matrix (corresponding to r iterations) is a rotation by the angle 2rt, with eigenvalues e^{2irt} and e^{−2irt}.

Using this form, we can use trigonometric identities to compute the probability of observing ω after r iterations mentioned in the previous section: P(ω) = sin²((2r + 1)t).

Alternatively, one might reasonably imagine that a near-optimal time to distinguish would be when the angles 2rt and −2rt are as far apart as possible, which corresponds to 2rt ≈ π/2, or r = π/(4t) = π/(4 arcsin(1/√N)) ≈ π√N/4. Then the system is in a state very close to |ω⟩.

A short calculation now shows that the observation yields the correct answer with error O(1/N).

If, instead of 1 matching entry, there are k matching entries, the same algorithm works, but the number of iterations must be (π/4)(N/k)^{1/2} instead of (π/4)N^{1/2}.

There are several ways to handle the case if k is unknown.[15] A simple solution performs optimally up to a constant factor: run Grover's algorithm repeatedly for increasingly small values of k, e.g. taking k = N, N/2, N/4, ..., and so on, taking k = N/2^t for iteration t until a matching entry is found.

With sufficiently high probability, a marked entry will be found by iteration t = log₂(N/k) + c for some constant c. Thus, the total number of iterations taken is at most on the order of √(N/k).

A version of this algorithm is used in order to solve the collision problem.[16][17]

A modification of Grover's algorithm called quantum partial search was described by Grover and Radhakrishnan in 2004.[18] In partial search, one is not interested in finding the exact address of the target item, only the first few digits of the address. Equivalently, we can think of "chunking" the search space into blocks, and then asking "in which block is the target item?". In many applications, such a search yields enough information if the target address contains the information wanted. For instance, to use the example given by L. K. Grover, if one has a list of students organized by class rank, we may only be interested in whether a student is in the lower 25%, the 25–50%, the 50–75%, or the 75–100% percentile.

To describe partial search, we consider a database separated into K blocks, each of size b = N/K. The partial search problem is easier. Consider the approach we would take classically: we pick one block at random, and then perform a normal search through the rest of the blocks (in set theory language, the complement). If we don't find the target, then we know it's in the block we didn't search. The average number of iterations drops from N/2 to (N − b)/2.

Grover's algorithm requires (π/4)√N iterations. Partial search will be faster by a numerical factor that depends on the number of blocks K. Partial search uses n₁ global iterations and n₂ local iterations. The global Grover operator is designated G₁ and the local Grover operator is designated G₂.

The global Grover operator acts on the blocks rather than on individual entries; its explicit form is given in the paper by Grover and Radhakrishnan.

The optimal values of j₁ and j₂ are discussed in the paper by Grover and Radhakrishnan. One might also wonder what happens if one applies successive partial searches at different levels of "resolution". This idea was studied in detail by Vladimir Korepin and Xu, who called it binary quantum search. They proved that it is not in fact any faster than performing a single partial search.

Grover's algorithm is optimal up to sub-constant factors. That is, any algorithm that accesses the database only by using the operator Uω must apply Uω at least a 1 − o(1) fraction as many times as Grover's algorithm.[19] The extension of Grover's algorithm to k matching entries, (π/4)(N/k)^{1/2} iterations, is also optimal.[16] This result is important in understanding the limits of quantum computation.

If Grover's search problem were solvable with log^c N applications of Uω, that would imply that NP is contained in BQP, by transforming problems in NP into Grover-type search problems. The optimality of Grover's algorithm suggests that quantum computers cannot solve NP-complete problems in polynomial time, and thus that NP is not contained in BQP.

It has been shown that a class of non-local hidden variable quantum computers could implement a search of an N-item database in at most O(∛N) steps. This is faster than the O(√N) steps taken by Grover's algorithm.[20]

See the rest here:
Grover's algorithm - Wikipedia

Time crystal in a quantum computer | Stanford News

There is a huge global effort to engineer a computer capable of harnessing the power of quantum physics to carry out computations of unprecedented complexity. While formidable technological obstacles still stand in the way of creating such a quantum computer, today's early prototypes are still capable of remarkable feats.

The Google Sycamore chip used in the creation of a time crystal. (Image credit: Google Quantum AI)

For example, the creation of a new phase of matter called a time crystal. Just as a crystal's structure repeats in space, a time crystal repeats in time and, importantly, does so infinitely and without any further input of energy, like a clock that runs forever without any batteries. The quest to realize this phase of matter has been a longstanding challenge in theory and experiment, one that has now finally come to fruition.

In research published Nov. 30 in Nature, a team of scientists from Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University detail their creation of a time crystal using Google's Sycamore quantum computing hardware.

"The big picture is that we are taking the devices that are meant to be the quantum computers of the future and thinking of them as complex quantum systems in their own right," said Matteo Ippoliti, a postdoctoral scholar at Stanford and co-lead author of the work. "Instead of computation, we're putting the computer to work as a new experimental platform to realize and detect new phases of matter."

For the team, the excitement of their achievement lies not only in creating a new phase of matter but in opening up opportunities to explore new regimes in their field of condensed matter physics, which studies the novel phenomena and properties brought about by the collective interactions of many objects in a system. (Such interactions can be far richer than the properties of the individual objects.)

"Time crystals are a striking example of a new type of non-equilibrium quantum phase of matter," said Vedika Khemani, assistant professor of physics at Stanford and a senior author of the paper. "While much of our understanding of condensed matter physics is based on equilibrium systems, these new quantum devices are providing us a fascinating window into new non-equilibrium regimes in many-body physics."

The basic ingredients to make this time crystal are as follows: the physics equivalent of a fruit fly, and something to give it a kick. The fruit fly of physics is the Ising model, a longstanding tool for understanding various physical phenomena including phase transitions and magnetism, which consists of a lattice where each site is occupied by a particle that can be in two states, represented as a spin up or down.

During her graduate school years, Khemani, her doctoral advisor Shivaji Sondhi, then at Princeton University, and Achilleas Lazarides and Roderich Moessner at the Max Planck Institute for Physics of Complex Systems stumbled upon this recipe for making time crystals unintentionally. They were studying non-equilibrium many-body localized systems, systems where the particles get stuck in the state in which they started and can never relax to an equilibrium state. They were interested in exploring phases that might develop in such systems when they are periodically kicked by a laser. Not only did they manage to find stable non-equilibrium phases, they found one where the spins of the particles flipped between patterns that repeat in time forever, at a period twice that of the driving period of the laser, thus making a time crystal.

The periodic kick of the laser establishes a specific rhythm to the dynamics. Normally the dance of the spins should sync up with this rhythm, but in a time crystal it doesn't. Instead, the spins flip between two states, completing a cycle only after being kicked by the laser twice. This means that the system's time translation symmetry is broken. Symmetries play a fundamental role in physics, and they are often broken, explaining the origins of regular crystals, magnets and many other phenomena; however, time translation symmetry stands out because, unlike other symmetries, it can't be broken in equilibrium. The periodic kick is a loophole that makes time crystals possible.
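The sketch below is a small toy version of that recipe (an assumed six-spin chain with made-up disorder strengths, not the published 20-qubit experiment): each drive period applies a slightly imperfect global flip followed by disordered Ising interactions, and the tracked spin repeats only every two periods.

```python
# Toy kicked Ising chain: period-doubled ("time crystal"-like) response.
import numpy as np

rng = np.random.default_rng(1)
n = 6                                  # toy chain of 6 spins (the experiment used 20)
dim = 2 ** n

# z[k, s] = +1/-1 value of spin k in computational basis state s
z = np.array([[1 - 2 * ((s >> (n - 1 - k)) & 1) for s in range(dim)] for k in range(n)])

# Disordered Ising phases: random couplings between neighbors plus random fields.
energies = np.zeros(dim)
for k in range(n - 1):
    energies += rng.uniform(0.5, 1.5) * z[k] * z[k + 1]
for k in range(n):
    energies += rng.uniform(0.0, 1.0) * z[k]
U_ising = np.diag(np.exp(-1j * energies))        # diagonal in the spin (Z) basis

# Imperfect global flip: rotate every spin by pi*(1 - eps) about X.
eps = 0.03
X = np.array([[0, 1], [1, 0]], dtype=complex)
theta = np.pi * (1 - eps)
rot = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X
kick = rot
for _ in range(n - 1):
    kick = np.kron(kick, rot)

U_period = U_ising @ kick                        # one full drive period

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                                   # start with all spins up
for period in range(1, 13):
    state = U_period @ state
    z0 = np.real(np.sum(z[0] * np.abs(state) ** 2))   # <Z> of the first spin
    print(f"period {period:2d}   <Z_0> = {z0:+.2f}")
# <Z_0> flips sign every drive period, so the spin pattern repeats only every
# TWO periods -- the subharmonic response described above. This toy only shows
# the period doubling; establishing that it is a robust, infinitely long-lived
# phase is the hard part the researchers addressed.
```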

The doubling of the oscillation period is unusual, but not unprecedented. And long-lived oscillations are also very common in the quantum dynamics of few-particle systems. What makes a time crystal unique is that it's a system of millions of things that are showing this kind of concerted behavior without any energy coming in or leaking out.

"It's a completely robust phase of matter, where you're not fine-tuning parameters or states but your system is still quantum," said Sondhi, professor of physics at Oxford and co-author of the paper. "There's no feed of energy, there's no drain of energy, and it keeps going forever and it involves many strongly interacting particles."

While this may sound suspiciously close to a perpetual motion machine, a closer look reveals that time crystals don't break any laws of physics. Entropy, a measure of disorder in the system, remains stationary over time, marginally satisfying the second law of thermodynamics by not decreasing.

Between the development of this plan for a time crystal and the quantum computer experiment that brought it to reality, many experiments by many different teams of researchers achieved various almost-time-crystal milestones. However, providing all the ingredients in the recipe for many-body localization (the phenomenon that enables an infinitely stable time crystal) had remained an outstanding challenge.

For Khemani and her collaborators, the final step to time crystal success was working with a team at Google Quantum AI. Together, this group used Google's Sycamore quantum computing hardware to program 20 spins using the quantum version of a classical computer's bits of information, known as qubits.

Revealing just how intense the interest in time crystals currently is, another time crystal was reported in Science this month. That crystal was created using qubits within a diamond by researchers at Delft University of Technology in the Netherlands.

The researchers were able to confirm their claim of a true time crystal thanks to special capabilities of the quantum computer. Although the finite size and coherence time of the (imperfect) quantum device meant that their experiment was limited in size and duration, so that the time crystal oscillations could only be observed for a few hundred cycles rather than indefinitely, the researchers devised various protocols for assessing the stability of their creation. These included running the simulation forward and backward in time and scaling its size.

A view of the Google dilution refrigerator, which houses the Sycamore chip. (Image credit: Google Quantum AI)

"We managed to use the versatility of the quantum computer to help us analyze its own limitations," said Moessner, co-author of the paper and director at the Max Planck Institute for Physics of Complex Systems. "It essentially told us how to correct for its own errors, so that the fingerprint of ideal time-crystalline behavior could be ascertained from finite time observations."

A key signature of an ideal time crystal is that it shows indefinite oscillations from all states. Verifying this robustness to choice of states was a key experimental challenge, and the researchers devised a protocol to probe over a million states of their time crystal in just a single run of the machine, requiring mere milliseconds of runtime. This is like viewing a physical crystal from many angles to verify its repetitive structure.

"A unique feature of our quantum processor is its ability to create highly complex quantum states," said Xiao Mi, a researcher at Google and co-lead author of the paper. "These states allow the phase structures of matter to be effectively verified without needing to investigate the entire computational space, an otherwise intractable task."

Creating a new phase of matter is unquestionably exciting on a fundamental level. In addition, the fact that these researchers were able to do so points to the increasing usefulness of quantum computers for applications other than computing. "I am optimistic that with more and better qubits, our approach can become a main method in studying non-equilibrium dynamics," said Pedram Roushan, researcher at Google and senior author of the paper.

"We think that the most exciting use for quantum computers right now is as platforms for fundamental quantum physics," said Ippoliti. "With the unique capabilities of these systems, there's hope that you might discover some new phenomenon that you hadn't predicted."

This work was led by Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University. The full author list is available in the Nature paper.

This research was funded by the Defense Advanced Research Projects Agency (DARPA), a Google Research Award, the Sloan Foundation, the Gordon and Betty Moore Foundation and the Deutsche Forschungsgemeinschaft.

To read all stories about Stanford science, subscribe to the biweekly Stanford Science Digest.

Read more from the original source:
Time crystal in a quantum computer | Stanford News

How Does a Quantum Computer Work? – Scientific American

If someone asked you to picture a quantum computer, what would you see in your mind?

Maybe you see a normal computer, just bigger, with some mysterious physics magic going on inside? Forget laptops or desktops. Forget computer server farms. A quantum computer is fundamentally different in both the way it looks and, more importantly, in the way it processes information.

There are currently several ways to build a quantum computer. But let's start by describing one of the leading designs to help explain how it works.

Imagine a lightbulb filament, hanging upside down, but it's the most complicated light you've ever seen. Instead of one slender twist of wire, it has organized silvery swarms of them, neatly braided around a core. They are arranged in layers that narrow as you move down. Golden plates separate the structure into sections.

The outer part of this vessel is called the chandelier. It's a supercharged refrigerator that uses a special liquified helium mix to cool the computer's quantum chip down to near absolute zero. That's the coldest temperature theoretically possible.

At such low temperatures, the tiny superconducting circuits in the chip take on their quantum properties. And it's those properties, as we'll soon see, that could be harnessed to perform computational tasks that would be practically impossible on a classical computer.

Traditional computer processors work in binary: the billions of transistors that handle information on your laptop or smartphone are either on (1) or they're off (0). Using a series of circuits, called gates, computers perform logical operations based on the state of those switches.

Classical computers are designed to follow specific inflexible rules. This makes them extremely reliable, but it also makes them ill-suited for solving certain kinds of problems, in particular, problems where you're trying to find a needle in a haystack.

This is where quantum computers shine.

If you think of a computer solving a problem as a mouse running through a maze, a classical computer finds its way through by trying every path until it reaches the end.

What if, instead of solving the maze through trial and error, you could consider all possible routes simultaneously?

Quantum computers do this by substituting the binary bits of classical computing with something called qubits. Qubits operate according to the mysterious laws of quantum mechanics: the theory that physics works differently at the atomic and subatomic scale.

The classic way to demonstrate quantum mechanics is by shining a light through a barrier with two slits. Some light goes through the top slit, some the bottom, and the light waves knock into each other to create an interference pattern.

But now dim the light until you're firing individual photons one by one, the elementary particles that comprise light. Logically, each photon has to travel through a single slit, and they've got nothing to interfere with. But somehow, you still end up with an interference pattern.

Here's what happens according to quantum mechanics: Until you detect them on the screen, each photon exists in a state called superposition. It's as though it's traveling all possible paths at once. That is, until the superposition state collapses under observation to reveal a single point on the screen.

Qubits use this ability to do very efficient calculations.

For the maze example, the superposition state would contain all the possible routes. And then you'd have to collapse the state of superposition to reveal the likeliest path to the cheese.

Just like you add more transistors to extend the capabilities of your classical computer, you add more qubits to create a more powerful quantum computer.

Thanks to a quantum mechanical property called entanglement, scientists can push multiple qubits into the same state, even if the qubits aren't in contact with each other. And while individual qubits exist in a superposition of two states, this increases exponentially as you entangle more qubits with each other. So a two-qubit system stores 4 possible values, a 20-qubit system more than a million.
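A small sketch of that counting argument (sizes chosen for illustration): describing an n-qubit state takes 2^n amplitudes, and an entangled 20-qubit state lives in that million-dimensional space even though, in this example, only two of its amplitudes are non-zero.

```python
# How the state space grows as qubits are added.
import numpy as np

for n in (1, 2, 10, 20, 30):
    print(f"{n:2d} qubits -> {2**n:>13,} amplitudes")

# Pushing many qubits "into the same state": a 20-qubit GHZ state has
# 2^20 = 1,048,576 amplitudes, but only two of them are non-zero.
n = 20
state = np.zeros(2 ** n)
state[0] = state[-1] = 1 / np.sqrt(2)      # (|00...0> + |11...1>)/sqrt(2)
print("non-zero amplitudes:", np.count_nonzero(state), "out of", state.size)
```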

So what does that mean for computing power? It helps to think about applying quantum computing to a real-world problem: prime numbers.

A prime number is a natural number greater than 1 that can only be divided evenly by itself or 1.

While it's easy to multiply small numbers into giant ones, it's much harder to go in the reverse direction; you can't just look at a number and tell its factors. This is the basis for one of the most popular forms of data encryption, called RSA.

You can only decrypt RSA security by factoring the product of two prime numbers. Each prime factor is typically hundreds of digits long, and they serve as unique keys to a problem that's effectively unsolvable without knowing the answers in advance.

In 1995, M.I.T. mathematician Peter Shor, then at AT&T Bell Laboratories, devised a novel algorithm for factoring large numbers into their prime factors, whatever the size. One day, a quantum computer could use its computational power, and Shor's algorithm, to hack everything from your bank records to your personal files.

In 2001, IBM made a quantum computer with seven qubits to demonstrate Shor's algorithm. For qubits, they used atomic nuclei, which have two different spin states that can be controlled through radio frequency pulses.

This wasn't a great way to make a quantum computer, because it's very hard to scale up. But it did manage to run Shor's algorithm and factor 15 into 3 and 5. Hardly an impressive calculation, but still a major achievement in simply proving the algorithm works in practice.
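For a sense of what Shor's algorithm actually buys you, here is a sketch of the same example in plain Python (this is not IBM's implementation). The period-finding step, done here by brute force, is the part a quantum computer speeds up; the rest is ordinary classical post-processing.

```python
# The number theory behind Shor's algorithm, on the same example: factoring 15.
from math import gcd

N = 15
a = 7                        # any base with gcd(a, N) == 1 works; 7 is one choice

# Step the quantum computer accelerates: find the period r with a^r mod N == 1.
r = 1
while pow(a, r, N) != 1:
    r += 1
print("period r =", r)        # r = 4

# Classical post-processing: for even r, gcd(a^(r/2) +/- 1, N) yields the factors.
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print("factors:", p, "x", q)  # 3 x 5, as in the 2001 IBM demonstration
```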

Even now, experts are still trying to get quantum computers to work well enough to best classical supercomputers.

That remains extremely challenging, mostly because quantum states are fragile. It's hard to completely stop qubits from interacting with their outside environment, even with precise lasers in supercooled or vacuum chambers.

Any noise in the system leads to a state called decoherence, where superposition breaks down and the computer loses information.

A small amount of error is natural in quantum computing, because we're dealing in probabilities rather than the strict rules of binary. But decoherence often introduces so much noise that it obscures the result.

When one qubit goes into a state of decoherence, the entanglement that enables the entire system breaks down.

So how do you fix this? The answer is called error correction, and it can happen in a few ways.

Error Correction #1: A fully error-corrected quantum computer could handle common errors like bit flips, where a qubit suddenly changes to the wrong state.

To do this you would need to build a quantum computer with a few so-called logical qubits that actually do the math, and a bunch of standard qubits that correct for errors.

It would take a lot of error-correcting qubits, maybe 100 or so per logical qubit, to make the system work. But the end result would be an extremely reliable and generally useful quantum computer.

Error Correction #2: Other experts are trying to find clever ways to see through the noise generated by different errors. They are trying to build what they call "noisy intermediate-scale quantum" computers using another set of algorithms.

That may work in some cases, but probably not across the board.

Error Correction #3: Another tactic is to find a new qubit source that isn't as susceptible to noise, such as topological particles that are better at retaining information. But some of these exotic particles (or quasi-particles) are purely hypothetical, so this technology could be years or decades off.

Because of these difficulties, quantum computing has advanced slowly, though there have been some significant achievements.

In 2019, Google used a 54-qubit quantum computer named Sycamore to do an incredibly complex (if useless) simulation in under 4 minutes: running a quantum random number generator a million times to sample the likelihood of different results.

Sycamore works very differently from the quantum computer that IBM built to demonstrate Shors algorithm. Sycamore takes superconducting circuits and cools them to such low temperatures that the electrical current starts to behave like a quantum mechanical system. At present, this is one of the leading methods for building a quantum computer, alongside trapping ions in electric fields, where different energy levels similarly represent different qubit states.

Sycamore was a major breakthrough, though many engineers disagree about exactly how major. Google said it was the first demonstration of so-called quantum advantage: achieving a task that would have been impossible for a classical computer.

It said the world's best supercomputer would have needed 10,000 years to do the same task. IBM has disputed that claim.

At least for now, serious quantum computers are a ways off. But with billions of dollars of investment from governments and the world's biggest companies, the race for quantum computing capabilities is well underway. The real question is: how will quantum computing change what a computer actually means to us? How will it change how our electronically connected world works? And when?

Original post:
How Does a Quantum Computer Work? - Scientific American