Archive for the ‘Quantum Computing’ Category

Who will dominate the tech arms race? – The Jerusalem Post

"It is almost impossible to overstate what a quantum computer will be able to do," Christopher Monroe told the Magazine in a recent interview.

Monroe, a professor at both the University of Maryland and Duke University as well as co-founder of the quantum computing company IonQ, discussed how quantum computing will change the face of the planet, even if this might take some more time.

The Magazine also interviewed four other experts in the quantum field and visited seven of their labs at the University of Maryland.


These labs, the full likes of which do not yet exist in Israel, hosted all kinds of qubits (the basis of quantum computers), lasers blasting targets so that plasma comes off and forms distinctive films, infrared lasers, furnaces reaching 2,000°C, a tetra arc furnace for growing silicon crystals, special dilution refrigerators to achieve cryostorage (deep freezing) and a variety of vacuum chambers that would seem like an alternate reality to the uninitiated.

Before entering each lab, there needed to be a conversation about whether this reporter should be wearing the special goggles that were handed out to avoid getting blinded.

One top quantum official at Maryland, Prof. Dr. Johnpierre Paglione, assured the Magazine that the ultrahazardous materials warning on many of the lab doors was not a concern at that moment.

From cracking the Internet as we know it, to military and economic dominance, to changing the way people manage their lives, quantum computers are predicted to make mincemeat of today's supercomputers. Put simply, they are made out of and operate from a completely different kind of material and set of principles connected to qubits and quantum mechanics, with computing potential that dwarfs classical computers' capabilities.

But let's say the US wins the race: who in the US would win it? Would it be giants like Google, Microsoft, Amazon, IBM and Honeywell? Or might it be a lean and fast, solely quantum-focused challenger like Monroe's IonQ?

At first glance, Google has no real challenger. In 2019, Google said it achieved quantum supremacy when its quantum computer became the first to perform a calculation that would be practically impossible for a classical machine, by checking the outputs from a quantum random-number generator.

The search-engine giant has already built a 54-qubit computer, whereas IonQ's largest quantum computer only has 32 qubits. Google has also promised to achieve the holy grail of quantum computing, a system large enough to revolutionize the Internet, military and economic issues, by 2029. Although China recently reproduced Google's experiment, Google is still regarded as ahead of the game.

Why is a 32-qubit quantum computer better than a 54-qubit one?

So why is Monroe so confident that his company will finish the race long before Google?

First, he takes a shot at the Google 2019 experiment.

"It was a fairly academic exercise. The problem they attacked was one of those rare problems where you can prove something and you can prove the supercomputer cannot do it. Quantum mechanics works. It is not a surprise. The problem Google tackled was utterly useless. The system was not flexible enough to program to hit other problems. So a big company did a big academic demonstration," he said with a sort of whoop-dee-do tone and expression on his face.

"Google had to repeat its experiment millions of times. The signal went down by orders of magnitude. There are special issues to get the data. There are general problems where it cannot maintain [coherence]. The Google experiment and qubits decayed by seven times the constant. We gauge on one time for the constant and we can do 100 operations with IonQ's quantum computers."

In radioactive decay, the time constant is related to the decay constant and essentially represents the average lifetime of a decaying system, such as an atom. Some of the tactics for potentially overcoming decay go back to the lasers, vacuum chambers and cryostorage refrigerators mentioned above.
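In standard notation (a textbook relation, not anything specific to IonQ's hardware), an exponentially decaying quantity obeys

```latex
N(t) = N_0\, e^{-\lambda t}, \qquad \tau = \frac{1}{\lambda}
```

so after one time constant $\tau$ the signal has fallen to $1/e$, roughly 37%, of its starting value. Qubit coherence is usually quoted against an analogous characteristic time, which appears to be the sense in which Monroe compares "one time for the constant" across platforms.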

Monroe said that, from a business perspective, "the experiment was a big distraction, and you will hear this from Google computer employees. They had to run simulations to prove how hard it would be to do what they were doing with old computers instead of building better quantum computers and solving useful algorithms."

"We believe quantum computers work; now it is time to build them," he stressed.

Describing IonQ's quantum computers, Monroe said, "The 32-qubit computer is fifth generation. The third and fourth generation is available to [clients of] Microsoft, Amazon and Google Cloud. It is 11 qubits, which is admittedly small, but it still runs more than any IBM machine can run. An 11-qubit computer is very clean operationally. It can run 100 or so ops [operations] before the laser noise causes coherence to be lost [before the qubits stop working]. That is many more ops than superconductors. If [a computer] has one million qubits, but can only run a few ops, it is boring. But with trapped ions, adding more qubits at the same time makes things cheaper."

He added, "The 32-qubit computer is not yet on the cloud. We are working in private with customers' financials," noting that a future publication will discuss "the baby version of an algorithm which could be very interesting when you start to scale it up. Maybe in the next generation, we can engineer it to solve an optimization problem, something we don't get from the cloud, where we don't get any telemetry, which would be an unusual benefit for clients."

According to Monroe, the fact that he will be able to build a 1,000-qubit computer by 2025 (practically tomorrow in the sphere of new inventions) will in and of itself be game-changing. This is true even if it is not yet capable of accomplishing all the extreme miracles that much larger quantum computers may someday accomplish.

A major innovation, or risk (depending on your worldview), by Monroe is how he treats the paramount challenge of quantum computers: error correction. This is basically the idea that, for quantum computers to work, some process must be conceived to prevent qubits from decaying at the rate they currently do; otherwise crucial calculations get interrupted midway.

Here, Monroe both critiques the Google approach and responds to criticism from some of his academic colleagues about his own approach to error correction. Google, in his telling, is trying to get to one million qubits that do not work well together.

In contrast, a special encoding process could allow IonQ to create what Monroe called a single sort of super qubit, which would eliminate 99.9% of native errors. This is the easiest way to get better at quantum computing, as opposed to the quantity over quality path Google is pursuing.

But he has to defend himself from others poking holes in his approach as unrealistic, including some of his colleagues at the University of Maryland (all sides still express great respect for each other). Confronted by this criticism, he responded that their path of attack was based on the theory of error correction. "It implies that you will do indefinitely long computations, [but] no one will ever need this high a standard to do business."

"We do not use error correction on our CPU [central processing unit] because silicon is so stable. We call it OK if it fails in one year, since that is more than enough time to be economically worthwhile." Instead of trying to eliminate errors, his strategy is to gradually add more qubits, which achieves slightly more substantial results. His goal is to work around the error-correction problem.

Part of the difference between Monroe and his academic colleagues relates to his having crossed over into a mix of business and academia. Monroe's view on this issue? "Industry and academia do not always see things the same way. Academics are trained to prove everything we do. But if a computer works better to solve a certain problem, we do not need to prove it."

For example, if a quantum computer doubled the value of a financial portfolio compared to a supercomputer's financial recommendations, the client is thrilled, even if no one knows how.

He said that when shortcuts solve problems, and certain things cannot be proven but quantum computing finds value, "academics hate it. They are trained to be pessimists. I do believe quantum computers will find narrow applications within five years."

Besides error correction, another question is what the qubits themselves, the basis of different kinds of quantum computers, should be made out of. The technique that many of his competitors are using to make computers out of a particular kind of qubit has the benefit of being not hard to do, inexpensive and representing beautiful physics.

However, he warned, "No one knows where to find it if it exists... So stay in solid-state physics and build computers out of solid-state systems. Google, Amazon and others are all invested in solid-state computers. But I don't see it happening without fundamental physics breakthroughs. If you want to build and engineer a device, if you want to have a business, you should not be reliant on physics breakthroughs."

Instead of the path of his competitors, Monroe emphasized working with natural quantum atoms and tricking and engineering them to act how he wants using low pressure instead of low temperatures.

"I work with charged atoms or ions. We levitate them inside a vacuum chamber which is getting smaller every year. We have a silicon chip. Just electrodes; electric force fields are holding up these atoms. There are no solids and no air in the vacuum chamber, which means the atoms remain extremely well isolated. They are the most perfect atoms we know, so we can scale without worrying about the top of the noise [the threshold where qubits decay]. We can pick qubit levels that do not yet decay."

"Why aren't Google and IBM investing in natural qubits? Because they have a blind spot. They have been first in solid-state physics and engineering for 50 years. If there is a silicon solid-state quantum computer, Intel will make that, but I don't see how it will be scaled," he declared.

MONROE IS far from the full quantum show at Maryland.

Paglione has been a professor at University of Maryland for 13 years and the director of the Maryland Quantum Materials Center for the last five years.

"In 1986, the center was working on high-temperature superconductors," Paglione said, noting that work on quantum computers is a more recent development. The development has not merely altered the focus of the center's research. According to Paglione, it has also helped grow the center from around seven staff members 30 years ago to around 100 staff members when all of the affiliate members, students and administrative staff are taken into account.

Similarly, Dr. Gretchen Campbell, director of the Joint Quantum Institute, told the Magazine that a big part of her institution's role and her personal role has been to first bring together people from atomic physics and condensed-matter physics ("even within physics, we do not always talk to each other"), followed by connecting these experts with computer science experts.

Campbell explained it was crucial to explore the interaction between the quantum realm and quantum algorithms, for which they needed more math and computer science backgrounds, and to continue moving from laboratories to real-world applications, translating into technology and interacting more with industry.

She also guided the Magazine, goggles donned, through a lab with a digital micromirror device and laser beams relating to atom clouds and light projectors.

Add in some additional departments at Maryland, as well as a partnership with the National Institute of Standards and Technology (NIST), and the number of staff swells way past 100. What are their many different teams working on? The lab studies and experiments are as varied as the different disciplines, with Paglione talking about possibilities for making SQUID devices, sensitive magnetic sensors that could be constructed by using a superconducting quantum interference device.

Paglione said magnetometer systems could be used with SQUIDs to sense the magnetic field of samples. These could be used as detectors in water. If they were made sensitive enough, they could sense changes in a magnetic field, such as when a submarine passes by and generates a changed magnetic field.

This has drawn attention from the US Department of Defense.

A multidisciplinary mix of Paglione's team recently captured the most direct evidence to date of a quantum quirk, which permits particles to tunnel through a barrier as if it is not even there. The upshot could be assisting engineers in designing more uniform components to build both future quantum computers and quantum sensors (reported applications could detect not only submarines but aircraft).

Paglione's team, headed by Ichiro Takeuchi, a professor of materials science and engineering at Maryland, successfully carried out a new experiment in which they observed Klein tunneling. In the quantum world, tunneling enables particles, such as electrons, to pass through a barrier even if they lack sufficient energy to actually climb over it. A taller barrier usually makes climbing over harder, and fewer particles are able to cross through. The phenomenon known as Klein tunneling happens when the barrier becomes completely transparent and opens up a portal that particles can traverse regardless of the barrier's height.
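For contrast, the standard quantum-mechanics result for an ordinary (non-relativistic) barrier shows why height normally matters, while Klein tunneling removes that suppression; these are textbook formulas, not taken from the Maryland paper:

```latex
T_{\text{ordinary}} \approx e^{-2\kappa L}, \qquad
\kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}, \qquad
T_{\text{Klein}}\big|_{\text{normal incidence}} = 1
```

Here $V_0$ is the barrier height, $L$ its width and $E$ the particle's energy: raising $V_0$ exponentially suppresses ordinary tunneling, whereas a massless Dirac-like particle hitting the barrier head-on is transmitted with certainty regardless of $V_0$.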

Scientists and engineers from Maryland's Center for Nanophysics and Advanced Materials, the Joint Quantum Institute and the Condensed Matter Theory Center, along with the Department of Materials Science and Engineering and the Department of Physics, succeeded in making the most compelling measurements of the phenomenon to date.

Given that Klein tunneling was initially predicted to occur in the world of high-energy quantum particles moving close to the speed of light, observing the effect was viewed as impossible. That was until scientists revealed that some of the rules governing fast-moving quantum particles can also apply to the comparatively sluggish particles traveling near the surface of some highly unusual materials.

"It was a piece of serendipity that the unusual material and an elemental relative of sorts shared the same crystal structure," said Paglione. "However, the multidisciplinary team we have was one of the keys to this success. Having experts on topological physics, thin-film synthesis, spectroscopy and theoretical understanding really got us to this point."

Bringing this back to quantum computing, the idea is that interactions between superconductors and other materials are central ingredients in some quantum computer architectures and precision-sensing devices. Yet there has always been a problem that the junction, or crossover spot, where they interact is slightly different from device to device. Takeuchi said this led to "sucking up countless amounts of time and energy tuning and calibrating to reach the best performance."

Takeuchi said Klein tunneling could eliminate this variability, which has played havoc with device-to-device interactions.

AN ENTIRELY separate quantum application could be physics department chairman Prof. Steve Rolston's work on establishing a quantum communications network. Rolston explained that when a pair of photons is quantum entangled, you can achieve quantum encryption over a communications network by using the entangled particles to create secure keys that cannot be hacked. There are varying paths to achieve such a quantum network, and Rolston is skeptical of others in the field who could be seen as cutting corners.
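To make the idea concrete, here is a minimal, idealized simulation of entanglement-based key distribution in the spirit of the BBM92 protocol (the protocol choice, the perfect-correlation model and the absence of noise or an eavesdropper are all simplifying assumptions, not details from Rolston's work):

```python
import secrets

def sift_key(n_pairs: int = 1000) -> tuple[list[int], list[int]]:
    """Toy entanglement-based key distribution (BBM92 flavor), ideal case.

    Assumption: a perfect Bell-pair source. When Alice and Bob happen to pick
    the same measurement basis, their outcomes are perfectly correlated; rounds
    with mismatched bases are publicly discarded ("sifting").
    """
    alice_key, bob_key = [], []
    for _ in range(n_pairs):
        a_basis = secrets.randbelow(2)   # Alice's random basis choice
        b_basis = secrets.randbelow(2)   # Bob's random basis choice
        outcome = secrets.randbelow(2)   # shared random outcome if bases match
        if a_basis == b_basis:
            alice_key.append(outcome)
            bob_key.append(outcome)
    return alice_key, bob_key

alice, bob = sift_key()
assert alice == bob   # identical secret keys, never transmitted over the channel
print(f"kept {len(alice)} of 1000 pairs after sifting")
```

In a real network, a subset of the sifted bits would be compared publicly to check for eavesdropping before the rest is used as a key, which is the security property Rolston describes.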

He also is underwhelmed by China's achievements in this area. According to Rolston, no one has figured out how to extend a secure quantum network over any space sizable enough to make the network usable and marketable in practical terms.

Rather, he said existing quantum networks are either limited to very small spaces, or to extend their range they must employ gimmicks that usually impair how secure they are. Because of these limitations, Rolston went as far as to say that his view is that the US National Security Agency views the issue as a distraction.

In terms of export trade barriers or issues with China, he said he opposes controls and believes cooperation in the quantum realm should continue, especially since all of his centers research is made public anyway.

Rolston also lives up to Monroe's framing of the difference between academics and industry-focused people. He said that even Monroe would have to admit that no one is close to the true holy grail of quantum computers (computers with a massive number of qubits) and that the IonQ founder is instead banking on interesting optimization problems being solvable for industry to an extent which will justify the hype.

In contrast, Rolston remained pessimistic that such smaller quantum computers would achieve sufficient superiority at optimization issues in business to justify a rushed prediction that transforming the world is just around the corner.

In Rolston's view, the longer, more patient and steadier path is the one that will eventually reap rewards.

For the moment, we do not know whether Google or IonQ, or those like Monroe or Rolston will eventually be able to declare they were right. We do know that whoever is right and whoever is first will radically change the world as we know it.

Originally posted here:
Who will dominate the tech arms race? - The Jerusalem Post

Why Quantum Resistance Is the Next Blockchain Frontier – Tech Times


As decentralized networks secured by potentially thousands of miners and/or nodes, blockchains are widely considered to be an incredibly secure example of distributed ledger technology.

On the back of this, they also have dozens of potential applications - ranging from decentralized content storage networks, to medical records databases, and supply chain management. But to this day, they're most commonly thought of as the ideal platform hosting the financial infrastructure of tomorrow - such as decentralized exchanges and payment settlement networks.

But there's a problem. While the blockchains of today are practically unhackable - due to the type of encryption they use to secure private keys and transactions - this might not be the case for much longer. This is due to the advent of so-called "quantum computers", that is, computers that can leverage the properties of quantum mechanics to solve problems that would be impossible with traditional computers... such as breaking the cryptography that secures current generation blockchains.

Many blockchains of today use at least two types of cryptographic algorithms - asymmetric key algorithms and hash functions.

The first kind, also known as public-key cryptography, is used to produce pairs of private and public keys that are provably cryptographically linked. In Bitcoin, this private key is used to spend UTXOs - thereby transferring value from one person to another. The second kind - the hash function - is used to securely process raw transaction data into a block in a way that is practically irreversible.
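As a rough illustration of the second primitive, the sketch below uses only Python's standard library to compute a Bitcoin-style double SHA-256 digest (the transaction string is invented for the example):

```python
import hashlib

def double_sha256(data: bytes) -> str:
    """Bitcoin-style double SHA-256: hash the hash, return a hex digest."""
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

# Made-up payload; real transactions are binary-serialized structures.
tx = b"alice pays bob 0.5 BTC, nonce=42"
print(double_sha256(tx))          # deterministic 256-bit fingerprint
print(double_sha256(tx + b"!"))   # any tiny change yields an unrelated digest
```

Computing the digest is cheap; recovering the input from it is what is practically irreversible with classical hardware.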

As you might imagine, a sufficiently powerful quantum computer capable of breaking either of these security mechanisms could have devastating consequences for susceptible blockchains - since it could potentially be used to derive private keys or even mine cryptocurrency units much faster than the expected rate (leading to supply inflation).

So, just how far away from this are we? Well, according to recent estimates, a quantum computer possessing 4,000 qubits of processing power could be the minimum necessary to break the public key cryptography that secures Bitcoin user funds. A sufficiently flexible quantum computer with this processing power could, theoretically, take over the funds contained in any Bitcoin p2pk address - that's a total of around 2 million BTC (circa $67 billion at today's rates).

Fortunately, this isn't an immediate concern. As it stands, the world's most powerful quantum computer - the Zuchongzhi quantum computer - currently clocks in at an impressive (albeit insufficient) 66 qubits. However, given the rapid pace of development in the quantum computing sector, some experts predict that Bitcoin's Elliptic Curve Digital Signature Algorithm (ECDSA) could meet its quantum match within a decade.


An algorithm that could potentially be used to break ECDSA has already been developed. If generalized and run on a powerful enough quantum computer, Peter Shor's polynomial-time quantum algorithm is widely thought to be capable of attacking the Bitcoin blockchain - while similar algorithms could be applied to other forms of traditional encryption.
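Shor's speed-up comes from a quantum subroutine for order finding. The brute-force classical stand-in below (with tiny, made-up numbers, nowhere near cryptographic sizes) shows what that subroutine computes and how the answer is turned into factors:

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1. This is the step a quantum computer
    running Shor's algorithm performs in polynomial time; brute force is
    exponential in the bit-length of n."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def try_factor(n: int, a: int):
    """Classical post-processing: use the order of a modulo n to split n."""
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)   # lucky guess already shares a factor
    r = find_order(a, n)
    if r % 2 == 1:
        return None                        # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                        # trivial square root: retry
    return gcd(y - 1, n), gcd(y + 1, n)

print(try_factor(15, 7))   # (3, 5) for this toy case
```

Breaking ECDSA uses the same quantum machinery applied to the discrete-logarithm problem rather than to factoring, but the ingredient that threatens the cryptography is analogous.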

But this might not be a concern for much longer, thanks to the introduction of what many consider to be the world's first truly quantum-resistant blockchain. The platform, known as QANplatform, is built to resist all known quantum attacks by using lattice cryptography. QAN manages to achieve quantum resistance while simultaneously tackling the energy concerns that come with some other blockchains through its highly efficient consensus mechanism known as Proof-of-Randomness (PoR).

Unlike some other so-called quantum-resistant blockchains, QAN is unusual in that it also supports decentralized applications (DApps) - allowing developers to launch quantum-resistant DApps within minutes using its free developer tools.

Besides platforms like QAN, the development communities behind several popular blockchains are already beginning to consider implementing their own quantum-resistance solutions, such as the recently elaborated commit-delay-reveal scheme - which could be used to transition Bitcoin to a quantum-resistant state. Nonetheless, the future of post-quantum cryptography still remains up in the air, as none of the top ten blockchains by user count have yet committed to a specific quantum-resistant signature scheme.


Read this article:
Why Quantum Resistance Is the Next Blockchain Frontier - Tech Times

Life, the universe and everything: Physics seeks the future – The Economist

Aug 25th 2021

A WISE PROVERB suggests not putting all your eggs in one basket. Over recent decades, however, physicists have failed to follow that wisdom. The 20th century, and indeed the 19th before it, were periods of triumph for them. They transformed understanding of the material universe and thus people's ability to manipulate the world around them. Modernity could not exist without the knowledge won by physicists over those two centuries.


In exchange, the world has given them expensive toys to play with. The most recent of these, the Large Hadron Collider (LHC), which occupies a 27km-circumference tunnel near Geneva and cost $6bn, opened for business in 2008. It quickly found a long-predicted elementary particle, the Higgs boson, that was a hangover from calculations done in the 1960s. It then embarked on its real purpose, to search for a phenomenon called Supersymmetry.

This theory, devised in the 1970s and known as Susy for short, is the all-containing basket into which particle physics's eggs have until recently been placed. Of itself, it would eliminate many arbitrary mathematical assumptions needed for the proper working of what is known as the Standard Model of particle physics. But it is also the vanguard of a deeper hypothesis, string theory, which is intended to synthesise the Standard Model with Einstein's general theory of relativity. Einstein's theory explains gravity. The Standard Model explains the other three fundamental forces (electromagnetism and the weak and strong nuclear forces) and their associated particles. Both describe their particular provinces of reality well. But they do not connect together. String theory would connect them, and thus provide a so-called theory of everything.

String theory proposes that the universe is composed of minuscule objects which vibrate in the manner of the strings of a musical instrument. Like such strings, they have resonant frequencies and harmonics. These various vibrational modes, string theorists contend, correspond to various fundamental particles. Such particles include all of those already observed as part of the Standard Model; the further particles predicted by Susy, which posits that the Standard Model's mathematical fragility will go away if each of that model's particles has a heavier supersymmetric partner particle, or sparticle; and also particles called gravitons, which are needed to tie the force of gravity into any unified theory, but are not predicted by relativity.

But, no Susy, no string theory. And, 13 years after the LHC opened, no sparticles have shown up. Even two as-yet-unexplained results announced earlier this year (one from the LHC and one from a smaller machine) offer no evidence directly supporting Susy. Many physicists thus worry they have been on a wild-goose chase.

They have good reason to be nervous. String theory already comes with a disturbing conceptual price tag: that of adding six (or in one version seven) extra dimensions to the universe, over and above the four familiar ones (three of space and one of time). It also describes about 10^500 possible universes, only one of which matches the universe in which human beings live. Accepting all that is challenging enough. Without Susy, though, string theory goes bananas. The number of dimensions balloons to 26. The theory also loses the ability to describe most of the Standard Model's particles. And it implies the existence of weird stuff such as particles called tachyons that move faster than light and are thus incompatible with the theory of relativity. Without Susy, string theory thus looks pretty much dead as a theory of everything. Which, if true, clears the field for non-string theories of everything.

The names of many of these do, it must be conceded, torture the English language. They include causal dynamical triangulation, asymptotically safe gravity, loop quantum gravity and the amplituhedron formulation of quantum theory. But at the moment the bookies' favourite for unifying relativity and the Standard Model is something called entropic gravity.

Entropy is a measure of a system's disorder. Famously, the second law of thermodynamics asserts that it increases with time (ie, things have a tendency to get messier as they get older). What that has to do with a theory of gravity, let alone of everything, is not, perhaps, immediately obvious. But the link is black holes. These are objects which have such strong gravitational fields that even light cannot escape from them. They are predicted by the mathematics of general relativity. And even though Einstein remained sceptical about their actual existence until the day he died in 1955, subsequent observations have shown that they are indeed real. But they are not black.

In 1974 Stephen Hawking, of Cambridge University, showed that quantum effects at a black hole's boundary allow it to radiate particles, especially photons, which are the particles of electromagnetic radiation, including light. This has peculiar consequences. Photons carry radiant heat, so something which emits them has a temperature. And, from its temperature and mass, it is possible to calculate a black hole's entropy. This matters because, when all these variables are plugged into the first law of thermodynamics, which states that energy can be neither created nor destroyed, only transformed from one form (say, heat) into another (say, mechanical work), what pops out are Einstein's equations of general relativity.
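The textbook formulas behind that argument (standard results, not derived in the article) tie a black hole's temperature and entropy to its mass and horizon area:

```latex
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8\pi G M k_{\mathrm{B}}},
\qquad
S_{\mathrm{BH}} = \frac{k_{\mathrm{B}}\, c^{3} A}{4 G \hbar}
```

Here $M$ is the hole's mass and $A$ the area of its event horizon; plugging these into the thermodynamic relation $dE = T\,dS$ is the step the article describes as making Einstein's equations pop out.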

That relationship was discovered in 2010 by Erik Verlinde of Amsterdam University. It has serious implications. The laws of thermodynamics rely on statistical mechanics. They involve properties (temperature, entropy and so on) which emerge from probabilistic descriptions of the behaviour of the underlying particles involved. These are also the particles described by quantum mechanics, the mathematical theory which underpins the Standard Model. That Einstein's equations can be rewritten thermodynamically implies that space and time are also emergent properties of this deeper microscopic picture. The existing forms of quantum mechanics and relativity thus do indeed both seem derivable in principle from some deeper theory that describes the underlying fabric of the universe.

String theory is not so derivable. Strings are not fundamental enough entities. But entropic gravity claims to describe the very nature of space and time, or, to use Einsteinian terminology, spacetime. It asserts this is woven from filaments of quantum entanglement linking every particle in the cosmos.

The idea of quantum entanglement, another phenomenon pooh-poohed by Einstein that turned out to be true, goes back to 1935. It is that the properties of two or more objects can be correlated (entangled) in a way which means they cannot be described independently. This leads to weird effects. In particular, it means that two entangled particles can appear to influence each other's behaviour instantaneously even when they are far apart. Einstein dubbed this "spooky action at a distance", because it seems to violate the premise of relativity theory that, in the speed of light, the universe has a speed limit.

As with black holes, Einstein did not live long enough to see himself proved wrong. Experiments have nevertheless shown he was. Entanglement is real, and does not violate relativity because although the influence of one particle on another can be instantaneous there is no way to use the effect to pass information faster than light-speed. And, in the past five years, Brian Swingle of Harvard University and Sean Carroll of the California Institute of Technology have begun building models of what Dr Verlindes ideas might mean in practice, using ideas from quantum information theory. Their approach employs bits of quantum information (so-called qubits) to stand in for the entangled particles. The result is a simple but informative analogue of spacetime.

Qubits, the quantum equivalent of classical bits (the ones and zeros on which regular computing is built), will be familiar to those who follow the field of quantum computing. They are the basis of quantum information theory. Two properties distinguish qubits from the regular sort. First, they can be placed in a state of superposition, representing both a one and a zero at the same time. Second, several qubits can become entangled. Together, these properties let quantum computers accomplish feats such as performing multiple calculations at once, or completing certain classes of calculation in a sensible amount of time, that are difficult or impossible for a regular computer.
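In the standard notation of quantum information theory (conventions assumed here, not spelled out in the article), a single qubit and an entangled pair are written as:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\quad |\alpha|^{2} + |\beta|^{2} = 1,
\qquad
|\Phi^{+}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}
```

The first expression is superposition: the qubit carries amplitudes for both 0 and 1 at once. The second is a maximally entangled Bell state, which cannot be factored into two independent single-qubit states; the degree of such entanglement is what the spacetime models described below use as a proxy for distance.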

And because of their entanglement qubits can also, according to Dr Swingle and Dr Carroll, be used as stand-ins for how reality works. More closely entangled qubits represent particles at points in spacetime that are closer together. So far, quantum computers being a work in progress, this modelling can be done only with mathematical representations of qubits. These do, though, seem to obey the equations of general relativity. That supports entropic gravity theory's claims.

All of this modelling puts entropic gravity in pole position to replace strings as the long-sought theory of everything. But the idea that spacetime is an emergent property of the universe rather than being fundamental to it has a disturbing consequence. It blurs the nature of causality.

In the picture built by entropic gravity, spacetime is a superposition of multiple states. It is this which muddies causality. The branch of maths that best describes spacetime is a form of geometry that has four axes at right angles to each other instead of the more familiar three. The fourth represents time, so, like the position of objects, the order of events in spacetime is determined geometrically. If different geometric arrangements are superposed, as entropic gravity requires, it can therefore sometimes happen that the statements A causes B and B causes A are both true.

This is not mere speculation. In 2016 Giulia Rubino of the University of Bristol, in England, constructed an experiment involving polarised photons and prisms which achieved exactly that. This spells trouble for those who have old-fashioned notions about causality's nature.

However, Lucien Hardy of the Perimeter Institute, in Canada, has discovered a way to reformulate the laws of quantum mechanics to get around this. In his view, causality as commonly perceived is like data compression in computing: it is a concept that gives you more bang for your buck. With a little bit of information about the present, causality can infer a lot about the future, compressing the amount of information needed to capture the details of a physical system in time.

But causality, Dr Hardy thinks, may not be the only way to describe such correlations. Instead, he has invented a general method for building descriptions of the patterns in correlations from scratch. This method, which he calls the causaloid framework, tends to reproduce causality but it does not assume it, and he has used it to reformulate both quantum theory (in 2005) and general relativity (in 2016). Causaloid maths is not a theory of everything. But there is a good chance that if and when such a theory is found, causaloid principles will be needed to describe it, just as general relativity needed a geometry of four dimensions to describe spacetime.

Entropic gravity has, then, a lot of heavy-duty conceptual work to back it up. But it is not the only candidate to replace string theory. Others jostling for attention include an old competitor called loop quantum gravity, originally proposed in 1994 by Carlo Rovelli, then at the University of Pittsburgh, and Lee Smolin, of the Perimeter Institute. This, and causal dynamical triangulation, a more recent but similar idea, suggest that spacetime is not the smooth fabric asserted by general relativity, but, rather, has a structureeither elementary loops or triangles, according to which of the two theories you support.

A third option, asymptotically safe gravity, goes back still further, to 1976. It was suggested by Steven Weinberg, one of the Standard Model's chief architects. A natural way to develop a theory of quantum gravity is to add gravitons to the model. Unfortunately, this approach got nowhere, because when the interactions of these putative particles were calculated at higher energies, the maths seemed to become nonsensical. However, Weinberg, who died in July, argued that this apparent breakdown would go away (in maths speak, the calculations would be "asymptotically safe") if sufficiently powerful machines were used to do the calculating. And, with the recent advent of supercomputers of such power, it looks, from early results, as if he might have been right.

One of the most intriguing competitors of entropic gravity, though, is the amplituhedron formulation of quantum theory. This was introduced in 2013 by Nima Arkani-Hamed of the Institute of Advanced Study at Princeton and Jaroslav Trnka of the University of California, Davis. They have found a class of geometric structures dubbed amplituhedrons, each of which encodes the details of a possible quantum interaction. These, in turn, are facets of a master amplituhedron that encodes every possible type of physical process. It is thus possible to reformulate all of quantum theory in terms of the amplituhedron.

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

That space, time and even causality may be emergent rather than fundamental properties of the cosmos is a radical idea. But this is the point. General relativity and quantum mechanics, the physics revolutions of the 20th century, were viewed as profound precisely because they overthrew common sense. To accept relativity meant abandoning a universal notion of time and space. To take quantum mechanics seriously meant getting comfortable with ideas like entanglement and superposition. Embracing entropic gravity or its alternatives will require similar feats of the imagination.

No theory, though, is worth a damn without data. That, after all, is the problem with Supersymmetry. Work like Dr Rubino's points the way. But something out of a particle-physics laboratory would also be welcome. And, though their meaning is obscure, the past few months have indeed seen two experimentally induced cracks in the Standard Model.

On March 23rd a team from CERN, the organisation that runs the LHC, reported an unexpected difference in behaviour between electrons and their heavier cousins, muons. These particles differ from one another in no known properties but their masses, so the Standard Model predicts that when other particles decay into them, the two should each be produced in equal numbers. But this appears not to be true. Interim results from the LHC suggest that a type of particle called a B-meson is more likely to decay into an electron than a muon. That suggests an as-yet-undescribed fundamental force is missing from the Standard Model. Then, on April 7th, Fermilab, America's biggest particle-physics facility, announced the interim results of its own muon experiment, Muon g-2.

In the quantum world, there is no such thing as a perfect vacuum. Instead, a froth of particles constantly pops in and out of existence everywhere in spacetime. These are virtual rather than real particles; that is, they are transient fluctuations which emerge straight out of quantum uncertainty. But, although they are short-lived, during the brief periods of their existence they still have time to interact with more permanent sorts of matter. They are, for example, the source of the black-hole radiation predicted by Hawking.

The strengths of their interactions with types of matter more conventional than black holes are predicted by the Standard Model, and to test these predictions, Muon g-2 shoots muons in circles around a powerful superconducting magnetic-storage ring. The quantum froth changes the way the muons wobble, which detectors can pick up with incredible precision. The Muon g-2 experiment suggests that the interactions causing these wobbles are slightly stronger than the Standard Model predicts. If confirmed, this would mean the model is missing one or more elementary particles.
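The quantity being compared is the muon's anomalous magnetic moment, conventionally defined as

```latex
a_{\mu} \equiv \frac{g_{\mu} - 2}{2}
```

where $g_{\mu}$ is the muon's gyromagnetic factor (exactly 2 for a pointlike particle untouched by the quantum froth). The Standard Model predicts $a_{\mu}$ from the virtual-particle corrections, the experiment infers it from how fast the muons' spins precess in the storage ring, and the tension is between those two numbers.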

There is a slim chance that these are the absent sparticles. If so, it is the supporters of supersymmetry who will have the last laugh. But nothing points in this direction and, having failed thus far to stand their ideas up, they are keeping sensibly quiet.

Whatever the causes of these two results, they do show that there is something out there which established explanations cannot account for. Similarly unexplained anomalies were starting points for both quantum theory and relativity. It looks possible, therefore, that what has seemed one of physics's darkest periods is about to brighten into a new morning.

This article appeared in the Science & technology section of the print edition under the headline "Bye, bye, little Susy"

Originally posted here:
Life, the universe and everything Physics seeks the future - The Economist

This Exotic Particle Had an Out-of-Body Experience; These Surprised Scientists Took a Picture of It – SciTechDaily

Artist's illustration of ghost particles moving through a quantum spin liquid. Credit: Jenny Nuss/Berkeley Lab

An unexpected finding by scientists at Berkeley Lab and UC Berkeley could advance quantum computers and high-temperature superconductors.

Scientists have taken the clearest picture yet of electronic particles that make up a mysterious magnetic state called a quantum spin liquid (QSL).

The achievement could facilitate the development of superfast quantum computers and energy-efficient superconductors.

The scientists are the first to capture an image of how electrons in a QSL decompose into spin-like particles called spinons and charge-like particles called chargons.


"Other studies have seen various footprints of this phenomenon, but we have an actual picture of the state in which the spinon lives. This is something new," said study leader Mike Crommie, a senior faculty scientist at Lawrence Berkeley National Laboratory (Berkeley Lab) and physics professor at UC Berkeley.

"Spinons are like ghost particles. They are like the Big Foot of quantum physics: people say that they've seen them, but it's hard to prove that they exist," said co-author Sung-Kwan Mo, a staff scientist at Berkeley Lab's Advanced Light Source. "With our method we've provided some of the best evidence to date."

In a QSL, spinons freely move about carrying heat and spin but no electrical charge. To detect them, most researchers have relied on techniques that look for their heat signatures.

Now, as reported in the journal Nature Physics, Crommie, Mo, and their research teams have demonstrated how to characterize spinons in QSLs by directly imaging how they are distributed in a material.

Schematic of the triangular spin lattice and star-of-David charge density wave pattern in a monolayer of tantalum diselenide. Each star consists of 13 tantalum atoms. Localized spins are represented by a blue arrow at the star center. The wavefunction of the localized electrons is represented by gray shading. Credit: Mike Crommie et al./Berkeley Lab

To begin the study, Mo's group at Berkeley Lab's Advanced Light Source (ALS) grew single-layer samples of tantalum diselenide (1T-TaSe2) that are only three atoms thick. This material is part of a class of materials called transition metal dichalcogenides (TMDCs). The researchers in Mo's team are experts in molecular beam epitaxy, a technique for synthesizing atomically thin TMDC crystals from their constituent elements.

Mo's team then characterized the thin films through angle-resolved photoemission spectroscopy, a technique that uses X-rays generated at the ALS.

Scanning tunneling microscopy image of a tantalum diselenide sample that is just 3 atoms thick. Credit: Mike Crommie et al./Berkeley Lab

Using a microscopy technique called scanning tunneling microscopy (STM), researchers in the Crommie lab, including co-first authors Wei Ruan, a postdoctoral fellow at the time, and Yi Chen, then a UC Berkeley graduate student, injected electrons from a metal needle into the tantalum diselenide TMDC sample.

Images gathered by scanning tunneling spectroscopy (STS), an imaging technique that measures how particles arrange themselves at a particular energy, revealed something quite unexpected: a layer of mysterious waves having wavelengths larger than one nanometer (1 billionth of a meter) blanketing the material's surface.

"The long wavelengths we saw didn't correspond to any known behavior of the crystal," Crommie said. "We scratched our heads for a long time. What could cause such long wavelength modulations in the crystal? We ruled out the conventional explanations one by one. Little did we know that this was the signature of spinon ghost particles."

With help from a theoretical collaborator at MIT, the researchers realized that when an electron is injected into a QSL from the tip of an STM, it breaks apart into two different particles inside the QSL: spinons (also known as ghost particles) and chargons. This is due to the peculiar way in which spin and charge in a QSL collectively interact with each other. The spinon ghost particles end up separately carrying the spin while the chargons separately bear the electrical charge.

Illustration of an electron breaking apart into spinon ghost particles and chargons inside a quantum spin liquid. Credit: Mike Crommie et al./Berkeley Lab

In the current study, STM/STS images show that the chargons freeze in place, forming what scientists call a star-of-David charge-density-wave. Meanwhile, the spinons "undergo an out-of-body experience as they separate from the immobilized chargons and move freely through the material," Crommie said. "This is unusual since in a conventional material, electrons carry both the spin and charge combined into one particle as they move about," he explained. "They don't usually break apart in this funny way."

Crommie added that QSLs might one day form the basis of robust quantum bits (qubits) used for quantum computing. In conventional computing a bit encodes information either as a zero or a one, but a qubit can hold both zero and one at the same time, thus potentially speeding up certain types of calculations. Understanding how spinons and chargons behave in QSLs could help advance research in this area of next-gen computing.

Another motivation for understanding the inner workings of QSLs is that they have been predicted to be a precursor to exotic superconductivity. Crommie plans to test that prediction with Mo's help at the ALS.

"Part of the beauty of this topic is that all the complex interactions within a QSL somehow combine to form a simple ghost particle that just bounces around inside the crystal," he said. "Seeing this behavior was pretty surprising, especially since we weren't even looking for it."

Reference: "Evidence for quantum spin liquid behaviour in single-layer 1T-TaSe2 from scanning tunnelling microscopy" by Wei Ruan, Yi Chen, Shujie Tang, Jinwoong Hwang, Hsin-Zon Tsai, Ryan L. Lee, Meng Wu, Hyejin Ryu, Salman Kahn, Franklin Liou, Caihong Jia, Andrew Aikawa, Choongyu Hwang, Feng Wang, Yongseong Choi, Steven G. Louie, Patrick A. Lee, Zhi-Xun Shen, Sung-Kwan Mo & Michael F. Crommie, 19 August 2021, Nature Physics. DOI: 10.1038/s41567-021-01321-0

Researchers from SLAC National Accelerator Laboratory; Stanford University; Argonne National Laboratory; the Massachusetts Institute of Technology; the Chinese Academy of Sciences, Shanghai Tech University, Shenzhen University, Henan University of China; and the Korea Institute of Science and Technology and Pusan National University of Korea contributed to this study. (Co-first author Wei Ruan is now an assistant professor of physics at Fudan University in China; co-first author Yi Chen is currently a postdoctoral fellow at the Center for Quantum Nanoscience, Institute for Basic Science of Korea.)

This work was supported by the DOE Office of Science, and used resources at Berkeley Lab's Advanced Light Source and Argonne National Laboratory's Advanced Photon Source. The Advanced Light Source and Advanced Photon Source are DOE Office of Science user facilities.

Additional support was provided by the National Science Foundation.

More here:
This Exotic Particle Had an Out-of-Body Experience These Surprised Scientists Took a Picture of It - SciTechDaily

Expert: Now is the time to prepare for the quantum computing revolution – TechRepublic

Though quantum computing is likely five to 10 years away, waiting until it happens will put your organization behind. Don't play catch-up later.

TechRepublic's Karen Roby spoke with Christopher Savoie, CEO and co-founder of Zapata Computing, a quantum application company, about the future of quantum computing. The following is an edited transcript of their conversation.

SEE: The CIO's guide to quantum computing (free PDF) (TechRepublic)

Christopher Savoie: There are two types of quantum-computing algorithms, if you will. There are those that will require what we call a fault-tolerant computing system, one that doesn't have error, for all intents and purposes, that's corrected for error, which is the way most classical computers are now. They don't make errors in their calculations, or at least we hope they don't, not at any significant rate. And eventually we'll have these fault-tolerant quantum computers. People are working on it. We've proven that it can happen already, so that is down the line. But it's in the five- to 10-year range that it's going to take until we have that hardware available. But that's where a lot of the promise of these exponentially faster algorithms lies. So, these are the algorithms that will use these fault-tolerant computers to basically look at all the options available in a combinatorial matrix.

So, if you have something like Monte Carlo simulation, you can try significantly all the different variables that are possible and look at every possible combination and find the best optimal solution. So, that's really, practically impossible on today's classical computers. You have to choose what variables you're going to use and reduce things and take shortcuts. But with these fault-tolerant computers, for significantly many of the possible solutions in the solution space, we can look at all of the combinations. So, you can imagine almost an infinite amount or an exponential amount of variables that you can try out to see what your best solution is. In things like CCAR [Comprehensive Capital Analysis and Review], Dodd-Frank [Dodd-Frank Wall Street Reform and Consumer Protection Act] compliance, these things where you have to do these complex simulations, we rely on a Monte Carlo simulation.
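For readers unfamiliar with the technique Savoie keeps returning to, here is a minimal classical Monte Carlo sketch (the portfolio figures and the normal-returns model are invented for illustration, not anything from Zapata) that estimates a one-day value-at-risk by sampling random scenarios:

```python
import random

def monte_carlo_var(n_scenarios: int = 100_000,
                    portfolio_value: float = 1_000_000.0,
                    daily_mean: float = 0.0005,
                    daily_vol: float = 0.02,
                    confidence: float = 0.99) -> float:
    """Estimate 1-day value-at-risk by simulating random daily returns.

    Toy model: returns are i.i.d. normal with made-up parameters. Real
    CCAR-style simulations sweep far more variables, which is why exhaustive
    exploration of the scenario space is out of reach classically.
    """
    losses = []
    for _ in range(n_scenarios):
        ret = random.gauss(daily_mean, daily_vol)   # one sampled scenario
        losses.append(-portfolio_value * ret)       # positive value = loss
    losses.sort()
    return losses[int(confidence * n_scenarios)]    # 99th-percentile loss

print(f"99% one-day VaR is roughly ${monte_carlo_var():,.0f}")
```

The fault-tolerant promise Savoie describes is about covering far more of that scenario space than sampling like this can reach, not about replacing the underlying statistics.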

So, trying all of the possible scenarios. That's not possible today, but this fault tolerance will allow us to try significantly all of the different combinations, which will hopefully give us the ability to predict the future in a much better way, which is important in these financial applications. But we don't have those computers today. They will be available sometime in the future. I hate putting a date on it, but think about it on the decade time horizon. On the other hand, there are these nearer-term algorithms that run on these noisy, so not error-corrected, noisy intermediate-scale quantum devices. We call them NISQ for short. And these are more heuristic types of algorithms that are tolerant to noise, much like neural networks are today in classical computing and [artificial intelligence] AI. You can deal a little bit with the sparse data and maybe some error in the data or other areas of your calculation. Because it's an about-type of calculation like neural networks do. It's not looking at the exact answers, all of them and figuring out which one is definitely the best. This is an approximate algorithm that iterates and tries to get closer and closer to the right answer.
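A rough sense of what "iterates and tries to get closer and closer to the right answer" looks like in code: the sketch below is a generic noise-tolerant optimization loop (not Zapata's actual algorithms; the cost function and noise level are invented), the same shape of loop that variational NISQ algorithms use, with a quantum device supplying the noisy cost estimates:

```python
import random

def noisy_cost(theta: float) -> float:
    """Stand-in cost function. On a NISQ device this number would come from
    repeated, noisy measurements of a quantum circuit; here we fake the noise."""
    return (theta - 1.3) ** 2 + random.gauss(0.0, 0.05)

def iterative_minimize(steps: int = 200, lr: float = 0.1, eps: float = 0.1) -> float:
    """Finite-difference descent that tolerates noisy evaluations."""
    theta = 0.0
    for _ in range(steps):
        grad = (noisy_cost(theta + eps) - noisy_cost(theta - eps)) / (2 * eps)
        theta -= lr * grad        # small correction each iteration
    return theta

print(f"estimated minimum near theta = {iterative_minimize():.2f}")  # close to 1.3 despite noise
```

The point mirrors Savoie's neural-network analogy: the answer is approximate and improves with iteration rather than being proved exactly.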


But we know that neural networks work this way, deep neural networks. AI, in its current state, uses this type of algorithm, these heuristics. Most of what we do in computation nowadays and finance is heuristic in its nature and statistical in its nature, and it works good enough to do some really good work. In algorithmic trading, in risk analysis, this is what we use today. And these quantum versions of that will also be able to give us some advantage, and maybe an advantage over the purely classical version of that, as we've been able to show in recent work. So, we'll have some quantum-augmented AI, quantum-augmented [machine learning] ML. We call it a quantum-enhanced ML or quantum-enhanced optimization that we'll be able to do.

So, people think of this as a dichotomy. We have these NISQ machines, and they're faulty, and then one day we'll wake up and we'll have this fault tolerance, but it's really not that way. These faulty algorithms, if you will, these heuristics that are approximate, they will still work and they may work better than the fault-tolerant algorithms for some problems and some datasets, so this really is a gradient. It really is. You'd have a false sense of solace, maybe, too: "Oh well, if that's 10 years down the road we can just wait and let's wait till we wake up and have fault tolerance." But really the algorithms are going to be progressing. And the things that we develop now will still be useful in that fault-tolerant regime. And the patents will all be good for the stuff that we do now.

So, thinking that, "OK, this is a 10 year time horizon for those fault-tolerant computers. Our organization is just going to wait." Well, if you do, you get a couple of things. You're not going to have the workforce in place to be able to take advantage of this. You're probably not going to have the infrastructure in place to be able to take advantage of this. And meanwhile, all of your competitors and their vendors have acquired a portfolio of patents on these methodologies that are good for 20 years. So, if you wait five years from now and there's a patent four years down the line, that's good for 24 years. So there really is, I think, an incentive for organizations to really start working, even in this NISQ, this noisier regime that we're in today.

Karen Roby: You get a little false sense of security, as you mentioned, of something, oh, you say that's 10 years down the line, but really with this, you don't have the luxury of catching up if you wait too long. This is something that people need to be focused on now for what is down the road.

SEE: Quantum entanglement-as-a-service: "The key technology" for unbreakable networks (TechRepublic)

Christopher Savoie: Yes, absolutely. And in finance, if you have a better ability to detect risks than your competitors, you're at a huge advantage to be able to find alpha in the market. If you can do that better than others, you're going to be at a huge advantage. And if you're blocked by people's patents or blocked by the fact that your workforce doesn't know how to use these things, you're really behind the eight ball. And we've seen this time and time again with different technology evolutions and revolutions. With big data and our use of big data, with that infrastructure, with AI and machine learning. The organizations that have waited generally have found themselves behind the eight ball, and it's really hard to catch up because this stuff is changing daily, weekly, and new inventions are happening. And if you don't have a workforce that's up and running and an infrastructure ready to accept this, it's really hard to catch up with your competitors.

Karen Roby: You've touched on this a little bit, but really for the finance industry, this can be transformative, really significant what quantum computing can do.

Christopher Savoie: Absolutely. At the end of the day, finance is math, and we can do better math and more accurate math on large datasets with quantum computing. There is no question about that. It's no longer an "if." Google has, with their experiment, proven that at some point we're going to have a machine that is definitely going to be better at doing math, some types of math, than classical computers. With that premise, if you're in a field that depends on math, that depends on numbers, which is everything, and statistics, which is finance, no matter what side you're on. If you're on the risk side or the investing side, you're going to need to have the best tools. And that doesn't mean you have to be an algorithmic trader necessarily, but even looking at tail risk and creating portfolios and this kind of thing. You're dependent on being able to quickly ascertain what that risk is, and computing is the only way to do that.

SEE: The quantum decade: IBM predicts the 2020s will see quantum begin to solve real problems (TechRepublic)

And on the regulatory side, I mentioned CCAR. I think as these capabilities emerge, it allows the regulators to ask for even more scenarios to be simulated, those things that are a big headache for a lot of companies. But it's important because our global financial system depends on stability and predictability, and to be able to have a computational resource like quantum that's going to allow us to see more variables or more possibilities or more disaster scenarios. It can really help. "What is the effect of, say, a COVID-type event on the global financial system?" To be more predictive of that and more accurate at doing that is good for everybody. I think all boats rise, and quantum is definitely going to give us that advantage as well.

Karen Roby: Most definitely. And Christopher, before I let you go, if you would just give us a quick snapshot of Zapata Computing and the work that you guys do.

Christopher Savoie: We have two really important components to try and make this stuff reality. On the one hand, we've got over 30 of the brightest young minds in algorithms, particularly for these near-term devices and how to write those. We've written some of the fundamental algorithms that are out there to be used on quantum computers. On the other hand, how do you make those things work? That's a software engineering thing. That's not really quantum science. How do you make the big data work? And that's all the boring stuff of ETL and data transformation and digitalization and cloud and multicloud and all this boring but very important stuff. So basically Zapata is a company that has the best of the algorithms, but also best-of-breed means of actually software engineering that in a modern, multicloud environment that particularly finance companies, banks, they're regulated companies with a lot of data that is sensitive and private and proprietary. So, you need to be able to work in a safe and secure multicloud environment, and that's what our software engineering side allows us to do. We have the best of both worlds there.



Go here to read the rest:
Expert: Now is the time to prepare for the quantum computing revolution - TechRepublic