Archive for the ‘Quantum Computing’ Category

Why Quantum Resistance Is the Next Blockchain Frontier – Tech Times


As decentralized networks secured by potentially thousands of miners and/or nodes, blockchains are widely considered to be an incredibly secure example of distributed ledger technology.

On the back of this, they also have dozens of potential applications - ranging from decentralized content storage networks to medical records databases and supply chain management. But to this day, they're most commonly thought of as the ideal platform for hosting the financial infrastructure of tomorrow - such as decentralized exchanges and payment settlement networks.

But there's a problem. While the blockchains of today are practically unhackable - due to the type of encryption they use to secure private keys and transactions - this might not be the case for much longer. This is due to the advent of so-called "quantum computers", that is, computers that can leverage the properties of quantum mechanics to solve problems that would be impossible with traditional computers... such as breaking the cryptography that secures current generation blockchains.

Many blockchains of today use at least two types of cryptographic algorithms - asymmetric key algorithms and hash functions.

The first kind, also known as public-key cryptography, is used to produce pairs of private and public keys that are provably cryptographically linked. In Bitcoin, this private key is used to spend UTXOs - thereby transferring value from one person to another. The second kind - the hash function - is used to securely process raw transaction data into a block in a way that is practically irreversible.
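To make the hashing side of that pair concrete, here is a minimal sketch in Python. It uses only the standard-library hashlib module; the transaction string is invented for illustration, and real key-pair generation on Bitcoin's secp256k1 curve would need a dedicated elliptic-curve library on top of this.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style hash: SHA-256 applied twice to the raw bytes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Hypothetical raw transaction payload (purely illustrative).
tx = b"alice->bob:0.5btc;nonce=42"
print(double_sha256(tx).hex())

# Changing even one character yields a completely different digest,
# which is what makes hashed records practically irreversible to tamper with.
print(double_sha256(b"alice->bob:9.5btc;nonce=42").hex())
```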

As you might imagine, a sufficiently powerful quantum computer capable of breaking either of these security mechanisms could have devastating consequences for susceptible blockchains - since it could potentially be used to derive private keys or even mine cryptocurrency units much faster than the expected rate (leading to supply inflation).

So, just how far away from this are we? Well, according to recent estimates, a quantum computer possessing 4,000 qubits of processing power could be the minimum necessary to break the public key cryptography that secures Bitcoin user funds. A sufficiently flexible quantum computer with this processing power could, theoretically, take over the funds contained in any Bitcoin p2pk address - that's a total of around 2 million BTC (circa $67 billion at today's rates).

Fortunately, this isn't an immediate concern. As it stands, the world's most powerful quantum computer - the Zuchongzhi quantum computer - currently clocks in at an impressive (albeit insufficient) 66 qubits. However, given the rapid pace of development in the quantum computing sector, some experts predict that Bitcoin's Elliptic Curve Digital Signature Algorithm (ECDSA) could meet its quantum match within a decade.


The algorithm that could potentially be used to break ECDSA has already been developed. If generalized and run on a powerful enough quantum computer, Peter Shor's polynomial-time quantum algorithm is widely thought to be capable of attacking the Bitcoin blockchain - while similar algorithms could be applied to other forms of traditional encryption.
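For orientation, the problem Shor's algorithm attacks in this setting is a discrete logarithm: recovering a private key k from a public value produced by applying a group operation k times. The toy Python sketch below does this by classical brute force over a tiny, entirely made-up modular group; the point is that the classical search grows with the size of the group, while a Shor-type quantum period-finding routine would solve the same problem in polynomial time.

```python
def brute_force_dlog(g: int, target: int, p: int) -> int:
    """Find k with g**k % p == target by exhaustive search (toy scale only)."""
    value = 1
    for k in range(p):
        if value == target:
            return k
        value = (value * g) % p
    raise ValueError("no discrete log found")

p, g = 10007, 5            # tiny prime modulus and base, for illustration
secret_k = 1234            # the "private key" in this toy group
public = pow(g, secret_k, p)
print(brute_force_dlog(g, public, p))  # recovers 1234, but only because p is tiny
```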

But this might not be a concern for much longer, thanks to the introduction of what many consider to be the world's first truly quantum-resistant blockchain. The platform, known as QANplatform, is built to resist all known quantum attacks by using lattice cryptography. QAN manages to achieve quantum resistance while simultaneously tackling the energy concerns that come with some other blockchains through its highly efficient consensus mechanism known as Proof-of-Randomness (PoR).

Unlike some other so-called quantum-resistant blockchains, QAN is unusual in that it also supports decentralized applications (DApps) - allowing developers to launch quantum-resistant DApps within minutes using its free developer tools.

Besides platforms like QAN, the development communities behind several popular blockchains are already beginning to consider implementing their own quantum-resistance solutions, such as the recently elaborated commit-delay-reveal scheme - which could be used to transition Bitcoin to a quantum-resistant state. Nonetheless, the future of post-quantum cryptography remains up in the air, as none of the top ten blockchains by user count have yet committed to a specific quantum-resistant signature scheme.




Life, the universe and everything: Physics seeks the future – The Economist

Aug 25th 2021

A WISE PROVERB suggests not putting all your eggs in one basket. Over recent decades, however, physicists have failed to follow that wisdom. The 20th century, and indeed the 19th before it, were periods of triumph for them. They transformed understanding of the material universe and thus people's ability to manipulate the world around them. Modernity could not exist without the knowledge won by physicists over those two centuries.



In exchange, the world has given them expensive toys to play with. The most recent of these, the Large Hadron Collider (LHC), which occupies a 27km-circumference tunnel near Geneva and cost $6bn, opened for business in 2008. It quickly found a long-predicted elementary particle, the Higgs boson, that was a hangover from calculations done in the 1960s. It then embarked on its real purpose, to search for a phenomenon called Supersymmetry.

This theory, devised in the 1970s and known as Susy for short, is the all-containing basket into which particle physics's eggs have until recently been placed. Of itself, it would eliminate many arbitrary mathematical assumptions needed for the proper working of what is known as the Standard Model of particle physics. But it is also the vanguard of a deeper hypothesis, string theory, which is intended to synthesise the Standard Model with Einstein's general theory of relativity. Einstein's theory explains gravity. The Standard Model explains the other three fundamental forces (electromagnetism and the weak and strong nuclear forces) and their associated particles. Both describe their particular provinces of reality well. But they do not connect together. String theory would connect them, and thus provide a so-called theory of everything.

String theory proposes that the universe is composed of minuscule objects which vibrate in the manner of the strings of a musical instrument. Like such strings, they have resonant frequencies and harmonics. These various vibrational modes, string theorists contend, correspond to various fundamental particles. Such particles include all of those already observed as part of the Standard Model; the further particles predicted by Susy, which posits that the Standard Model's mathematical fragility will go away if each of that model's particles has a heavier supersymmetric partner particle, or sparticle; and also particles called gravitons, which are needed to tie the force of gravity into any unified theory, but are not predicted by relativity.

But, no Susy, no string theory. And, 13 years after the LHC opened, no sparticles have shown up. Even two as-yet-unexplained results announced earlier this year (one from the LHC and one from a smaller machine) offer no evidence directly supporting Susy. Many physicists thus worry they have been on a wild-goose chase.

They have good reason to be nervous. String theory already comes with a disturbing conceptual price tag: that of adding six (or in one version seven) extra dimensions to the universe, over and above the four familiar ones (three of space and one of time). It also describes about 10^500 possible universes, only one of which matches the universe in which human beings live. Accepting all that is challenging enough. Without Susy, though, string theory goes bananas. The number of dimensions balloons to 26. The theory also loses the ability to describe most of the Standard Model's particles. And it implies the existence of weird stuff such as particles called tachyons that move faster than light and are thus incompatible with the theory of relativity. Without Susy, string theory thus looks pretty much dead as a theory of everything. Which, if true, clears the field for non-string theories of everything.

The names of many of these do, it must be conceded, torture the English language. They include causal dynamical triangulation, asymptotically safe gravity, loop quantum gravity and the amplituhedron formulation of quantum theory. But at the moment the bookies' favourite for unifying relativity and the Standard Model is something called entropic gravity.

Entropy is a measure of a system's disorder. Famously, the second law of thermodynamics asserts that it increases with time (ie, things have a tendency to get messier as they get older). What that has to do with a theory of gravity, let alone of everything, is not, perhaps, immediately obvious. But the link is black holes. These are objects which have such strong gravitational fields that even light cannot escape from them. They are predicted by the mathematics of general relativity. And even though Einstein remained sceptical about their actual existence until the day he died in 1955, subsequent observations have shown that they are indeed real. But they are not black.

In 1974 Stephen Hawking, of Cambridge University, showed that quantum effects at a black hole's boundary allow it to radiate particles, especially photons, which are the particles of electromagnetic radiation, including light. This has peculiar consequences. Photons carry radiant heat, so something which emits them has a temperature. And, from its temperature and mass, it is possible to calculate a black hole's entropy. This matters because, when all these variables are plugged into the first law of thermodynamics, which states that energy can be neither created nor destroyed, only transformed from one form (say, heat) into another (say, mechanical work), what pops out are Einstein's equations of general relativity.
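For reference, the standard expressions behind that chain of reasoning are the Hawking temperature and the Bekenstein-Hawking entropy of a black hole of mass M; this is a textbook sketch of the quantities involved, not the article's own derivation.

```latex
T_H = \frac{\hbar c^{3}}{8\pi G M k_B},
\qquad
S_{BH} = \frac{k_B c^{3} A}{4 G \hbar},
\qquad
A = \frac{16\pi G^{2} M^{2}}{c^{4}} \;\;\text{(horizon area)}
```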

That relationship was discovered in 2010 by Erik Verlinde of Amsterdam University. It has serious implications. The laws of thermodynamics rely on statistical mechanics. They involve properties (temperature, entropy and so on) which emerge from probabilistic descriptions of the behaviour of the underlying particles involved. These are also the particles described by quantum mechanics, the mathematical theory which underpins the Standard Model. That Einstein's equations can be rewritten thermodynamically implies that space and time are also emergent properties of this deeper microscopic picture. The existing forms of quantum mechanics and relativity thus do indeed both seem derivable in principle from some deeper theory that describes the underlying fabric of the universe.

String theory is not so derivable. Strings are not fundamental enough entities. But entropic gravity claims to describe the very nature of space and time, or, to use Einsteinian terminology, spacetime. It asserts this is woven from filaments of quantum entanglement linking every particle in the cosmos.

The idea of quantum entanglement, another phenomenon pooh-poohed by Einstein that turned out to be true, goes back to 1935. It is that the properties of two or more objects can be correlated (entangled) in a way which means they cannot be described independently. This leads to weird effects. In particular, it means that two entangled particles can appear to influence each other's behaviour instantaneously even when they are far apart. Einstein dubbed this "spooky action at a distance", because it seems to violate the premise of relativity theory that the universe has a speed limit: the speed of light.

As with black holes, Einstein did not live long enough to see himself proved wrong. Experiments have nevertheless shown he was. Entanglement is real, and does not violate relativity because although the influence of one particle on another can be instantaneous there is no way to use the effect to pass information faster than light-speed. And, in the past five years, Brian Swingle of Harvard University and Sean Carroll of the California Institute of Technology have begun building models of what Dr Verlinde's ideas might mean in practice, using ideas from quantum information theory. Their approach employs bits of quantum information (so-called qubits) to stand in for the entangled particles. The result is a simple but informative analogue of spacetime.

Qubits, the quantum equivalent of classical bits (the ones and zeros on which regular computing is built), will be familiar to those who follow the field of quantum computing. They are the basis of quantum information theory. Two properties distinguish qubits from the regular sort. First, they can be placed in a state of superposition, representing both a one and a zero at the same time. Second, several qubits can become entangled. Together, these properties let quantum computers accomplish feats such as performing multiple calculations at once, or completing certain classes of calculation in a sensible amount of time, that are difficult or impossible for a regular computer.
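As a concrete illustration of those two properties, here is a minimal state-vector sketch in Python using only NumPy (no particular quantum SDK is assumed): a Hadamard gate puts one qubit into superposition, and a CNOT gate entangles it with a second qubit, leaving a Bell state whose two halves can no longer be described independently.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                        # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

plus = H @ ket0                      # superposition: (|0> + |1>) / sqrt(2)
state = CNOT @ np.kron(plus, ket0)   # entangle with a second qubit
print(np.round(state, 3))            # amplitudes only on |00> and |11>: a Bell state
```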

And because of their entanglement qubits can also, according to Dr Swingle and Dr Carroll, be used as stand-ins for how reality works. More closely entangled qubits represent particles at points in spacetime that are closer together. So far, quantum computers being a work in progress, this modelling can be done only with mathematical representations of qubits. These do, though, seem to obey the equations of general relativity. That supports entropic gravity theory's claims.

All of this modelling puts entropic gravity in pole position to replace strings as the long-sought theory of everything. But the idea that spacetime is an emergent property of the universe rather than being fundamental to it has a disturbing consequence. It blurs the nature of causality.

In the picture built by entropic gravity, spacetime is a superposition of multiple states. It is this which muddies causality. The branch of maths that best describes spacetime is a form of geometry that has four axes at right angles to each other instead of the more familiar three. The fourth represents time, so, like the position of objects, the order of events in spacetime is determined geometrically. If different geometric arrangements are superposed, as entropic gravity requires, it can therefore sometimes happen that the statements "A causes B" and "B causes A" are both true.
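The geometric point can be stated with the standard spacetime interval between two events (a textbook aside, not part of the article): its sign, rather than any absolute clock, decides whether their order is the same for all observers.

```latex
s^{2} = -c^{2}\,\Delta t^{2} + \Delta x^{2} + \Delta y^{2} + \Delta z^{2},
\qquad
s^{2} < 0 \;\Rightarrow\; \text{timelike separation (order fixed)},
\qquad
s^{2} > 0 \;\Rightarrow\; \text{spacelike (order observer-dependent)}
```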

This is not mere speculation. In 2016 Giulia Rubino of the University of Bristol, in England, constructed an experiment involving polarised photons and prisms which achieved exactly that. This spells trouble for those who have old-fashioned notions about causality's nature.

However, Lucien Hardy of the Perimeter Institute, in Canada, has discovered a way to reformulate the laws of quantum mechanics to get around this. In his view, causality as commonly perceived is like data compression in computing: it is a concept that gives you more bang for your buck. With a little bit of information about the present, causality can infer a lot about the future, compressing the amount of information needed to capture the details of a physical system in time.

But causality, Dr Hardy thinks, may not be the only way to describe such correlations. Instead, he has invented a general method for building descriptions of the patterns in correlations from scratch. This method, which he calls the causaloid framework, tends to reproduce causality but it does not assume it, and he has used it to reformulate both quantum theory (in 2005) and general relativity (in 2016). Causaloid maths is not a theory of everything. But there is a good chance that if and when such a theory is found, causaloid principles will be needed to describe it, just as general relativity needed a geometry of four dimensions to describe spacetime.

Entropic gravity has, then, a lot of heavy-duty conceptual work to back it up. But it is not the only candidate to replace string theory. Others jostling for attention include an old competitor called loop quantum gravity, originally proposed in 1994 by Carlo Rovelli, then at the University of Pittsburgh, and Lee Smolin, of the Perimeter Institute. This, and causal dynamical triangulation, a more recent but similar idea, suggest that spacetime is not the smooth fabric asserted by general relativity, but, rather, has a structure: either elementary loops or triangles, according to which of the two theories you support.

A third option, asymptotically safe gravity, goes back still further, to 1976. It was suggested by Steven Weinberg, one of the Standard Model's chief architects. A natural way to develop a theory of quantum gravity is to add gravitons to the model. Unfortunately, this approach got nowhere, because when the interactions of these putative particles were calculated at higher energies, the maths seemed to become nonsensical. However, Weinberg, who died in July, argued that this apparent breakdown would go away (in maths speak, the calculations would be asymptotically safe) if sufficiently powerful machines were used to do the calculating. And, with the recent advent of supercomputers of such power, it looks, from early results, as if he might have been right.

One of the most intriguing competitors of entropic gravity, though, is the amplituhedron formulation of quantum theory. This was introduced in 2013 by Nima Arkani-Hamed of the Institute for Advanced Study at Princeton and Jaroslav Trnka of the University of California, Davis. They have found a class of geometric structures dubbed amplituhedrons, each of which encodes the details of a possible quantum interaction. These, in turn, are facets of a master amplituhedron that encodes every possible type of physical process. It is thus possible to reformulate all of quantum theory in terms of the amplituhedron.

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

That space, time and even causality are emergent rather than fundamental properties of the cosmos is a radical idea. But this is the point. General relativity and quantum mechanics, the physics revolutions of the 20th century, were viewed as profound precisely because they overthrew common sense. To accept relativity meant abandoning a universal notion of time and space. To take quantum mechanics seriously meant getting comfortable with ideas like entanglement and superposition. Embracing entropic gravity or its alternatives will require similar feats of the imagination.

No theory, though, is worth a damn without data. That, after all, is the problem with Supersymmetry. Work like Dr Rubino's points the way. But something out of a particle-physics laboratory would also be welcome. And, though their meaning is obscure, the past few months have indeed seen two experimentally induced cracks in the Standard Model.

On March 23rd a team from CERN, the organisation that runs the LHC, reported an unexpected difference in behaviour between electrons and their heavier cousins, muons. These particles differ from one another in no known properties but their masses, so the Standard Model predicts that when other particles decay into them, the two should each be produced in equal numbers. But this appears not to be true. Interim results from the LHC suggest that a type of particle called a B-meson is more likely to decay into an electron than a muon. That suggests an as-yet-undescribed fundamental force is missing from the Standard Model. Then, on April 7th, Fermilab, America's biggest particle-physics facility, announced the interim results of its own muon experiment, Muon g-2.

In the quantum world, there is no such thing as a perfect vacuum. Instead, a froth of particles constantly pops in and out of existence everywhere in spacetime. These are virtual rather than real particles; that is, they are transient fluctuations which emerge straight out of quantum uncertainty. But, although they are short-lived, during the brief periods of their existence they still have time to interact with more permanent sorts of matter. They are, for example, the source of the black-hole radiation predicted by Hawking.

The strengths of their interactions with types of matter more conventional than black holes are predicted by the Standard Model, and to test these predictions, Muon g-2 shoots muons in circles around a powerful superconducting magnetic-storage ring. The quantum froth changes the way the muons wobble, which detectors can pick up with incredible precision. The Muon g-2 experiment suggests that the interactions causing these wobbles are slightly stronger than the Standard Model predicts. If confirmed, this would mean the model is missing one or more elementary particles.
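The quantity the experiment is named for is the muon's anomalous magnetic moment: the fractional deviation of its g-factor from the value 2 expected for a pointlike particle untouched by the quantum froth. The leading quantum correction (Schwinger's term) is shown here for orientation only, not as the full Standard Model prediction being tested.

```latex
a_\mu \equiv \frac{g_\mu - 2}{2},
\qquad
a_\mu^{\text{leading QED}} = \frac{\alpha}{2\pi} \approx 0.00116
```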

There is a slim chance that these are the absent sparticles. If so, it is the supporters of supersymmetry who will have the last laugh. But nothing points in this direction and, having failed thus far to stand their ideas up, they are keeping sensibly quiet.

Whatever the causes of these two results, they do show that there is something out there which established explanations cannot account for. Similarly unexplained anomalies were starting points for both quantum theory and relativity. It looks possible, therefore, that what has seemed one of physics's darkest periods is about to brighten into a new morning.

This article appeared in the Science & technology section of the print edition under the headline "Bye, bye, little Susy"


This Exotic Particle Had an "Out-of-Body Experience"; These Surprised Scientists Took a Picture of It – SciTechDaily

Artist's illustration of ghost particles moving through a quantum spin liquid. Credit: Jenny Nuss/Berkeley Lab

An unexpected finding by scientists at Berkeley Lab and UC Berkeley could advance quantum computers and high-temperature superconductors.

Scientists have taken the clearest picture yet of electronic particles that make up a mysterious magnetic state called a quantum spin liquid (QSL).

The achievement could facilitate the development of superfast quantum computers and energy-efficient superconductors.

The scientists are the first to capture an image of how electrons in a QSL decompose into spin-like particles called spinons and charge-like particles called chargons.


"Other studies have seen various footprints of this phenomenon, but we have an actual picture of the state in which the spinon lives. This is something new," said study leader Mike Crommie, a senior faculty scientist at Lawrence Berkeley National Laboratory (Berkeley Lab) and physics professor at UC Berkeley.

"Spinons are like ghost particles. They are like the Big Foot of quantum physics: people say that they've seen them, but it's hard to prove that they exist," said co-author Sung-Kwan Mo, a staff scientist at Berkeley Lab's Advanced Light Source. "With our method we've provided some of the best evidence to date."

In a QSL, spinons freely move about carrying heat and spin but no electrical charge. To detect them, most researchers have relied on techniques that look for their heat signatures.

Now, as reported in the journal Nature Physics, Crommie, Mo, and their research teams have demonstrated how to characterize spinons in QSLs by directly imaging how they are distributed in a material.

Schematic of the triangular spin lattice and star-of-David charge density wave pattern in a monolayer of tantalum diselenide. Each star consists of 13 tantalum atoms. Localized spins are represented by a blue arrow at the star center. The wavefunction of the localized electrons is represented by gray shading. Credit: Mike Crommie et al./Berkeley Lab

To begin the study, Mo's group at Berkeley Lab's Advanced Light Source (ALS) grew single-layer samples of tantalum diselenide (1T-TaSe2) that are only three atoms thick. This material is part of a class of materials called transition metal dichalcogenides (TMDCs). The researchers in Mo's team are experts in molecular beam epitaxy, a technique for synthesizing atomically thin TMDC crystals from their constituent elements.

Mo's team then characterized the thin films through angle-resolved photoemission spectroscopy, a technique that uses X-rays generated at the ALS.

Scanning tunneling microscopy image of a tantalum diselenide sample that is just 3 atoms thick. Credit: Mike Crommie et al./Berkeley Lab

Using a microscopy technique called scanning tunneling microscopy (STM), researchers in the Crommie lab - including co-first authors Wei Ruan, a postdoctoral fellow at the time, and Yi Chen, then a UC Berkeley graduate student - injected electrons from a metal needle into the tantalum diselenide TMDC sample.

Images gathered by scanning tunneling spectroscopy (STS) - an imaging technique that measures how particles arrange themselves at a particular energy - revealed something quite unexpected: a layer of mysterious waves with wavelengths larger than one nanometer (1 billionth of a meter) blanketing the material's surface.

"The long wavelengths we saw didn't correspond to any known behavior of the crystal," Crommie said. "We scratched our heads for a long time. What could cause such long wavelength modulations in the crystal? We ruled out the conventional explanations one by one. Little did we know that this was the signature of spinon ghost particles."

With help from a theoretical collaborator at MIT, the researchers realized that when an electron is injected into a QSL from the tip of an STM, it breaks apart into two different particles inside the QSL: spinons (also known as ghost particles) and chargons. This is due to the peculiar way in which spin and charge in a QSL collectively interact with each other. The spinon ghost particles end up separately carrying the spin while the chargons separately bear the electrical charge.

Illustration of an electron breaking apart into spinon ghost particles and chargons inside a quantum spin liquid. Credit: Mike Crommie et al./Berkeley Lab

In the current study, STM/STS images show that the chargons freeze in place, forming what scientists call a star-of-David charge-density-wave. Meanwhile, the spinons undergo an "out-of-body experience" as they separate from the immobilized chargons and move freely through the material, Crommie said. "This is unusual since in a conventional material, electrons carry both the spin and charge combined into one particle as they move about," he explained. "They don't usually break apart in this funny way."

Crommie added that QSLs might one day form the basis of robust quantum bits (qubits) used for quantum computing. In conventional computing a bit encodes information either as a zero or a one, but a qubit can hold both zero and one at the same time, thus potentially speeding up certain types of calculations. Understanding how spinons and chargons behave in QSLs could help advance research in this area of next-gen computing.

Another motivation for understanding the inner workings of QSLs is that they have been predicted to be a precursor to exotic superconductivity. Crommie plans to test that prediction with Mos help at the ALS.

"Part of the beauty of this topic is that all the complex interactions within a QSL somehow combine to form a simple ghost particle that just bounces around inside the crystal," he said. "Seeing this behavior was pretty surprising, especially since we weren't even looking for it."

Reference: "Evidence for quantum spin liquid behaviour in single-layer 1T-TaSe2 from scanning tunnelling microscopy" by Wei Ruan, Yi Chen, Shujie Tang, Jinwoong Hwang, Hsin-Zon Tsai, Ryan L. Lee, Meng Wu, Hyejin Ryu, Salman Kahn, Franklin Liou, Caihong Jia, Andrew Aikawa, Choongyu Hwang, Feng Wang, Yongseong Choi, Steven G. Louie, Patrick A. Lee, Zhi-Xun Shen, Sung-Kwan Mo and Michael F. Crommie, 19 August 2021, Nature Physics. DOI: 10.1038/s41567-021-01321-0

Researchers from SLAC National Accelerator Laboratory; Stanford University; Argonne National Laboratory; the Massachusetts Institute of Technology; the Chinese Academy of Sciences, Shanghai Tech University, Shenzhen University, Henan University of China; and the Korea Institute of Science and Technology and Pusan National University of Korea contributed to this study. (Co-first author Wei Ruan is now an assistant professor of physics at Fudan University in China; co-first author Yi Chen is currently a postdoctoral fellow at the Center for Quantum Nanoscience, Institute for Basic Science of Korea.)

This work was supported by the DOE Office of Science, and used resources at Berkeley Lab's Advanced Light Source and Argonne National Laboratory's Advanced Photon Source. The Advanced Light Source and Advanced Photon Source are DOE Office of Science user facilities.

Additional support was provided by the National Science Foundation.


Expert: Now is the time to prepare for the quantum computing revolution – TechRepublic

Though quantum computing is likely five to 10 years away, waiting until it happens will put your organization behind. Don't play catch-up later.

TechRepublic's Karen Roby spoke with Christopher Savoie, CEO and co-founder of Zapata Computing, a quantum application company, about the future of quantum computing. The following is an edited transcript of their conversation.

SEE: The CIO's guide to quantum computing (free PDF) (TechRepublic)

Christopher Savoie: There are two types of quantum-computing algorithms, if you will. There are those that will require what we call a fault-tolerant computing system, one that doesn't have error, for all intents and purposes, that's corrected for error, which is the way most classical computers are now. They don't make errors in their calculations, or at least we hope they don't, not at any significant rate. And eventually we'll have these fault-tolerant quantum computers. People are working on it. We've proven that it can happen already, so that is down the line. But it's in the five- to 10-year range that it's going to take until we have that hardware available. But that's where a lot of the promise of these exponentially faster algorithms lies. So, these are the algorithms that will use these fault-tolerant computers to basically look at all the options available in a combinatorial matrix.

So, if you have something like Monte Carlo simulation, you can try significantly all the different variables that are possible and look at every possible combination and find the best optimal solution. So, that's really, practically impossible on today's classical computers. You have to choose what variables you're going to use and reduce things and take shortcuts. But with these fault-tolerant computers, for significantly many of the possible solutions in the solution space, we can look at all of the combinations. So, you can imagine almost an infinite amount or an exponential amount of variables that you can try out to see what your best solution is. In things like CCAR [Comprehensive Capital Analysis and Review], Dodd-Frank [Dodd-Frank Wall Street Reform and Consumer Protection Act] compliance, these things where you have to do these complex simulations, we rely on a Monte Carlo simulation.
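To ground that Monte Carlo reference for readers, here is a deliberately small classical sketch in Python (NumPy only). Every number in it is a made-up assumption; it shows only the shape of the scenario-sampling calculation that fault-tolerant quantum algorithms are hoped to accelerate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scenarios = 100_000
# Assume (purely for illustration) normally distributed one-day returns.
daily_return = rng.normal(loc=0.0005, scale=0.02, size=n_scenarios)

portfolio_value = 1_000_000 * (1 + daily_return)
losses = 1_000_000 - portfolio_value

# 99% value-at-risk: the loss exceeded in only 1% of sampled scenarios.
var_99 = np.quantile(losses, 0.99)
print(f"99% one-day VaR: ${var_99:,.0f}")
```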

So, trying all of the possible scenarios. That's not possible today, but this fault tolerance will allow us to try significantly all of the different combinations, which will hopefully give us the ability to predict the future in a much better way, which is important in these financial applications. But we don't have those computers today. They will be available sometime in the future. I hate putting a date on it, but think about it on the decade time horizon. On the other hand, there are these nearer-term algorithms that run on these noisy, so not error-corrected, noisy intermediate-scale quantum devices. We call them NISQ for short. And these are more heuristic types of algorithms that are tolerant to noise, much like neural networks are today in classical computing and [artificial intelligence] AI. You can deal a little bit with the sparse data and maybe some error in the data or other areas of your calculation. Because it's an about-type of calculation like neural networks do. It's not looking at the exact answers, all of them and figuring out which one is definitely the best. This is an approximate algorithm that iterates and tries to get closer and closer to the right answer.
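And here is an equally small sketch of the iterative, approximate style just described: a one-parameter "circuit" simulated classically with NumPy, whose cost is nudged downward step by step rather than solved exactly. The observable, ansatz and learning rate are arbitrary assumptions chosen for clarity, not any particular near-term algorithm.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)   # observable to minimize

def expectation(theta: float) -> float:
    # Single-qubit "ansatz": a rotation by theta applied to |0>.
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)
    return float(np.real(state.conj() @ Z @ state))

theta, lr = 0.1, 0.4
for _ in range(50):
    grad = (expectation(theta + 1e-4) - expectation(theta - 1e-4)) / 2e-4
    theta -= lr * grad                # iterate toward a lower cost

print(round(expectation(theta), 4))   # approaches -1, the true minimum
```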


But we know that neural networks work this way, deep neural networks. AI, in its current state, uses this type of algorithm, these heuristics. Most of what we do in computation nowadays and finance is heuristic in its nature and statistical in its nature, and it works good enough to do some really good work. In algorithmic trading, in risk analysis, this is what we use today. And these quantum versions of that will also be able to give us some advantage, and maybe an advantage over - we've been able to show in recent work - the purely classical version of that. So, we'll have some quantum-augmented AI, quantum-augmented [machine learning] ML. We call it a quantum-enhanced ML or quantum-enhanced optimization that we'll be able to do.

So, people think of this as a dichotomy. We have these NISQ machines, and they're faulty, and then one day we'll wake up and we'll have this fault tolerance, but it's really not that way. These faulty algorithms, if you will, these heuristics that are about, they will still work and they may work better than the fault-tolerant algorithms for some problems and some datasets, so this really is a gradient. It really is. You'd have a false sense of solace, maybe two. "Oh well, if that's 10 years down the road we can just wait and let's wait till we wake up and have fault tolerance." But really the algorithms are going to be progressing. And the things that we develop now will still be useful in that fault-tolerant regime. And the patents will all be good for the stuff that we do now.

So, thinking that, "OK, this is a 10 year time horizon for those fault-tolerant computers. Our organization is just going to wait." Well, if you do, you get a couple of things. You're not going to have the workforce in place to be able to take advantage of this. You're probably not going to have the infrastructure in place to be able to take advantage of this. And meanwhile, all of your competitors and their vendors have acquired a portfolio of patents on these methodologies that are good for 20 years. So, if you wait five years from now and there's a patent four years down the line, that's good for 24 years. So there really is, I think, an incentive for organizations to really start working, even in this NISQ, this noisier regime that we're in today.

Karen Roby: You get a little false sense of security, as you mentioned, of something, oh, you say that's 10 years down the line, but really with this, you don't have the luxury of catching up if you wait too long. This is something that people need to be focused on now for what is down the road.

SEE: Quantum entanglement-as-a-service: "The key technology" for unbreakable networks (TechRepublic)

Christopher Savoie: Yes, absolutely. And in finance, if you have a better ability to detect risks than your competitors, you're at a huge advantage to be able to find alpha in the market. If you can do that better than others, you're going to be at a huge advantage. And if you're blocked by people's patents or blocked by the fact that your workforce doesn't know how to use these things, you're really behind the eight ball. And we've seen this time and time again with different technology evolutions and revolutions. With big data and our use of big data, with that infrastructure, with AI and machine learning. The organizations that have waited generally have found themselves behind the eight ball, and it's really hard to catch up because this stuff is changing daily, weekly, and new inventions are happening. And if you don't have a workforce that's up and running and an infrastructure ready to accept this, it's really hard to catch up with your competitors.

Karen Roby: You've touched on this a little bit, but really for the finance industry, this can be transformative, really significant what quantum computing can do.

Christopher Savoie: Absolutely. At the end of the day, finance is math, and we can do better math and more accurate math on large datasets with quantum computing. There is no question about that. It's no longer an "if." Google has, with their experiment, proven that at some point we're going to have a machine that is definitely going to be better at doing math, some types of math, than classical computers. With that premise, if you're in a field that depends on math, that depends on numbers, which is everything, and statistics, which is finance, no matter what side you're on. If you're on the risk side or the investing side, you're going to need to have the best tools. And that doesn't mean you have to be an algorithmic trader necessarily, but even looking at tail risk and creating portfolios and this kind of thing. You're dependent on being able to quickly ascertain what that risk is, and computing is the only way to do that.

SEE: The quantum decade: IBM predicts the 2020s will see quantum begin to solve real problems (TechRepublic)

And on the regulatory side, I mentioned CCAR. I think as these capabilities emerge, it allows the regulators to ask for even more scenarios to be simulated, those things that are a big headache for a lot of companies. But it's important because our global financial system depends on stability and predictability, and to be able to have a computational resource like quantum that's going to allow us to see more variables or more possibilities or more disaster scenarios. It can really help. "What is the effect of, say, a COVID-type event on the global financial system?" To be more predictive of that and more accurate at doing that is good for everybody. I think all boats rise, and quantum is definitely going to give us that advantage as well.

Karen Roby: Most definitely. And Christopher, before I let you go, if you would just give us a quick snapshot of Zapata Computing and the work that you guys do.

Christopher Savoie: We have two really important components to try and make this stuff reality. On the one hand, we've got over 30 of the brightest young minds in algorithms, particularly for these near-term devices and how to write those. We've written some of the fundamental algorithms that are out there to be used on quantum computers. On the other hand, how do you make those things work? That's a software engineering thing. That's not really quantum science. How do you make the big data work? And that's all the boring stuff of ETL and data transformation and digitalization and cloud and multicloud and all this boring but very important stuff. So basically Zapata is a company that has the best of the algorithms, but also best-of-breed means of actually software engineering that in a modern, multicloud environment that particularly finance companies, banks, they're regulated companies with a lot of data that is sensitive and private and proprietary. So, you need to be able to work in a safe and secure multicloud environment, and that's what our software engineering side allows us to do. We have the best of both worlds there.




IBM partners with the University of Tokyo on quantum computer – Illinoisnewstoday.com

Tokyo - IBM and the University of Tokyo have announced one of Japan's most powerful quantum computers.

According to IBM, IBM Quantum System One is part of the Japan-IBM quantum partnership between the University of Tokyo and IBM, advancing Japan's quest for quantum science, business and education.

IBM Quantum System One is currently in operation for researchers at both Japanese scientific institutions and companies, and access is controlled by the University of Tokyo.

"IBM is committed to growing the global quantum ecosystem and facilitating collaboration between different research communities," said Dr. Dario Gil, director of IBM Research.

According to IBM, quantum computers combine quantum resources with classical processing to provide users with access to reproducible and predictable performance from high-quality qubits and precision control electronics. Users can safely execute algorithms that require iterative quantum circuits in the cloud.


IBM Quantum System One in Japan is IBM's second system built outside the United States. In June, IBM unveiled an IBM Quantum System One in Munich, Germany, managed by the scientific research organization Fraunhofer-Gesellschaft.

IBM's commitment to quantum is aimed at advancing quantum computing and fostering a skilled quantum workforce around the world.

"We are thrilled to see Japan's contributions to research by world-class academics, the private sector, and government agencies," Gil said.

"Together, we can take a big step towards accelerating scientific progress in different areas," Gil said.

Teruo Fujii, President of the University of Tokyo, said, "In the field of rapidly changing quantum technology, it is very important not only to develop elements and systems related to quantum technology, but also to develop the next generation of human resources in order to achieve a high degree of social implementation."

"Our university has a wide range of research capabilities and has always promoted high-level quantum education from the undergraduate level. Now, with IBM Quantum System One, we will develop the next generation of quantum-native skill sets and further refine them."

In 2020, IBM and the University of Tokyo launched the Quantum Innovation Initiative Consortium (QIIC), which aims to strategically accelerate quantum computing research and development in Japan by bringing together the academic talent of universities, research groups and industries nationwide.

Last year, IBM also announced partnerships with several organizations focusing on quantum information science and technology, including the Cleveland Clinic, the Science and Technology Facilities Council in the United Kingdom, and the University of Illinois at Urbana-Champaign.

