Archive for the ‘Quantum Computer’ Category

Billionaire Investor Vinod Khosla Speaks Out On AI’s Future and the COVID-19 Economy – EnterpriseAI

Vinod Khosla, a co-founder of Sun Microsystems and a longtime technology entrepreneur, venture capitalist and IT sage, makes billions of dollars betting on new technologies.

Khosla shared some of his technology and investment thoughts at a recent tech conference about the future of AI in business, AI chip design and quantum computing -- and even gave some advice to AI developers and companies about how they can successfully navigate the tumultuous times of the COVID-19 pandemic. Khosla gave his remarks in an Ask Me Anything Industry Luminary Keynote at the virtual AI Hardware Summit earlier in October. The Q&A was hosted by Rene Haas, the president of Arm's IP Products Group and a former executive with AI chipmaker Nvidia.

Khosla, who is ranked #353 on the Forbes 400 2020 list, has a net worth today of $2.6 billion, largely earned through his investment successes in the tech field. He founded his VC firm, Khosla Ventures, in 2004.

Here are edited segments from that 30-minute Q&A, which centered on questions asked by viewers of the virtual conference:

Rene Haas: What has been the most significant technological advancement in AI in the last year or two? And how do you anticipate it is going to change the landscape of business?

Vinod Khosla: What's surprised me the most is the bifurcation along two lines: one camp argues that deep learning goes all the way, and the other argues that AGI (artificial general intelligence) requires very different kinds [of approaches]. My bet is that each will be good at certain functions. Now, I don't worry about AGI. Being a philosopher, I do worry about AI and AGI being used for the most valuable economic functions human beings do. That's where the big opportunity is. What surprised me most is there's been great progress in language models and algorithms. But the outsize role of hardware in building models that are much more powerful, trillions of parameters per model, and how effective they can be, has been surprising. I'm somewhat biased because we are large investors in OpenAI. On the flip side, we are large investors in companies like Vicarious, which is taking a very different approach to AGI.

Haas: Building on that a little bit, there are a lot of AI hardware startup companies. Some are well funded, some with high burn rates. When you think about competing with the software support ecosystem, like Nvidia has, how can startups really rely on the strength of their architecture alone? What are the kinds of things that you look at in terms of guidelines for startups in this space?

Khosla: There's many different markets, you have to be clear. There is a training market in the data center. There's an inferencing market in the data center. There's a market for edge devices where the criteria are very different. And then there's this emerging area of what quantum computing might do in hardware. We can talk about any of these, but what's really interesting to me is how much innovation we are seeing. Companies like Nvidia and the big cloud providers, especially Google and others, have very strong efforts.

And probably the thing we've learned in semiconductors is that having access to process technology and process nodes that others don't, that's where the software ecosystem gives them such a large advantage. It's hard for startups to compete. Now, I could be wrong, but we've tended to avoid digital architectures, for the data center or for inferencing. We've looked at a dozen of those and chosen not to jump in, because there are bigger players with huge software and process and resource advantages. On the analog side, it's a whole different ballgame. We've invested in analog inference. There have been multiple analog efforts. I think some haven't addressed enough of the problem to get a large enough power advantage.

So, the bottom line for a startup is that to do better than Nvidia or one of the other larger players or cloud providers, you've got to talk about a 20X to 100X advantage in TeraOPS per watt. I think if you're not in the hundred TeraOPS per watt range, it's going to be hard to sustain a large advantage. And I see most digital efforts sort of in this one to 10 TeraOPS per watt power range. So I find the edge much more promising than the data center.

Haas: What about the difficulties of startups or companies trying to enter this field? Much of it is horizontal in nature. Do they need some kind of vertical stack or some tie into the ecosystems? Do the same challenges apply, relative to being a horizontal versus vertical business or do you think there are some different opportunities there?

Khosla: I think there will be classes of algorithms. There's clearly one class of algorithms around deep learning and things like that. The question of how architecture maps to different types of algorithms, and algorithmic approaches, is a little too early to predict, and that will determine what architectures work best.

On the edge, what's clearly going to be important is power efficiency. The really high-volume markets are under five watts and $5 and a couple of hundred TeraOPS. That's the price point I look at as differentiated enough for edge devices to do a lot of interesting things. Every speaker, every microphone, every sensor. You start to see price points that go from tens of pennies to a few dollars that go into these very high volume devices. I think that would be a different architecture than the stuff in the data center.

In the data center, whether inferencing and training are the same architecture or even the same software stack, I still think it's open for debate. I think in inferencing, cost matters and efficiency matters. In training, especially for the really large algorithms, probably not so much. So, it's hard to say.

And then there's this really surprising thing of what quantum computing will do, and what kinds of algorithms it will run. The thing we are most interested in is very specialized applications for quantum computing. We have one effort in drug discovery for quantum computing. I think material science with quantum computing is going to be interesting, possibly some financial services products based on quantum computing. So, plenty of these interesting options. I think for a while we'll see more of a bifurcation, but if I were to predict five years from now, I think we'll see more unification around the types of algorithms that do certain economic tasks well.

Haas: Quantum is something that has been written about for a long time and now you're starting to see some things product-wise that are looking a bit more real. As an investor, and looking at private company opportunities around quantum, do you feel like the time is now to start investing in companies that are doing things around the hardware space in quantum? Or do you look at it and say it's still years away from being commercially viable?

Khosla: In the big company world, it's definitely time for the big companies to be investing, and they're investing heavily. But that's Microsoft, Google, IBM and others. There's also a whole slew of startups where the market and products have emerged slower. And whenever things emerge slower especially on the hardware side, the big companies have an advantage because they can catch up. Whenever it takes lots and lots of resources, then the big companies have an advantage. Autonomous driving is the one area where that's mostly true, but not completely true. We've seen some radical innovation out of startups there.

So, it depends on the pace of development of a technology or deployment. I do think the time is very ripe for quantum software applications, specialized applications, to develop. But given how complex quantum is to use, such as the interface between quantum and the regular computing world, and the full stack of software and how it runs algorithms, I think specialized algorithms will do better there.

Haas: You're obviously involved in AI chip startups. Looking at the last four years of AI chip startups, are you bullish, in general, looking back? And if so, which areas are you most excited about?

Khosla: When there's radical innovation, it's still interesting. We've seen a lot of startups, but I wouldn't say we've seen radical innovation in architectures or performance or power efficiency. And when I say power efficiency, it's really TeraOPS per watt, which is performance per watt, that is the key metric. If you see the kinds of large jumps, like 20X, 50X, 100X, then that's really interesting. Still, there's less room for it in the data center, more room for it in the edge, but every time I say something like this, some really clever person surprises me with a counter-narrative that actually is pretty compelling. So would I say I'm open to new architectures? Yes. Radical changes, yes, and I think that will happen, but it's just very hard to predict today. The predictability of where innovation goes is still low. But I always say, improbables are not unimportant. We just don't know which improbable is important. In the meantime, the traditional digital data center, even the digital edge, will probably belong to the larger players.

I do want to encourage the folks out there trying to build products. When we did the NexGen product to compete with Intel, we very quickly got to 50% market share of the under-$1,000 PC market, where we were competing on an x86 architecture with Intel. So surprises are possible, and for people who take specialized approaches in market segments, there can be very interesting innovation to be done.

Haas: How large is the economic opportunity around AI and what do you think drives it?

Khosla: I'm probably more bullish. Whether you call it AI or AGI, I think this area will be able to do most economically valuable human functions within the next decade, probably a lot sooner. It will take time, integrating into regular workflows and traditional systems and all that. But the way I look at it, if we can replace human judgment in a task, you're saving far more money than selling a chip or a computer or something. So, if you can replace a security analyst and do their job, or have one security analyst do the job of five security analysts, or have one physician do the job of five physicians, you're saving gobs of money. And then you get to share in the human labor saving, which is where the large opportunities are. That could belong to these combination software-and-hardware systems. I think that opportunity is orders of magnitude larger than any estimate I've seen today.

Haas: 2020 has been a very turbulent year. What advice would you give to tech entrepreneurs who are pushing through a recession and the remarkable situation involving the COVID-19 pandemic, while trying to build a product and build a company? What advice would you give to those entrepreneurs?

Khosla: I think the best ideas survive turbulent times. I find recessions are really the times when bigger companies cut back on some of their spending. I haven't seen that happen in this particular area. That's when people with the best ideas or with passion for a particular vision, leave those companies. So, I do see very good startups during turbulent times in general. Now, one has to be just pragmatic and adapt to the times. When money's cheap, you raise lots of money. When money is not cheap or not easily available you spend less, and take more time doing some fundamental work and getting it right. Which by the way is usually a better strategy than raising lots of money.

I do think that there is lots of opportunity. I think they have to adapt to the times and be much more thoughtful, maybe even more radical in their approach. Take larger leaps, because you can take more time before you start spending the money to go to market. One of the things to keep in mind with most technologies is that thinking about the technology has huge implications downstream but takes very little money; it takes very special talent. Then there's the building of the technology. And then there's the selling, and the sales or marketing usually ends up costing the most. Now's a good time to trade off for a more compelling product and postpone some of the sales and marketing while the markets are uncertain. You can't afford to spend lots of money on that. So you have to adjust strategy as an entrepreneur, and entrepreneurs do that fairly well.

Haas: What is your own investment philosophy, particularly when it comes to tech companies, and how does your overall portfolio reflect that philosophy?

Khosla: We like the higher-risk, higher-upside things. I find investors generally reduce risk for good reasons, but make the consequences of success relatively inconsequential. I personally prefer larger risk, which is why I like analog right now, and making the success consequential, be it 50X or 100X better than what's available in the digital domain. I do see plenty of those kinds of opportunities still. I am not discouraged. I'm actually quite encouraged about the opportunities in this area. But entrepreneurs usually find specialized paths to get to that first MVP product, that early traction, and then use it to broaden.

Haas: Model performance has been increasing slowly in the field of AI. Can you share your insights about that?

Khosla: In certain dimensions, I think that's true. When a technology plays out a certain way, it makes rapid progress in the beginning and then starts to peter out. Software models themselves are getting to a level of saturation. The progress on the hardware side, just scaling hardware, has been stunningly valuable, as GPT-3 shows. It may give more of an advantage to the large cloud providers, the people who can build 500,000-CPU/GPU systems. But that's not for everyday use. I think that story still needs to be told.

There are alternative approaches that still need to be discovered. I gave you the example of Vicarious, the robotics company we've invested in. Instead of needing 10 million or 100 million cats to recognize a cat, they're saying can we do it from 10 cats? So, maybe data becomes a lot less important. And what implications does that have for hardware architectures? It's very clear to me seeing the early results at Vicarious that it is entirely possible for AI systems to learn as rapidly and with as few examples as humans do, if the architecture is different than deep learning.

My bet is different approaches will be very good at different points, and we'll see that kind of specialization of architectures. A long time ago, 25 to 30 years ago, when you looked at Lego blocks, they came in large yellow, white, red, black and blue blocks, and there were three or four types of components. I think that's where software algorithms in AI may be today. Now, you couldn't build the Sydney Opera House out of Lego blocks back then, but then they got all these specialized components. The possibilities explode exponentially, so the combinations allow a lot more flexibility in what can happen, what systems can do. So, it might be we just need different types of algorithms to explore the capability of end-use systems. And that might have large implications for which hardware architectures work.

Hardware scaling may matter in some of these, and clever architectures may matter in others. That's why I'm tracking what quantum computing may do for algorithms. Not just your standard quantum computing, Shor's algorithm, etc., but real applications like drug discovery or material science. Or could you do a better battery material? Those are really interesting now.

Haas: What advice do you have for first time hardware entrepreneurs, with strong architecture ideas, with really smart engineers, who don't really have a track record, and who haven't done this before -- how do you advise them to position themselves to get into this segment?

Khosla: Silicon Valley is very good at recognizing thoughtful, clever people -- they don't have to have a track record. Most successful entrepreneurs don't have track records. So, I wouldn't be afraid of that. I don't think you need a lot of management experience. Building great teams is probably the single piece of advice I give to entrepreneurs: great and multi-dimensional teams to go after the problem, even if they haven't done it yet. Also, the cleverness of your architecture isn't as important as the end results you deliver. Can you deliver that 20X, 50X over what the traditional players will do for your market? I think people underappreciate how much of an advantage you need in your architecture to make it worthwhile to do that startup.

And one more thing. There's a whole lot of tricks, both on the models on the software side and on the hardware side. You can do hardware tricks, and there's half a dozen that are very common in hardware and half a dozen that are pretty common in software, like reducing the model size. Everybody really gets there. Other things are fundamental, long-lasting advantages, and if you're doing the startup, focus not on the tricks that give you a 5X improvement, because others will catch up to those tricks, either in software or hardware. Instead, focus on what will be the fundamental innovations five years from now, where you'll still have an advantage.

Could Quantum Computing Progress Be Halted by Background Radiation? – Singularity Hub

Doing calculations with a quantum computer is a race against time, thanks to the fragility of the quantum states at their heart. And new research suggests we may soon hit a wall in how long we can hold them together thanks to interference from natural background radiation.

While quantum computing could one day enable us to carry out calculations beyond even the most powerful supercomputer imaginable, we're still a long way from that point. And a big reason for that is a phenomenon known as decoherence.

The superpowers of quantum computers rely on holding the qubits (quantum bits) that make them up in exotic quantum states like superposition and entanglement. Decoherence is the process by which interference from the environment causes them to gradually lose their quantum behavior and any information that was encoded in them.

It can be caused by heat, vibrations, magnetic fluctuations, or any of a host of environmental factors that are hard to control. Currently we can keep superconducting qubits (the technology favored by the field's leaders like Google and IBM) stable for up to 200 microseconds in the best devices, which is still far too short to do any truly meaningful computations.

But new research from scientists at Massachusetts Institute of Technology (MIT) and Pacific Northwest National Laboratory (PNNL), published last week in Nature, suggests we may struggle to get much further. They found that background radiation from cosmic rays and more prosaic sources like trace elements in concrete walls is enough to put a hard four-millisecond limit on the coherence time of superconducting qubits.

"These decoherence mechanisms are like an onion, and we've been peeling back the layers for the past 20 years, but there's another layer that, left unabated, is going to limit us in a couple years, which is environmental radiation," William Oliver from MIT said in a press release. "This is an exciting result, because it motivates us to think of other ways to design qubits to get around this problem."

Superconducting qubits rely on pairs of electrons flowing through a resistance-free circuit. But radiation can knock these pairs out of alignment, causing them to split apart, which is what eventually results in the qubit decohering.

To determine how significant an impact background levels of radiation could have on qubits, the researchers first tried to work out the relationship between coherence times and radiation levels. They exposed qubits to irradiated copper whose emissions dropped over time in a predictable way, which showed them that coherence times rose as radiation levels fell, up to a maximum of four milliseconds, after which background effects kicked in.
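
Because independent decoherence channels combine at the level of rates, a radiation-induced limit acts like a hard ceiling on total coherence time. Here is a minimal back-of-the-envelope sketch of that behavior (not from the paper; the rate-addition model and the example numbers are illustrative assumptions, with the ~4 ms figure taken from the study's reported ceiling):

```python
# Sketch: independent decoherence channels combine by adding rates,
# so 1/T_total = 1/T_other + 1/T_radiation. T_radiation ~= 4 ms is the
# ceiling reported in the study; the T_other values are hypothetical.

def total_coherence_time_ms(t_other_ms: float, t_radiation_ms: float = 4.0) -> float:
    """Total coherence time when two independent channels limit the qubit."""
    return 1.0 / (1.0 / t_other_ms + 1.0 / t_radiation_ms)

# 0.2 ms is roughly today's best superconducting qubits; larger values
# represent hypothetical progress on all non-radiation error sources.
for t_other in [0.2, 1.0, 10.0, 100.0]:
    print(f"T_other = {t_other:6.1f} ms -> T_total = "
          f"{total_coherence_time_ms(t_other):.3f} ms")
# The output approaches 4 ms and flattens: without shielding, radiation wins.
```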

To check if this coherence time was really caused by the natural radiation, they built a giant shield out of lead brick that could block background radiation to see what happened when the qubits were isolated. The experiments clearly showed that blocking the background emissions could boost coherence times further.

At the minute, a host of other problems like material impurities and electronic disturbances cause qubits to decohere before these effects kick in, but given the rate at which the technology has been improving, we may hit this new wall in just a few years.

"Without mitigation, radiation will limit the coherence time of superconducting qubits to a few milliseconds, which is insufficient for practical quantum computing," Brent VanDevender from PNNL said in a press release.

Potential solutions to the problem include building radiation shielding around quantum computers or locating them underground, where cosmic rays aren't able to penetrate so easily. But if you need a few tons of lead or a large cavern in order to install a quantum computer, that's going to make it considerably harder to roll them out widely.

It's important to remember, though, that this problem has only been observed in superconducting qubits so far. In July, researchers showed they could get a spin-orbit qubit implemented in silicon to last for about 10 milliseconds, while trapped ion qubits can stay stable for as long as 10 minutes. And MIT's Oliver says there's still plenty of room for building more robust superconducting qubits.

"We can think about designing qubits in a way that makes them rad-hard," he said. "So it's definitely not game-over, it's just the next layer of the onion we need to address."

Image Credit: Shutterstock

Study Expands Types of Physics, Engineering Problems That Can Be Solved by Quantum Computers – HPCwire

Sept. 1, 2020 – A well-known quantum algorithm that is useful in studying and solving problems in quantum physics can be applied to problems in classical physics, according to a new study in the journal Physical Review A from University of Wisconsin–Madison assistant professor of physics Jeff Parker.

Quantum algorithms (sets of calculations that are run on a quantum computer, as opposed to a classical computer, to solve problems in physics) have mainly focused on questions in quantum physics. The new applications include a range of problems common to physics and engineering, and expand the types of questions that can be asked in those fields.

"The reason we like quantum computers is that we think there are quantum algorithms that can solve certain kinds of problems very efficiently in ways that classical computers cannot," Parker says. "This paper presents a new idea for a type of problem that has not been addressed directly in the literature before, but it can be solved efficiently using these same quantum computer types of algorithms."

The type of problem Parker was investigating is known as generalized eigenvalue problems, which broadly describe trying to find the fundamental frequencies or modes of a system. Solving them is crucial to understanding common physics and engineering questions, such as the stability of a bridge's design or, more in line with Parker's research interests, the stability and efficiency of nuclear fusion reactors.

As the system being studied becomes more and more complex (more components moving throughout three-dimensional space), so does the numerical matrix that describes the problem. A simple eigenvalue problem can be solved with a pencil and paper, but researchers have developed computer algorithms to tackle increasingly complex ones. With the supercomputers available today, more and more difficult physics problems are finding solutions.

"If you want to solve a three-dimensional problem, it can be very complex, with a very complicated geometry," Parker says. "You can do a lot on today's supercomputers, but there tends to be a limit. Quantum algorithms may be able to break that limit."

The specific quantum algorithm that Parker studied in this paper, known as quantum phase estimation, had previously been applied to so-called standard eigenvalue problems. However, no one had shown that it could be applied to the generalized eigenvalue problems that are also common in physics. Generalized eigenvalue problems introduce a second matrix that ups the mathematical complexity.

Parker took the quantum algorithm and extended it to generalized eigenvalue problems. He then looked to see what types of matrices could be used in this problem. If the matrix is sparse (meaning most of the numerical components that make it up are zero), the problem could be solved efficiently on a quantum computer.
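
For readers unfamiliar with the problem type, here is a minimal classical sketch of a sparse generalized eigenvalue problem, A x = λ B x, solved with an iterative sparse solver. (Quantum phase estimation is not shown; the matrices here are illustrative stand-ins, not the ones from Parker's paper.)

```python
# Sketch: a sparse generalized eigenvalue problem A x = lambda B x.
# A is a 1-D Laplacian (a stand-in for a physics operator) and B is a
# simple positive-definite "mass" matrix. Sparsity is what makes large
# systems tractable, classically or (per Parker's result) quantumly.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 1000
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
B = diags([4.0], offsets=[0], shape=(n, n), format="csc")

# Find the four lowest modes (eigenvalues nearest zero) via shift-invert.
vals, vecs = eigsh(A, k=4, M=B, sigma=0, which="LM")
print("lowest modes:", np.sort(vals))
```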

"What I showed is that there are certain types of generalized eigenvalue problems that do lead to a sparse matrix and therefore could be efficiently solved on a quantum computer," Parker says. "This type includes the very natural problems that often occur in physics and engineering, so this study provides motivation for applying these quantum algorithms more to generalized eigenvalue problems, because it hasn't been a big focus so far."

Parker emphasizes that quantum computers are in their infancy, and these classical physics problems are still best approached through classical computer algorithms.

"This study provides a step in showing that the application of a quantum algorithm to classical physics problems can be useful in the future, and the main advance here is it shows very clearly another type of problem to which quantum algorithms can be applied," Parker says.

The study was completed in collaboration with Ilon Joseph at Lawrence Livermore National Laboratory. Funding support was provided by the U.S. Department of Energy to Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 and U.S. DOE Office of Fusion Energy Sciences Quantum Leap for Fusion Energy Sciences (FWP SCW1680).

For additional images, visit https://www.physics.wisc.edu/2020/08/25/new-study-expands-types-of-physics-engineering-problems-that-can-be-solved-by-quantum-computers/

Source: University of Wisconsin–Madison

How Andersen Cheng plans to defend against the quantum computer – The Independent

Andersen Cheng has a way with striking and memorable analogies. "Boris Johnson's government is committing £1bn to building a Frankenstein's monster," he says. "I'm trying to build a cage without any government funding to stop it running wild." The monster in question is the quantum computer, which is a hacker's dream. The cage is what Post-Quantum was set up last year to create.

Cheng was born in Hong Kong but came to England to do his O-levels and A-levels. His parents sent him to a school in Devon. "They wanted me to be as far from London as possible," he says. He duly learned to drive a tractor and milk cows, but went on to study engineering at Imperial College and do an MBA. When he started working in the City at the end of the Eighties as a computer auditor, there were only six portable Compaq computers in the whole company and disdain for the techies from people still using calculators.

Cheng became head of credit risk at JP Morgan in the midst of the dotcom bubble. He recalls how Boo.com burnt through $150m in 18 months. "There just wasn't enough broadband speed for all those virtual mannequins spinning around," he says. After a spell in private equity, Cheng decided to break away and set up on his own as a consultant in the fast-growing realm of cryptography, working on top secret projects for the British government. "It was so classified even the project name was secret," he says.

What is the quantum internet? Everything you need to know about the weird future of quantum networks – ZDNet

It might all sound like a sci-fi concept, but building quantum networks is a key ambition for many countries around the world. Recently the US Department of Energy (DoE) published the first blueprint of its kind, laying out a step-by-step strategy to make the quantum internet dream come true, at least in a very preliminary form, over the next few years.

The US joined the EU and China in showing a keen interest in the concept of quantum communications. But what is the quantum internet exactly, how does it work, and what are the wonders that it can accomplish?

WHAT IS THE QUANTUM INTERNET?

The quantum internet is a network that will let quantum devices exchange some information within an environment that harnesses the weird laws of quantum mechanics. In theory, this would lend the quantum internet unprecedented capabilities that are impossible to carry out with today's web applications.

In the quantum world, data can be encoded in the state of qubits, which can be created in quantum devices like a quantum computer or a quantum processor. And the quantum internet, in simple terms, will involve sending qubits across a network of multiple quantum devices that are physically separated. Crucially, all of this would happen thanks to the whacky properties that are unique to quantum states.

That might sound similar to the standard internet. But sending qubits around through a quantum channel, rather than a classical one, effectively means leveraging the behavior of particles at their smallest scale (so-called "quantum states"), which have caused delight and dismay among scientists for decades.

And the laws of quantum physics, which underpin the way information will be transmitted in the quantum internet, are nothing short of unfamiliar. In fact, they are strange, counter-intuitive, and at times even seemingly supernatural.

And so to understand how the quantum ecosystem of the internet 2.0 works, you might want to forget everything you know about classical computing. Because not much of the quantum internet will remind you of your favorite web browser.

WHAT TYPE OF INFORMATION CAN WE EXCHANGE WITH QUANTUM?

In short, not much that most users are accustomed to. At least for the next few decades, therefore, you shouldn't expect to one day be able to jump onto quantum Zoom meetings.

Central to quantum communication is the fact that qubits, which harness the fundamental laws of quantum mechanics, behave very differently to classical bits.

As it encodes data, a classical bit can effectively only be in one of two states. Just like a light switch has to be either on or off, and just like a cat has to be either dead or alive, so does a bit have to be either 0 or 1.

Not so much with qubits. Instead, qubits are superposed: they can be 0 and 1 simultaneously, in a special quantum state that doesn't exist in the classical world. It's a little bit as if you could be both on the left-hand side and the right-hand side of your sofa, in the same moment.

The paradox is that the mere act of measuring a qubit means that it is assigned a state. A measured qubit automatically falls from its dual state, and is relegated to 0 or 1, just like a classical bit.

The whole phenomenon is called superposition, and lies at the core of quantum mechanics.
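
To make superposition and collapse concrete, here is a tiny numerical sketch of a qubit held in an equal superposition and then measured. (This is the standard textbook single-qubit model, written for illustration rather than taken from the article.)

```python
# Sketch: a qubit as a 2-entry state vector. Before measurement it is
# genuinely in both states; measurement picks 0 or 1 via the Born rule
# and collapses the state, exactly as the article describes.
import numpy as np

rng = np.random.default_rng()

state = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition of |0> and |1>

def measure(state):
    """Return an outcome drawn with probability |amplitude|^2, plus the collapsed state."""
    probs = np.abs(state) ** 2
    outcome = rng.choice([0, 1], p=probs)
    collapsed = np.zeros(2)
    collapsed[outcome] = 1.0  # after measurement the qubit is definitely 0 or 1
    return outcome, collapsed

outcomes = [measure(state)[0] for _ in range(1000)]
print("fraction of 1s:", sum(outcomes) / len(outcomes))  # ~0.5
```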

Unsurprisingly, qubits cannot be used to send the kind of data we are familiar with, like emails and WhatsApp messages. But the strange behavior of qubits is opening up huge opportunities in other, more niche applications.

QUANTUM (SAFER) COMMUNICATIONS

One of the most exciting avenues that researchers, armed with qubits, are exploring is security.

When it comes to classical communications, most data is secured by distributing a shared key to the sender and receiver, and then using this common key to encrypt the message. The receiver can then use their key to decode the data at their end.

The security of most classical communication today is based on an algorithm for creating keys that is difficult for hackers to break, but not impossible. That's why researchers are looking at making this communication process "quantum". The concept is at the core of an emerging field of cybersecurity called quantum key distribution (QKD).

QKD works by having one of the two parties encrypt a piece of classical data by encoding the cryptography key onto qubits. The sender then transmits those qubits to the other person, who measures the qubits in order to obtain the key values.

Measuring causes the state of the qubit to collapse; but it is the value that is read out during the measurement process that is important. The qubit, in a way, is only there to transport the key value.

More importantly, QKD means that it is easy to find out whether a third party has eavesdropped on the qubits during the transmission, since the intruder would have caused the key to collapse simply by looking at it.

If a hacker looked at the qubits at any point while they were being sent, this would automatically change the state of the qubits. A spy would inevitably leave behind a sign of eavesdropping which is why cryptographers maintain that QKD is "provably" secure.
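
A toy simulation can show where that "provable" security comes from. The sketch below is a heavily simplified, classical model of a BB84-style exchange (the protocol name, random-basis behavior and error statistics are textbook QKD, not details from this article): an eavesdropper who measures qubits in transit randomizes the mismatched-basis values, pushing the error rate on the shared key from roughly 0% to roughly 25%.

```python
# Sketch: BB84-style key exchange. Measuring a qubit in the wrong basis
# randomizes its value, so an eavesdropper (Eve) leaves a statistical
# fingerprint that Alice and Bob can detect by comparing a key sample.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)  # 0 = rectilinear, 1 = diagonal

def noisy_measure(bits, true_bases, measure_bases):
    """Toy collapse model: a wrong-basis measurement yields a random bit."""
    out = bits.copy()
    wrong = measure_bases != true_bases
    out[wrong] = rng.integers(0, 2, wrong.sum())
    return out

for eve_present in (False, True):
    channel = alice_bits
    if eve_present:  # Eve intercepts and measures in her own random bases
        channel = noisy_measure(alice_bits, alice_bases, rng.integers(0, 2, n))
    bob_bases = rng.integers(0, 2, n)
    bob_bits = noisy_measure(channel, alice_bases, bob_bases)
    sifted = bob_bases == alice_bases  # keep only matching-basis positions
    error = (bob_bits[sifted] != alice_bits[sifted]).mean()
    print(f"Eve present: {eve_present} -> sifted-key error rate: {error:.1%}")
# Expected: ~0% without Eve, ~25% with Eve.
```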

SO, WHY A QUANTUM INTERNET?

QKD technology is in its very early stages. The "usual" way to create QKD at the moment consists of sending qubits in a one-directional way to the receiver, through optic-fibre cables; but those significantly limit the effectiveness of the protocol.

Qubits can easily get lost or scattered in a fibre-optic cable, which means that quantum signals are very much error-prone, and struggle to travel long distances. Current experiments, in fact, are limited to a range of hundreds of kilometers.

There is another solution, and it is the one that underpins the quantum internet: to leverage another property of quantum, called entanglement, to communicate between two devices.

When two qubits interact and become entangled, they share particular properties that depend on each other. While the qubits are in an entangled state, any change to one particle in the pair will result in changes to the other, even if they are physically separated. The state of the first qubit, therefore, can be "read" by looking at the behavior of its entangled counterpart. That's right: even Albert Einstein called the whole thing "spooky action at a distance".

And in the context of quantum communication, entanglement could, in effect, teleport some information from one qubit to its entangled other half, without the need for a physical channel bridging the two during the transmission.
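
The correlation itself is easy to demonstrate numerically. This small sketch samples joint measurements of two entangled qubits in a standard textbook Bell state (it models the correlations only, not a full teleportation protocol, and is written for illustration):

```python
# Sketch: the Bell state (|00> + |11>) / sqrt(2) as a 4-entry state vector
# over two qubits. Sampling joint measurements shows the hallmark of
# entanglement: the two qubits always agree, however far apart they are.
import numpy as np

rng = np.random.default_rng()

bell = np.zeros(4)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)  # only |00> and |11> have amplitude

probs = np.abs(bell) ** 2
for sample in rng.choice(4, size=10, p=probs):
    qubit_a, qubit_b = (sample >> 1) & 1, sample & 1
    print(f"qubit A = {qubit_a}, qubit B = {qubit_b}")  # never 01 or 10
```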

HOW DOES ENTANGLEMENT WORK?

The very concept of teleportation entails, by definition, the lack of a physical network bridging between communicating devices. But it remains that entanglement needs to be created in the first place, and then maintained.

To carry out QKD using entanglement, it is necessary to build the appropriate infrastructure to first create pairs of entangled qubits, and then distribute them between a sender and a receiver. This creates the "teleportation" channel over which cryptography keys can be exchanged.

Specifically, once the entangled qubits have been generated, you have to send one half of the pair to the receiver of the key. An entangled qubit can travel through networks of optical fibre, for example; but those are unable to maintain entanglement after about 60 miles.

Qubits can also be kept entangled over large distances via satellite, but covering the planet with outer-space quantum devices is expensive.

There are still huge engineering challenges, therefore, to building large-scale "teleportation networks" that could effectively link up qubits across the world. Once the entanglement network is in place, the magic can start: linked qubits won't need to run through any form of physical infrastructure anymore to deliver their message.

During transmission, therefore, the quantum key would virtually be invisible to third parties, impossible to intercept, and reliably "teleported" from one endpoint to the next. The idea will resonate well with industries that deal with sensitive data, such as banking, health services or aircraft communications. And it is likely that governments sitting on top secret information will also be early adopters of the technology.

WHAT ELSE COULD WE DO WITH THE QUANTUM INTERNET?

'Why bother with entanglement?' you may ask. After all, researchers could simply find ways to improve the "usual" form of QKD. Quantum repeaters, for example, could go a long way in increasing communication distance in fibre-optic cables, without having to go so far as to entangle qubits.

That is without accounting for the immense potential that entanglement could have for other applications. QKD is the most frequently discussed example of what the quantum internet could achieve, because it is the most accessible application of the technology. But security is far from being the only field that is causing excitement among researchers.

The entanglement network used for QKD could also be used, for example, to provide a reliable way to build up quantum clusters made of entangled qubits located in different quantum devices.

Researchers won't need a particularly powerful piece of quantum hardware to connect to the quantum internet; in fact, even a single-qubit processor could do the job. But by linking together quantum devices that, as they stand, have limited capabilities, scientists expect that they could create a quantum supercomputer to surpass them all.

By connecting many smaller quantum devices together, therefore, the quantum internet could start solving the problems that are currently impossible to achieve in a single quantum computer. This includes expediting the exchange of vast amounts of data, and carrying out large-scale sensing experiments in astronomy, materials discovery and life sciences.

For this reason, scientists are convinced that we could reap the benefits of the quantum internet before tech giants such as Google and IBM even achieve quantum supremacy: the moment when a single quantum computer will solve a problem that is intractable for a classical computer.

Google and IBM's most advanced quantum computers currently sit around 50 qubits, which, on its own, is much less than is needed to carry out the phenomenal calculations needed to solve the problems that quantum research hopes to address.

On the other hand, linking such devices together via quantum entanglement could result in clusters worth several thousands of qubits. For many scientists, creating such computing strength is in fact the ultimate goal of the quantum internet project.

WHAT COULDN'T WE DO WITH THE QUANTUM INTERNET?

For the foreseeable future, the quantum internet could not be used to exchange data in the way that we currently do on our laptops.

Imagining a generalized, mainstream quantum internet would require anticipating a few decades (or more) of technological advancements. As much as scientists dream of the future of the quantum internet, therefore, it is impossible to draw parallels between the project as it currently stands, and the way we browse the web every day.

A lot of quantum communication research today is dedicated to finding out how to best encode, compress and transmit information thanks to quantum states. Quantum states, of course, are known for their extraordinary densities, and scientists are confident that one node could teleport a great deal of data.

But the type of information that scientists are looking at sending over the quantum internet has little to do with opening up an inbox and scrolling through emails. And in fact, replacing the classical internet is not what the technology has set out to do.

Rather, researchers are hoping that the quantum internet will sit next to the classical internet, and would be used for more specialized applications. The quantum internet will perform tasks that can be done faster on a quantum computer than on classical computers, or which are too difficult to perform even on the best supercomputers that exist today.

SO, WHAT ARE WE WAITING FOR?

Scientists already know how to create entanglement between qubits, and they have even been successfully leveraging entanglement for QKD.

China, a long-time investor in quantum networks, has broken records on satellite-induced entanglement. Chinese scientists recently established entanglement and achieved QKD over a record-breaking 745 miles.

The next stage, however, is scaling up the infrastructure. All experiments so far have only connected two end-points. Now that point-to-point communication has been achieved, scientists are working on creating a network in which multiple senders and multiple receivers could exchange over the quantum internet on a global scale.

The idea, essentially, is to find the best ways to churn out lots of entangled qubits on demand, over long distances, and between many different points at the same time. This is much easier said than done: for example, maintaining the entanglement between a device in China and one in the US would probably require an intermediate node, on top of new routing protocols.

And countries are opting for different technologies when it comes to establishing entanglement in the first place. While China is picking satellite technology, optical fibre is the method favored by the US DoE, which is now trying to create a network of quantum repeaters that can augment the distance that separates entangled qubits.

In the US, particles have remained entangled through optical fibre over a 52-mile "quantum loop" in the suburbs of Chicago, without the need for quantum repeaters. The network will soon be connected to one of the DoE's laboratories to establish an 80-mile quantum testbed.

In the EU, the Quantum Internet Alliance was formed in 2018 to develop a strategy for a quantum internet, and demonstrated entanglement over 31 miles last year.

For quantum researchers, the goal is to scale the networks up to a national level first, and one day even internationally. The vast majority of scientists agree that this is unlikely to happen before a couple of decades. The quantum internet is without doubt a very long-term project, with many technical obstacles still standing in the way. But the unexpected outcomes that the technology will inevitably bring about on the way will make for an invaluable scientific journey, complete with a plethora of outlandish quantum applications that, for now, cannot even be predicted.
