Archive for the ‘Quantum Computer’ Category

Quantum internet one step closer to reality with innovative wavelength switch – E&T Magazine

Engineers from Purdue University have developed a device to address a complication that has stood in the way of developing quantum networks large enough to reliably support more than a handful of users.

The engineers' approach, described in Optica, could form part of the groundwork for establishing a quantum internet: a large number of interconnected quantum computers, quantum sensors and other quantum technologies exchanging data.

They developed a programmable switch that can be used to adjust how much data goes to each user in the network by selecting and redirecting wavelengths of light carrying the different data channels, making it possible to increase the number of users without adding to photon loss as the network grows. When photons are lost - which becomes more likely the further they have to travel through fibre-optic networks - their associated quantum information is lost.

"We show a way to do wavelength routing with just one piece of equipment, a wavelength-selective switch, to, in principle, build a network of 12 to 20 users, maybe even more," said Professor Andrew Weiner, an electrical and computer engineer. Previous approaches have required physically interchanging dozens of fixed optical filters tuned to individual wavelengths, which made it impractical to adjust connections between users and made photon loss more likely.

Rather than adding these fixed filters every time a new user joins the network (which makes scaling an awkward process), engineers can simply program the wavelength-selective switch to direct data-carrying wavelengths to each new user. This would reduce operational and maintenance costs, in addition to making the quantum internet more efficient.

The switch could also be programmed to adjust bandwidth in response to a user's needs; this is not possible with fixed optical filters. This is based on similar technology to that used for adjusting bandwidth for classical communication, a widespread practice today. Like classical light-based communications, the switch is also capable of using a flex grid to partition bandwidth to users at a variety of wavelengths and locations, rather than being restricted to a series of fixed wavelengths, each with a fixed bandwidth.
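
To make the idea concrete, here is a minimal sketch in Python of how a flex-grid allocation table for a wavelength-selective switch might be planned. The channel spacings, slice widths and the plan_flex_grid helper are illustrative assumptions for the example, not the Purdue team's actual control software.

```python
# Hypothetical illustration of flex-grid channel planning for a
# wavelength-selective switch (WSS). All numbers and names are assumptions
# made for this sketch, not the Purdue team's implementation.
from dataclasses import dataclass

@dataclass
class Allocation:
    user: str
    centre_nm: float    # centre wavelength of the allocated slice (nm)
    width_ghz: float    # slice width; a flex grid allows non-uniform widths

def plan_flex_grid(users, start_nm=1550.0, slice_ghz=12.5, step_nm=0.8):
    """Assign each user a wavelength slice whose width scales with demand."""
    allocations, centre = [], start_nm
    for name, demand in users:
        allocations.append(Allocation(name, centre, slice_ghz * demand))
        centre += step_nm * demand       # leave room before the next slice
    return allocations

# Adding a user is a matter of reprogramming the table, not installing a
# new fixed optical filter:
for a in plan_flex_grid([("alice", 1), ("bob", 2), ("carol", 1)]):
    print(f"{a.user}: {a.centre_nm:.2f} nm, {a.width_ghz:.1f} GHz")
```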

Forming connections between users of a quantum internet and adjusting bandwidth means distributing entanglement: a quantum-mechanical phenomenon in which two or more particles are created with entangled states. This means that they have a fixed relationship to each other no matter the distance between them; change the state of one and the states of the others change instantaneously. Entanglement is one of the quantum phenomena at the core of quantum information and quantum computing.
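
As a standard textbook illustration (not specific to the Purdue experiment), a pair of entangled photons shared between stations A and B can be described by the Bell state:

```latex
\[
  \lvert \Phi^{+} \rangle \;=\; \tfrac{1}{\sqrt{2}}
  \bigl( \lvert 0 \rangle_{A} \lvert 0 \rangle_{B}
       + \lvert 1 \rangle_{A} \lvert 1 \rangle_{B} \bigr)
\]
```

Measuring the photon at A as 0 (or 1) means the photon at B will be found in the same state, however far apart the two stations are; this correlation is the resource a quantum network distributes.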

"When people talk about a quantum internet, it's this idea of generating entanglement remotely between two different stations, such as between quantum computers," said PhD candidate Navin Lingaraju. "Our method changes the rate at which entangled photons are shared between different users. These entangled photons might be used as a resource to entangle quantum computers or quantum sensors at the two different stations."

View original post here:
Quantum internet one step closer to reality with innovative wavelength switch - E&T Magazine

Inside the race to keep secrets safe from the quantum computing revolution – Telegraph.co.uk

"We have done some work with the NCSC but they just do not have the budget to fund this kind of development," he says.

His fear is that the UK could experience a brain drain of cryptography talent to other countries like Canada and France that have allocated more government funding to the field.

In January, the French government announced €150m (£130m) in funding for quantum-safe encryption as part of a larger €1.8bn grant for quantum computing.

Insiders with links to the security services say that the Government is carrying out its own secret work on quantum safe encryption instead of relying on start-ups.

Dr Ian Levy, the technical director of the NCSC, says the organisation "continues to work closely with industry, academia and international partners" on the subject. "The NCSC is committed to ensuring the UK is well-prepared for quantum-safe cryptography," he adds.

The threat of quantum computing breaking encryption could be solved within months, however. Many organisations, including PQShield and Post-Quantum, have been taking part in a global competition run by the US National Institute of Standards and Technology (NIST).

The contest, announced in 2016, is nearing completion. Early next year, NIST will announce the new standard for quantum-safe encryption, essentially replacing RSA. "It will change the world not for the next decade, but for the next 40 or 50 years," Cheng says.

If everything goes smoothly, in several years the encryption keeping secrets safe will be quietly swapped out so that quantum computers cannot easily crack messages.

"I think the answer to the threat should be transparent for users. They should have basically the same experience they have today. They shouldn't have to install some new bit of kit," says Alan Woodward, a computer security expert and visiting professor at the University of Surrey.

But while NIST's competition is nearing its end, there's a rival scheme that has already been launched around the world.

Telecom businesses such as BT have spent millions of pounds creating specialist networks that use a system called quantum key distribution. It uses a stream of single photons to transfer the secret encryption keys used to decrypt data securely.

Instead of a new encryption algorithm, this scheme relies on kilometres of fibre optic cables to transfer keys and has been the favoured choice of physicists who prefer its reliance on photons rather than mathematics.
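
For illustration, the sifting logic of the canonical quantum key distribution protocol, BB84, can be sketched in a few lines of Python. This is a toy simulation only; real deployments such as BT's involve actual photon sources, detectors and error correction, and may use different protocol variants.

```python
# Minimal BB84 sketch (illustrative only; not any vendor's implementation).
import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

n = 32
alice_bits  = random_bits(n)           # key material Alice wants to send
alice_bases = random_bits(n)           # 0 = rectilinear, 1 = diagonal basis
bob_bases   = random_bits(n)           # Bob measures in randomly chosen bases

# Bob's result matches Alice's bit only when the bases agree;
# otherwise the outcome is random (modelled here with a coin flip).
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both sides publicly compare bases and keep matching positions.
sifted_key = [a for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)
              if ab == bb]
print(f"Sifted key length: {len(sifted_key)} of {n} raw bits")
```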

See the article here:
Inside the race to keep secrets safe from the quantum computing revolution - Telegraph.co.uk

Why now is the right time to invest in European quantum computing – Sifted

John Martinis, the lead scientist who built Google's first computer to achieve quantum supremacy, recently left the tech giant to join Silicon Quantum Computing, a 2017-founded startup based in Sydney.

Meanwhile Terra Quantum, a Swiss-based quantum computing startup, has celebrated another big hire with high-profile physicist Valerii Vinokur from the Argonne National Laboratory in the US.

Both moves are a sign that smaller startups are able to compete with the big players in the brewing war for talent in the quantum sector, as the industry begins to move out of the lab towards commercial applications.

Markus Pflitsch, cofounder of Terra Quantum, says that big-shot professors joining startups is a sign that smaller quantum companies are being taken increasingly seriously. He adds that the field is so new that new companies may well be just as successful as the likes of IBM, Google and Microsoft.

"There is a trend for academics going to startups now, and it is a good sign when the seasoned professionals start joining," says Pflitsch.

Competition is fierce though.

European governments' recent moves to pour funding into quantum projects (France recently pledged to spend €1.8bn in the sector and Germany €2bn) are partly motivated by a desire to avoid losing leading academics from the region, says Christophe Jurczak, founder of Quantonation.

"The French government feels very strongly that the country is suffering from a brain drain in the field of AI, and they don't want that to happen with quantum while there is more time to prepare," says Jurczak, who was instrumental in helping formulate the French plan.

Quantum computing companies are raising ever-larger funding rounds and using the money for hiring.

Riverlane, a Cambridge-based startup which raised a £14.6m round last month, is looking to double its team of 26 this year. Cambridge Quantum Computing, which raised a $45m early VC round in December, has gone from 37 employees in 2018 to close to 90 now and is hiring for 30+ more roles. Finnish superconducting quantum computer maker IQM, which raised a €39m Series A round in November, has more than doubled its headcount in the last year.

"The shortage in the industry is no longer the money, it is the brainpower," says Pflitsch, who now has a staff of around 80 at Terra Quantum. "The talent is so important, we can't do it without these guys. If you have those brains at Google or Terra Quantum it doesn't matter, it is a fair battle."

Big advances are still happening at relatively unknown teams in labs all around the world. In December a group based at the University of Science and Technology of China demonstrated quantum superiority (the Holy Grail of the sector) by getting a photon-based quantum system to do in 20 seconds what would take a supercomputer 600m years. The demonstration outdid Google's 2019 demonstration of quantum superiority by several orders of magnitude.

"If a lab can do that, it shows that small companies can compete in quantum if they have a breakthrough," says Daniel Carew, principal at IQ Capital.

French quantum computing company Pasqal is thought to have achieved a record in the simulation of quantum systems with 196 qubits in the lab. If confirmed, this would give Europe a quantum advantage.

With so much still undeveloped, the quantum computing world has an amateur-enthusiast flavour to it, with big developments able to come from unexpected places, much like the early days of the personal computer in the 1970s.

"It's a bit like the Homebrew Computer Club," says Steve Brierley, chief executive of Riverlane, referencing the early hobbyist computer club that ran in a garage in California's Menlo Park between 1975 and 1986, and which became the training ground for tech entrepreneurs like Steve Jobs and Steve Wozniak.

"There's a lot of tinkering. It's happening over the cloud rather than in someone's shed, but there is that same excitement."

It is still not clear which kind of quantum computing will be the dominant technology. Many of the big bets are on superconducting quantum computers, which operate at temperatures close to absolute zero, but there is also investment going into photon-based systems, where photons are bounced into a quantum state by a series of mirrors; systems using trapped ions; and silicon-based systems, which don't have to operate at quite the super-low temperatures of the superconducting systems.

Researchers from Microsoft and the University of Sydney recently announced they had developed a quantum computing system that uses the same kind of complementary metal-oxide-semiconductor (CMOS) chips already used in classical computing.

"It is still early days of knowing which style of quantum will prevail," says Pflitsch. If they have backed the wrong technology, he says, some companies may disappear overnight.

Much of the big investment so far has gone into superconducting qubits, in part because the refrigeration technology needed for this is more established and available. But if there are big breakthroughs in one of the other areas that make the qubits more reliable or scalable, that approach could become the dominant technology.

"I think atoms and ions will be dominant first; they have many advantages as for scalability (atoms) and fidelity (ions). In the longer term, solid-state approaches (superconducting qubits, spins, photons) should catch up," says Christophe Jurczak, founder of Quantonation, the quantum-focused VC fund.

It is also entirely possible that we could end up with multiple types of quantum computer co-existing, each with a particular niche it is best suited for.

"The big dream of a general quantum computer may not be what happens," says IQ Capital's Carew. "It may be more like the early days of microprocessors where you had a lot of different types, each with a specific function."

This is part one of a series of four articles we are running this week on Europe's quantum computing industry. Part two, on France's quantum strategy, will be published tomorrow.

See the article here:
Why now is the right time to invest in European quantum computing - Sifted

Google Teams With D-Wave in Massive Quantum Computing Leap, Cracking Simulation Problem – The Daily Hodl

Google and D-Wave Systems say they've achieved a new milestone in the world of quantum computing.

In a press release, D-Wave says its quantum device has far outpaced a classical computer in a direct competition to complete a difficult computational problem.

The device successfully modeled the behavior of a spinning two-dimensional quantum magnet, and was able to complete the simulation at breakneck speed.

In collaboration with scientists at Google, D-Wave demonstrated a computational performance advantage, increasing with both simulation size and problem hardness, to over 3 million times that of corresponding classical methods.

Notably, this work was achieved on a practical application with real-world implications, simulating the topological phenomena behind the 2016 Nobel Prize in Physics.

Quantum devices leverage the unique properties of quantum physics to perform certain calculations at revolutionary speeds.

D-Wave says its study proves that quantum computers can more efficiently and effectively tackle tough simulations.

"What we see is a huge benefit in absolute terms, with the scaling advantage in temperature and size that we would hope for."

Quantum computing threatens to break the cryptographic algorithms that keep the internet and crypto assets secure. Ripple CTO David Schwartz says he believes developers have about eight years to develop quantum-proof methods to keep digital infrastructures secure.

See the article here:
Google Teams With D-Wave in Massive Quantum Computing Leap, Cracking Simulation Problem - The Daily Hodl

The Three Utilities Problem | Graph Theory Breakthrough – Popular Mechanics

Jacob Holm was flipping through proofs from an October 2019 research paper he and colleague Eva Rotenberg, an associate professor in the department of applied mathematics and computer science at the Technical University of Denmark, had published online when he discovered their findings had unwittingly given away a solution to a centuries-old graph problem.

Holm, an assistant professor of computer science at the University of Copenhagen, was relieved no one had caught the solution first. "It was a real 'Eureka!' moment," he says. "It suddenly seemed obvious."

Holm and Rotenberg were trying to find a shortcut for determining whether a graph is planar, that is, whether it can be drawn flat on a surface without any of its lines crossing each other (flat drawings of a graph are also called embeddings).

To mathematicians, a graph often looks different than what most of us are taught in school. A graph in this case is any number of points, called nodes, connected by pairwise relations, called edges. In other words, an edge is a curve that connects two nodes. Under this definition, a graph can represent anything from the complex wiring inside a computer chip to a road map of a city, in which the streets of Manhattan could be represented as edges, and their intersections represented as nodes. The study of such graphs is called graph theory.

Engineers need to find planarity in a graph when, for example, they are designing a computer chip without a crossed wire. But assessing for planarity amid the addition and removal of edges is difficult without drawing the graph yourself and trying not to cross any lines (See The Three Utilities Problem below, which was originally published in an issue of The Strand Magazine in 1913).

"Assessing for planarity becomes even more complicated in larger graphs with lots of nodes and edges," says Rotenberg. This is a real-world issue. Quantum computer chips, for instance, are highly advanced, and finding efficient ways to assess their planarity without wasting time and money is crucial to their development.

These three houses each need access to water, gas, and electricity, but for safety reasons the lines connecting the utilities and houses cannot cross. Grab a sheet of paper, draw out this scenario, and try to connect all three houses to all three utilities without any two lines crossing. Check the solution at the bottom of this page when you think you have the right answer.
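
If you would rather let software check your answer, the puzzle's graph is the complete bipartite graph K3,3, and a graph library such as networkx (assumed to be installed here) can test planarity directly; a minimal sketch:

```python
import networkx as nx

# Three houses fully connected to three utilities form the complete
# bipartite graph K3,3.
k33 = nx.complete_bipartite_graph(3, 3)

# check_planarity reports whether a crossing-free (planar) drawing exists.
is_planar, certificate = nx.check_planarity(k33)
print("Planar drawing possible?", is_planar)
```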

In their original 2019 paper published on the preprint server arXiv (where research often first sees the light of day before peer review), Holm and Rotenberg classified a type of embedding called a balanced, or "good", embedding.

Holm explains that these good embeddings tend to "balance the [time] costs of inserting edges so that no possible edge insertion costs too much compared to the rest." This is a concept borrowed from balanced decision trees in computer science, which are designed with evenly dispersed branches for minimized search time. Put another way, good embeddings are easier to add new edges to without violating planarity.

If you were to look at it, Holm says, a good embedding would be simple, unconvoluted. The standard example is the so-called Ladder Graph; a balanced embedding of this graph looks exactly like a ladder. But Holm says: "In an unbalanced embedding, it is hardly recognizable."

It seems subjective to say the Ladder Graph is good and its alternatives are bad, but Holm and Rotenberg articulated in their paper why those statements were mathematically true. "Putting it very bluntly, we formally quantified why something is a terrible drawing," says Rotenberg, referring to a bad embedding. What the pair didn't realize at the time was that their class of good embeddings played an essential role in speeding up the process of dynamic planarity testing.

When adding a new edge to a planar graph is required, there are two scenarios: there is a safe way to add the edge, possibly after modifying the drawing, or no drawing admitting the edge exists. But in some cases, the embedding of a graph itself might be disguising a way the edge could be inserted in planar fashion. To reveal those alternative paths, mathematicians flip an embedding to change its orientation while keeping it mathematically identical, because the relationship between the connected nodes and edges hasn't changed.

These flips might make it possible to add edges between two newly arranged nodes, edges that would have otherwise violated planarity. Holm and Rotenberg discovered that the flips leading to successful edge insertion and deletion tended to fall into their class of so-called good embeddings. Similarly, these good embeddings require fewer flips overall to successfully add new edges. A win-win.

The pair have suggested numerous applications for their work, including chip design, surface meshes, and road networks, but Rotenberg has admitted: "What attracts us to this problem is its puzzle-like nature." The two are cautious to predict more commercial applications because completing flips in real-world graph designs can be challenging.

However, they say that their approach to assessing dynamic graphs (i.e., graphs that change via insertions and deletions) could impact how mathematicians approach similar problems. Essentially, while their algorithm assesses planarity, it also tracks and calculates changes to the graphs, performing what is called a recourse analysis, says Rotenberg.

But such data gathering isn't superfluous. Rotenberg argues their solution shows that recourse analysis could have algorithmic applications in addition to being interesting in its own right, because here, it led to their efficient planarity test.

Analyzing dynamic mathematical concepts is an open field, she says, but therein lies the potential. The breakthroughs might have already happened; they're just hidden in the process.

Solution to the Three Utilities Problem: It's actually impossible in two-dimensional space.

Editor's Note: This story first appeared in the March/April 2021 issue of Popular Mechanics magazine.

Read the original here:
The Three Utilities Problem | Graph Theory Breakthrough - Popular Mechanics