Archive for the ‘Quantum Computer’ Category

Europe moves to exclude neighbors from its quantum and space research – Science Magazine

A department overseen by European Union research commissioner Mariya Gabriel wants to safeguard strategic research by barring non-EU researchers.

By Nicholas Wallace | Mar. 11, 2021, 4:25 PM

In a sign of growing national tensions over the control of strategic research, the European Commission is trying to block countries outside the European Union from participating in quantum computing and space projects under Horizon Europe, its new research funding program.

The proposed calls, which must still be approved by delegates from the 27 EU member states in the coming weeks, would shut out researchers in countries accustomed to full access to European research programs, including Switzerland, the United Kingdom, and Israel. European Economic Area (EEA) countries Norway, Liechtenstein, and Iceland would be barred from space research calls while remaining eligible for quantum computing projects.

Research advocates see the proposed restrictions as self-defeating for all parties, including the European Union. "It would be a classic lose-lose, with researchers in all countries having to work harder, and spend more, to make progress in these fields," says Vivienne Stern, director of UK Universities International. The unexpected news has upset some leaders of existing collaborations and left them scrambling to find out whether they will need to exclude partners, or even drop out themselves, if they want their projects to be eligible for further funding. "It is really a pity because we have a tight and fruitful relationship with our partners in the U.K.," says Sandro Mengali, director of the Italian research nonprofit Consorzio C.R.E.O. and coordinator of an EU-funded project developing heat shields for spacecraft.

In 2018, when the European Commission first announced plans for the €85 billion, 7-year Horizon Europe program, it said it would be "open to the world." Switzerland, Israel, the EEA nations, and other countries have long paid to associate with EU funding programs like Horizon Europe, giving their researchers the right to apply for grants, just like those in EU member states. After leaving the European Union, the United Kingdom struck a deal in December 2020 to join Horizon Europe, which put out its first grant calls last month through the European Research Council.

But more recently, "strategic autonomy" and "technological sovereignty" have become watchwords among policymakers in Brussels, who argue the European Union should domestically produce components in key technologies, such as quantum computers and space technology. Those views influenced the Commission's research policy department, overseen by EU research commissioner Mariya Gabriel, which drafted the calls and their eligibility rules, first revealed by Science|Business. The draft says the restrictions are necessary to "safeguard the Union's strategic assets, interests, autonomy, or security."

"It's a bit of a contradiction," says a Swiss government official who asked to remain anonymous because of the sensitivity of forthcoming discussions. "You want to open the program to the world and work with the best. But the core group of associated countries with whom you're used to working, suddenly you exclude them and force them to work with the competitors." The official says the Commission gave no warnings the proposal was coming, but believes the combination of Brexit and the COVID-19 crisis, in which Europe has struggled to secure access to vaccines, masks, and other equipment, may have further spurred Europe to guard its technologies. Negotiations on Swiss membership in Horizon Europe have not begun, but the country intends to join.

The restrictions affect €170 million in funding that could be available in the next few months. The affected areas include quantum computing, quantum communications, satellite communications, space transport, launchers, and "space technologies for European non-dependence and competitiveness." Projects relating to the Copernicus Earth-observation system and the Galileo satellite navigation programs would remain largely open to associated countries.

Shutting out the associated countries would be a "lost opportunity" and could slow progress in quantum computing, says Lieven Vandersypen, a quantum nanoscientist at the Delft University of Technology. "To me, it doesn't make sense." Vandersypen contributes to an EU-funded project that is investigating how to create the basic bits of a quantum computer from cheap and readily available silicon. The project includes U.K. and Swiss researchers at University College London and the University of Basel. "They are in there for a good reason," Vandersypen says. "They bring in really valuable expertise." With a few years left on the grant, the project isn't in any immediate danger. But the exclusions are bad for long-term planning, Vandersypen says.

Non-EU researchers working on a €150 million European quantum flagship initiative set up in 2018 are also upset by the sudden reversal and wonder about their future status. "We discuss with our partners in Europe; they ask us, 'Can you join?' And we don't know; that's probably the worst thing," says Hugo Zbinden, a quantum physicist at the University of Geneva and coordinator of one of these flagship projects, QRANGE, which is investigating how a quantum random number generator can be used to improve encryption.

The restrictions are not yet set in stone; national delegates could reject the draft calls and ask the Commission to open them up. But member states accepted the legal basis for the restrictions last year, when they agreed to the Horizon Europe legislation. "Of course, you hope that we will be in," Zbinden says. "For the time being, we are waiting for some news."


Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding – Scientific American

Like great art, great thought experiments have implications unintended by their creators. Take philosopher John Searle's Chinese room experiment. Searle concocted it to convince us that computers don't really think as we do; they manipulate symbols mindlessly, without understanding what they are doing.

Searle meant to make a point about the limits of machine cognition. Recently, however, the Chinese room experiment has goaded me into dwelling on the limits of human cognition. We humans can be pretty mindless too, even when engaged in a pursuit as lofty as quantum physics.

Some background. Searle first proposed the Chinese room experiment in 1980. At the time, artificial intelligence researchers, who have always been prone to mood swings, were cocky. Some claimed that machines would soon pass the Turing test, a means of determining whether a machine thinks.

Computer pioneer Alan Turing proposed in 1950 that questions be fed to a machine and a human. If we cannot distinguish the machine's answers from the human's, then we must grant that the machine does indeed think. Thinking, after all, is just the manipulation of symbols, such as numbers or words, toward a certain end.

Some AI enthusiasts insisted that thinking, whether carried out by neurons or transistors, entails conscious understanding. Marvin Minsky espoused this "strong AI" viewpoint when I interviewed him in 1993. After defining consciousness as a record-keeping system, Minsky asserted that LISP software, which tracks its own computations, is "extremely conscious," much more so than humans. When I expressed skepticism, Minsky called me "racist."

Back to Searle, who found strong AI annoying and wanted to rebut it. He asks us to imagine a man who doesn't understand Chinese sitting in a room. The room contains a manual that tells the man how to respond to a string of Chinese characters with another string of characters. Someone outside the room slips a sheet of paper with Chinese characters on it under the door. The man finds the right response in the manual, copies it onto a sheet of paper and slips it back under the door.

Unknown to the man, he is replying to a question, like "What is your favorite color?", with an appropriate answer, like "Blue." In this way, he mimics someone who understands Chinese even though he doesn't know a word. That's what computers do, too, according to Searle. They process symbols in ways that simulate human thinking, but they are actually mindless automatons.
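As a concrete (if cartoonish) illustration of the thought experiment, the room's manual can be sketched as a pure lookup table. This toy Python program is not from Searle's paper; the phrases and the fallback reply are invented for illustration. The point is that it produces plausible responses with no representation of meaning at all:

```python
# The "manual": a mapping from input symbols to output symbols.
# The program never models what the characters mean.
manual = {
    "你最喜欢什么颜色？": "蓝色。",  # "What is your favorite color?" -> "Blue."
    "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
}

def chinese_room(message: str) -> str:
    """Return the manual's canned response; a stock apology if no rule matches."""
    return manual.get(message, "对不起。")  # "Sorry."

print(chinese_room("你好吗？"))  # 我很好。
```

Swap the dictionary for a trillion-rule table (or a neural network) and, on Searle's view, nothing essential changes: symbol manipulation in, symbol manipulation out.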

Searle's thought experiment has provoked countless objections. Here's mine. The Chinese room experiment is a splendid case of begging the question (not in the sense of raising a question, which is what most people mean by the phrase nowadays, but in the original sense of circular reasoning). The meta-question posed by the Chinese room experiment is this: How do we know whether any entity, biological or non-biological, has a subjective, conscious experience?

When you ask this question, you are bumping into what I call the solipsism problem. No conscious being has direct access to the conscious experience of any other conscious being. I cannot be absolutely sure that you or any other person is conscious, let alone that a jellyfish or smartphone is conscious. I can only make inferences based on the behavior of the person, jellyfish or smartphone.

Now, I assume that most humans, including those of you reading these words, are conscious, as I am. I also suspect that Searle is probably right, and that an intelligent program like Siri only mimics understanding of English. It doesn't feel like anything to be Siri, which manipulates bits mindlessly. That's my guess, but I can't know for sure, because of the solipsism problem.

Nor can I know what it's like to be the man in the Chinese room. He may or may not understand Chinese; he may or may not be conscious. There is no way of knowing, again, because of the solipsism problem. Searle's argument assumes that we can know what's going on, or not going on, in the man's mind, and hence, by implication, what's going on or not in a machine. His flawed initial assumption leads to his flawed, question-begging conclusion.

That doesn't mean the Chinese room experiment has no value. Far from it. The Stanford Encyclopedia of Philosophy calls it "the most widely discussed philosophical argument in cognitive science to appear since the Turing Test." Searle's thought experiment continues to pop up in my thoughts. Recently, for example, it nudged me toward a disturbing conclusion about quantum mechanics, which I've been struggling to learn over the last year or so.

Physicists emphasize that you cannot understand quantum mechanics without understanding its underlying mathematics. You should have, at a minimum, a grounding in logarithms, trigonometry, calculus (differential and integral) and linear algebra. Knowing Fourier transforms wouldnt hurt.

That's a lot of math, especially for a geezer and former literature major like me. I was thus relieved to discover Q Is for Quantum by physicist Terry Rudolph. He explains superposition, entanglement and other key quantum concepts with a relatively simple mathematical system, which involves arithmetic, a little algebra and lots of diagrams with black and white balls falling into and out of boxes.

Rudolph emphasizes, however, that some math is essential. Trying to grasp quantum mechanics without any math, he says, is like having van Gogh's Starry Night described in words to you by someone who has only seen a black and white photograph. One that a dog chewed.

But here's the irony. Mastering the mathematics of quantum mechanics doesn't make it easier to understand and might even make it harder. Rudolph, who teaches quantum mechanics and co-founded a quantum-computer company, says he feels cognitive dissonance when he tries to connect quantum formulas to sensible physical phenomena.

Indeed, some physicists and philosophers worry that physics education focuses too narrowly on formulas and not enough on what they mean. Philosopher Tim Maudlin complains in Philosophy of Physics: Quantum Theory that most physics textbooks and courses do not present quantum mechanics as a theory, that is, a description of the world; instead, they present it as a recipe, or set of mathematical procedures, for accomplishing certain tasks.

Learning the recipe can help you predict the results of experiments and design microchips, Maudlin acknowledges. But if a physics student happens to be unsatisfied with just learning these mathematical techniques for making predictions and asks instead what the theory claims about the physical world, she or he is likely to be met with a canonical response: "Shut up and calculate!"

In his book, Maudlin presents several attempts to make sense of quantum mechanics, including the pilot-wave and many-worlds models. His goal is to show that we can translate the Schrödinger equation and other formulas into intelligible accounts of what's happening in, say, the double-slit experiment. But to my mind, Maudlin's ruthless examination of the quantum models subverts his intention. Each model seems preposterous in its own way.

Pondering the plight of physicists, I'm reminded of an argument advanced by philosopher Daniel Dennett in From Bacteria to Bach and Back: The Evolution of Minds. Dennett elaborates on his long-standing claim that consciousness is overrated, at least when it comes to doing what we need to do to get through a typical day. We carry out most tasks with little or no conscious attention.

Dennett calls this "competence without comprehension." Adding insult to injury, Dennett suggests that we are virtual zombies. When philosophers refer to zombies, they mean not the clumsy, grunting cannibals of The Walking Dead but creatures that walk and talk like sentient humans but lack inner awareness.

When I reviewed Dennett's book, I slammed him for downplaying consciousness and overstating the significance of unconscious cognition. Competence without comprehension may apply to menial tasks like brushing your teeth or driving a car but certainly not to science and other lofty intellectual pursuits. Maybe Dennett is a zombie, but I'm not! That, more or less, was my reaction.

But lately I've been haunted by the ubiquity of competence without comprehension. Quantum physicists, for example, manipulate differential equations and matrices with impressive competence (enough to build quantum computers!) but no real understanding of what the math means. If physicists end up like information-processing automatons, what hope is there for the rest of us? After all, our minds are habituation machines, designed to turn even complex tasks, like being a parent, husband or teacher, into routines that we perform by rote, with minimal cognitive effort.

The Chinese room experiment serves as a metaphor not only for physics but also for the human condition. Each of us sits alone within the cell of our subjective awareness. Now and then we receive cryptic messages from the outside world. Only dimly comprehending what we are doing, we compose responses, which we slip under the door. In this way, we manage to survive, even though we never really know what the hell is happening.

Further Reading:

Is the Schrödinger Equation True?

Will Artificial Intelligence Ever Live Up to Its Hype?

Can Science Illuminate Our Inner Dark Matter?


After year of reset expectations, D-Wave secures $40-million from Ottawa for quantum computing – The Globe and Mail


One of Canada's most heavily financed technology development companies, quantum computer maker D-Wave Systems Inc., has secured a $40-million financial contribution from the federal government.

The funding, through Ottawa's Strategic Innovation Fund, follows a year of reset expectations for D-Wave, a leader in the global race to develop computers whose chips draw their power by harnessing natural properties of subatomic particles to perform complex calculations faster than conventional computers.

Burnaby, B.C.-based D-Wave is the first company to offer a commercially available quantum computer, but after 20-plus years of development and more than US$300-million in funds raised, it is still in the early stages of building a sustainable business.


Last year D-Wave promoted Silicon Valley veteran executive Alan Baratz to chief executive officer, replacing Vern Brownell, to step up commercialization efforts. The company also parted ways with other top executives and long-time board members.

Mr. Baratz, who led Sun Microsystems Inc.'s effort in the 1990s to transform Java from a nascent programming language into the internet's main software-writing platform, directed D-Wave to stop selling its shed-sized computers, which listed for US$15-million and had just a handful of customers including NASA, Google, Lockheed Martin and the U.S. Los Alamos National Laboratory.

Instead, D-Wave has focused on selling online access to the technology and expanded its software applications, which Mr. Baratz had started developing after joining as chief product officer in 2017. Customers including Volkswagen and biotechnology startups have used D-Wave's technology to find answers to dense optimization problems, such as improving traffic flows in big cities, identifying proteins that could become breakthrough drugs and improving the efficiency of painting operations on vehicle production assembly lines.

D-Wave also completed a costly US$40-million refinancing last year that wiped out most of the value of some long-time investors, including the U.S. Central Intelligence Agency's venture capital arm, Amazon CEO Jeff Bezos and fund giant Fidelity Investments. The capital restructuring cut D-Wave's valuation to less than US$170-million, down from US$450-million, The Globe reported in October. Investors that ponied up, including Public Sector Pension Investment Board, D-Wave's top shareholder, BDC Capital and Goldman Sachs, maintained their relative stakes, limiting their writedowns.

"Over the years [D-Wave has] had to raise money and more money and more money ... and as such you end up getting diluted over time because every third quarter it seems like you run out of the $50-million that you raised," Kevin Rendino, CEO and portfolio manager of D-Wave investor 180 Degree Capital Corp., told his investors last November. "D-Wave has been a source of bitter disappointment for all of us."

Meanwhile, D-Wave faces years and tens of millions of dollars more in costs to continue developing its core technology. The government aid will support a $120-million project to advance D-Wave's hardware and software and will help place "Canada at the forefront of quantum technology development, and will create new jobs and opportunities to help Canadians and advance the economy," François-Philippe Champagne, Minister of Innovation, Science and Industry, said in a release.

During a press conference to discuss the funding, the minister was asked if the government would review potential takeovers of quantum computing companies, as the U.S. government is considering doing. Mr. Champagne provided a non-committal response, saying, "I'm sure you would expect us to be eyes wide open when it comes to whatever we would need to take in terms of steps to protect ... [intellectual property] that has been developed in Canada."


"We're always out there looking at how we can improve to make sure that new technologies and inventions and improvements and IP that has been developed in Canada stays in Canada."

D-Wave faces a slew of competitors including Google, Microsoft, Intel, IBM and Honeywell that are also trying to build the first quantum machine that can outperform classical or conventional computers. In addition, a new class of startups including Toronto's Xanadu Quantum Technologies Inc. and College Park, Md.-based IonQ Inc. believe they can build quantum chips that don't have to be supercooled to function, as D-Wave's system and others in development do. IonQ said this week it would go public through a special purpose acquisition company to become the first publicly traded quantum computing-focused company.

Mr. Baratz said in an emailed statement that since D-Wave's launch last September of its latest quantum chip and expanded efforts to sell online access to its computers, "we've been encouraged by the positive customer response to the value delivered by a quantum system designed for practical, in-production business-scale applications. We're eager to see even more developers, academics, and companies leverage it to solve larger, more complex problems."



Can Photonic Computing Solve The Rising Cost & Energy Issues Of AI? – Analytics India Magazine

As per OpenAI data, the amount of computational power needed to train large AI models has grown massively, doubling every three and a half months since 2012. GPT-3, which requires 3.14E23 FLOPS of computing for training, is a good case in point.

Typically, to carry out high-performance computing tasks, conventional AI chips are equipped with transistors that work with electrons. Although they perform a wide array of complex, high-performance tasks, energy consumption and engineering glitches pose a challenge. Thus, the growing need for computing power has set researchers on a quest to find a workaround to boost these chips' power without increasing energy consumption.

And that's when experts turned to photons, the particles of light, which can substitute for electrons in AI chips to reduce heat, leading to a massive reduction in energy consumption and a dramatic upgrade in processor speed.

While electronic chips perform calculations by reducing information to a series of 1s and 0s, photonic chips split and mix beams of light within tiny channels to carry out the tasks. Compared to general-purpose AI chips, photonic chips are designed to perform only certain kinds of mathematical calculation, critical for running large AI models.
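The kind of calculation in question is, at heart, the matrix arithmetic of neural-network inference. The numpy sketch below is a simplified stand-in, not any vendor's actual design: a photonic interferometer mesh natively applies a unitary transform to light amplitudes, and an arbitrary weight matrix can still be realized via its singular value decomposition, i.e., two unitary meshes with amplitude modulators between them.

```python
import numpy as np

rng = np.random.default_rng(42)
W = rng.standard_normal((4, 4))   # hypothetical layer weights
x = rng.standard_normal(4)        # input activations

# Decompose W into mesh-friendly parts: W = U @ diag(s) @ Vh,
# where U and Vh are unitary (realizable as interferometer meshes)
# and s is a set of per-channel amplitude scalings.
U, s, Vh = np.linalg.svd(W)

y_photonic = U @ (s * (Vh @ x))   # mesh -> modulators -> mesh
y_direct = W @ x                  # what an electronic chip computes

assert np.allclose(y_photonic, y_direct)
```

Electronic or photonic, the math is the same; the bet is that light performs it with far less heat per multiply-accumulate.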

Lightmatter, an MIT-backed startup, last year developed Envise, an AI chip that leverages photons (light particles) to perform computing tasks.

Light travels faster than electrons. The concept of using light to carry out heavy tasks (aka photonic computing or optical computing) dates back to the 1980s, when Bell Labs (now Nokia Bell Labs), an American industrial research and scientific development company, tried to develop a light-based processor. However, due to the impracticality of creating a working optical transistor, the concept didn't take off.

We experience optical technology in cameras, CDs, and even Blu-ray discs. But these photons are usually converted into electrons before being processed by chips. Four decades later, photonic computing gained momentum when IBM and researchers from the Universities of Oxford and Münster developed a system that uses light instead of electricity to perform several AI model-based computations.

Alongside, Lightmatter's new AI chip has created a buzz in the industry. According to the company website, Envise can run the largest neural networks with three times higher inferences/second than the Nvidia DGX-A100, and with seven times the inferences/second/Watt on BERT-Base with the SQuAD dataset.

Japan-based NTT has also been developing an optical computer believed to outpace quantum computing in solving optimisation problems. Last year, Chinese quantum physicist Chao-Yang Lu also announced light-based quantum computing results.

Other companies like US-based Honeywell and IonQ have also been working around the issue by using trapped ions.

Such developments have led experts to believe photonic computing will gain ground once the big tech companies throw their weight behind it and understand the importance of using light for their AI chips.

On the other hand, like any other remarkable technology, photonic computing also comes with certain challenges. Despite their lower energy consumption, photonic chips are considered less accurate and precise than electron-based chips. Much of this can be attributed to their analogue calculations, which makes them better suited to running pre-trained models and deep neural network inference than to high-precision tasks.

On the design side, silicon-based computer chips don't pair easily with photonic components, which limits their usage in computing.

The cost issues and environmental impact of digital chips might set the stage for photonics computing to rise as a substitute. With startups like Lightmatter and giants like IBM committing resources to this computing paradigm, AI might get a photonic boost.


What is Quantum Computing | Microsoft Azure

It's the use of quantum mechanics to run calculations on specialized hardware.

To fully define quantum computing, we need to define some key terms first.

The quantum in "quantum computing" refers to the quantum mechanics that the system uses to calculate outputs. In physics, a quantum is the smallest possible discrete unit of any physical property. It usually refers to properties of atomic or subatomic particles, such as electrons, neutrinos, and photons.

A qubit is the basic unit of information in quantum computing. Qubits play a similar role in quantum computing as bits play in classical computing, but they behave very differently. Classical bits are binary and can hold only a position of 0 or 1, but qubits can hold a superposition of all possible states.

Quantum computers harness the unique behavior of quantum physics, such as superposition, entanglement, and quantum interference, and apply it to computing. This introduces new concepts to traditional programming methods.

In superposition, quantum particles are a combination of all possible states. They fluctuate until they're observed and measured. One way to picture the difference between binary position and superposition is to imagine a coin. Classical bits are measured by "flipping the coin" and getting heads or tails. However, if you were able to look at a coin and see both heads and tails at the same time, as well as every state in between, the coin would be in superposition.
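In state-vector terms, the "fair coin" picture can be sketched in a few lines of numpy. This is a standard textbook simulation, not tied to any particular quantum hardware: a qubit is two complex amplitudes, a Hadamard gate creates the equal superposition, and the Born rule gives the measurement probabilities.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                   # the classical-like |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

psi = H @ ket0                # superposition: (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2      # Born rule: squared amplitude magnitudes

# Both outcomes are now equally likely, like seeing heads and tails at once
# until the "coin" is measured.
print(probs)
```

Measuring collapses the state: the simulation would then replace `psi` with `ket0` or `[0, 1]` according to those probabilities.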

Entanglement is the ability of quantum particles to correlate their measurement results with each other. When qubits are entangled, they form a single system and influence each other. We can use the measurements from one qubit to draw conclusions about the others. By adding and entangling more qubits in a system, quantum computers can calculate exponentially more information and solve more complicated problems.
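A minimal simulation of that correlation uses the standard two-qubit Bell state; again this is a generic textbook sketch, not Azure-specific. Sampling joint measurements shows that the two bits always agree, so one qubit's outcome determines the other's.

```python
import numpy as np

# Bell state (|00> + |11>) / sqrt(2), in the basis order 00, 01, 10, 11.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probs = np.abs(bell) ** 2            # [0.5, 0, 0, 0.5]

# Simulate 1000 joint measurements of both qubits.
rng = np.random.default_rng(7)
outcomes = rng.choice(["00", "01", "10", "11"], size=1000, p=probs)

# The qubits are perfectly correlated: only "00" and "11" ever occur.
assert all(o in ("00", "11") for o in outcomes)
```

The single shared state vector over both qubits is exactly the "they form a single system" point: you cannot factor `bell` into two independent one-qubit states.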

Quantum interference is the intrinsic behavior of a qubit, due to superposition, to influence the probability of it collapsing one way or another. Quantum computers are designed and built to reduce interference as much as possible and ensure the most accurate results. To this end, Microsoft uses topological qubits, which are stabilized by manipulating their structure and surrounding them with chemical compounds that protect them from outside interference.

A quantum computer has three primary parts: an area that houses the qubits, a method for transferring signals to the qubits, and a classical computer that runs a program and sends instructions.

For some methods of qubit storage, the unit that houses the qubits is kept at a temperature just above absolute zero to maximize their coherence and reduce interference. Other types of qubit housing use a vacuum chamber to help minimize vibrations and stabilize the qubits.

Signals can be sent to the qubits using a variety of methods, including microwaves, lasers, and voltage.

Quantum computer uses and application areas

A quantum computer can't do everything faster than a classical computer, but there are a few areas where quantum computers have the potential to make a big impact.

Quantum computers work exceptionally well for modeling other quantum systems because they use quantum phenomena in their computation. This means that they can handle the complexity and ambiguity of systems that would overload classical computers. Examples of quantum systems that we can model include photosynthesis, superconductivity, and complex molecular formations.

Classical cryptography, such as the Rivest-Shamir-Adleman (RSA) algorithm that's widely used to secure data transmission, relies on the intractability of problems such as integer factorization or discrete logarithms. Many of these problems can be solved more efficiently using quantum computers.
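A toy illustration of that reliance, using deliberately tiny textbook numbers (real RSA keys use 2048-bit moduli): everything needed to decrypt follows from the two prime factors of the public modulus, which is why an efficient quantum factoring routine such as Shor's algorithm would break the scheme.

```python
# Tiny textbook RSA (Python 3.8+ for the modular-inverse form of pow).
p, q = 61, 53
n = p * q                   # public modulus: 3233
phi = (p - 1) * (q - 1)     # 3120, computable only if you know p and q
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent: modular inverse of e mod phi

msg = 65
cipher = pow(msg, e, n)     # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypt with the private key d

# Anyone who can factor n back into p and q can recompute phi and d.
# Factoring 3233 is instant; factoring a 2048-bit n is infeasible
# classically, but not for a large fault-tolerant quantum computer.
```

This is why "post-quantum" replacements for RSA are built on problems with no known efficient quantum attack.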

Optimization is the process of finding the best solution to a problem given its desired outcome and constraints. In science and industry, critical decisions are made based on factors such as cost, quality, and production time, all of which can be optimized. By running quantum-inspired optimization algorithms on classical computers, we can find solutions that were previously impossible. This helps us find better ways to manage complex systems such as traffic flows, airplane gate assignments, package deliveries, and energy storage.

Machine learning on classical computers is revolutionizing the world of science and business. However, training machine learning models comes with a high computational cost, and that has hindered the scope and development of the field. To speed up progress in this area, we're exploring ways to devise and implement quantum software that enables faster machine learning.

A quantum algorithm developed in 1996, Grover's search algorithm, dramatically sped up the solution to unstructured data searches, running the search in fewer steps than any classical algorithm could.
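Grover's amplitude-amplification loop is easy to simulate classically for small sizes; the sketch below is a generic textbook simulation, not tied to Azure. For N items, roughly (pi/4)*sqrt(N) oracle-plus-diffusion rounds drive the amplitude of the single marked item close to 1, versus the ~N/2 checks a classical search needs on average.

```python
import numpy as np

N, marked = 16, 11                      # 16 items (4 qubits), one marked
state = np.full(N, 1 / np.sqrt(N))      # start in uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(N)))  # 3 iterations for N = 16
for _ in range(iterations):
    state[marked] *= -1                 # oracle: flip the marked amplitude's sign
    state = 2 * state.mean() - state    # diffusion: invert all amplitudes about the mean

prob = state[marked] ** 2               # probability of measuring the marked item
print(iterations, round(prob, 3))       # ~0.96 after only 3 iterations
```

On real hardware the state vector is never written out; the same interference pattern emerges from gates acting on 4 qubits.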

Azure Quantum resources

Build quantum solutions today as an early adopter of Azure Quantum Preview, a full-stack open cloud ecosystem. Access software, hardware, and pre-built solutions and start developing on a trusted, scalable, and secure platform.
