Archive for the ‘Quantum Computer’ Category

What is a quantum computer? Explained with a simple example.

by YK Sugi

Hi everyone!

The other day, I visited D-Wave Systems in Vancouver, Canada. It's a company that makes cutting-edge quantum computers.

I got to learn a lot about quantum computers there, so I'd like to share some of what I learned with you in this article.

The goal of this article is to give you an accurate intuition of what a quantum computer is, using a simple example.

You will not need any prior knowledge of quantum physics or computer science to understand this article.

Okay, let's get started.

Edit (Feb 26, 2019): I recently published a video about the same topic on my YouTube channel. I would recommend watching it before or after reading this article, because I have added some additional, more nuanced arguments in the video.

Here is a one-sentence summary of what a quantum computer is:

A quantum computer is a type of computer that uses quantum mechanics so that it can perform certain kinds of computation more efficiently than a regular computer can.

There is a lot to unpack in this sentence, so let me walk you through exactly what it means, using a simple example.

To explain what a quantum computer is, I'll need to first explain a little bit about regular (non-quantum) computers.

Now, a regular computer stores information in a series of 0s and 1s.

Different kinds of information, such as numbers, text, and images can be represented this way.

Each unit in this series of 0s and 1s is called a bit. So, a bit can be set to either 0 or 1.

A quantum computer does not use bits to store information. Instead, it uses something called qubits.

Each qubit can not only be set to 1 or 0, but it can also be set to 1 and 0. But what does that mean exactly?

Let me explain this with a simple example. It's going to be a somewhat artificial example, but it's still going to be helpful in understanding how quantum computers work.

Now, suppose you're running a travel agency, and you need to move a group of people from one location to another.

To keep this simple, let's say that you need to move only 3 people for now: Alice, Becky, and Chris.

And suppose that you have booked 2 taxis for this purpose, and you want to figure out who gets into which taxi.

Also, suppose here that you're given information about who is friends with whom, and who is enemies with whom.

Here, let's say that:

- Alice and Becky are friends
- Alice and Chris are enemies
- Becky and Chris are enemies

And suppose that your goal here is to divide this group of 3 people into the two taxis to achieve the following two objectives:

- Maximize the number of friend pairs sharing the same car
- Minimize the number of enemy pairs sharing the same car

Okay, so this is the basic premise of this problem. Let's first think about how we would solve this problem using a regular computer.

To solve this problem with a regular, non-quantum computer, you'll first need to figure out how to store the relevant information with bits.

Let's label the two taxis Taxi #1 and Taxi #0.

Then, you can represent who gets into which car with 3 bits.

For example, we can set the three bits to 0, 0, and 1 to represent:

- Alice gets into Taxi #0
- Becky gets into Taxi #0
- Chris gets into Taxi #1

Since there are two choices for each person, there are 2*2*2 = 8 ways to divide this group of people into two cars.

Here's a list of all possible configurations:

A | B | C
0 | 0 | 0
0 | 0 | 1
0 | 1 | 0
0 | 1 | 1
1 | 0 | 0
1 | 0 | 1
1 | 1 | 0
1 | 1 | 1

Using 3 bits, you can represent any one of these combinations.
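If you like to see this in code, here is a minimal Python sketch that enumerates the same eight configurations; the code and variable names are my own illustration, not something from the article or from D-Wave:

```python
from itertools import product

# Each configuration assigns Alice, Becky, and Chris (in that order)
# to Taxi #0 or Taxi #1, so it is just a 3-bit string.
for a, b, c in product([0, 1], repeat=3):
    print(a, b, c)
```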

Now, using a regular computer, how would we determine which configuration is the best solution?

To do this, let's define how we can compute the score for each configuration. This score will represent the extent to which a given configuration achieves the two objectives mentioned earlier.

Let's simply define our score as follows:

(the score of a given configuration) = (# friend pairs sharing the same car) - (# enemy pairs sharing the same car)

For example, suppose that Alice, Becky, and Chris all get into Taxi #1. With three bits, this can be expressed as 111.

In this case, there is only one friend pair sharing the same car: Alice and Becky.

However, there are two enemy pairs sharing the same car: Alice and Chris, and Becky and Chris.

So, the total score of this configuration is 1 - 2 = -1.
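As a quick sketch, the score formula can be written directly in Python. The pair lists below encode the friendships and enmities stated earlier; the function name and layout are my own:

```python
# Indices into a configuration tuple: (Alice, Becky, Chris) = (0, 1, 2).
FRIENDS = [(0, 1)]          # Alice and Becky are friends
ENEMIES = [(0, 2), (1, 2)]  # Alice-Chris and Becky-Chris are enemies

def score(config):
    """(# friend pairs sharing a car) - (# enemy pairs sharing a car)"""
    return (sum(config[i] == config[j] for i, j in FRIENDS)
            - sum(config[i] == config[j] for i, j in ENEMIES))

print(score((1, 1, 1)))  # prints -1, matching the worked example above
```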

With all of this setup, we can finally go about solving this problem.

With a regular computer, to find the best configuration, you'll need to essentially go through all configurations to see which one achieves the highest score.

So, you can think about constructing a table like this:

A | B | C | Score
0 | 0 | 0 | -1
0 | 0 | 1 |  1  <- one of the best solutions
0 | 1 | 0 | -1
0 | 1 | 1 | -1
1 | 0 | 0 | -1
1 | 0 | 1 | -1
1 | 1 | 0 |  1  <- the other best solution
1 | 1 | 1 | -1

As you can see, there are two correct solutions here: 001 and 110, both achieving a score of 1.
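In code, the brute-force search a regular computer performs is just a loop over all 2^3 configurations. A self-contained sketch (again my own illustration):

```python
from itertools import product

FRIENDS, ENEMIES = [(0, 1)], [(0, 2), (1, 2)]

def score(cfg):
    return (sum(cfg[i] == cfg[j] for i, j in FRIENDS)
            - sum(cfg[i] == cfg[j] for i, j in ENEMIES))

# Score every configuration and keep the ones with the highest score.
results = {cfg: score(cfg) for cfg in product([0, 1], repeat=3)}
best = max(results.values())
print([cfg for cfg, s in results.items() if s == best])
# [(0, 0, 1), (1, 1, 0)] -- the two best solutions, each with score 1
```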

This problem is fairly simple, but it quickly becomes too difficult to solve with a regular computer as we increase the number of people.

We saw that with 3 people, we need to go through 8 possible configurations.

What if there are 4 people? In that case, we'll need to go through 2*2*2*2 = 16 configurations.

With n people, we'll need to go through (2 to the power of n) configurations to find the best solution.

So, if there are 100 people, we'll need to go through:

2^100 ~= 10^30 = one million million million million million configurations.

This is simply impossible to solve with a regular computer.

How would we go about solving this problem with a quantum computer?

To think about that, let's go back to the case of dividing 3 people into two taxis.

As we saw earlier, there were 8 possible solutions to this problem:

A | B | C
0 | 0 | 0
0 | 0 | 1
0 | 1 | 0
0 | 1 | 1
1 | 0 | 0
1 | 0 | 1
1 | 1 | 0
1 | 1 | 1

With a regular computer, using 3 bits, we were able to represent only one of these solutions at a time; for example, 001.

However, with a quantum computer, using 3 qubits, we can represent all 8 of these solutions at the same time.

There are debates as to what it means exactly, but here's the way I think about it.

First, examine the first qubit out of these 3 qubits. When you set it to both 0 and 1, it's sort of like creating two parallel worlds. (Yes, it's strange, but just follow along here.)

In one of those parallel worlds, the qubit is set to 0. In the other one, it's set to 1.

Now, what if you set the second qubit to 0 and 1, too? Then, it's sort of like creating 4 parallel worlds.

In the first world, the two qubits are set to 00. In the second one, they are 01. In the third one, they are 10. In the fourth one, they are 11.

Similarly, if you set all three qubits to both 0 and 1, you'd be creating 8 parallel worlds: 000, 001, 010, 011, 100, 101, 110, and 111.

This is a strange way to think, but it is one of the correct ways to interpret how the qubits behave in the real world.
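For readers who want to see this picture in numbers, here is a minimal numpy sketch of the state vector. It only illustrates the "8 parallel worlds" idea; it is not how you would program a real quantum machine:

```python
import numpy as np

# A qubit set to "0 and 1" is an equal superposition of |0> and |1>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# Three such qubits give one amplitude per 3-bit configuration.
state = np.kron(np.kron(plus, plus), plus)
for i, amplitude in enumerate(state):
    print(f"{i:03b}: {amplitude:.4f}")  # eight equal amplitudes, 1/sqrt(8) each
```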

Now, when you apply some sort of computation on these three qubits, you are actually applying the same computation in all of those 8 parallel worlds at the same time.

So, instead of going through each of those potential solutions sequentially, we can compute the scores of all solutions at the same time.

With this particular example, in theory, your quantum computer would be able to find one of the best solutions in a few milliseconds. Again, that's 001 or 110, as we saw earlier:

A | B | C | Score
0 | 0 | 0 | -1
0 | 0 | 1 |  1  <- one of the best solutions
0 | 1 | 0 | -1
0 | 1 | 1 | -1
1 | 0 | 0 | -1
1 | 0 | 1 | -1
1 | 1 | 0 |  1  <- the other best solution
1 | 1 | 1 | -1

In reality, to solve this problem, you would need to give your quantum computer two things:

- All of the possible solutions, represented with qubits
- A function that converts each potential solution into a score (in this case, the function counting the friend pairs and enemy pairs sharing the same car)

Given these two things, your quantum computer will spit out one of the best solutions in a few milliseconds. In this case, that's 001 or 110 with a score of 1.
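On a D-Wave machine specifically, those two things are usually packaged as a binary quadratic model whose lowest energy corresponds to the highest score. Here is a hedged sketch using D-Wave's open-source dimod package; the coefficients come from expanding -score algebraically for our friend/enemy pairs (my own derivation, not from the article), and I use the classical ExactSolver so the sketch runs without any quantum hardware:

```python
import dimod  # pip install dimod

# Energy = -score = 1 - 2c - 2ab + 2ac + 2bc for bits a, b, c.
bqm = dimod.BinaryQuadraticModel(
    {'a': 0.0, 'b': 0.0, 'c': -2.0},                       # linear terms
    {('a', 'b'): -2.0, ('a', 'c'): 2.0, ('b', 'c'): 2.0},  # quadratic terms
    1.0,                                                    # constant offset
    'BINARY',
)

# Brute-force reference solver; a D-Wave sampler would plug in here instead.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first)  # a lowest-energy sample: 001 or 110, energy -1 (score 1)
```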

Now, in theory, a quantum computer is able to find one of the best solutions every time it runs.

However, in reality, there are errors when running a quantum computer. So, instead of finding the best solution, it might find the second-best solution, the third-best solution, and so on.

These errors become more prominent as the problem becomes more and more complex.

So, in practice, you will probably want to run the same operation on a quantum computer dozens or hundreds of times, and then pick the best result out of the many results you get.
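In code, "run it many times and keep the best result" looks something like the sketch below. Since I cannot run real hardware here, I am assuming D-Wave's classical simulated-annealing sampler (the dwave-neal package) as a stand-in; on an actual machine you would swap in the hardware sampler:

```python
import dimod
import neal  # pip install dwave-neal

# Same binary quadratic model as in the previous sketch.
bqm = dimod.BinaryQuadraticModel(
    {'a': 0.0, 'b': 0.0, 'c': -2.0},
    {('a', 'b'): -2.0, ('a', 'c'): 2.0, ('b', 'c'): 2.0},
    1.0, 'BINARY',
)

# Take 100 reads and keep the lowest-energy (highest-score) answer.
sampleset = neal.SimulatedAnnealingSampler().sample(bqm, num_reads=100)
print(sampleset.first.sample, sampleset.first.energy)
```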

Even with the errors I mentioned, the quantum computer does not have the same scaling issue a regular computer suffers from.

When there are 3 people we need to divide into two cars, the number of operations we need to perform on a quantum computer is 1. This is because a quantum computer computes the score of all configurations at the same time.

When there are 4 people, the number of operations is still 1.

When there are 100 people, the number of operations is still 1. With a single operation, a quantum computer computes the scores of all 2^100 ~= 10^30 = one million million million million million configurations at the same time.

As I mentioned earlier, in practice, it's probably best to run your quantum computer dozens or hundreds of times and pick the best result out of the many results you get.

However, it's still much better than running the same problem on a regular computer and having to repeat the same type of computation one million million million million million times.

Special thanks to everyone at D-Wave Systems for patiently explaining all of this to me.

D-Wave recently launched a cloud environment for interacting with a quantum computer.

If you're a developer and would actually like to try using a quantum computer, it's probably the easiest way to do so.

It's called Leap, and it's at https://cloud.dwavesys.com/leap. You can use it for free to solve thousands of problems, and they also have easy-to-follow tutorials on getting started with quantum computers once you sign up.



UVA Pioneers Study of Genetic Diseases With Mind-Bending Quantum Computing – University of Virginia

University of Virginia School of Medicine scientists are harnessing the mind-bending potential of quantum computers to help us understand genetic diseases, even before quantum computers are a thing.

UVA's Stefan Bekiranov and colleagues have developed an algorithm that will let researchers study genetic diseases using quantum computers, once much more powerful quantum computers exist to run it. The algorithm, a complex set of operating instructions, will help advance quantum computing algorithm development and could one day advance the field of genetic research.

Quantum computers are still in their infancy. But when they come into their own, possibly within a decade, they may offer computing power on a scale unimaginable using traditional computers.

"We developed and implemented a genetic sample classification algorithm that is fundamental to the field of machine learning on a quantum computer in a very natural way using the inherent strengths of quantum computers," Bekiranov said. "This is certainly the first published quantum computer study funded by the National Institute of Mental Health and may be the first study using a so-called universal quantum computer funded by the National Institutes of Health."

Traditional computer programs are built on 1s and 0s, either-or. But quantum computers take advantage of a freaky fundamental of quantum physics: something can be and not be at the same time. Rather than 1 or 0, the answer, from a quantum computer's perspective, is both, simultaneously. That allows the computer to consider vastly more possibilities, all at once.

The challenge is that the technology is, to put it lightly, technically demanding. Many quantum computers have to be kept at near absolute zero, the equivalent of more than 450 degrees below zero Fahrenheit. Even then, the movement of molecules surrounding the quantum computing elements can mess up the calculations, so algorithms not only have to contain instructions for what to do, but for how to compensate when errors creep in.

"Our goal was to develop a quantum classifier that we could implement on an actual IBM quantum computer. But the major quantum machine learning papers in the field were highly theoretical and required hardware that didn't exist. We finally found papers from Dr. Maria Schuld, who is a pioneer in developing implementable, near-term, quantum machine-learning algorithms. Our classifier builds on those developed by Dr. Schuld," Bekiranov said. "Once we started testing the classifier on the IBM system, we quickly discovered its current limitations and could only implement a vastly oversimplified, or toy, problem successfully, for now."

The new algorithm essentially classifies genomic data. It can determine if a test sample comes from a disease or control sample exponentially faster than a conventional computer. For example, if they used all four building blocks of DNA (A, G, C or T) for the classification, a conventional computer would execute 3 billion operations to classify the sample. The new quantum algorithm would need only 32.
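The article does not say where the number 32 comes from, but it is consistent with a logarithmic scaling: 32 is roughly the base-2 logarithm of 3 billion. A back-of-the-envelope check (my own reading, not a claim from the paper):

```python
import math

classical_ops = 3_000_000_000  # the quoted conventional-computer operation count
print(math.log2(classical_ops))  # ~31.5, i.e. about 32
```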

That will help scientists sort through the vast amount of data required for genetic research. But it's also a proof of concept of the usefulness of the technology for such research.

Bekiranov and collaborator Kunal Kathuria were able to create the algorithm because they were trained in quantum physics, a field that even scientists often find opaque. Such algorithms are more likely to emerge from physics or computer science departments than medical schools. (Both Bekiranov and Kathuria conducted the study in the School of Medicine's Department of Biochemistry and Molecular Genetics. Kathuria is currently at the Lieber Institute for Brain Development.)

Because of the researchers' particular set of skills, officials at the National Institutes of Health's National Institute of Mental Health supported them in taking on the challenging project. Bekiranov and Kathuria hope what they have developed will be a great benefit to quantum computing and, eventually, human health.

"Relatively small-scale quantum computers that can solve toy problems are in existence now," Bekiranov said. "The challenges of developing a powerful universal quantum computer are immense. Along with steady progress, it will take multiple scientific breakthroughs. But time and again, experimental and theoretical physicists, working together, have risen to these challenges. If and when they develop a powerful universal quantum computer, I believe it will revolutionize computation and be regarded as one of the greatest scientific and engineering achievements of humankind."

The scientists have published their findings in the scientific journal Quantum Machine Intelligence. The algorithm-development team consisted of Kathuria, Aakrosh Ratan, Michael McConnell and Bekiranov.

The work was supported by NIH grants 3U01MH106882-04S1, 5U01MH106882-05 and P30CA044579.

To keep up with the latest medical research news from UVA, subscribe to the Making of Medicine blog.


U of A physicists develop technology to transform information from microwaves to optical light – Folio – University of Alberta

Physicists at the University of Alberta have developed technology that can translate data from microwaves to optical light, an advance that has promising applications in the next generation of super-fast quantum computers and secure fibre-optic telecommunications.

"Many quantum computer technologies work in the microwave regime, while many quantum communications channels, such as fibre and satellite, work with optical light," explained Lindsay LeBlanc, who holds the Canada Research Chair in Ultracold Gases for Quantum Simulation. "We hope that this platform can be used in the future to transduce quantum signals between these two regimes."

The new technology works by introducing a strong interaction between microwave radiation and atomic gas. The microwaves are then modulated with an audio signal, encoding information into the microwave. This modulation is passed through the gas atoms, which are then probed with optical light to encode the signal into the light.

"This transfer of information from the microwave domain to the optical domain is the key result," said LeBlanc. "The wavelengths of these two carrier signals differ by a factor of 50,000. It is not easy to transduce the signal between these regimes, but this transfer proves this is possible."
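That factor of 50,000 is easy to sanity-check. The article does not give the exact frequencies, so the numbers below are my assumptions (values typical for alkali-atom experiments of this kind): a ~6.8 GHz microwave versus near-infrared light at ~780 nm:

```python
c = 299_792_458             # speed of light, m/s
microwave = c / 6.8e9       # wavelength of an assumed 6.8 GHz microwave, ~4.4 cm
optical = 780e-9            # assumed optical wavelength, ~780 nm (rubidium D2 line)
print(microwave / optical)  # ~56,000 -- the same order as the quoted 50,000
```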

LeBlanc and researchers in her lab, including graduate student Andrei Tretiakov and undergraduate student Timothy Lee, worked closely with physicist John P. Davis and his research group, including graduate student Clinton Potts, to develop the technology.

LeBlanc and Davis are part of Quanta, an NSERC CREATE program designed to train graduate students in emerging quantum technologies.

"This idea arose by having talks and meetings within the Quanta group, and it turned out to work as well or better than we first expected," said LeBlanc.

"This sort of discovery-led research can be very fruitful, and lead us to new possibilities."

Funding for the project was provided by Alberta Innovates.

The study, "Atomic Microwave-to-Optical Signal Transduction via Magnetic-Field Coupling in a Resonant Microwave Cavity," was published in Applied Physics Letters.


The Hyperion-insideHPC Interviews: Dr. Michael Resch Talks about the Leap from von Neumann: ‘I Tell My PhD Candidates: Go for Quantum’ – insideHPC

Dr. Michael M. Resch of the University of Stuttgart has professorships, degrees, doctorates and honorary doctorates from around the world, and he has studied and taught in Europe and the U.S., but for all the work he has done in supercomputing for the past three-plus decades, he boils down his years in HPC to working with the same, if always improving, von Neumann architecture. He's eager for the next new thing: quantum. "Going to quantum computing, we have to throw away everything and we have to start anew," he says. "This is a great time."

In This Update. From The HPC User Forum Steering Committee

By Steve Conway and Thomas Gerard

After the global pandemic forced Hyperion Research to cancel the April 2020 HPC User Forum planned for Princeton, New Jersey, we decided to reach out to the HPC community in another way: by publishing a series of interviews with members of the HPC User Forum Steering Committee. Our hope is that these seasoned leaders' perspectives on HPC's past, present and future will be interesting and beneficial to others. To conduct the interviews, Hyperion Research engaged insideHPC Media.

We welcome comments and questions addressed to Steve Conway, sconway@hyperionres.com or Earl Joseph, ejoseph@hyperionres.com.

This interview is with Prof. Dr. Dr. h.c. mult. Michael M. Resch. He is dean of the faculty for energy-process and biotechnology of the University of Stuttgart, and director of the High Performance Computing Center Stuttgart (HLRS), the Department for High Performance Computing, and the Information Center (IZUS), all at the University of Stuttgart, Germany. He was an invited plenary speaker at SC07. He chairs the board of the German Gauss Center for Supercomputing (GCS) and serves on the advisory councils for Triangle Venture Capital Group and several foundations. He is on the advisory board of the Paderborn Center for Parallel Computing (PC2). He holds a degree in technical mathematics from the Technical University of Graz, Austria, and a Ph.D. in engineering from the University of Stuttgart. He was an assistant professor of computer science at the University of Houston and was awarded honorary doctorates by the National Technical University of Donezk (Ukraine) and the Russian Academy of Science.

He was interviewed by Dan Olds, HPC and big data consultant at Orionx.net.

The HPC User Forum was established in 1999 to promote the health of the global HPC industry and address issues of common concern to users. More than 75 HPC User Forum meetings have been held in the Americas, Europe and the Asia-Pacific region since the organization's founding in 2000.

Olds: Hello, I'm Dan Olds on behalf of Hyperion Research and insideHPC, and today I'm talking to Michael Resch, who is an honorable professor at the HPC Center in Stuttgart, Germany. How are you, Michael?

Resch: I am fine, Dan. Thanks.

Olds: Very nice to talk to you. I guess let's start at the beginning. How did you get involved in HPC in the first place?

Resch: That started when I was a math student and I was invited to work as a student research assistant and, by accident, that was roughly the month when a new supercomputer was coming into the Technical University of Graz. So, I put my hands on that machine and I never went away again.

Olds: You sort of made that machine yours, I guess?

Resch: We were only three users. There were three user groups and I was the most important user of my user group because I did all the programming.

Olds: Fantastic, that's a way to make yourself indispensable, isn't it?

Resch: In a sense.

Olds: So, can you kind of summarize your HPC background over the years?

Resch: I started doing blood flow simulations, so I at first looked into this very traditional Navier-Stokes equation that was driving HPC for a long time. Then I moved on to groundwater flow simulations: pollution of groundwater, tunnel construction work, and everything, until after like five years I moved to the University of Stuttgart, where I started to work with supercomputers, more focusing on the programming side, the performance side, than on the hardware side. This is sort of my background in terms of experience.

In terms of education, I studied a mixture of mathematics, computer science and economics, and then did a Ph.D. in engineering, which was convenient if you're working on Navier-Stokes equations. So, I try to bring all of these things together to make an impact in HPC.

Olds: What are some of the biggest changes you've seen in HPC over your career?

Resch: Well, the biggest change is probably that when I started, as I said, there were three user groups. These were outstanding experts in their field, but supercomputing was nothing for the rest of the university. Today, everybody is using HPC. That's probably the biggest change: we moved from something where you had one big system and a few experts around that system to a larger number of systems and tens of thousands of experts working with them.

Olds: And, so, the systems have to get bigger, of course.

Resch: Well, certainly, they have to get bigger. And they have to get, I would say, more usable. That's another feature, that now things are more hidden from the user, which makes it easier to use them. But at the same time, it takes away some of the performance. There is this combination of hiding things away from the user and then the massive parallelism that we saw, and that's the second most important thing that I think we saw in the last three decades. That has made it much more difficult to get high sustained performance.

Olds: Where do you see HPC headed in the future? Is there anything that has you particularly excited or concerned?

Resch: [Laughs] I'm always excited and concerned. That's just normal. That's what happens when you go into science and that's normal when you work with supercomputers. I see, basically, two things happening. The first thing is that people will merge everything that has to do with data and everything that has to do with simulation. I keep saying it's data analytics, machine learning, artificial intelligence. It's sort of a development from raw data to very intelligent handling of data. And these data-intensive things start to merge with simulation, like we see people trying to understand what they did over the last 20 years by employing artificial intelligence to work its way through the data, trying to find what we have already done and what we should do next, things like that.

The second thing that is exciting is quantum computing. It's exciting because it's out of the ordinary, in a sense. You might say that over the last 32 years the only thing I did was work with improved technology and improved methods and improved algorithms or whatever, but I was still working in the same John von Neumann architecture concept. Going to quantum computing, we have to throw away everything and we have to start anew. This is a great time. I keep telling my Ph.D. candidates: go for quantum computing. This is where you make an impact. This is where you have a wide-open field of things you can explore, and this is what is going to make the job exciting for the next 10, 12, 15 years or so.

Olds: That's fantastic, and your enthusiasm for this really comes through. Your enthusiasm for HPC, for the new computing methods, and all that. And, thank you so much for taking the time.

Resch: It was a pleasure. Thank you.

Olds: Thank you, really appreciate it.


Microsoft Executive Vice President Jason Zander: Digital Transformation Accelerating Across the Energy Spectrum; Being Carbon Negative by 2030; The…

WASHINGTON--(BUSINESS WIRE)--Microsoft Executive Vice President Jason Zander says the company has never been busier partnering with the energy industry on cloud technologies and energy transition, and that the combination of COVID-19 and the oil market shock has condensed years of digital transformation into a two-month period; he also discusses the company's return to its innovative roots and its goal of removing all of the company's historic carbon emissions by 2050, in the latest edition of CERAWeek Conversations.

In a conversation with IHS Markit (NYSE: INFO) Vice Chairman Daniel Yergin, Zander, who leads the company's cloud services business, Microsoft Azure, discusses Microsoft's rapid and massive deployment of cloud-based apps that have powered work and commerce in the COVID-19 economy; how cloud technologies are optimizing business and vaccine research; the next frontiers of quantum computing and its potential to take "problems that would take, literally, a thousand years" and solve them in 10 seconds; and more.

The complete video is available at: http://www.ceraweek.com/conversations

Selected excerpts (interview recorded Thursday, July 16, 2020):

(Edited slightly for brevity only)

Watch the complete video at: http://www.ceraweek.com/conversations

We've already prepositioned, in over 60 regions around the world, hundreds of data centers and millions and millions of server nodes; they're already there. Imagine if, during COVID, you had to go back and do a procurement exercise and figure out a place to put the equipment, when the supply chains were actually shut down for a while because of COVID. That's why I say, even three to five years ago we as industries would have been pretty challenged to respond as quickly as we have.

That's on the more tactical end of the spectrum. On the other end we've also done a lot of things around data sets and advanced data work. How do we find a cure? We've done things like [protein] folding at home and making sure that those things could be hosted on the cloud. These are things that will be used in the search for a vaccine for the virus. Those are wildly different spectrums, from the tactical 'we need to manage and do logistics' to 'we need a search for things that are going to get us all back to basically normal.'

There's also a whole bunch of stimulus packages and payment systems that are getting created and deployed. We've had financial services companies that run on top of the cloud. They may have been doing a couple of hundred big transactions a day; we've had them do tens to hundreds of thousands a day when some of this kicked in.

The point is, with the cloud I can just go to the cloud, provision it, use it, and eventually, when things cool back down, I can just shut it off. I don't have to worry about having bought servers, finding a place for them to live, or hiring people to take care of them.

There was disruption in the supply chain also. Many of us saw this, at least in the States: if you think about even the food supply chain, every once in a while you'd see some hiccups. There's a whole bunch of additional work that we've done around how we do even better planning around that, making sure we can hit the right levels of scale in the future. God forbid we should have another one of these, but I think we can and should be responsible to make sure that we've got it figured out.

On the policy and investment side, it has never been more important for us to collaborate with healthcare, universities, and with others. We've kicked off a whole bunch of new partnerships and work that will benefit us in the future. This was a good wake-up call for all of us in figuring out how to marshal and be able to respond even better in the future.

We've had a lot of cases where people have been moving out of their own data centers and into ours. Let us basically take care of that part of the system; we can run it cheaply and efficiently. I'm seeing a huge amount of data center acceleration: folks that really want to move even faster on getting their workloads moved. That's true for oil and gas, but it's also true for the financial sector and retail.

Specifically, for oil and gas, one of the things that we're trying to do in particular is bring this kind of cloud efficiency, this kind of AI, and especially help out with places where you are doing exploration. What these have in common is the ability to take software, especially from the [independent software vendors] that work in the space (reservoir simulation, exploration), and marry that to these cloud resources where I can spin things up and spin things down. I can take advantage of that technology that I've got, and I am more efficient. I am not spending capex; I can perhaps do even more jobs than I was doing before. That allows me to go do that scale. If you're going to have less resources to do something, you of course want to increase your hit rate, increase your efficiency. Those are some of the core things that we're seeing.

A lot of folks, especially in oil and gas, have some of the most sophisticated high-performance computing solutions that are out there today. What we want to be able to do with the cloud is to be able to enable you to do even more of those solutions in a much more efficient way. We've got cases where people have been able to go from running one reservoir simulation job a day on premises [to] where they can actually go off to the cloud, and since we have all of this scale and all of this equipment, you can spin up and do 100 in one day. If that is going to be part of how you drive your efficiency, then being able to subscribe to that and go up and down, it's helping you do that job much more efficiently than you used to and giving you a lot more flexibility.

We're investing in a $1 billion fund over the next four years for carbon removal technology. We also are announcing a Microsoft sustainability calculator for cloud customers. Basically, it can help you get transparency into your Scope 1, 2, and 3 carbon emissions to get control. You can think of us this way: we want to hit this goal, we want to do it ourselves, we want to figure out how we build technology to help us do that, and then we want to share that technology with others. And then all along the way we want to partner with energy companies so that we can all be partnering together on this energy transition.

From a corporate perspective we've made pledges around being carbon negative, but then also working with our energy partners. The way that we look at this is: you're going to have continued requirements for, and improvements in, standards of living around the entire planet. One of the core, critical aspects of that is energy. The world needs more energy, not less. There are absolutely existing systems out there that we need to continue to improve, but they are also a core part of how things operate.

What we want to do is have a very responsible program where we're doing things like figuring out how to go carbon negative and figuring out ways that we as a company can go carbon negative. At the same time, taking those same techniques and allowing others to do the same, and then partnering with energy companies around energy transformation. We still want the investments in renewables. We want to figure out how to be more efficient at the last mile when we think about the grid. I generally find that when you get that comprehensive answer back to our employees, they understand what we are doing and are generally supportive.

Coming up is a digital feedback loop, where you get enough data coming through the system that you can actually start to make smart decisions. Our expectation is we'll have an entire connected environment. Now we start thinking about smart cities, smart factories, hospitals, campuses, etc. Imagine having all of that level of data coming through, and the ability to do smart work shedding or shaping of electrical usage, things where I can actually control brownout conditions and other things based on energy usage. There's also the opportunity to be doing smart sharing of systems where we can do very efficient usage systems; intelligent edge and edge deployments are a core part of that.

How do we keep all the actual equipment that people are using safe? If you think about 5G and additional connectivity, we're getting all this cool new technology that's there. You have to figure out a way in which you're leveraging silicon, you're leveraging software and the best in security, and we're investing in all three.

The idea of being able to harness particle physics to do computing, and be able to figure out things in minutes that would literally take centuries to pull off otherwise in classical computing, is kind of mind-blowing. We're actually working with a lot of the energy companies on figuring out how quantum-inspired algorithms could make them more efficient today. As we get to full-scale quantum computing, they would run natively in hardware and would be able to do even more amazing things. That one has just the potential to really, really change the world.

The meta point is: problems that would take, literally, a thousand years, you might be able to solve in 10 seconds. We've proven how that kind of technology can work. The quantum-inspired algorithms therefore allow us to take those same kinds of techniques, but we can run them on the cloud today using some of the classic cloud computers that are there. Instead of taking 1,000 years, maybe it's something that we can get done in 10 days, but in the future 10 seconds.

About CERAWeek Conversations:

CERAWeek Conversations features original interviews and discussion with energy industry leaders, government officials and policymakers, leaders from the technology, financial and industrial communities, and energy technology innovators.

The series is produced by the team responsible for the world's preeminent energy conference, CERAWeek by IHS Markit.

New installments will be added weekly at http://www.ceraweek.com/conversations.


A complete video library is available at http://www.ceraweek.com/conversations.

About IHS Markit (www.ihsmarkit.com)

IHS Markit (NYSE: INFO) is a world leader in critical information, analytics and solutions for the major industries and markets that drive economies worldwide. The company delivers next-generation information, analytics and solutions to customers in business, finance and government, improving their operational efficiency and providing deep insights that lead to well-informed, confident decisions. IHS Markit has more than 50,000 business and government customers, including 80 percent of the Fortune Global 500 and the world's leading financial institutions. Headquartered in London, IHS Markit is committed to sustainable, profitable growth.

IHS Markit is a registered trademark of IHS Markit Ltd. and/or its affiliates. All other company and product names may be trademarks of their respective owners. © 2020 IHS Markit Ltd. All rights reserved.
