Archive for the ‘Quantum Computing’ Category

RIT offers new minor in emerging field of quantum information science and technology | RIT – Rochester Institute of Technology

Rochester Institute of Technology students can soon begin earning a minor in an emerging field that could disrupt the science, technology, engineering, and math (STEM) disciplines. RIT students can now take classes toward a minor in quantum information science and technology.

"This is a hot field garnering a lot of attention and we are excited to offer students a chance to gain some technical depth in quantum so they can take this knowledge and go the next step with their careers," said Ben Zwickl, associate professor in RIT's School of Physics and Astronomy and advisor for the minor. "It will provide a pathway for students from any STEM major to take two core courses that introduce them to quantum and some of its applications, as well as strategically pick some upper-level courses within or outside their program."

Quantum physics seeks to understand the rules and effects of manipulating the smallest amount of energy at the subatomic level. Scientists and engineers are attempting to harness the strange, unintuitive properties of quantum particles to make advances in computing, cryptography, communications, and many other applications. Developers of the minor said there is a growing industry that will need employees knowledgeable about quantum physics and its applications.

"We're seeing a lot of giant tech companies like IBM, Intel, Microsoft, and Google get involved with quantum, but there's also a lot of venture capital going to startup companies in quantum," said Gregory Howland, assistant professor in the School of Physics and Astronomy. Howland will teach one of the minor's two required courses this fall, Principles and Applications of Quantum Technology. "You have both sides of it really blossoming now."

The minor, much like the field itself, is highly interdisciplinary in nature, with faculty from the College of Science, Kate Gleason College of Engineering, College of Engineering Technology, and Golisano College of Computing and Information Sciences offering classes that count toward the minor. The minor grew out of RIT's Future Photon Initiative and funding from the NSF's Quantum Leap Challenge Institutes program.

Associate Professor Sonia Lopez Alarcon from RIT's Department of Computer Engineering will teach the other required course, Introduction to Quantum Computing and Information Science, starting this spring. She said taking these courses will provide valuable life skills in addition to lessons about cutting-edge science and technology.

"They'll learn more than just the skills from the courses, they'll learn how to get familiar with a topic that's not officially in the textbooks yet," said Lopez Alarcon. "That's a very important skill for industry. Companies want to know they're hiring people with the ability to learn about something that is emerging, especially in science and technology because it's such a rapidly changing field."

The faculty involved noted that they hope to attract a diverse group of students to enroll in the minor. They said that although the disciplines feeding into quantum have struggled with inclusion related to gender, race, and ethnicity, they will work with affinity groups on campus to recruit students to the program and ultimately advance the field's inclusivity.

To learn more about the minor, contact Ben Zwickl.

Continue reading here:
RIT offers new minor in emerging field of quantum information science and technology | RIT - Rochester Institute of Technology

ANL Special Colloquium on The Future of Computing – HPCwire

There are, of course, myriad ideas regarding computing's future. At yesterday's Argonne National Laboratory Director's Special Colloquium, "The Future of Computing," guest speaker Sadasivan Shankar did his best to convince the audience that the high energy cost of the current computing paradigm (not just its economic cost; we're talking entropy here) is fundamentally undermining computing's progress, such that it will never be able to solve today's biggest challenges.

The broad idea is that the steady abstracting away of informational content from each piece of modern computing's complicated assemblage (chips, architecture, programming) inexorably increases the cumulative energy cost, pushing it toward a hard ceiling. Leaving aside, for a moment, the decline of Moore's law (really just a symptom), it is the separation (abstraction) of information from direct computation that is the culprit, argues Shankar. Every added step adds energy cost.
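To make the shape of that argument concrete, here is a toy Python sketch in which each layer of the stack multiplies the energy spent per bit-operation. The Landauer baseline is a standard physics figure, but the per-layer overhead factors are invented placeholders for illustration, not numbers from Shankar's talk.

# Toy illustration of the "every abstraction layer adds energy" argument.
# The per-layer overhead factors below are made-up placeholders.

LANDAUER_LIMIT_J = 2.9e-21   # ~kT*ln(2) at room temperature, joules per bit erased

# Hypothetical multiplicative overhead contributed by each layer of the stack.
overheads = {
    "device/physics": 1e3,       # switching a real transistor vs. the Landauer limit
    "logic/architecture": 1e2,   # clocking, data movement, speculation
    "instruction set": 1e1,      # fetch/decode overhead of general-purpose ISAs
    "runtime/software": 1e1,     # interpreters, copies, serialization
}

energy = LANDAUER_LIMIT_J
for layer, factor in overheads.items():
    energy *= factor
    print(f"after {layer:20s}: {energy:.2e} J per bit-operation")

Because the overheads multiply rather than add, each extra layer of abstraction moves the total several orders of magnitude further from the physical floor, which is the gap Shankar wants to close.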

Nature, on the other hand, bakes information into things. Consider, said Shankar, how a string of amino acids folds into its intended 3-D conformation on a tiny energy budget and in a very short time just by interacting with its environment, and contrast that with the amount of compute required (i.e., energy expended) to accurately predict protein folding from a sequence of amino acids. Shankar, research technology manager at SLAC National Laboratory and adjunct professor at Stanford, argues that computing must take a lesson from nature and strive to pack information more tightly into applications and compute infrastructure.

Information theory is a rich field with a history of vigorous debate. Turning theory into practice has often proven more difficult and messy. Shankar and his colleagues have been developing a formal framework for classifying the levels of information content in human-made computation schemes and natural systems in a way that permits direct comparison between the two. The resulting scale has eight classification levels (0-7).

There's a lot to digest in Shankar's talk. Rather than going off the rails here with a garbled explanation, it's worth noting that Argonne has archived the video and that Shankar has a paper well along, expected in a couple of months. No doubt some of his ideas will stir conversation. Given that Argonne will be home to Aurora, the exascale supercomputer now being built at the lab, it was an appropriate site for a talk on the future of computing.

Before jumping into what the future may hold, here's a quick summary of Shankar's two driving points: 1) Moore's law, or more properly the architecture and semiconductor technology on which it rests, is limited, and 2) the growing absolute energy cost of information processing using traditional (von Neumann) methods is itself limiting.

A big part of the answer to the question of how computing must progress, suggested Shankar, is to take a page from Feynman's reverberating idea (not just for quantum computing) and emulate the way nature computes, "pack[ing] all of the information needed for the computing into the things themselves," or at least reducing abstraction as much as possible.

Argonne assembled an expert panel to bat Shankar's ideas around. The panel included moderator Rick Stevens (associate laboratory director and Argonne distinguished fellow), Salman Habib (director, Argonne computational science division and Argonne distinguished fellow), Yanjing Li (assistant professor, department of computer science, University of Chicago), and Fangfang Xia (computer scientist, data science and learning division, ANL).

Few quibbled with the high energy cost of computing as described by Shankar, but the panelists had a variety of perspectives on moving forward. One of the more intriguing comments came from Xia, an expert in neuromorphic computing, who suggested that using neuromorphic systems to discover new algorithms is a potentially productive approach.

"My answer goes back to the earlier point Sadas and Rick made, which is, if we're throwing away efficiency in the information-power conversion process, why don't we stay with biological systems for a bit longer? There's this interesting field called synthetic biological intelligence. They are trying to do these brain-computer interfaces, not in a Neuralink way, because that's still shrouded in uncertainty. But there is a company and they grow these brain cells in a petri dish. Then they connect this to an Atari Pong game. And you can see that after just 10 minutes, these brain cells self-organize into neural networks, and they can learn to play the game," said Xia.

"Keep in mind, this is 10 minutes in real life, it's not simulation time. It's only dozens of games, just like how we pick up games. So this data efficiency is enormous. What I find particularly fascinating about this is that in this experiment there was no optimization goal. There is no loss function you have to tweak. The system, when connected in this closed-loop fashion, will just learn in an embodied way. That opens so many possibilities: you think about all these dishes, just consuming glucose, you can have them learn latent representations, maybe to be used in digital models."

Li, a computer architecture expert, noted that general purpose computing infrastructure has existed for a long time.

"I remember this is the same architecture of processor design I learned at school, and I still teach the same materials today. For the most part, when we're trying to understand how CPUs work, and even some of the GPUs, those have been around for a long time. I don't think there have been a lot of very revolutionary changes to those architectures. There's a reason for that: we have developed good tool chains, the compiler tool chains; people are educated to understand and program and build those systems. So anytime we want to make a big change, [it has] to be competitive and as usable as what we know of today," Li said.

On balance, she expects more incremental changes. "I think it's not going to be just a big jump and we'll get there tomorrow. We have to build on small steps, looking at building on existing understanding and also evolving along with the application requirements. I do think that there will be places where we can increase energy efficiency. If we're looking at the memory hierarchy, for example, we know caches help us with performance. But they're also super inefficient from an energy standpoint. This has worked for a long time because traditional applications have good locality, but we are increasingly seeing new applications where [there] may not be as much locality, so there's room for innovation in the memory hierarchy path. For example, we can design different memory reference patterns and infrastructures for applications that do not exhibit locality. That will be one way of making the whole computing system much more efficient."
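Li's locality point can be sketched with a standard average-energy-per-access model. The per-access energies below are assumed, illustrative orders of magnitude, not measured values from any particular chip.

# Average energy per memory access as a function of cache hit rate.
# The per-access energies are rough, illustrative assumptions only.

CACHE_ACCESS_NJ = 0.5    # assumed energy for an on-chip cache access (nanojoules)
DRAM_ACCESS_NJ = 20.0    # assumed energy for an off-chip DRAM access (nanojoules)

def avg_energy_nj(hit_rate: float) -> float:
    """Every access probes the cache; misses additionally go out to DRAM."""
    return CACHE_ACCESS_NJ + (1.0 - hit_rate) * DRAM_ACCESS_NJ

for hit_rate in (0.99, 0.90, 0.50, 0.10):
    print(f"hit rate {hit_rate:4.0%} -> {avg_energy_nj(hit_rate):5.2f} nJ per access")

At a 99 percent hit rate the DRAM energy is almost entirely hidden, while at a 10 percent hit rate the cache probe is nearly pure overhead on top of the DRAM access, which is the inefficiency Li sees for applications without locality.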

Li noted the trend toward specialized computing was another promising approach: "If we use a general-purpose computing system like a CPU, there's overhead that goes into fetching the instructions and decoding them. All of those overheads are not directly solving the problem, but they're just what you need to get the generality required to solve all problems. Increasing specialization, offloading different specialized tasks, would be another interesting way of approaching this problem."

There was an interesting exchange between Shankar and Stevens over the large amount of energy consumed in training today's large natural language processing models.

Shankar said, "I'm quoting from literature on deep neural networks or any of these image recognition networks. They scale quadratically with the number of data points. One of the latest things being hyped in the last few weeks is a trillion-parameter natural language processing [model]. So here are the numbers. To train one of those models, it takes the energy equivalent of four cars being driven for a whole year, including the manufacturing cost of the cars, just to train the model. That is how much energy is spent in the training, so there is a real problem, right?"

"Not so fast," countered Stevens. "Consider using the same numbers for how much energy is going into Bitcoin, right? So the estimate is maybe something like 5 percent of global energy production. At least these neural network models are useful. They're not just used for natural language processing. You can use them for distilling knowledge. You can use them for imaging and so forth. I want to shift gears a little bit. Governments around the world and VCs are putting a lot of money into quantum computing, and based on what you were talking about, it's not clear to me that that's actually the right thing we should be doing. We have lots of opportunities for alternative computing models, alternative architectures that could open up spaces that we know in principle can work. We have classical systems that can do this," he said.
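As a rough sanity check on the scale of the "four cars for a year" comparison, here is a back-of-envelope calculation in Python. The mileage, fuel economy, and energy-content figures are assumptions chosen for illustration, not numbers from the talk.

# Rough back-of-envelope for the "four cars driven for a year" comparison.
# All inputs are assumptions; none come from Shankar's talk.

MILES_PER_CAR_YEAR = 12_000        # assumed annual mileage per car
MPG = 30                           # assumed fuel economy
KWH_PER_GALLON_GASOLINE = 33.4     # energy content of a gallon of gasoline

kwh_per_car_year = MILES_PER_CAR_YEAR / MPG * KWH_PER_GALLON_GASOLINE
training_kwh = 4 * kwh_per_car_year

print(f"one car-year   ~ {kwh_per_car_year:,.0f} kWh")
print(f"four car-years ~ {training_kwh:,.0f} kWh of implied training energy")

Under these assumptions, four car-years works out to a few tens of megawatt-hours, which gives a feel for why training energy, and not just dollar cost, worries Shankar.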

Today, there's an army of computational scientists around the world seeking ways to advance computing, some of them focused on the energy aspect of the problem, others focused on areas such as performance or capacity. It will be interesting to see whether the framework and methodology embodied in Shankar's forthcoming paper not only provokes discussion but also provides a concrete way to compare the efficiency of computing systems.

Link to ANL video: https://vimeo.com/event/2081535/17d0367863

Brief Shankar Bio

Sadasivan (Sadas) Shankar is Research Technology Manager at SLAC National Laboratory and Adjunct Professor in Materials Science and Engineering at Stanford. He is also an Associate in the Department of Physics in the Harvard Faculty of Arts and Sciences, was the first Margaret and Will Hearst Visiting Lecturer at Harvard University, and was the first Distinguished Scientist in Residence at the Harvard Institute for Applied Computational Science. He has co-instructed classes related to materials, computing, and sustainability, and was awarded the Harvard University Teaching Excellence Award. His research spans materials, chemistry, specialized AI methods for complex problems in the physical and natural sciences, and new frameworks for studying computing. He is a co-founder and the Chief Scientist of Material Alchemy, a "last mile" translational and independent venture for sustainable design of materials.

Dr. Shankar was a Senior Fellow at UCLA-IPAM during a program on machine learning and many-body physics, an invited speaker at The Camille and Henry Dreyfus Foundation on applications of machine learning for chemistry and materials, a Carnegie Science Foundation panelist on brain and computing, a National Academies speaker on revolutions in manufacturing through mathematics, an invitee to a White House event on the Materials Genome, a Visiting Lecturer at the Kavli Institute for Theoretical Physics at UC Santa Barbara, and the first Intel Distinguished Lecturer at Caltech and MIT. He has given numerous colloquia and lectures at universities around the world. Dr. Shankar also worked in the semiconductor industry in the areas of materials, reliability, processing, and manufacturing, and is a co-inventor on over twenty patent filings. His work has been featured in the journal Science and as a TED talk.

Go here to read the rest:
ANL Special Colloquium on The Future of Computing - HPCwire

$5 million from Boeing will support UCLA quantum science and technology research | UCLA – UCLA Newsroom

UCLA has received a $5 million pledge from Boeing Co. to support faculty at the Center for Quantum Science and Engineering.

The center, which is jointly operated by the UCLA College Division of Physical Sciences and the UCLA Samueli School of Engineering, brings together scientists and engineers at the leading edge of quantum information science and technology. Its members have expertise in disciplines spanning physics, materials science, electrical engineering, computer science, chemistry and mathematics.

"We are grateful for Boeing's significant pledge, which will help drive innovation in quantum science," said Miguel García-Garibay, UCLA's dean of physical sciences. "This remarkable investment demonstrates confidence that UCLA's renowned faculty and researchers will spur progress in this emerging field."

"UCLA faculty and researchers are already working on exciting advances in quantum science and engineering," García-Garibay said. "And the division's new one-year master's program, which begins this fall, will help meet the huge demand for trained professionals in quantum technologies."

Quantum science explores the laws of nature that apply to matter at the very smallest scales, like atoms and subatomic particles. Scientists and engineers believe that controlling quantum systems has vast potential for advancing fields ranging from medicine to national security.

"Harnessing quantum technologies for the aerospace industry is one of the great challenges we face in the coming years," said Greg Hyslop, Boeing's chief engineer and executive vice president of engineering, test and technology. "We are committed to growing this field of study, and our relationship with UCLA moves us in that direction."

In addition to its uses in aerospace, examples of quantum theory already in action include superconducting magnets, lasers and MRI scans. The next generation of quantum technology will enable powerful quantum computers, sensors and communication systems and transform clinical trials, defense systems, clean water systems and a wide range of other technologies.

"Quantum information science and technology promises society-changing capabilities in everything from medicine to computing and beyond," said Eric Hudson, UCLA's David S. Saxon Presidential Professor of Physics and co-director of the center. "There is still, however, much work to be done to realize these benefits. This work requires serious partnership between academia and industry, and the Boeing pledge will be an enormous help in both supporting cutting-edge research at UCLA and creating the needed relationships with industry stakeholders."

The Boeing gift complements recent support from the National Science Foundation, including a $25 million award in 2020 to the multi-university NSF Quantum Leap Challenge Institute for Present and Future Quantum Computation, which Hudson co-directs. And in 2021, the UCLA center received a five-year, $3 million traineeship grant for doctoral students from the NSF.

Founded in 2018, the Center for Quantum Science and Engineering draws from the talents and creativity of dozens of faculty members and students.

"Boeing's support is a huge boost for quantum science and engineering at UCLA," said Mark Gyure, executive director of the center and a UCLA adjunct professor of electrical and computer engineering at the UCLA Samueli School of Engineering. "Enhancing the Center for Quantum Science and Engineering will attract additional world-class faculty in this rapidly growing field and, together with Boeing and other companies in the region, establish Los Angeles and Southern California as a major hub in quantum science and technology."

Go here to see the original:
$5 million from Boeing will support UCLA quantum science and technology research | UCLA - UCLA Newsroom

Learn Quantum Computing with Python and Q# – iProgrammer

Author: Dr. Sarah Kaiser and Dr. Chris Granade
Publisher: Manning
Date: June 2021
Pages: 384
ISBN: 978-1617296130
Print: 1617296139
Kindle: B098BNK1T9
Audience: Developers interested in quantum computing
Rating: 4.5
Reviewer: Mike James

Quantum - it's the future...

...or not, depending on your view of the idea. The idea is fairly simple even if the implementation turns out to be next to impossible. Quantum Mechanics is a strange theory, but it is one that seems to work, and the idea of using its insights to compute things is fairly reasonable. After all, QM is the way the world works things out as it creates reality. This book is an attempt to convey the ideas of quantum computing to the average programmer with minimal math. I say minimal because getting the idea isn't really possible without math and implementing the ideas involves math, so you can't avoid it.

I started off with the idea that this task - quantum computing with minimal math - wasn't doable, and at the end of reading the book I'm even more convinced that it isn't the way to go. Quantum computing is, as already suggested, heavy on math. If you can't handle the math then you are going to have a tough time understanding what is going on. More to the point, ideas that I hold in my head as a few lines of math occupy pages in a book that avoids that math. Far from making things more complex, the math makes them simpler and provides shortcuts that make thinking about the subject actually possible.

I have to say that my BSc degree was heavy on QM, and more recently I did an advanced course on quantum computing, so I was expecting this book to be a quick read and a refresher. Far from it. I had to read, and re-read several times, descriptions of things that I thought I knew in an effort to make the connection between the long descriptions and the simple math in my head. I'm sure this will be the experience of many readers who lack the math in their head and are trying to see the general principles in the very wordy explanations. This is not the book's fault. If there could be a book that did the job this would be it - well written with a dash of humour, interest and passion - but I don't think it works.

The first section is called Getting Started and is a very slow and gentle intro to the basics of what quantum computing is all about - qubits, states, randomness and so on. The examples are quantum encryption, key distribution, non-local games and teleportation. They all sound exciting, but the reality is fairly simple once you get the idea. All of the programs in this section are in Python.

Part 2 is about algorithms and it is expressed in Q#. On balance I think the entire book would be better off just using Q#, but it's a matter of opinion. A whole chapter is devoted to the Deutsch-Jozsa algorithm which, if you understand QM, is one of the easiest quantum algorithms to understand. It is also the simplest such algorithm that shows an advantage over a classical algorithm. It took me a short time to understand it using the math when I first encountered it, but here it took me some hours to dig through the non-math explanation, and at the end I still don't think you get the idea that it's all based on parity. Classically, parity is difficult to measure, but in QM it's a natural measurement.
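For readers who would rather see the parity point in code than in prose, here is a minimal state-vector sketch of the standard Deutsch-Jozsa algorithm in plain Python and numpy. It is my own illustration, not an excerpt from the book, which presents this chapter in Q#.

import numpy as np
from itertools import product

def hadamard_n(n):
    # n-qubit Hadamard transform built as a Kronecker product.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, H)
    return out

def deutsch_jozsa(f, n):
    """Return True if f looks constant, False if balanced.
    f maps an n-bit tuple to 0 or 1 and is assumed constant or balanced."""
    dim = 2 ** n
    Hn = hadamard_n(n)
    state = np.zeros(dim)
    state[0] = 1.0                                 # start in |0...0>
    state = Hn @ state                             # uniform superposition
    phases = np.array([(-1) ** f(x) for x in product((0, 1), repeat=n)])
    state = phases * state                         # phase oracle (-1)^f(x)
    state = Hn @ state                             # interfere
    p_all_zero = abs(state[0]) ** 2                # probability of measuring |0...0>
    return p_all_zero > 0.5                        # 1.0 if constant, 0.0 if balanced

n = 3
constant = lambda x: 0
balanced = lambda x: x[0] ^ x[1] ^ x[2]            # parity of the input bits
print("constant oracle looks constant?", deutsch_jozsa(constant, n))   # True
print("balanced oracle looks constant?", deutsch_jozsa(balanced, n))   # False

Running it, the constant oracle always yields the all-zeros outcome while the balanced (parity) oracle never does, which is exactly the single-query separation the chapter is describing.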

Part 3 is called Applied Quantum Computing and I was looking forward to this because the only really deep quantum algorithms I learned back in the day were Grover's and Shor's. I was hoping to broaden my horizons. The first chapter covers quantum annealing, and this was interesting because it's not a mainstream area of quantum computing but one that has many practical applications. The only problem is that quantum annealing is really too close to quantum analog computing for my tastes. It is basically a universal quantum simulator that can solve many ground-state problems - invaluable but not inspiring. After this I encountered two more algorithms - Grover's and Shor's. Well, yes, any book on quantum computing has to cover them, but there is nothing else. Are we really expending huge efforts on building quantum computers just to implement two algorithms? My guess is that the answer is no - we are expending huge effort to run just Shor's algorithm so that we can crack codes. This book does little to convince me that quantum computers have much more to offer, but I hope I'm wrong.

My final verdict is that this is about as good as a non-math-oriented introduction to quantum computing gets. Be warned, there are equations and mathematics that keep peeking through at every turn. You cannot avoid it, but you don't need much math to cope. What I would conclude, however, is that it is much easier to learn the math first and then learn the QM that is needed for quantum computing. In my opinion the math makes it easier.

To keep up with our coverage of books for programmers, follow @bookwatchiprog on Twitter or subscribe to I Programmer's Books RSS feed for each day's new addition to Book Watch and for new reviews.

See the article here:
Learn Quantum Computing with Python and Q# - iProgrammer

BT tests quantum radio receivers that could boost 5G coverage – TechRadar

BT is trialling a new hyper-sensitive quantum radio receiver that could boost the capabilities of 5G and Internet of Things (IoT) networks by reducing energy consumption and boosting coverage.

The receivers use excited atoms to achieve 100 times greater sensitivity than conventional radio equipment thanks to a quantum effect called electromagnetically induced transparency that forms a highly sensitive electric field detector.

Because the atomic radio frequency (RF) receivers are more sensitive, they could be deployed in areas where it's impractical or not cost-effective to deploy mobile infrastructure. This could help make nationwide 5G coverage a reality.

Meanwhile lower energy consumption would transform the economics of massive IoT projects that rely on long battery life.

The longer an IoT device can be left in the field without needing to be touched or replaced, the greater the return on investment.

BT's engineers successfully sent digitally-encoded messages using the technology via EE's 3.6GHz spectrum. The use of commercially-licensed frequencies could accelerate the timetable for the receivers to be used in the real world. Researchers are now working to miniaturise the equipment and to find the optimum frequency modulation and signal processing so it can be used in the future.
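The article does not say how the messages were encoded, so purely as a hypothetical illustration of sending a digitally-encoded message over a radio carrier, here is a minimal binary frequency-shift-keying (FSK) sketch in Python. The sample rate and tone frequencies are arbitrary placeholders, nothing like the actual 3.6GHz parameters or BT's signal processing.

import numpy as np

# Minimal binary FSK modulator/demodulator, purely illustrative.
FS = 10_000              # sample rate (Hz)
BIT_RATE = 100           # bits per second
F0, F1 = 1_000, 2_000    # tone used for a 0 bit and for a 1 bit (Hz)

def modulate(bits):
    spb = FS // BIT_RATE                      # samples per bit
    t = np.arange(spb) / FS
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def demodulate(signal):
    spb = FS // BIT_RATE
    t = np.arange(spb) / FS
    bits = []
    for i in range(0, len(signal), spb):
        chunk = signal[i:i + spb]
        # Correlate the chunk against each tone and pick the stronger match.
        c0 = abs(np.dot(chunk, np.exp(-2j * np.pi * F0 * t)))
        c1 = abs(np.dot(chunk, np.exp(-2j * np.pi * F1 * t)))
        bits.append(1 if c1 > c0 else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(message)) == message
print("recovered:", demodulate(modulate(message)))

Whatever modulation the researchers settle on, the receiving end in the quantum case is the atomic sensor itself rather than a conventional antenna front end, which is where the claimed sensitivity advantage comes from.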

"BT's investment in cutting-edge R&D plays a central role in ensuring the UK remains a network technology leader," said Howard Watson, BT chief technology officer (CTO). "Our programme has huge potential to boost the performance of our next-generation EE network and deliver an even better service to our customers. Although it's early days for the technology, we're proud to be playing an instrumental role in developing cutting-edge science."

BT's interest in quantum technology has also seen it and Toshiba build the world's first commercial quantum-secured metro network using standard fibre cables in London.

The UK government has expressed a desire to be at the forefront of the field, believing quantum computing can play a vital role in the connected economy and accelerate Industrial Internet of Things (IIoT) deployments. A National Quantum Computing Centre (NQCC) is expected to open in 2022 as part of the £1 billion National Quantum Technologies Programme.

See original here:
BT tests quantum radio receivers that could boost 5G coverage - TechRadar