Archive for the ‘Quantum Computer’ Category

D-Wave Announces Promotion of Dr. Alan Baratz to CEO – GlobeNewswire

BURNABY, British Columbia, Dec. 09, 2019 (GLOBE NEWSWIRE) -- D-Wave Systems Inc., the leader in quantum computing systems, software, and services, today announced that Dr. Alan Baratz will assume the role of chief executive officer (CEO), effective January 1, 2020. Baratz joined D-Wave in 2017 and currently serves as the chief product officer and executive vice president of research and development for D-Wave. He takes over from the retiring CEO, Vern Brownell.

Baratz's promotion to CEO follows the launch of Leap, D-Wave's quantum cloud service, in October 2018, and comes in advance of the mid-2020 launch of the company's next-generation quantum system, Advantage.

Baratz has driven the development, delivery, and support of all of D-Wave's products, technologies, and applications in recent years. He has over 25 years of experience in product development and bringing new products to market at leading technology companies and software startups. As the first president of JavaSoft at Sun Microsystems, Baratz oversaw the growth and adoption of the Java platform from its infancy to a robust platform supporting mission-critical applications in nearly 80 percent of Fortune 1000 companies. He has also held executive positions at Symphony, Avaya, Cisco, and IBM. He served as CEO and president of Versata, Zaplet, and NeoPath Networks, and as a managing director at Warburg Pincus LLC. Baratz holds a doctorate in computer science from the Massachusetts Institute of Technology.

"I joined D-Wave to bring quantum computing technology to the enterprise. Now more than ever, I am convinced that making practical quantum computing available to forward-thinking businesses and emerging quantum developers through the cloud is central to jumpstarting the broad development of in-production quantum applications," said Baratz, chief product officer and head of research and development. "As I assume the CEO role, I'll focus on expanding the early beachheads for quantum computing that exist in manufacturing, mobility, new materials creation, and financial services into real value for our customers. I am honored to take over the leadership of the company and work together with the D-Wave team as we begin to deliver real business results with our quantum computers."

The company also announced that CEO Vern Brownell has decided to retire at the end of the year in order to spend more time at his home in Boston with his family. Baratz will become CEO at that time. During Brownell's tenure, D-Wave developed four generations of commercial quantum computers, raised over $170 million in venture funding, and secured its first customers, including Lockheed Martin, Google and NASA, and Los Alamos National Laboratory. Brownell will continue to serve as an advisor to the board.

"There are very few moments in your life when you have the opportunity to build an entirely new market. My 10 years at D-Wave have been rich with breakthroughs, like selling the first commercial quantum computer. I am humbled to have been a part of building the quantum ecosystem," said Brownell, retiring D-Wave CEO. "Alan has shown tremendous leadership in our technology and product development efforts, and I am working with him to transition leadership of the entire business. This is an exciting time for quantum computing and an exciting time for D-Wave. I can't imagine a better leader than Alan at the helm for the next phase of bringing practical quantum computing to enterprises around the world."

"With cloud access and the development of more than 200 early applications, quantum computing is experiencing explosive growth. We are excited to recognize Alan's work in bringing Leap to market and building the next-generation Advantage system. And as D-Wave expands its Quantum-as-a-Service offerings, Alan's expertise in growing developer communities and delivering SaaS solutions to enterprises will be critical for D-Wave's success in the market," said Paul Lee, D-Wave board chair. "I want to thank Vern for his 10 years of contributions to D-Wave. He was central in our ability to be the first to commercialize quantum computers and has made important contributions not only to D-Wave, but also in building the quantum ecosystem."

About D-Wave Systems Inc.

D-Wave is the leader in the development and delivery of quantum computing systems, software, and services, and is the world's first commercial supplier of quantum computers. Our mission is to unlock the power of quantum computing for the world. We do this by delivering customer value with practical quantum applications for problems as diverse as logistics, artificial intelligence, materials sciences, drug discovery, cybersecurity, fault detection, and financial modeling. D-Wave's systems are being used by some of the world's most advanced organizations, including Volkswagen, DENSO, Lockheed Martin, USRA, USC, Los Alamos National Laboratory, and Oak Ridge National Laboratory. With headquarters near Vancouver, Canada, D-Wave's US operations are based in Palo Alto, CA and Bellevue, WA. D-Wave has a blue-chip investor base including PSP Investments, Goldman Sachs, BDC Capital, DFJ, In-Q-Tel, PenderFund Capital, 180 Degree Capital Corp., and Kensington Capital Partners Limited. For more information, visit: http://www.dwavesys.com.

Contact: D-Wave Systems Inc., dwave@launchsquad.com

More here:

D-Wave Announces Promotion of Dr. Alan Baratz to CEO - GlobeNewswire

Quantum supremacy is here, but smart data will have the biggest impact – Quantaneo, the Quantum Computing Source

Making fast and powerful quantum computing available through the cloud can enable tasks to be processed millions of times faster, and could reshape lives and businesses as we know them. For example, applications using quantum computing could reduce or prevent traffic congestion, cybercrime, and cancer. However, reaching the quantum supremacy landmark doesn't mean that Google can take its foot off the gas. Rather, the company has thrown down the gauntlet, and the race to commercialize quantum computing is on. Delivering this killer technology is still an uphill battle: it means harnessing the power of highly fickle machines and moving around quantum bits of information, a process that is inherently error-prone.

To deliver quantum cloud services, whether for commercial or academic research, Google must tie together units of quantum information (qubits) and wire data, which is part of every action and transaction across the entire IT infrastructure. Even if quantum cloud services make it to the big league, they will still rely on traffic flows based on wire data to deliver value to users. This raises a conundrum for IT and security professionals who must assure services and deliver a flawless user experience. On one hand, the quantum cloud service solves a million computations in parallel and in real time. On the other hand, the results are delivered through wire data across a cloud, SD-WAN, or 5G network. It does not matter if a quantum computer today or tomorrow can crank out an answer 100 million times faster than a regular computer chip if an application that depends on it experiences performance problems, or if a threat actor is lurking in your on-premises data centre or has already penetrated the IT infrastructure's first and last lines of defence.

No matter what the quantum computing world looks like in the future, IT teams such as NetOps and SecOps will still need to use wire data to gain end-to-end visibility into their on-premises data centres and cloud environments. Wire data is used to fill the visibility gap and see what others can't: to gain actionable intelligence to detect cyber-attacks or quickly resolve service degradations. Quantum computing may increase speed, but it also adds a new dimension of infrastructure complexity and the potential for something to break anywhere along the service delivery path. Reducing risk therefore requires removing service delivery blind spots. A proven way to do that is by turning wire data into smart data to cut through infrastructure complexity and gain visibility without borders. When that happens, the IT organization can pinpoint exactly which issues are impacting service performance and security.

In the rush to embrace quantum computing, wire data therefore cannot, and should not, be ignored. Wire data can be turned into contextually useful smart data. With a smart data platform, the IT organization can help make quantum computing a success by protecting user experience across industries including automotive, manufacturing, and healthcare. So while Google is striving for high-quality qubits and blazing new quantum supremacy trails, success ultimately relies on using smart data for service assurance and security in an age of infinite devices, cloud applications, and exponential scalability.

Ron Lifton, Senior Enterprise Solutions Manager, NETSCOUT

Original post:

Quantum supremacy is here, but smart data will have the biggest impact - Quantaneo, the Quantum Computing Source

Quantum Trends And The Internet of Things – Forbes


As a new decade approaches, we are in a state of technological flux across many spectrums. One area to take note of is quantum computing. We are starting to evolve beyond classical computing into a new data era called quantum computing. It is envisioned that quantum computing (still in a development stage) will accelerate us into the future by impacting the landscape of artificial intelligence and data analytics. Quantum computing's power and speed will help us solve some of the biggest and most complex challenges we face as humans.

Gartner describes quantum computing as: "[T]he use of atomic quantum states to effect computation. Data is held in qubits (quantum bits), which have the ability to hold all possible states simultaneously. Data held in qubits is affected by data held in other qubits, even when physically separated. This effect is known as entanglement." In simplified terms, quantum computers use quantum bits, or qubits, instead of the binary ones and zeros of traditional digital computing.
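To put that description in symbols (an illustrative addition, not part of the Gartner definition): a single qubit can hold a weighted superposition of both classical values, and two entangled qubits share a joint state that cannot be described qubit by qubit.

\[
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1
\]

\[
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
\]

Measuring one half of the Bell state \(|\Phi^{+}\rangle\) immediately fixes the outcome of measuring the other, even when the qubits are physically separated, which is the entanglement effect the definition refers to.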

There is an additional entanglement relating to quantum, and that is its intersection with the Internet of Things (IoT). Loosely defined, the Internet of Things (IoT) refers to the general idea of things that are readable, recognizable, locatable, addressable, and/or controllable via the Internet. It encompasses devices, sensors, people, data, and machines and the interactions between them. Business Insider Intelligence forecasted that by 2023, consumers, companies and governments will install 40 billion IoT devices globally.

As we continue to evolve rapidly into the IoT and the new digital economy, both edge devices and data are proliferating at amazing rates. The challenge now is how to monitor and ensure quality of service across the IoT. Responsiveness, scalability, sound processes, and efficiency are needed to support any new technology or capability, especially across trillions of sensors.

Specifically, quantum technologies will influence: optimization of computing power, computing models, network latency, interoperability, artificial intelligence (the human/computer interface), real-time and predictive analytics, increased storage and data memory power, secure cloud computing, virtualization, and the emerging 5G telecommunications infrastructure. For 5G, secure end-to-end communications are fundamental, and quantum encryption (which generates secure codes) may be the solution for rapidly growing IoT connectivity.
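The quantum encryption mentioned here usually means quantum key distribution (QKD). As a rough illustration, below is a toy, purely classical simulation of the key-sifting step of the BB84 protocol; it is a sketch for intuition only, not a real QKD implementation, and none of the names in it come from the article.

```python
# Toy simulation of BB84-style key sifting (illustrative only):
# Alice encodes random bits in random bases, Bob measures in random bases,
# and the two keep only the positions where their bases happened to match.
import secrets

def random_bits(n):
    return [secrets.randbits(1) for _ in range(n)]

def bb84_sift(n=32):
    alice_bits = random_bits(n)    # raw key bits Alice encodes on qubits
    alice_bases = random_bits(n)   # 0 = rectilinear basis, 1 = diagonal basis
    bob_bases = random_bits(n)     # Bob picks his measurement bases at random

    # When the bases match, Bob's measurement reproduces Alice's bit;
    # mismatched positions are discarded after the public basis comparison.
    return [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]

if __name__ == "__main__":
    print("sifted key bits:", bb84_sift())
```

In the real protocol, an eavesdropper measuring the qubits in transit disturbs them, which shows up as an elevated error rate when Alice and Bob compare a sample of the sifted key.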

Security of the IoT is a paramount issue. Currently, cryptographic algorithms are being used to help secure communication (validation and verification) in the IoT. But because they rely on public key schemes, their encryption could be broken by sophisticated hackers using quantum computers in the not-so-distant future.
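The reason, stated loosely: RSA-style public-key security rests on the difficulty of factoring a large modulus \(N\). The best known classical attack, the general number field sieve, runs in sub-exponential time,

\[
T_{\mathrm{GNFS}}(N) = \exp\!\Bigl( \bigl(\tfrac{64}{9}\bigr)^{1/3} (\ln N)^{1/3} (\ln \ln N)^{2/3} \,(1 + o(1)) \Bigr),
\]

whereas Shor's algorithm on a sufficiently large, error-corrected quantum computer factors \(N\) in polynomial time, on the order of \((\log N)^{3}\). That gap is what puts today's public-key schemes at risk. These are standard textbook complexity results added here for context, not figures from the article.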

On the other side of the coin, quantum computing has the ability to create an almost un-hackable network of devices and data. The need to securely encrypt and protect IoT-connected devices, and to power them with exponential speed and analytical capabilities, is an imperative for both government and the private sector.

As quantum computing and the IoT merge, a new ecosystem of policy issues will also evolve. These include ethics, interoperability protocols, cybersecurity, privacy/surveillance, complex autonomous systems, and best commercial practices.

As quantum computing capabilities advance, we should act now to prepare IoT for the quantum world. There are many areas to explore in research and development and eventually implementation. The coming decade will provide both imperatives and opportunities to explore quantum implications.

Chuck Brooks is a globally recognized thought leader and evangelist for cybersecurity and emerging technologies. LinkedIn named Chuck one of "The Top 5 Tech People to Follow on LinkedIn." He was named by Thomson Reuters as a Top 50 Global Influencer in Risk and Compliance, and by IFSEC as the #2 Global Cybersecurity Influencer in 2018. He is also a Cybersecurity Expert for The Network at the Washington Post, Visiting Editor at Homeland Security Today, and a Contributor to FORBES.

Chuck Brooks is also Chair of the IoT and Quantum Computing Committee of the Quantum Security Alliance. The Quantum Security Alliance was formed to bring academia, industry, researchers, and US government entities together to identify, define, collaborate on, baseline, and standardize responses that protect sovereign countries, society, and individuals from the far-reaching impacts of quantum computing.

See the rest here:

Quantum Trends And The Internet of Things - Forbes

Fugaku Remakes Exascale Computing In Its Own Image – The Next Platform

When originally conceived, Japan's Post-K supercomputer was supposed to be the country's first exascale system. Developed by Fujitsu and the RIKEN Center for Computational Science, the system, now known as Fugaku, is designed to be two orders of magnitude faster than its predecessor, the 11.3-petaflops (peak) K computer. But a funny thing happened on the way to exascale. By the time the silicon dust had settled on the A64FX, the chip that will power Fugaku, it had morphed into a pre-exascale system.

The current estimate is that the RIKEN-bound supercomputer will top out at about 400 peak petaflops at double precision. Given that the system has to fit in a 30 MW to 40 MW power envelope, that's about all you can squeeze out of the 150,000 single-socket nodes that will make up the machine. Which is actually rather impressive. The A64FX prototype machine, aka micro-Fugaku, is currently the most energy-efficient supercomputer in the world, delivering 16.9 gigaflops per watt. However, extrapolating that out to an exaflop machine with those same (or very similar) processors would require something approaching 60 MW to 80 MW.
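For reference, the back-of-the-envelope arithmetic behind that range, assuming the prototype's 16.9 gigaflops per watt held at full scale:

\[
\frac{10^{18}\ \mathrm{flops}}{16.9 \times 10^{9}\ \mathrm{flops/W}} \approx 5.9 \times 10^{7}\ \mathrm{W} \approx 59\ \mathrm{MW},
\]

and since systems rarely sustain their best efficiency at that scale, the estimate drifts toward 80 MW.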

But according to Satoshi Matsuoka, director of the RIKEN lab, the performance goal of achieving two orders of magnitude improvement over the K computer will be achieved from an application performance perspective. "That was the plan from the beginning," Matsuoka tells The Next Platform.

To imply that a 100-fold application boost amounts to exascale capability is a bit of a stretch, but if Fugaku effectively performs at that level relative to the performance of applications on the K machine, that is probably more important to RIKEN users. It should be pointed out that not all applications are going to enjoy that magnitude of speedup. The table below illustrates the expected performance boost for nine target applications relative to the K computer.

Even though Fugaku has only 20 times the raw performance and energy efficiency of its predecessor, the 100X performance improvement is the defining metric, says Matsuoka. That kind of overachievement (again, on some codes) is the result of certain capabilities baked into the A64FX silicon, in particular the use of Arm's Scalable Vector Extension (SVE), which provides something akin to an integrated 512-bit-wide vector processor on-chip, delivering about three teraflops of peak oomph.
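That per-chip figure can be reproduced with rough arithmetic, assuming 48 compute cores, two 512-bit SVE FMA pipelines per core, and a clock of about 2 GHz (these specifics are assumptions added here, not stated in the article):

\[
\underbrace{48}_{\text{cores}} \times \underbrace{2}_{\text{SVE pipes}} \times \underbrace{8}_{\text{FP64 lanes}} \times \underbrace{2}_{\text{flops/FMA}} \times 2.0\ \mathrm{GHz} \approx 3.1\ \text{teraflops}.
\]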

Perhaps even more significant is the 32 GB of HBM2 stacked memory glued onto the A64FX package, which delivers 29X the bandwidth of the memory system on the K computer. The choice to dispense with conventional memory and go entirely with HBM2 reflects the recognition that many HPC applications these days are memory-bound rather than compute-bound. In fact, achieving better balance between flops and memory bandwidth was a key design point for Fugaku. The compromise here is that 32 GB is not much capacity, especially for applications that need to work with really large datasets.

The other aspect of Fugaku that could earn it exascale street cred is in the realm of lower-precision floating point. Although the system will deliver 400 peak petaflops at double precision (FP64), it will provide 800 petaflops at single precision (FP32) and 1.6 exaflops at half precision (FP16). The half-precision support alludes to AI applications that can make extensive use of 16-bit floating point arithmetic to build artificial neural networks. Fugaku may even manage to hit an exaflop or better on the HPL-AI benchmark, a mixed-precision variant of High Performance Linpack (HPL) that makes extensive use of FP16.
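The scaling follows directly from the vector width: halving the precision doubles the number of lanes in a 512-bit SVE register (8 FP64, 16 FP32, or 32 FP16 elements), so peak throughput doubles at each step:

\[
400\ \mathrm{PF}\ (\mathrm{FP64}) \;\rightarrow\; 800\ \mathrm{PF}\ (\mathrm{FP32}) \;\rightarrow\; 1.6\ \mathrm{EF}\ (\mathrm{FP16}).
\]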

When run on the 200-petaflops Summit machine at Oak Ridge National Laboratory, HPL-AI delivered 445 petaflops on Linpack, three times faster than the result achieved solely with FP64. More to the point, if the same iterative refinement techniques using FP16 can be applied to real applications, it's possible that actual HPC codes can be accelerated to exascale levels.
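Below is a minimal sketch of the iterative-refinement idea behind HPL-AI, using NumPy and FP32 as the stand-in low precision (NumPy's solvers do not accept FP16). It illustrates the technique only and is not the benchmark's actual code.

```python
# Mixed-precision iterative refinement: do the expensive solve in low
# precision, then correct the answer using residuals computed in FP64.
import numpy as np

def mixed_precision_solve(A, b, iters=5, low=np.float32):
    A_low = A.astype(low)                      # low-precision copy for the bulk work
    x = np.linalg.solve(A_low, b.astype(low)).astype(np.float64)

    for _ in range(iters):
        r = b - A @ x                          # residual in full FP64 precision
        d = np.linalg.solve(A_low, r.astype(low)).astype(np.float64)
        x += d                                 # nudge the FP64 solution toward the true answer
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
    b = rng.standard_normal(n)
    x = mixed_precision_solve(A, b)
    print("FP64 residual norm:", np.linalg.norm(b - A @ x))
```

A production implementation would reuse a single low-precision factorization across refinement steps rather than re-solving from scratch, which is where the FP16 speed advantage comes from.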

The more straightforward use of reduced precision math, employing both FP16 and FP32, is for training AI models. Again, work on Summit proved that lower precision math could attain exascale-level computing on these machines. In this particular case, developers employed the Tensor Cores on the system's V100 GPUs to use a neural network to classify extreme weather patterns, achieving peak performance of 1.13 exaops and sustained performance of 0.999 exaops.

Whether reduced precision exaflops or exaops qualifies as exascale computing is a semantic exercise more than anything else. Of course, that's not going to be very satisfying for computer historians, or even for analysts and journalists attempting to track HPC capability in real time.

But perhaps that's as it should be. The attainment of a particular peak performance or Linpack number does little to inform the state of supercomputing. And given the increasing importance of AI workloads, which are not based on 64-bit computing, it's not surprising that HPC is moving away from these simplistic measures. The expected emergence of neuromorphic and quantum computing in the coming decade will further muddy the waters.

That said, users will continue to rely primarily on 64-bit flops to run HPC simulations, which scientists and engineers will keep using heavily for the foreseeable future.

With that in mind, RIKEN is already planning for its post-Fugaku system, which Matsuoka says is tentatively scheduled to make its appearance in 2028. According to him, RIKEN is planning to analyze how it can build something 20X more powerful than Fugaku. He says the challenge is that current technologies won't extrapolate to such a system in any practical manner. Which once again means they will have to innovate at the architectural level, but this time without the benefit of Moore's Law.


View original post here:

Fugaku Remakes Exascale Computing In Its Own Image - The Next Platform

OrbitsEdge teams up with HPE to build data centres in Space – Data Economy

Last week, Amazon's AWS re:Invent 2019 conference welcomed more than 60,000 attendees, spread out across six venues on the Las Vegas Strip, promising to make re:Invent 2019 the biggest re:Invent yet.

Here is a list of just some of the announcements the cloud giant made over the course of the conference:

AWS Local Zone

AWS announced the opening of an AWS Local Zone in Los Angeles (LA). AWS Local Zones are a new type of AWS infrastructure deployment that places compute, storage, database, and other select services close to customers, giving developers in LA the ability to deploy applications that require single-digit millisecond latencies to end-users in LA.

Amazon EC2

The cloud giant unveiled nine new Amazon Elastic Compute Cloud (EC2) innovations. AWS added to its industry-leading compute and networking lineup with new Arm-based instances (M6g, C6g, R6g) powered by the AWS-designed Graviton2 processor and machine learning inference instances (Inf1) powered by AWS-designed Inferentia chips.

AWS Outposts

AWS announced general availability of AWS Outposts, fully managed and configurable compute and storage racks built with AWS-designed hardware that allow customers to run compute and storage on-premises while connecting to AWS's broad array of services in the cloud.

AWS Outposts bring native AWS services, infrastructure, and operating models to virtually any data centre, co-location space, or on-premises facility.

AWS Wavelength

AWS announced AWS Wavelength, which provides developers the ability to build applications that serve end-users with single-digit millisecond latencies over the 5G network.


Wavelength embeds AWS compute and storage services at the edge of telecommunications providers' 5G networks, enabling developers to serve use cases that require ultra-low latency, like machine learning inference at the edge, autonomous industrial equipment, smart cars and cities, Internet of Things (IoT), and augmented and virtual reality.

Quantum Computing Service

AWS announced three initiatives as part of the company's plans to help advance quantum computing technologies.

"There's no question the world will be a better place if everyone can innovate more quickly and efficiently," said Charlie Bell, SVP, Amazon Web Services.

"And if stuff just works better. For that reason, I'm excited that we are sharing what we've learned with you in the Amazon Builders' Library."


See the original post here:

OrbitsEdge teams up with HPE to build data centres in Space - Data Economy