Archive for the ‘Quantum Computer’ Category

Quantum Trends And The Internet of Things – Forbes


As a new decade approaches, we are in a state of technological flux across many fronts. One area to take note of is quantum computing. We are starting to evolve beyond classical computing into a new data era called quantum computing. It is envisioned that quantum computing (still in a development stage) will accelerate us into the future by reshaping the landscape of artificial intelligence and data analytics. The power and speed of quantum computing will help us solve some of the biggest and most complex challenges we face as humans.

Gartner describes quantum computing as "[t]he use of atomic quantum states to effect computation. Data is held in qubits (quantum bits), which have the ability to hold all possible states simultaneously. Data held in qubits is affected by data held in other qubits, even when physically separated. This effect is known as entanglement." In simpler terms, quantum computers use quantum bits, or qubits, instead of the traditional binary bits of ones and zeros used for digital communications.
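
For readers who want a more concrete picture, here is a minimal sketch in plain NumPy (not any particular quantum SDK) of what that description means: a single qubit held in an equal superposition of 0 and 1, and a two-qubit Bell state whose measurement outcomes are perfectly correlated, which is the entanglement Gartner refers to.

    # Minimal illustration of superposition and entanglement with state vectors.
    # Uses only NumPy; a real quantum program would use an SDK such as Qiskit or Braket.
    import numpy as np

    # A single qubit in equal superposition: (|0> + |1>) / sqrt(2)
    qubit = np.array([1, 1]) / np.sqrt(2)
    print("P(0), P(1) =", np.abs(qubit) ** 2)          # 0.5 and 0.5

    # A two-qubit Bell state: (|00> + |11>) / sqrt(2)
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

    # Sampled measurements: the two qubits always agree,
    # no matter how far apart they are physically.
    outcomes = np.random.choice(["00", "11"], size=5, p=np.abs(bell[[0, 3]]) ** 2)
    print(outcomes)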

There is an additional entanglement relating to quantum, and that is its intersection with the Internet of Things (IoT). Loosely defined, the IoT refers to the general idea of things that are readable, recognizable, locatable, addressable, and/or controllable via the Internet. It encompasses devices, sensors, people, data, and machines, and the interactions between them. Business Insider Intelligence forecasts that by 2023, consumers, companies, and governments will have installed 40 billion IoT devices globally.

As we rapidly continue to evolve into the IoT and the new digital economy, both edge devices and data are proliferating at amazing rates. The challenge now is how to monitor and ensure quality of service across the IoT. Responsiveness, scalability, processes, and efficiency are needed to best service any new technology or capability, especially across trillions of sensors.

Specifically, quantum technologies will influence: optimization of computing power, computing models, network latency, interoperability, artificial intelligence (human/computer interface), real-time analytics and predictive analytics, increased storage and data memory power, secure cloud computing, virtualization, and the emerging 5G telecommunications infrastructure. For 5G, secure end-to-end communications are fundamental, and quantum encryption (which generates secure codes) may be the solution for rapidly growing IoT connectivity.
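
The "secure codes" mentioned above generally refer to quantum key distribution. As a rough illustration of the idea rather than an implementation of any production protocol, the sketch below simulates the classical sifting step of a BB84-style exchange: the two parties keep only the bits where their randomly chosen measurement bases happen to agree, and that shared subset becomes key material.

    # Toy simulation of BB84-style key sifting (illustrative only; no real quantum channel).
    import secrets

    N = 32
    alice_bits  = [secrets.randbelow(2) for _ in range(N)]
    alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
    bob_bases   = [secrets.randbelow(2) for _ in range(N)]

    # In the ideal, noise-free case Bob recovers Alice's bit whenever his basis matches hers;
    # mismatched positions are discarded during sifting.
    sifted_key = [bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases) if a == b]
    print(f"kept {len(sifted_key)} of {N} bits:", "".join(map(str, sifted_key)))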

Security of the IoT is a paramount issue. Currently, cryptographic algorithms are used to help secure communication (validation and verification) in the IoT. But because they rely on public-key schemes, their encryption could be broken by sophisticated hackers using quantum computers in the not-so-distant future.
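
To see why, consider how an RSA-style public-key scheme works. The toy example below uses deliberately tiny, insecure key sizes to show that the private key follows directly from the factors of the public modulus; Shor's algorithm running on a sufficiently large quantum computer could recover those factors efficiently, which is exactly the threat described above.

    # Toy RSA with deliberately tiny primes, to show why factoring the modulus breaks it.
    p, q = 61, 53                      # secret primes
    n = p * q                          # public modulus (3233)
    e = 17                             # public exponent
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)                # private exponent (needs Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
    print(pow(ciphertext, d, n))       # only the holder of d can decrypt -> 42

    # An attacker who can factor n (trivial here, infeasible classically for a 2048-bit n,
    # but efficient with Shor's algorithm on a large quantum computer) rebuilds d:
    d_attacker = pow(e, -1, (61 - 1) * (53 - 1))
    print(pow(ciphertext, d_attacker, n))  # 42 again -- the scheme is broken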

On the other side of the coin, quantum computing has the ability to create an almost un-hackable network of devices and data. The need to securely encrypt and protect IoT connected devices and power them with exponential speed and analytical capabilities is an imperative for both government and the private sector.

As quantum computing and the IoT merge, an evolving ecosystem of new policy issues will emerge as well. These include ethics, interoperability protocols, cybersecurity, privacy and surveillance, complex autonomous systems, and best commercial practices.

As quantum computing capabilities advance, we should act now to prepare IoT for the quantum world. There are many areas to explore in research and development and eventually implementation. The coming decade will provide both imperatives and opportunities to explore quantum implications.

Chuck Brooks is a globally recognized thought leader and evangelist for Cybersecurity and Emerging Technologies. LinkedIn named Chuck one of The Top 5 Tech People to Follow on LinkedIn. He was named by Thomson Reuters as a Top 50 Global Influencer in Risk and Compliance, and by IFSEC as the #2 Global Cybersecurity Influencer in 2018. He is also a Cybersecurity Expert for The Network at the Washington Post, Visiting Editor at Homeland Security Today, and a Contributor to FORBES.

Chuck Brooks is also Chair of the IoT and Quantum Computing Committee of the Quantum Security Alliance. The Quantum Security Alliance was formed to bring academia, industry, researchers, and US government entities together to identify, define, collaborate on, baseline, and standardize responses to, and ultimately protect sovereign countries, society, and individuals from, the far-reaching impacts of quantum computing.

See the rest here:

Quantum Trends And The Internet of Things - Forbes

Fugaku Remakes Exascale Computing In Its Own Image – The Next Platform

When originally conceived, Japan's Post-K supercomputer was supposed to be the country's first exascale system. Developed by Fujitsu and the RIKEN Center for Computational Science, the system, now known as Fugaku, is designed to be two orders of magnitude faster than its predecessor, the 11.3-petaflops (peak) K computer. But a funny thing happened on the way to exascale. By the time the silicon dust had settled on the A64FX, the chip that will power Fugaku, it had morphed into a pre-exascale system.

The current estimate is that the RIKEN-bound supercomputer will top out at about 400 peak petaflops at double precision. Given that the system has to fit in a 30 MW to 40 MW power envelope, that's about all you can squeeze out of the 150,000 single-socket nodes that will make up the machine. Which is actually rather impressive. The A64FX prototype machine, aka micro-Fugaku, is currently the most energy-efficient supercomputer in the world, delivering 16.9 gigaflops per watt. However, extrapolating that out to an exaflop machine with those same (or very similar) processors would require something approaching 60 MW to 80 MW.
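
The back-of-the-envelope math behind that estimate is easy to reproduce from the figures quoted in this article; treat it as a rough lower bound, since it ignores interconnect, storage, and cooling overheads.

    # Back-of-the-envelope check of the power estimate quoted above.
    gflops_per_watt = 16.9                     # micro-Fugaku's measured efficiency
    watts_for_one_exaflop = 1e18 / (gflops_per_watt * 1e9)
    print(watts_for_one_exaflop / 1e6, "MW")   # ~59 MW, before any real-world overheads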

But according to Satoshi Matsuoka, director of the RIKEN lab, the goal of a two-orders-of-magnitude improvement over the K computer will still be met from an application performance perspective. "That was the plan from the beginning," Matsuoka tells The Next Platform.

To imply that a 100-fold application boost amounts to exascale capability is a bit of a stretch, but if Fugaku effectively performs at that level relative to applications on the K machine, that is probably more important to RIKEN users. It should be pointed out that not all applications are going to enjoy that magnitude of speedup. The table below illustrates the expected performance boost for nine target applications relative to the K computer.

Even though Fugaku has only 20 times the raw performance and energy efficiency of its predecessor, the 100X performance improvement is the defining metric, says Matsuoka. That kind of overachievement (again, on some codes) is the result of certain capabilities baked into the A64FX silicon, in particular the use of Arm's Scalable Vector Extension (SVE), which provides something akin to an integrated 512-bit-wide vector processor on-chip, delivering about three teraflops of peak oomph.

Perhaps even more significant is the 32 GB of HBM2 stacked memory glued onto the A64FX package, which delivers 29X the bandwidth of the memory system on the K computer. The choice to dispense with conventional memory and go entirely with HBM2 reflects the recognition that many HPC applications these days are memory-bound rather than compute-bound. In fact, achieving a better balance between flops and memory bandwidth was a key design point for Fugaku. The compromise is that 32 GB is not much capacity, especially for applications that need to work with really large datasets.
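
A quick way to see why that balance matters is to compare a node's peak flop rate with its memory bandwidth, roofline-style. The sketch below uses the roughly three teraflops per chip cited above and an assumed ~1 TB/s of HBM2 bandwidth; both figures are approximations for illustration only.

    # Roofline-style balance check: is a simple kernel compute-bound or memory-bound?
    peak_flops = 3.0e12          # ~3 TF/s per A64FX, as cited above
    peak_bw    = 1.0e12          # ~1 TB/s of HBM2 bandwidth (approximate, assumed)

    machine_balance = peak_flops / peak_bw            # flops the chip can do per byte moved
    # STREAM-triad-like kernel a[i] = b[i] + s * c[i]: 2 flops per 24 bytes of FP64 traffic
    kernel_intensity = 2 / 24

    print(f"machine balance : {machine_balance:.1f} flops/byte")
    print(f"kernel intensity: {kernel_intensity:.3f} flops/byte")
    print("memory-bound" if kernel_intensity < machine_balance else "compute-bound")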

The other aspect of Fugaku that could earn it exascale street cred is in the realm of lower-precision floating point. Although the system will deliver 400 peak petaflops at double precision (FP64), it will provide 800 petaflops at single precision (FP32) and 1.6 exaflops at half precision (FP16). The half-precision support alludes to AI applications, which can make extensive use of 16-bit floating point arithmetic to build artificial neural networks. Fugaku may even manage to hit an exaflop or better on HPL-AI, a benchmark that makes extensive use of FP16 to run a mixed-precision variant of High Performance Linpack (HPL).
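
That 400/800/1,600 progression follows directly from vector width: a fixed 512-bit SVE register holds twice as many FP32 lanes as FP64 lanes, and twice as many again in FP16, so peak throughput roughly doubles each time precision is halved. A small sanity check:

    # Peak throughput scales with how many lanes fit in a fixed 512-bit vector register.
    fp64_peak_pflops = 400                       # Fugaku's quoted FP64 peak
    for bits, label in [(64, "FP64"), (32, "FP32"), (16, "FP16")]:
        lanes = 512 // bits                      # elements per SVE register
        print(label, fp64_peak_pflops * (lanes // (512 // 64)), "petaflops")
    # -> FP64 400, FP32 800, FP16 1600 petaflops (1.6 exaflops)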

When run on the 200-petaflops Summit machine at Oak Ridge National Laboratory, HPL-AI delivered 445 petaflops on Linpack, three times faster than the run performed solely in FP64. More to the point, if the same iterative refinement techniques using FP16 can be applied to real applications, it's possible that actual HPC codes can be accelerated to exascale levels.
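
The iterative refinement trick behind HPL-AI is straightforward to sketch: factor and solve the system in low precision, then repeatedly compute the residual in high precision and solve for a correction, recovering double-precision accuracy while the expensive work stays on the fast, low-precision units. Below is a minimal NumPy version; low precision is emulated with FP32 on the CPU, so it only illustrates the numerics, not the speedup.

    # Mixed-precision iterative refinement: low-precision solve, high-precision residual correction.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)     # well-conditioned test matrix
    x_true = rng.standard_normal(n)
    b = A @ x_true

    # "Fast" solve in reduced precision (emulated; on real hardware this runs on FP16 units).
    A_low = A.astype(np.float32)
    x = np.linalg.solve(A_low, b.astype(np.float32)).astype(np.float64)

    for it in range(5):
        r = b - A @ x                                    # residual in full FP64
        correction = np.linalg.solve(A_low, r.astype(np.float32)).astype(np.float64)
        x += correction
        print(it, np.linalg.norm(b - A @ x) / np.linalg.norm(b))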

The more straightforward use of reduced-precision math, employing both FP16 and FP32, is for training AI models. Again, work on Summit proved that lower-precision math could attain exascale-level computing on these machines. In this particular case, developers employed the Tensor Cores on the system's V100 GPUs to run a neural network that classifies extreme weather patterns, achieving peak performance of 1.13 exaops and sustained performance of 0.999 exaops.

Whether reduced-precision exaflops or exaops qualifies as exascale computing is a semantic exercise more than anything else. Of course, that's not going to be very satisfying for computer historians, or even for analysts and journalists attempting to track HPC capability in real time.

But perhaps that's as it should be. Attaining a particular peak or Linpack performance number does little to inform the state of supercomputing. And given the increasing importance of AI workloads, which are not based on 64-bit computing, it's not surprising that HPC is moving away from these simplistic measures. The expected emergence of neuromorphic and quantum computing in the coming decade will further muddy the waters.

That said, users will continue to rely primarily on 64-bit flops for HPC simulations, which scientists and engineers will depend on heavily for the foreseeable future.

With that in mind, RIKEN is already planning its post-Fugaku system, which Matsuoka says is tentatively scheduled to make its appearance in 2028. According to him, RIKEN is planning an analysis of how it can build something 20X more powerful than Fugaku. He says the challenge is that current technologies won't extrapolate to such a system in any practical manner. Which once again means they will have to innovate at the architectural level, but this time without the benefit of Moore's Law.


View original post here:

Fugaku Remakes Exascale Computing In Its Own Image - The Next Platform

OrbitsEdge teams up with HPE to build data centres in Space – Data Economy

Last week, Amazon's AWS re:Invent 2019 conference welcomed more than 60,000 attendees, spread out across six venues on the Las Vegas Strip, making it the biggest re:Invent yet.

Here is a list of just some of the announcements the cloud giant made over the course of the conference:

AWS Local Zone

AWS announced the opening of an AWS Local Zone in Los Angeles (LA). AWS Local Zones are a new type of AWS infrastructure deployment that places compute, storage, database, and other select services close to customers, giving developers in LA the ability to deploy applications that require single-digit-millisecond latencies to end-users in LA.

Amazon EC2

The cloud giant unveiled nine new Amazon Elastic Compute Cloud (EC2) innovations. AWS added to its industry-leading compute and networking portfolio with new Arm-based instances (M6g, C6g, R6g) powered by AWS-designed Graviton2 processors, and machine learning inference instances (Inf1) powered by AWS-designed Inferentia chips.

AWS Outposts

AWS announced general availability of AWS Outposts, fully managed and configurable compute and storage racks built with AWS-designed hardware that allow customers to run compute and storage on-premises while connecting to AWS's broad array of services in the cloud.

AWS Outposts bring native AWS services, infrastructure, and operating models to virtually any data centre, co-location space, or on-premises facility.

AWS Wavelength

AWS announced AWS Wavelength, which provides developers the ability to build applications that serve end-users with single-digit-millisecond latencies over the 5G network.


Wavelength embeds AWS compute and storage services at the edge of telecommunications providers' 5G networks, enabling developers to serve use cases that require ultra-low latency, like machine learning inference at the edge, autonomous industrial equipment, smart cars and cities, Internet of Things (IoT), and Augmented and Virtual Reality.

Quantum Computing Service

AWS announced three initiatives as part of the company's plans to help advance quantum computing technologies: the Amazon Braket quantum computing service, the AWS Center for Quantum Computing, and the Amazon Quantum Solutions Lab.

"There's no question the world will be a better place if everyone can innovate more quickly and efficiently," said Charlie Bell, SVP, Amazon Web Services.

"And if stuff just works better. For that reason, I'm excited that we are sharing what we've learned with you in the Amazon Builders' Library."


See the original post here:

OrbitsEdge teams up with HPE to build data centres in Space - Data Economy

Quantum Computers Are About to Forever Change Car Navigation – autoevolution

We presently take great pride in the way we can find directions to anywhere. Gone are the days when paper maps were our only guides in foreign places; now all it takes to get from point A to point wherever is a swipe of the finger.

All present-day navigation solutions can direct a car along a number of routes depending on a variety of factors. The problem is that none of them take into account what other cars are doing in real time, and, just when you were about to gloat about having dodged a bottleneck, you find that other drivers, lots of them, had the exact same advice served to them by their navigation apps.

Quantum computing might help with that, since quantum computers can work through certain optimization problems far faster than classical machines, and exactly such a solution was tested by Volkswagen earlier this month at the Web Summit in Portugal.

Using an algorithm called Quantum Routing and a D-Wave quantum computer, Volkswagen showed that nine public transit buses can successfully avoid traffic jams by knowing in real-time where such queues are being formed.
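
Volkswagen has described the underlying problem as combinatorial optimization: each vehicle picks one of a few candidate routes, and the objective penalizes routes that pile onto the same road segments. The toy sketch below brute-forces a tiny, made-up instance of that formulation in plain Python; on a D-Wave machine an equivalent objective would be encoded as a QUBO and sampled by the annealer (all route names and data here are hypothetical).

    # Toy version of traffic-aware route assignment: minimize shared road segments.
    # Brute force over a tiny instance; a quantum annealer would sample an equivalent QUBO.
    from itertools import product
    from collections import Counter

    # Hypothetical candidate routes per bus, expressed as sets of road-segment IDs.
    candidates = {
        "bus1": [{"s1", "s2"}, {"s3", "s4"}],
        "bus2": [{"s1", "s5"}, {"s6", "s4"}],
        "bus3": [{"s2", "s6"}, {"s7", "s8"}],
    }

    def congestion(assignment):
        # Cost grows quadratically with how many buses use the same segment.
        counts = Counter(seg for route in assignment for seg in route)
        return sum(c * c for c in counts.values())

    best = min(product(*candidates.values()), key=congestion)
    for bus, route in zip(candidates, best):
        print(bus, "->", sorted(route))
    print("cost:", congestion(best))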

"Volkswagen believes quantum computing has the potential to revolutionize how we use and learn from data in the real world," Thomas Bartol, senior vice president of Information Technology and Services for Volkswagen Group of America, said in a statement.

"Even though the technology is still in its early stages, this demonstration shows its potential, and how Volkswagen plans to play a leading role in bringing these solutions to market."

The tech demonstrated by the Germans in Portugal is nowhere near mass implementation. Volkswagen did announce that it plans to bring the tools it has already shown to market maturity, but it's unclear in what timeframe.

For now, the carmaker is looking for other clogged cities to explore.

The rest is here:

Quantum Computers Are About to Forever Change Car Navigation - autoevolution