Archive for the ‘Quantum Computer’ Category

The 3 Most Undervalued Quantum Computing Stocks to Buy Now … – InvestorPlace

As the tech landscape continually evolves, quantum computing is emerging as a pivotal frontier. This article focuses on three undervalued quantum computing stocks, each with growth potential. Despite their undervaluation, these trailblazers are making significant strides in quantum technology, setting the stage for a potential surge in stock prices.

Moving forward, we'll delve deeper into these undervalued quantum computing stocks. Each company boasts a unique narrative within the quantum computing sector, armed with distinct strategies and offerings. We aim to provide a brief investment thesis for each, equipping you with the insights needed for informed decision-making. So, stay with us as we unravel the promising potential of these stocks in the thrilling world of quantum computing.


IonQ's (NYSE:IONQ) strong growth and partnerships with major tech companies like Amazon (NASDAQ:AMZN) make it an attractive investment. Despite recent volatility, the company's positive outlook and increasing bookings suggest a promising future.

The company announced the availability of its quantum computer, IonQ Aria, on Amazon Web Services (AWS) in May this year. This addition to AWS's quantum computing service, Amazon Braket, expands IonQ's existing presence on the platform following the debut of IonQ's Harmony system in 2020. IonQ Aria, with its 25 algorithmic qubits, allows users to run more complex quantum algorithms to tackle challenging problems.

The expansion of IonQ's ecosystem through this partnership makes it an undervalued quantum computing stock to consider. IONQ stock's performance also makes it a momentum play. It's up over 330% year-to-date, and its sales grew 115% quarter-over-quarter.


Microsoft (NASDAQ:MSFT) has been doing well, but I would still rank it as one of the undervalued quantum computing stocks. This is primarily due to its competitive positioning and how it harnesses quantum technology. Simply put, its pursuit of topological qubits is seen as a high-risk, high-reward venture, while other companies like IonQ build less experimental but perhaps less effective quantum systems.

It's theorized by some that Microsoft's quantum approach will lead to greater fault tolerance and, ultimately, a faster time to market for a commercial quantum computer. It should be noted this is firmly in the realm of speculation, as MSFT's approach has yet to be proven definitively. But the company has made significant headway in its R&D efforts, which suggests it's firmly on the right track.

MSFT is also benefiting from the rise of generative AI with its subscription service for its Office suite of products. This led to the company reaching a new all-time high in July.


Quantum Computing Inc (NASDAQ:QUBT) offers a unique opportunity to invest in a smaller, niche player in the quantum computing field. The company's focus on hardware and software development could position it well for future growth in the quantum computing market.

QUBT is a penny stock with a market cap of $90 million. Still, it has big plans for the future. Its flagship product is the Reservoir Computer, a compact hardware device designed to make neuromorphic hardware accessible and affordable. Neuromorphic computing differs from quantum computing in that it attempts to mirror how neurons and synapses work in the human brain. The advantage is that it lets AI models learn in parallel, as opposed to sequentially as in traditional computing, allowing them to perform better at tasks such as pattern recognition.

We may see an arms race between neuromorphic and quantum computing to become the de facto standard. In a small way, this could be compared to the race between HD DVD and Blu-ray discs: both situations involve competing technologies vying to become the dominant standard in their respective fields. Although quantum and neuromorphic hardware are not direct competitors, from an investor's point of view it may be worth diversifying into both forms of tech, as we don't know which will come out on top until later.

On the date of publication, Matthew Farley did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Matthew started writing coverage of the financial markets during the crypto boom of 2017 and was also a team member of several fintech startups. He then started writing about Australian and U.S. equities for various publications. His work has appeared in MarketBeat, FXStreet, Cryptoslate, Seeking Alpha, and the New Scientist magazine, among others.


Intel’s Tunnel Falls Quantum Chip Will Help Researchers Advance … – Sourceability

Intel just released its first quantum spin qubit chip, Tunnel Falls. For now, it's only available to researchers, who will use the hardware to jumpstart experiments and new studies to advance the quantum computing industry.

Quantum computing has come a long way in the past decade. Today, we are closer to practical quantum hardware than ever before. Yet, this tremendous prospect remains out of reach, and experts don't know how long it will take to become a reality.

That hasn't stopped startups, tech giants, and chipmakers alike from throwing their hats into the quantum computing ring. Intel is the latest to take a step forward, recently announcing a new quantum chip dubbed Tunnel Falls.

The 12-qubit silicon chip uses spin qubits to perform complex computing functions. However, it won't be available commercially. Intel plans to ship Tunnel Falls chips to leading researchers and academic institutions in the hope that the collaboration will advance quantum spin qubit research.

Given that quantum computing is still in its relative infancy, academic institutions don't have the fabrication equipment necessary to produce quantum chips at scale. Intel, as one of the world's largest semiconductor manufacturers, does.

Putting Tunnel Falls chips directly into the hands of researchers means they can start experimenting and working on new research projects immediately. Intel hopes this will open a wide range of experiments and lead to new discoveries about the fundamentals of qubits and quantum dots.

Intel's director of quantum hardware, Jim Clarke, said in a statement, "Tunnel Falls is Intel's most advanced silicon spin qubit chip to date and draws upon the company's decades of transistor design and manufacturing expertise." The release of the new chip is the next step in Intel's long-term strategy to build a full-stack commercial quantum computing system.

"While there are still fundamental questions and challenges that must be solved along the path to a fault-tolerant quantum computer, the academic community can now explore this technology and accelerate research development," he adds.

The first institutions to receive Tunnel Falls silicon include the University of Maryland's Laboratory for Physical Sciences (LPS), Sandia National Laboratories, the University of Rochester, and the University of Wisconsin-Madison. Intel is also collaborating with the Qubits for Computing Foundry (QCF) program through the U.S. Army Research Office.

With this initiative, Intel aims to democratize silicon spin qubits by enabling researchers to gain hands-on experience working with scaled arrays of these qubits.

Notably, information gathered from experiments and research at partner institutions will be shared publicly. So while sharing Tunnel Falls chips is an effort to help Intel advance its quantum silicon ambitions, it is also a source of learning for the wider quantum community.

Tunnel Falls is Intel's first spin qubit device released to the research community, and it comes after nearly a decade of research from Intel Labs. The chip is fabricated on 300-millimeter wafers in the company's D1 fabrication facility using Intel's advanced manufacturing capabilities.

But what is a spin qubit? Rather than encoding data in traditional binary 1s and 0s, spin qubits encode information in the spin (up/down) of a single electron. Intel likens each qubit device to a single electron transistor.

Notably, this design has similar fabrication requirements as standard complementary metal oxide semiconductors (CMOS). This allows Intel to leverage innovative process control techniques to enable yield and performance. The Tunnel Falls chip has a 95% yield rate across the wafer, which provides over 24,000 quantum dot devices.

In a press release, the company says it believes "silicon may be the platform with the greatest potential to deliver scaled-up quantum computing."

Intel believes spin qubits are the superior form of qubit technology thanks to the synergy they have with traditional cutting-edge transistors. This approach also comes with a size advantage, making each Intel qubit about one million times smaller than other qubit designs. For commercial quantum computers, millions of qubits will be needed. But Intel believes spin qubits make this possible since they can be packed into chips that resemble a CPU.

Despite the important advances happening across the quantum field, no one really knows how this technology will pan out commercially. Even so, Intel is already working on its next-generation Tunnel Falls chip. The company plans to release it as soon as 2024.

In the meantime, it will work on integrating Tunnel Falls into its full quantum stack. The Intel Quantum Software Development Kit (SDK) will also play an important role. A functional tech stack to support its quantum hardware is an enticing way for Intel to sway prospective buyers in its direction.

Ultimately, however, the commercial application isn't the most exciting part about Tunnel Falls. Putting powerful quantum hardware into the hands of researchers will increase the pace of discoveries in the quantum field and lead to new advancements in the years ahead. With the industry working together, quantum computing inches ever closer to practicality.


Hybrid approach for solving real-world bin packing problem … – Nature.com

In this section, we describe in detail the mathematical formulation of the 3dBPP variant tackled in this research. First, the input parameters and variables that compose the problem are shown in Table 1.

The 3dBPP can be solved as an optimization problem for which a suitable cost function to minimize must be defined. In our case, this cost function is the sum of three objectives. The strength given to each objective, i.e. its relative relevance, is left to the user's preference simply by multiplying each objective by a suitable weight. Thus, the problem can be stated as \(\min \sum_{i=1}^{3}\omega_i o_i\), with \(\omega_i\) the weight of each objective \(o_i\). In our study, we do not consider this bias, i.e. \(\omega_i=1 \ \forall i\).

The first and main objective minimizes the total amount of bins used to locate the packages. This can be achieved by minimizing

$$\begin{aligned} o_1 = \sum_{j=1}^{n} v_j. \end{aligned}$$

(1)

Additionally, to ensure that items are packed from the floor to the top of the bin, avoiding solutions with floating packages, a second objective minimizes the average height of the items over all bins

$$\begin{aligned} o_2 = \frac{1}{mH}\sum_{i=1}^{m}\left( z_i + z'_i\right). \end{aligned}$$

(2)

Besides these two objectives, reformulated from the reference code28, we further add a third optional objective \(o_3\) to take the load-balancing feature into account. This concern is particularly important when air cargo planes and sailings are the chosen conveyance30,31, for example. In those situations, packages should be uniformly distributed around a given xy-coordinate inside the bin. We can tackle this by computing the so-called taxicab or Manhattan distance between the items and the desired center of mass of each bin. As a result, the gaps between items are also reduced. The third objective to be minimized is therefore

$$\begin{aligned} o_3 = \frac{1}{m}\left( \frac{1}{L}\sum_{i=1}^{m} \tilde{x}_i + \frac{1}{W}\sum_{i=1}^{m} \tilde{y}_i\right), \end{aligned}$$

(3)

with

$$\begin{aligned} \tilde{x}_i := \left| \left( x_i + \frac{x_i'}{2}\right) \bmod L - \tilde{L} \right| \quad \text{and}\quad \tilde{y}_i := \left| y_i + \frac{y_i'}{2} - \tilde{W} \right| \quad \forall i\in I, \end{aligned}$$

(4)

where \(0\le x_i < nL\) (bins stacked horizontally) and \(0\le y_i < W\), \(\forall i\in I\). This objective term minimizes, for each item, the distance between the center-of-mass projection in the xy-plane and the \((\tilde{L},\tilde{W})\) coordinate of each bin.
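As an illustration, the load-balancing objective can be evaluated directly for a candidate packing. The following Python sketch is our own illustrative code (not from the paper) computing \(o_3\) from eqs. (3) and (4):

```python
def load_balance_objective(items, L, W, tL, tW):
    """o_3 of eq. (3): mean Manhattan distance, normalised by L and W,
    between each item's centre-of-mass projection in the xy-plane and
    the target coordinate (tL, tW) of its bin. Each item is (x, y, xp, yp),
    with x measured globally across bins stacked along the x axis, so the
    'mod L' of eq. (4) maps it back into its own bin."""
    m = len(items)
    sx = sum(abs((x + xp / 2) % L - tL) for x, _, xp, _ in items)
    sy = sum(abs(y + yp / 2 - tW) for _, y, _, yp in items)
    return (sx / L + sy / W) / m
```

An item centred exactly on the target contributes zero, so a perfectly balanced bin yields \(o_3 = 0\).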

The objectives defined above are subject to certain restrictions, which are essential to derive realistic solutions. The whole pool of constraints is separated into two categories: those intrinsic to the BPP definition (intrinsic restrictions), and those relevant from a real-world perspective (real-world BPP restrictions).

Item orientations: the fact that inside a bin each item must have only one orientation can be implemented by using

$$\begin{aligned} \sum_{k\in K_i} r_{i,k}=1\quad \forall i\in I. \end{aligned}$$

(5)

Set of possible orientations \(k\in K_i\) for a given item i of dimensions \((l_i,w_i,h_i)\). (a) \(k = 1\), (b) \(k = 2\), (c) \(k = 3\), (d) \(k = 4\), (e) \(k = 5\), (f) \(k = 6\). See Table 2.

Orientations give rise to the effective length, width, and height of the items along the x, y and z axes

$$\begin{aligned} x'_i&= l_i r_{i,1} + l_i r_{i,2} + w_i r_{i,3} + w_i r_{i,4} + h_i r_{i,5} + h_i r_{i,6} \quad \forall i\in I, \end{aligned}$$

(6)

$$\begin{aligned} y'_i&= w_i r_{i,1} + h_i r_{i,2} + l_i r_{i,3} + h_i r_{i,4} + l_i r_{i,5} + w_i r_{i,6} \quad \forall i\in I, \end{aligned}$$

(7)

$$\begin{aligned} z'_i&= h_i r_{i,1} + w_i r_{i,2} + h_i r_{i,3} + l_i r_{i,4} + w_i r_{i,5} + l_i r_{i,6} \quad \forall i\in I, \end{aligned}$$

(8)

and because of (5), only one term \(r_{i,k}\) is nonzero in each equation.

It should be noted that there could be items with geometrical symmetries, as with cubic ones, for which rotations do not apply. Redundant and non-redundant orientations are considered in the reference code28. In our formulation, we check in advance whether these symmetries exist in order to define \(K_i\) for each item. Thanks to this, (6)-(8) are simplified by filtering out redundant orientations, leading to a formulation that uses fewer variables (thus qubits) to represent the same problem, where \(\kappa =\sum_{i=1}^{m}|K_i|\le 6m\) variables \(r_{i,k}\) are needed. For \(i\in I_\text{c}\) with \(I_\text{c} := \{i\in I \mid l_i=w_i=h_i\}\) (cubic items), we can set \(r_{i,1}=1\) and 0 otherwise, thus satisfying (5) in advance. In Table 2, we can see the non-redundant orientation sets for an item i depending on its dimensions. This simple mechanism reduces the complexity of the problem, which is favourable for implementation on quantum hardware.
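The symmetry filtering described above can be sketched in a few lines of Python (an illustration of ours, not the reference code): enumerating the six orientations of eqs. (6)-(8) and discarding duplicates yields the non-redundant set \(K_i\).

```python
def orientation_set(l, w, h):
    """Non-redundant effective dimensions (x', y', z') of an item,
    following the six orientations of eqs. (6)-(8); duplicates caused
    by geometrical symmetries are filtered out, as in Table 2."""
    seen, dims = set(), []
    for d in [(l, w, h), (l, h, w), (w, l, h), (w, h, l), (h, l, w), (h, w, l)]:
        if d not in seen:
            seen.add(d)
            dims.append(d)
    return dims
```

A cubic item keeps a single orientation, a box with two equal sides keeps three, and a fully asymmetric box keeps all six, so \(\kappa=\sum_i |K_i|\le 6m\).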

Non-overlapping restrictions: since we are considering rigid packages, i.e. they cannot overlap, a set of restrictions needs to be defined to rule out overlapping configurations. For this purpose, at least one of the following situations must occur (see Fig. 2)

$$\begin{aligned} \text{Item } i \text{ is at the left of item } k\ (q=1)\text{:}\quad -(2 - u_{i,j}u_{k,j}-b_{i,k,1})nL+x_i+x'_i-x_k\le 0 \quad \forall i,k\in I,\ \forall j\in J, \end{aligned}$$

(9)

$$\begin{aligned} \text{Item } i \text{ is behind item } k\ (q=2)\text{:}\quad -(2 - u_{i,j}u_{k,j}-b_{i,k,2})W+y_i+y'_i-y_k\le 0 \quad \forall i,k\in I,\ \forall j\in J, \end{aligned}$$

(10)

$$\begin{aligned} \text{Item } i \text{ is below item } k\ (q=3)\text{:}\quad -(2 - u_{i,j}u_{k,j}-b_{i,k,3})H+z_i+z'_i-z_k\le 0 \quad \forall i,k\in I,\ \forall j\in J, \end{aligned}$$

(11)

$$\begin{aligned} \text{Item } i \text{ is at the right of item } k\ (q=4)\text{:}\quad -(2 - u_{i,j}u_{k,j}-b_{i,k,4})nL+x_k+x'_k-x_i\le 0 \quad \forall i,k\in I,\ \forall j\in J, \end{aligned}$$

(12)

$$\begin{aligned} \text{Item } i \text{ is in front of item } k\ (q=5)\text{:}\quad -(2 - u_{i,j}u_{k,j}-b_{i,k,5})W+y_k+y'_k-y_i\le 0 \quad \forall i,k\in I,\ \forall j\in J, \end{aligned}$$

(13)

$$\begin{aligned} \text{Item } i \text{ is above item } k\ (q=6)\text{:}\quad -(2 - u_{i,j}u_{k,j}-b_{i,k,6})H+z_k+z'_k-z_i\le 0 \quad \forall i,k\in I,\ \forall j\in J. \end{aligned}$$

(14)

As discussed with the orientation variable \(r_{i,k}\) in (5), the relative position between items i and k must be unique, so

$$\begin{aligned} \sum_{q\in Q} b_{i,k,q}=1\quad \forall i,k\in I. \end{aligned}$$

(15)

Representation of \(b_{i,k,q}\) activated for all relative positions \(q\in Q\) between items i and k. See (9)-(14). The items are shown in contact, but contact is not mandatory. (a) \(b_{i,k,1}=1\), (b) \(b_{i,k,2}=1\), (c) \(b_{i,k,3}=1\), (d) \(b_{i,k,4}=1\), (e) \(b_{i,k,5}=1\), (f) \(b_{i,k,6}=1\).
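The disjunction encoded by eqs. (9)-(14) together with (15) simply states that two rigid boxes in the same bin must be separated along at least one axis. A small checker of ours (purely illustrative, not part of the paper's formulation) makes this concrete:

```python
def separated(item_i, item_k):
    """True iff at least one relative position q in Q of eqs. (9)-(14)
    holds, i.e. the two axis-aligned boxes do not overlap. Each item is
    (x, y, z, xp, yp, zp): origin corner plus effective dimensions."""
    xi, yi, zi, xpi, ypi, zpi = item_i
    xk, yk, zk, xpk, ypk, zpk = item_k
    return any([
        xi + xpi <= xk,  # q=1: i at the left of k
        yi + ypi <= yk,  # q=2: i behind k
        zi + zpi <= zk,  # q=3: i below k
        xk + xpk <= xi,  # q=4: i at the right of k
        yk + ypk <= yi,  # q=5: i in front of k
        zk + zpk <= zi,  # q=6: i above k
    ])
```

In the optimization model, the big-M terms of eqs. (9)-(14) deactivate all but the one inequality selected by \(b_{i,k,q}\), whereas this checker simply tests the disjunction directly.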

Item and container allocation restrictions: the following set of restrictions guarantees an appropriate behaviour during item and bin assignment. In order to avoid packing duplicates of the same item, each item must go to exactly one bin, where

$$\begin{aligned} \sum_{j=1}^{n} u_{i,j}=1\quad \forall i\in I. \end{aligned}$$

(16)

The following formula verifies that items are only packed inside bins that are already in use

$$\begin{aligned} \sum_{i=1}^{m}(1-v_j)u_{i,j}\le 0\quad \forall j\in J, \end{aligned}$$

(17)

so it activates \(v_j\) if needed during packing. Bins can be activated sequentially to avoid duplicated solutions by ensuring that

$$\begin{aligned} v_j\ge v_{j+1}\quad \forall j\in J \ |\ j\ne n. \end{aligned}$$

(18)

Bin boundary constraints: in order to respect the bin boundaries, the following set of restrictions must be met

$$\begin{aligned} x_i+x'_i-jL \le (1-u_{i,j})nL \quad \forall i\in I,\ \forall j\in J, \end{aligned}$$

(19)

$$\begin{aligned} x_i-(j-1)Lu_{i,j} \ge 0 \quad \forall i\in I,\ \forall j\in J \ |\ j>1, \end{aligned}$$

(20)

$$\begin{aligned} y_i+y'_i-W \le (1-u_{i,j})W \quad \forall i\in I,\ \forall j\in J, \end{aligned}$$

(21)

$$\begin{aligned} z_i+z'_i-H \le (1-u_{i,j})H \quad \forall i\in I,\ \forall j\in J, \end{aligned}$$

(22)

where (19) guarantees that an item i placed inside bin j does not extend beyond the last (n-th) bin along the x axis, (20) ensures that item i is located inside its corresponding bin j along the x axis (activated if \(n>1\)), (21) confirms that an item i placed inside bin j does not protrude along the y axis, while (22) ensures the same along the z axis.

In this subsection we introduce the restrictions related to the operational perspective of the problem, i.e. those that consider real-world industrial situations. All of the following constraints are optional in our formulation.

Overweight restriction: the weight of each package and the maximum capacity of the containers are common contextual data used to avoid exceeding the maximum weight capacity of the bins, i.e. to avoid overloaded containers. We can introduce this restriction as

$$\begin{aligned} \sum_{i=1}^{m}\mu_i u_{i,j}\le M\quad \forall j\in J. \end{aligned}$$

(23)

This restriction is activated if the maximum capacity M is given.

Affinities among package categories: there are commonly preferences for separating some packages into different bins (negative affinities or incompatibilities) or, on the contrary, gathering them into the same container (positive affinities). Let us consider \(I_\alpha := \{i\in I \mid \texttt{id} \text{ of } i \text{ is equal to } \alpha\}\), i.e. \(I_\alpha \subset I\) is the subset of all items labelled with id equal to \(\alpha\). Given a set of p negative affinities \(A^\text{neg} := \{(\alpha_1,\beta_1),\dots,(\alpha_p,\beta_p)\}\), the restriction will be

$$\begin{aligned} \sum_{(\alpha,\beta)\in A^\text{neg}}\,\sum_{(i_\alpha,i_\beta)\in I_\alpha \times I_\beta}\,\sum_{j=1}^{n} u_{i_\alpha,j}u_{i_\beta,j}=0. \end{aligned}$$

(24)

To activate this restriction, a set of incompatibilities must be given. Moreover, we can satisfy in advance \(\nu := 6n\sum_{(\alpha,\beta)\in A^\text{neg}}|I_\alpha||I_\beta|\) non-overlapping constraints (see (9)-(14)), leading to a simpler formulation. Conversely, given a set of positive affinities \(A^\text{pos}\), stated as for \(A^\text{neg}\), the restriction is posed such that

$$\begin{aligned} \sum_{(\alpha,\beta)\in A^\text{pos}}\,\sum_{(i_\alpha,i_\beta)\in I_\alpha \times I_\beta}\,\sum_{j=1}^{n}\left( 1-u_{i_\alpha,j}u_{i_\beta,j}\right)=0. \end{aligned}$$

(25)

This restriction is activated if a set of positive affinities is given. If both \(A^\text{pos}\) and \(A^\text{neg}\) are given, the two restrictions can be introduced with a single formula by adding (24) and (25).

Preferences in relative positioning: relative positioning of items demands that some of them be placed in a specific position with respect to other items. This preference allows introducing an ordering of a set of packages according to their positions along the axes. Thus, it assists with ordering in many real cases, such as parcel delivery (an item i that has to be delivered before item k will preferably be placed closer to the trunk door) or load bearing (no heavy package should rest on flimsy packages), among others.

Regarding this preference, we can define two different perspectives to treat relative positioning:

Positioning to avoid \((P_q^{-})\): the listed pairs of items \((i,k)\) should not be in the specified relative position \(q\in Q\). So \(b_{i,k,q}=0\) is expected, favouring configurations where the solver selects \(q'\in Q\) with \(q'\ne q\) for the relative positioning of items \((i,k)\).

Positioning to favour \((P_q^{+})\): the listed pairs of items \((i,k)\) should be in a certain relative position q. With this preference activated, \(b_{i,k,q}=1\) ought to hold and, consequently, \(b_{i,k,q'}=0 \ \forall q'\ne q\).

Formally, these preferences are written as

$$\begin{aligned} P_q^{-} := \left\{(i,k)\in I^2 \mid \text{items } i,k \text{ should not be in relative position } q\right\} \quad \text{and}\quad P_q^{+} := \left\{(i,k)\in I^2 \mid \text{items } i,k \text{ should be in relative position } q\right\}, \end{aligned}$$

(26)

These preferences could also be treated as compulsory pre-selections. In such a case, the number of variables needed would be reduced, and so would the search space. If we let \(p^{-}=\sum_{q\in Q}|P_q^{-}|\) and \(p^{+}=\sum_{q\in Q}|P_q^{+}|\) with \(P^{-}_q\cap P^{+}_{q'}=\varnothing\), then, based on (15), the number of variables removed would be \(p^{-}+6p^{+}\). Moreover, \(n(p^{-}+5p^{+})\) non-overlapping constraints (see (9)-(14)) are satisfied directly and can be ignored, thus simplifying the problem. In this paper, for the sake of clarity, these preferences have been applied for load-bearing purposes as hard constraints (HC), as explained in the upcoming Experimental results.

Load balancing: to activate this restriction, a target center of mass must be given. Global positions with respect to the bin as a whole (as described in objective \(o_3\) in (3)) are fixed using the following constraints

$$\begin{aligned} \pm \frac{1}{n}\sum_{j=1}^{n}\left[ x_i+\frac{x_i'}{2} - n(j-1)u_{i,j}L - \tilde{L}\right] \le \tilde{x}_i \quad \text{and}\quad \pm \left( y_i+\frac{y_i'}{2} - \tilde{W}\right) \le \tilde{y}_i \quad \forall i\in I. \end{aligned}$$

(27)

This feature is represented in Fig. 3 for \((\tilde{L},\tilde{W})=(L/2,W/2)\), whose red line shows the available \(\tilde{x}_i\) and \(\tilde{y}_i\) values (see (4)).

Representation of the available \(\tilde{x}_i\) and \(\tilde{y}_i\) values ensured by the constraints given in (27) for \((\tilde{L},\tilde{W}) = (L/2,W/2)\).

Regarding the complexity of the 3dBPP proposed in this research, the total number of variables needed to tackle an arbitrary instance is given in Table 3, where our formulation scales as \(\mathscr{O}[m^2+nm]\) in terms of variables. Additionally, the total number of constraints required is provided in Table 4, and it likewise grows quadratically as \(\mathscr{O}[m^2+nm]\).
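Without reproducing Table 3, the quadratic scaling can be sanity-checked by tallying just the binary indicator variables that appear in the constraints above. This is a rough sketch of ours, which ignores the coordinate variables of Table 1 and assumes one set of six \(b_{i,k,q}\) per unordered item pair:

```python
def indicator_variable_count(m, n, K_sizes):
    """Rough tally of the binary indicator variables used above:
    u_{i,j} assignments, v_j bin flags, r_{i,k} orientations (kappa <= 6m)
    and b_{i,k,q} relative positions. Grows as O(m^2 + nm)."""
    u = m * n                   # item-to-bin assignment
    v = n                       # bin usage
    r = sum(K_sizes)            # non-redundant orientations per item
    b = 6 * m * (m - 1) // 2    # six relative positions per item pair
    return u + v + r + b
```

The \(b\) term dominates for large m, which is why the formulation scales quadratically in the number of items.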


Future Cyber Threats: The four horsemen of the apocalypse – ComputerWeekly.com

As a CISO and cyber specialist, I am often asked what I see as the big cyber threats of the future. Whilst I'm not a fan of crystal-ball gazing for its own sake, it can nevertheless be helpful to think about what may be coming and what we can do about it.

So here are my four big threats, or what we may more colourfully term "the four horsemen of the apocalypse", together with some thoughts on how we can prepare for them so that it doesn't actually turn into the end of the world!

With the advent of AI, especially natural language algorithms like ChatGPT, with their access to everything on the internet, combined with the ability to create what are essentially AI plug-ins for text-to-speech and imagery, very soon we'll have more virtual humans online than real ones.

Today we have botnets: networks of robots that were surreptitiously installed through malware onto computing systems around the world doing the bidding of cybercriminals. With the power of millions of computers at their disposal, industrious hackers can do everything from mine crypto to offer ransomware as a service to other criminals.

Moving forward, cybercriminals and even nation states will have the ability to mobilize huge swaths of digital people seemingly operating independently but aligned with a larger mission. We see tiny examples of this today with virtual interviews resulting in unintentionally hiring a hacker or spy.

Real humans are and will remain victims of fraud and confidence schemes. Even today, email-borne attacks such as phishing are highly effective. Imagine a world where parents are having interactive video calls with their children asking for money. But what if that child is actually a digital fake? Given how much information there is about you as an individual, thanks to data breaches and social media posts, virtual replicas will emerge very rapidly: versions of you designed to leverage you for someone else's gain by crossing ethical boundaries you are not willing to cross.

Quantum computing has leapt off the pages of sci-fi into reality and has been actively processing data not just for a few years now, but for decades. Many companies have developed quantum computers, but the reason we have yet to see something dramatic is, in many ways, that they all use different architectures. It's like Apple and Microsoft in 1986: separate and completely incompatible. Moreover, thanks to the nuances of quantum mechanics, networking quantum computers has proven difficult.

Nevertheless, both these barriers are diminishing rapidly. Soon the race for processing the most qubits will be shortened and accelerated as scientists solve the networking challenge. Overnight, the global human race will have access to thousands if not tens of thousands of qubits.

From a cybersecurity perspective, most encryption will instantly be rendered useless. All of a sudden, your secure transaction with your bank, or all the data transmitted over your VPN, is no longer protected. In fact, every secure interaction you've ever made is likely to have been collected, allowing adversaries to go back and decrypt all those communications. The underlying basis of blockchain crumbles, permitting the ability to rewrite financial history.

As we delve into the world of digital transformation and Web 3.0, the ecosystem of technology is becoming increasingly complex and layered. In the early days computers existed in a single room. Soon, individual computers were able to communicate. As networks expanded, along with processing speeds and availability of cheap storage, computer applications began to interact, requiring less and less standardization across platforms. With this evolution has come more points of interaction and the ability to leverage specific capabilities from a wider range of technologies, and at different layers of computing.

Today, cybersecurity is just coming to grips with the challenges of third-party and supply chain risk in computing. Companies currently undergoing digital transformation will likely not have simply three or four layers of suppliers, but rather closer to twenty.

Moving forward the combined demand for pace, growth and innovation will require more and more from the computing ecosystem. These pressures will result in greater degrees of specialization in the supply chain causing it to expand rapidly. As such, it will be a primary target of cybercriminals because its manipulation can undermine trust in surface-level computing, permitting hackers to take control of any system without detection.

The role of technology and its importance in the physical world is increasing exponentially and will soon reach a point where computer-related issues, including everything from errors to hackers, will have a tangible impact in the real world.

Today, we're exploring self-driving vehicles, intelligent power distribution, and automation in industrial control systems, all of which have direct physical interactions with people and places.

As we evolve, increasingly sophisticated technology will not only be embedded into everything from the mundane toaster to the most complex infrastructure but will also be interconnected and operated across a set of automated systems. For example, smart medical devices will become increasingly common and will quickly move beyond tactical monitoring to automated delivery of off-the-shelf medication, prioritization of emergency services, and even control of access to various facilities.

While these capabilities will greatly enhance human services, improve healthcare, and reduce accidents, cyberthreats will target these systems to perform everything from theft to terrorism. Instead of your data being held ransom, hackers may hold your car for ransom, withhold access to your home for money, or deny you medication or emergency services without payment.

In the face of these seemingly insurmountable challenges, is there any light at the end of the tunnel? Thankfully, I believe there is.

For example, many companies are now developing quantum-resistant technologies, such as encryption algorithms, blockchain technology, and communication networks. These may help nullify some of the cyber risks of quantum computing; the challenge will be to develop the strength of the defenses in proportion to the magnitude of the risks as quantum computing takes off.

In relation to the expanding ecosystem, although the supply chain is growing beyond comprehension, there are efforts such as the Software Bill of Materials (SBOM), enhanced software updating and patching standards, and even IoT product labeling being explored. Active expert thinking is being applied to the issue.

When dealing with the future of smart devices and now, with ChatGPT and its ilk, smart AI, I think we have to change our perspective on how we coexist as companies and individuals with technology. It's less about being a hard target with strong defenses, and rapidly becoming all about being a resilient target rather than a victim. With solid planning and preparation, resilience is possible. Be aware of the risks and think ahead of them. Focus on having alternatives, out-of-band options and, critically, awareness of potential threat capabilities so that your plan B and even plan C aren't rendered useless.

The cyber future may sound worrying but at the same time, human ingenuity will also find ways to build new protections and mitigations.


How Quantum-Enhanced Generative AI Could Help Optimize … – Supply and Demand Chain Executive

Generative AI has been heralded as the most profoundly impactful technological innovation since the iPhone. ChatGPT in particular has captured the world's attention with its ability to convincingly generate whatever text the user desires. It has passed numerous standardized tests, from the bar exam to the SATs, and its essay-writing prowess poses an existential threat to the integrity of education itself.

Other tools have shown impressive results in generating art, videos, code, and more. Not surprisingly, many predict that these generative AI tools will be widely disruptive, particularly for industries like media, marketing, and legal that deal in text and images. What's less obvious is how generative AI will impact supply chains.

The truth is that text and image generation is just the beginning of what generative AI can accomplish. It can also be used to generate solutions to optimization problems that abound within supply chains.

Generally speaking, any situation where you have a wide range of possible solutions and want to find the best one can be thought of as an optimization problem. A simple example is when Google Maps tries to find the fastest possible route to your destination. While this works reliably well, for more complex optimization problems classical computers don't have an efficient way to find the best solution and can only generate approximate solutions.

In contrast, a generative model could be trained on the best existing solutions to an optimization problem (for example, those obtained from classical heuristics or mixed-integer programming solvers) and learn what makes a good solution good. Much like how ChatGPT learns from existing text to generate new text, this generative model could then generate new solutions to the optimization problem. We call this approach Generator-Enhanced Optimization (GEO).
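To make the idea concrete, here is a deliberately minimal sketch of the train-on-good-solutions loop. This is not the GEO implementation described in the article (which uses quantum or tensor-network generative models); the "one-max" objective, the population sizes, and the per-bit Bernoulli "generator" are all toy stand-ins chosen so the loop fits in a few lines:

```python
import random

random.seed(0)

N_BITS = 12            # size of each candidate solution
POP = 200              # candidates sampled per round
ELITE = 20             # top solutions used to retrain the generator

def score(bits):
    # Toy objective ("one-max"): a stand-in for a real supply-chain
    # objective such as route cost or picking time.
    return sum(bits)

def sample(probs):
    # Draw one candidate solution from the current generative model.
    return [1 if random.random() < p else 0 for p in probs]

def fit(elite):
    # "Train" a minimal generative model: independent per-bit
    # probabilities estimated from the best solutions seen so far.
    return [sum(sol[i] for sol in elite) / len(elite) for i in range(N_BITS)]

probs = [0.5] * N_BITS              # uninformed generator to start
for _ in range(10):
    population = [sample(probs) for _ in range(POP)]
    elite = sorted(population, key=score, reverse=True)[:ELITE]
    probs = fit(elite)              # generator now mimics good solutions

best = max((sample(probs) for _ in range(POP)), key=score)
print(score(best))                  # should approach the optimum of 12
```

The loop is the essence of the approach: sample candidates, keep the best, refit the generator, repeat. A quantum or tensor-network model would replace the independent per-bit probabilities with a far richer distribution that can capture correlations between decision variables.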

Potential supply chain use cases include finding more efficient shipping routes, optimizing the organization of warehouses to speed up order-picking, or selecting the best combination of suppliers, distributors and vendors. Given the complexity of most global supply chains, there is ample room for optimization and cost savings as a result.

It sounds promising, but quantum computing has also been touted for years for its ability to solve optimization problems, and there is still no documented example of quantum computers providing an advantage for optimization. However, generative AI may be the fastest avenue to realizing that quantum advantage. It may also be the first place we see a practical quantum advantage at all.

To vastly oversimplify things, generative models like those behind ChatGPT work by learning patterns in massive datasets and producing new data that conforms to these patterns. In other words, they learn to replicate the probability distributions of the training data. Quantum computers have the ability to encode and sample from complex probability distributions in a way that classical computers cannot, giving them a potential advantage in generative modeling.

How is this possible? For one, quantum entanglement can encode distant correlations within a dataset in ways that would be difficult for a classical computer to simulate. Secondly, the inherently probabilistic nature of measuring a quantum state makes quantum computers the ideal vehicle for sampling from probability distributions.

The end result is the ability to generate a more diverse range of solutions to the generative modeling task. In the context of optimization, this means quantum generative models could generate new, previously unconsidered solutions.

But there's a catch: quantum devices are currently limited by low qubit counts and high error rates. Fortunately, we don't necessarily need quantum devices yet. Tensor networks, originally popularized among quantum physicists for simulating quantum states on classical computers, can be used for generative modeling today. And as quantum hardware matures, these quantum-inspired models can be translated into real quantum circuits, making them forward-compatible with future quantum devices.

Tensor networks have shown particular value for optimization problems with equality constraints. An equality constraint is a condition that must be satisfied exactly for the solution to be valid. Without a way to natively encode these constraints, traditional optimizers can generate many invalid solutions, resulting in inefficient and expensive searches.

On the other hand, tensor networks can be constrained so that they only output valid samples, resulting in more novel and higher-quality solutions to optimization problems. And while equality constraints can worsen the performance of other quantum or quantum-inspired approaches, the opposite is true of constrained tensor networks, which deliver better computational performance at lower cost with each additional equality constraint.
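The value of natively encoding an equality constraint can be illustrated without any tensor-network machinery. In the toy sketch below (my own illustration, not the article's method), the constraint is "select exactly K of N items": a naive generator wastes most of its samples on invalid candidates that must be rejected, while a generator built around the constraint produces only valid ones, loosely analogous to how a constrained tensor network samples only from the valid subspace:

```python
import random

random.seed(1)

N, K = 10, 3   # equality constraint: exactly K of N items selected

def unconstrained_sample():
    # Naive generator: each bit is independent, so most samples
    # violate sum(x) == K and must be thrown away.
    return [random.randint(0, 1) for _ in range(N)]

def constrained_sample():
    # Generator that encodes the constraint natively: it chooses K
    # positions directly, so every sample is valid by construction.
    ones = set(random.sample(range(N), K))
    return [1 if i in ones else 0 for i in range(N)]

naive = [unconstrained_sample() for _ in range(1000)]
valid_naive = [s for s in naive if sum(s) == K]
constrained = [constrained_sample() for _ in range(1000)]

print(len(valid_naive))                        # only a fraction survive rejection
print(sum(sum(s) == K for s in constrained))   # every constrained sample is valid
```

With 10 bits and K = 3, only about one naive sample in nine is valid, and the gap widens rapidly as problems grow, which is why rejection-based search becomes inefficient and expensive.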

There are many possible applications of GEO that could make the supply chain more efficient. Below are a few examples:

Of course, supply chains can vary widely from industry to industry. You may have additional optimization use cases that are unique to your business. But across the board, generating better optimization solutions has the potential to reduce costs and speed up the supply chain. Optimization could also reduce waste and cut carbon emissions, a great place to start for businesses looking for ways to reduce their carbon footprint.

How great is the potential value at stake? The only way to find out is to try. We are still in the early days of generative AI and even more so with quantum-inspired generative AI. By building and deploying generative AI applications, not only do you stand to gain a competitive advantage, but you may also make discoveries that advance the field.

It's important to reiterate that tensor networks are forward-compatible with real quantum computation. Businesses that deploy tensor networks for optimization may not only gain an advantage today, but would also be in a position to gain a potentially greater advantage as quantum hardware becomes more powerful. In other words, they will become quantum ready.

Excerpt from:
How Quantum-Enhanced Generative AI Could Help Optimize ... - Supply and Demand Chain Executive