Archive for the ‘Quantum Computer’ Category

Researchers Secure Prestigious Federal Grants | News | New York … – New York Institute of Technology

Pictured from left: Weikang Cai, Jerry Cheng, Sophia Domokos, Eve Armstrong, and Yusui Chen

In recent weeks, five research projects led by New York Tech faculty have collectively secured more than $1.6 million in federal funding from the National Science Foundation (NSF) and the National Institutes of Health (NIH).

The funding will support projects spanning physics, computer science, and biomedical science, led by faculty from the College of Arts and Sciences, College of Osteopathic Medicine (NYITCOM), and College of Engineering and Computing Sciences. Findings from these studies could help to advance quantum computing, lead to new Alzheimer's disease treatments, explain how heavy elements first formed, enable mobile devices to detect cardiovascular disease, and offer insight that could revolutionize magnetic resonance imaging (MRI) and magnetic levitation (maglev) technologies.

The research projects will also engage undergraduate, graduate, and medical students, providing excellent opportunities for them to gain a deeper understanding of the scientific process and mentorship from some of the university's brightest minds.

A research team led by Assistant Professor of Physics Yusui Chen, Ph.D., has secured an NSF grant totaling $650,000 [1] for a three-year project that could enhance understanding of quantum physics within real environments, a necessary step toward advancing the field of quantum computing.

Many scientists and experts believe that quantum computing could provide the necessary insight to help solve some of society's biggest issues, including climate change and deadly diseases. However, much remains unknown about how these systems operate, and uncovering their full potential first requires an advanced understanding of the physics principles that provide their theoretical framework.

Quantum computers, which are made of information storage units called qubits, are inherently subject to environmental influences. Some multi-qubit systems are influenced by a memory of past interactions with the environment, thereby affecting their future behavior (non-Markovian systems). However, few mathematical tools exist to study these dynamics, and as systems grow larger and more complex, modeling them on classical, binary computers is unfeasible.

Chen and his research team, which includes undergraduate and graduate physics, computer science, and engineering students, as well as a researcher from Rutgers University, will establish a comprehensive method to investigate these dynamics while improving the accuracy of existing quantum simulation algorithms. Their insights could deepen understanding of the fundamental physics governing how quantum computers operate.

The project also includes efforts to build a pipeline of diverse talent and researchers, a critical factor in helping to advance the field of quantum information science and engineering (QISE). As such, Chen will mentor undergraduate New York Tech students, particularly female students and those from traditionally underrepresented backgrounds. He will also conduct outreach to K-12 schools with the aim of introducing STEM concepts and sparking younger students' interest in QISE.

A project led by Assistant Professor of Physics Eve Armstrong, Ph.D., has received a three-year NSF grant totaling $360,000 [2] in support of her continued research into one of science's greatest mysteries: how the universe formed from stardust.

The research will build on Armstrong's earlier NSF-funded project, which received a two-year $299,998 NSF EAGER grant in 2021.

While the Big Bang created the first and lightest elements (hydrogen and helium), the next and heavier elements (up to iron on the periodic table) formed later inside ancient, massive stars. When these stars exploded, their matter catapulted into space, seeding that space with elements. Eventually, this stardust formed the sun and planets, and over billions of years, Earth's matter coalesced into the first life forms. However, the origins of elements heavier than iron, like gold and copper, remain unknown. While they may have formed during a supernova explosion, current computational techniques make it difficult to comprehensively study the physics of these events. In addition, supernovae are rare, occurring about once every 50 years, and the only existing data comes from the last nearby explosion, observed in 1987.

Armstrong posits that a weather prediction technique called data assimilation may enhance understanding of these events. The technique relies on limited information to sequentially estimate how conditions change over time, which may make it well suited to modeling supernova conditions. In preparation for the next supernova event, Armstrong and undergraduate New York Tech students will use data assimilation with simulated data to predict whether the supernova environment could have given rise to some heavy elements. If successful, these forecasts may allow scientists to determine which elements formed from supernova stardust.

Since receiving her EAGER grant in 2021, Armstrong and her students have begun applying the technique for the first time to real data from the sun's neutrinos (tiny, nearly massless particles that travel at near-light speeds). This is an important test of the technique's performance with real data, which is significantly more challenging than simulation. Their most recent paper, published in the journal Physical Review D, shows promising results.

Armstrong's NSF-funded project will also support her broader impacts work on science communication. Since 2021, she has led workshops for young scientists at New York Tech and the American Museum of Natural History, where participants use techniques from standup comedy, storytelling, and improvisation to create original performances. In addition, for the first time, Armstrong is teaching a formal course on improvisation for New York Tech students this semester.

Assistant Professor of Biomedical Sciences Weikang Cai, Ph.D., has received a $306,000 NIH grant [3] to lead a two-year research project that will investigate how certain molecules may play a role in the progression of Alzheimer's disease.

Adenosine triphosphate (ATP) is a small molecule within cells that fuels nearly all biochemical and cellular processes in living organisms. Under specific scenarios, both neurons and non-neuronal cells in the brain can release ATP outside of cells. Consequently, ATP can serve as a signaling molecule to communicate with nearby brain cells and regulate their functions. In addition, growing evidence demonstrates that astrocytes, the most abundant non-neuronal cells in the brain, may contribute to the development of Alzheimer's disease.

Using a mouse model, the researchers will assess how ATP released from astrocytes is regulated in Alzheimer's disease and whether eliminating astrocyte-released ATP could alter disease progression. Their findings may lead to the development of new strategies to treat or alleviate Alzheimer's disease and its related symptoms.

Other members of the research team include Biomedical Sciences Instructor Qian Huang, Ph.D., and Senior Research Associate Hiu Ham Lee, M.S., who initially spearheaded the project, as well as NYITCOM students Zoya Ramzan, Lucy Yu, David Shi, Alexandra Abrams, Sky Lee, and Yash Patel, and undergraduate Life Sciences students Addison Li and Priyal Gajera. In addition, several other NYITCOM students contributed to preliminary studies leading up to the current project, including Marisa Wong, Shan Jin, Min Seong Kim, and Matthew Jiang.

In 2021, Cai also received an NIH grant to research how chronic stress inhibits ATP release, thereby reducing dopamine activity and potentially contributing to clinical depression.

Assistant Professor of Computer Science Jerry Cheng, Ph.D., has received an NSF grant totaling $159,979 [4] for a three-year project to establish a data analytics and machine learning (artificial intelligence) framework that could allow at-home mobile devices like smartphones to detect biomarkers for early symptoms of cardiovascular disease.

Mobile devices usually have limited memory, computing power, and battery capacity for complex computations. To address this, Cheng and his research team will develop software deep learning accelerators, which will allow mobile devices to perform AI modeling. They will also develop security measures to mitigate attacks on cloud systems (a computationally efficient trusted execution environment), as well as time-dependent models to analyze sensing data such as respiratory rate, blood pressure, and heart rate. Graduate and undergraduate students from the College of Engineering and Computing Sciences will be recruited to assist in the project, which will also focus on promoting the participation of female engineering students.

Cheng has secured multiple NSF awards since arriving at New York Tech in 2019. In 2021, he received funding for mobile edge research to help ensure that smart device computing advancements do not outpace experiments in the field; in 2020, he received an award to design more efficient and secure deep learning processing machines that can reliably process and interpret extremely large-scale sets of data with little delay.

Associate Professor of Physics Sophia Domokos, Ph.D., has secured an NSF grant totaling $135,000 [5] for a three-year research project to explore the inner workings of matter. Domokos seeks to uncover how tiny elementary particles (quarks and gluons) interact to create new orders, like clumping together to form protons and neutrons in an atom's nucleus.

While scientists have a relatively useful mathematical explanation regarding how these tiny elementary particles behave, these models do not account for particles interacting frequently and forcefully. To address this, Domokos and her research team will use holographic duality, a string theory concept, and a mathematical structure called supersymmetry to categorize and classify the clumps of elementary particles that emerge in strongly interacting systems.

The insights they gain could shed light on the inner workings of protons and neutrons, as well as other strongly coupled systems such as high-Tc superconductors, special materials that could revolutionize key technologies like MRIs and maglev trains.

Domokos, who has recruited undergraduate students to assist in her previous NSF grant-funded research, will continue to do so for this latest study. Students will gain a deeper understanding of theoretical physics, as well as skills like solving differential equations and using scientific computation software, and first-hand experience drafting physics research papers.

[1] This project is funded by NSF Award No. 2328948 and will be completed in partnership with researcher Hang Liu, Ph.D., of Rutgers University. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF.

[2] This project is funded by NSF Award No. 2310066 and will be completed in partnership with University of Wisconsin-Madison physicist Akif Baha Balantekin, Ph.D. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF.

[3] This grant was supported by the National Institute on Aging of the National Institutes of Health under Award Number 1R03AG083363. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

[4] This project is funded by NSF Award No. 2311598. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF.

[5] This project is funded by NSF Award No. 2310305. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NSF.

View post:
Researchers Secure Prestigious Federal Grants | News | New York ... - New York Institute of Technology

Nonnegative/Binary matrix factorization for image classification … – Nature.com

NBMF [9] extracts features by decomposing data into basis and coefficient matrices. The dataset is converted into a positive matrix to prepare the input for the NBMF method. If the dataset contains m data points of dimension n, the input is an $n \times m$ matrix V. The input matrix is decomposed into a basis matrix W of size $n \times k$, representing the dataset features, and a coefficient matrix H of size $k \times m$, representing the combination of features selected to reconstruct the original matrix. Then,

$$V \approx WH, \tag{1}$$

where W and H are nonnegative and binary matrices, respectively. The number of columns k of W corresponds to the number of features extracted from the data and can be set to any value. To minimize the difference between the left and right sides of Eq. (1), W and H are updated alternately as

$$W := \mathop{\mathrm{arg\,min}}_{X \in \mathbb{R}_{+}^{n \times k}} \| V - XH \|_{F} + \alpha \| X \|_{F}, \tag{2}$$

$$H := \mathop{\mathrm{arg\,min}}_{X \in \{0,1\}^{k \times m}} \| V - WX \|_{F}, \tag{3}$$

where $\| \cdot \|_F$ denotes the Frobenius norm. The components of W and H are initialized randomly. The hyperparameter $\alpha$ is a positive real value that prevents overfitting and is set to $\alpha = 1.0 \times 10^{-4}$.
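To make the alternating scheme of Eqs. (2) and (3) concrete, the following is a minimal NumPy sketch of the outer loop only. It is not the authors' code: the function names, the random initialization, and the convergence test are assumptions, and the two solvers `update_W` and `update_H` are hypothetical placeholders for the gradient-based and annealing-based updates described below.

```python
import numpy as np

def nbmf(V, k, update_W, update_H, alpha=1e-4, max_iter=50, tol=1e-4):
    """Alternating minimization for V ~ W H (Eq. 1).

    V : (n, m) nonnegative data matrix
    k : number of features (columns of W)
    update_W, update_H : callables implementing Eqs. (2) and (3)
    """
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((n, k))                # random nonnegative initialization
    H = rng.integers(0, 2, size=(k, m))   # random binary initialization

    prev_err = np.inf
    for _ in range(max_iter):
        W = update_W(V, W, H, alpha)      # Eq. (2): nonnegative, bounded update
        H = update_H(V, W)                # Eq. (3): binary update, e.g. via a QUBO solver
        err = np.linalg.norm(V - W @ H, ord="fro")
        if abs(prev_err - err) < tol:     # stop once the residual stalls
            break
        prev_err = err
    return W, H
```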

In previous studies, the projected gradient method (PGM) was used to perform the update in Eq. (2) [16]. The loss function for the update in Eq. (2) is defined as

$$f_{W}(\boldsymbol{x}) = \| \boldsymbol{v} - H^{\mathrm{T}} \boldsymbol{x} \|^{2} + \alpha \| \boldsymbol{x} \|^{2}, \tag{4}$$

where $\boldsymbol{x}^{\mathrm{T}}$ and $\boldsymbol{v}^{\mathrm{T}}$ are row vectors of W and V, respectively. The gradient of Eq. (4) is expressed as

$$\nabla f_{W} = -H (\boldsymbol{v} - H^{\mathrm{T}} \boldsymbol{x}) + \alpha \boldsymbol{x}. \tag{5}$$

The PGM minimizes the loss function in Eq. (4) by updating $\boldsymbol{x}$:

$$\boldsymbol{x}^{t+1} = P\left[ \boldsymbol{x}^{t} - \gamma_{t} \nabla f_{W}(\boldsymbol{x}^{t}) \right], \tag{6}$$

where $\gamma_t$ is the learning rate and

$$P[x_i] = \begin{cases} 0 & (x_i \le 0), \\ x_i & (0 < x_i < x_{\max}), \\ x_{\max} & (x_{\max} \le x_i), \end{cases} \tag{7}$$

where $x_{\max}$ is the upper bound and is set to $x_{\max} = 1$. Eq. (7) is a projection that keeps the components of $\boldsymbol{x}$ nonnegative and bounded above by $x_{\max}$.
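As an illustration of Eqs. (5)-(7), here is a minimal sketch of one projected-gradient update for a single row of W. The step size `gamma`, the iteration count, and the function names are assumptions made for the sketch, not values from the paper.

```python
import numpy as np

def project(x, x_max=1.0):
    """Projection of Eq. (7): clip components to the interval [0, x_max]."""
    return np.clip(x, 0.0, x_max)

def pgm_update_row(v, H, x0, alpha=1e-4, gamma=0.01, n_steps=100):
    """Minimize the loss of Eq. (4) for one row of W by projected gradient descent.

    v  : the corresponding row of V (length m)
    H  : (k, m) binary coefficient matrix
    x0 : initial row of W (length k)
    """
    x = x0.copy()
    for _ in range(n_steps):
        grad = -H @ (v - H.T @ x) + alpha * x   # gradient of Eq. (5)
        x = project(x - gamma * grad)           # update of Eq. (6) with projection (7)
    return x
```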

However, because H is a binary matrix, Eq. (3) can be regarded as a combinatorial optimization problem that can be minimized by using an annealing method. To solve Eq. (3) using a D-Wave machine, a quantum annealing computer, we formulated the loss function as a quadratic unconstrained binary optimization (QUBO) model:

$$f_{H}(\boldsymbol{q}) = \sum_{i} \sum_{r} W_{ri}\left( W_{ri} - 2 v_{r} \right) q_{i} + 2 \sum_{i < j} \sum_{r} W_{ri} W_{rj} q_{i} q_{j}, \tag{8}$$

where $\boldsymbol{q}$ and $\boldsymbol{v}$ are the column vectors of H and V, respectively.
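The following sketch shows how the QUBO coefficients of Eq. (8) can be assembled for one column of V, assuming the standard expansion of $\| \boldsymbol{v} - W\boldsymbol{q} \|^2$ over binary variables. The dictionary format, function names, and the brute-force solver (practical only for very small k) are illustrative assumptions; in practice the same coefficients would be handed to an annealing sampler (for example, via D-Wave's Ocean tools) rather than enumerated exhaustively.

```python
import numpy as np

def qubo_for_column(W, v):
    """Build the QUBO coefficients of Eq. (8) for one column v of V.

    Returns {(i, j): coeff} with the linear terms on the diagonal,
    a format commonly accepted by annealing solvers.
    """
    n, k = W.shape
    Q = {}
    for i in range(k):
        # linear term: sum_r W_ri (W_ri - 2 v_r)
        Q[(i, i)] = float(np.sum(W[:, i] * (W[:, i] - 2.0 * v)))
        for j in range(i + 1, k):
            # quadratic term: 2 sum_r W_ri W_rj
            Q[(i, j)] = float(2.0 * np.sum(W[:, i] * W[:, j]))
    return Q

def solve_qubo_brute_force(Q, k):
    """Exhaustive minimization of the QUBO, feasible only for small k."""
    best_q, best_e = None, np.inf
    for bits in range(2 ** k):
        q = np.array([(bits >> i) & 1 for i in range(k)])
        e = sum(c * q[i] * q[j] for (i, j), c in Q.items())
        if e < best_e:
            best_q, best_e = q, e
    return best_q
```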

After the alternating update converges, we obtain W and H, which minimize the difference between the left and right sides of Eq. (1). W consists of representative features extracted from the input data, and H represents, in binary values, the combination of features in W used to reconstruct V. Therefore, V can be approximated as the product of W and H.

Previous studies used NBMF to extract features from facial images [9]. When the number of annealing steps is small, the computation time is shorter than that of a classical combinatorial optimization solver. However, using the D-Wave machine has the disadvantage that the computing time increases linearly with the number of annealing runs, whereas the computing time of the classical solver does not change significantly. The results were compared with NMF [14]. Unlike in NBMF, the matrix H in NMF is positive rather than binary. While the matrix H produced by NBMF was sparser than that of NMF, the difference between V and WH for NBMF was approximately 2.17 times larger than for NMF. Although NBMF can have a shorter data processing time than the classical method, it is inferior to NMF as a machine-learning method in terms of accuracy. Moreover, because previous studies did not demonstrate tasks beyond data reconstruction, the usefulness of NBMF as a machine-learning model remained uncertain.

In this study, we propose the application of NBMF to a multiclass classification model. Inspired by the structure of a fully connected neural network (FCNN), we define an image classification model using NBMF. In an FCNN, image data are fed into the network as input, as shown in Fig. 1, and the predicted classes are obtained as the output of the network through the hidden layers.

An overview of a fully-connected neural network.

To perform fully connected network learning using NBMF, we interpret the structure shown in Fig. 1 as a single-matrix decomposition. When the input and output layers of the FCNN are combined into one input layer, the network becomes a two-layer network with the same structure as NBMF. As the input to the training network by NBMF, we use a matrix consisting of the image data and the corresponding class information. The class information is represented by a one-hot vector multiplied by an arbitrary real number g. The image data and class information vectors are combined row-wise and transformed into an input matrix V. We use NBMF to decompose V into the basis matrix W and the coefficient matrix H, as shown in Fig. 2; a minimal sketch of this construction appears after the figure. The column vectors in H correspond to the nodes in the hidden layer of the FCNN, and the components of W correspond to the weights of the edges. The number of feature dimensions k in NBMF corresponds to the number of nodes in the hidden layer of the FCNN.

An overview of training by NBMF.
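Here is a minimal sketch of how the input matrix V could be assembled from images and labels as described above; the array shapes, the pixel scaling, and the default number of classes are assumptions for illustration rather than details taken from the paper.

```python
import numpy as np

def build_training_matrix(images, labels, n_classes=10, g=1.0):
    """Stack flattened image data and scaled one-hot class information row-wise.

    images : (m, height, width) array of training images
    labels : (m,) integer class labels
    g      : scaling factor applied to the one-hot class rows
    Returns V of shape (n_pixels + n_classes, m), the input to NBMF training.
    """
    m = images.shape[0]
    X = images.reshape(m, -1).astype(float).T   # (n_pixels, m); each column is one image
    X /= max(X.max(), 1e-12)                    # scale pixel values into [0, 1]
    C = np.zeros((n_classes, m))
    C[labels, np.arange(m)] = g                 # one-hot class vectors multiplied by g
    return np.vstack([X, C])                    # combine image and class rows into V
```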

To obtain H, we minimize Eq. (8) by using an annealing solver, as in a previous study. However, to obtain W by minimizing Eq. (4), we propose using the projected Root Mean Square Propagation (RMSProp) method instead of the PGM used in a previous study. RMSProp is a gradient descent method that adjusts the learning and decay rates to help the solution escape local minima [17]. RMSProp updates the vector $\boldsymbol{h}$, whose components are denoted by $h_i$, as

$$h^{t+1}_{i} = \beta h^{t}_{i} + (1 - \beta) g^{2}_{i}, \tag{9}$$

where $\beta$ is the decay rate, $\boldsymbol{g} = \nabla f_W$, and the vector $\boldsymbol{x}$ is updated as

$$\boldsymbol{x}^{t+1} = \boldsymbol{x}^{t} - \eta \frac{1}{\sqrt{\boldsymbol{h}^{t} + \epsilon}} \nabla f_{W}, \tag{10}$$

where $\eta$ is the learning rate, and $\epsilon$ is a small value that prevents division-by-zero errors. After updating $\boldsymbol{x}$ using Eq. (10), we apply the projection described in Eq. (7) to ensure that the solution does not exceed the bounds. We call this method projected RMSProp.
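A minimal sketch of the projected RMSProp update of Eqs. (9) and (10), followed by the projection of Eq. (7), for a single row of W; the learning rate, decay rate, epsilon, iteration count, and function name are assumed illustrative values, not the paper's settings.

```python
import numpy as np

def projected_rmsprop_row(v, H, x0, alpha=1e-4, eta=0.01,
                          beta=0.9, eps=1e-8, n_steps=100, x_max=1.0):
    """Minimize the loss of Eq. (4) for one row of W with projected RMSProp."""
    x = x0.copy()
    h = np.zeros_like(x)
    for _ in range(n_steps):
        g = -H @ (v - H.T @ x) + alpha * x    # gradient, Eq. (5)
        h = beta * h + (1.0 - beta) * g ** 2  # accumulate squared gradients, Eq. (9)
        x = x - eta * g / np.sqrt(h + eps)    # RMSProp step, Eq. (10)
        x = np.clip(x, 0.0, x_max)            # projection of Eq. (7)
    return x
```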

In Fig. 3, we illustrate the information contained in W. Because the row vectors of W correspond to those of V, W consists of $W_1$, corresponding to the image data information, and $W_2$, corresponding to the class information. We plotted four column vectors selected from W trained with MNIST handwritten digit images under the conditions $m = 300$ and $k = 40$, as shown in Fig. 3. The images in Fig. 3 show the column vectors of $W_1$. The blue histograms show the frequencies at which the column vectors were selected to reconstruct the training data images with each label. The orange bar graphs show the component values of the corresponding column vectors of $W_2$. For example, the image in Fig. 3a resembles the number 0. From the histogram next to the image, we see that the image is often used to reconstruct training data labeled 0. In the bar graph on the right, the corresponding column vector of $W_2$ has its largest component value at index 0. This indicates that the column vector corresponding to the image captures a feature of the number 0. Similarly, the image in Fig. 3b has a label of 9. However, the image in Fig. 3c appears to have curved features. From the histogram and bar graph next to the image, it appears that the image is often used to represent labels 2 and 3. This result is consistent with the fact that both numbers contain a curve, which explains why the column vector of $W_1$ was used in the reconstruction of images with labels 2 and 3. The image in Fig. 3d has the shape of a straight line, and the corresponding histogram shows that the image is mainly used to express label 1 and is also frequently used to express label 6. Because the number 6 has a straight-line part, this result is reasonable.

The figure shows four sets of images, (a), (b), (c), and (d), corresponding to column vectors selected from W. Each set contains an image, a histogram, and a bar graph. The image represents a column vector of $W_1$, and the histogram shows how often the column vector was selected to reconstruct the training data images with each label. The orange bar graph plots the component values of the corresponding column vector of $W_2$.

In our multiclass classification model using NBMF, we use the trained matrices $W_1$ and $W_2$ to classify the test data, following the workflow shown in Fig. 4.

An overview of testing by NBMF.

First, we decompose the test data matrix $V_{\text{test}}$ to obtain $H_{\text{test}}$ by using $W_1$. Here, M represents the number of test samples, which corresponds to the number of column vectors of $V_{\text{test}}$. We use Eq. (3) for the decomposition. Each column vector of $H_{\text{test}}$ represents the features selected from the trained $W_1$ to approximate the corresponding column vector of $V_{\text{test}}$. Second, we multiply $W_2$ by $H_{\text{test}}$ to obtain $U_{\text{test}}$, which expresses the prediction of the class vector corresponding to each column vector in $V_{\text{test}}$. Finally, we apply the softmax function to the components of $U_{\text{test}}$ and take the index with the largest component value in each column vector as the predicted class.
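The testing workflow above can be summarized in a short sketch; the function names are assumptions, and `solve_H` stands in for whatever solver of Eq. (3) is used (for example, the QUBO-based one sketched earlier).

```python
import numpy as np

def softmax(u):
    e = np.exp(u - u.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def classify(V_test, W1, W2, solve_H):
    """Predict classes for test images from trained W1 (image rows) and W2 (class rows).

    V_test  : (n_pixels, M) matrix of flattened test images
    solve_H : callable returning a binary H_test minimizing Eq. (3) given V_test and W1
    """
    H_test = solve_H(V_test, W1)   # features selected from the trained W1
    U_test = W2 @ H_test           # predicted class scores, one column per test image
    probs = softmax(U_test)        # normalize scores column-wise
    return probs.argmax(axis=0)    # predicted class = index of the largest component
```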

View original post here:
Nonnegative/Binary matrix factorization for image classification ... - Nature.com

Quantum Computing in Automotive Market worth $5,203 million – GlobeNewswire

Chicago, Sept. 26, 2023 (GLOBE NEWSWIRE) -- Quantum Computing in Automotive Market is projected to grow from USD 143 million in 2026 to USD 5,203 million by 2035, at a CAGR of 49.0% from 2026 to 2035, according to a new report by MarketsandMarkets.

Browse in-depth TOC on "Quantum Computing in Automotive Market": 139 Tables, 34 Figures, 203 Pages

Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=63736889

Increasing government investment in quantum computing infrastructure and growing strategic collaboration between OEMs, auto component manufacturers, and technology providers to advance quantum computing technology for complex automotive applications are the key growth factors of this market.

Route planning & traffic management are expected to dominate the quantum computing in automotive industry

Route planning and traffic management are among the initial focus areas for automotive players exploring quantum computing applications. Using real-time data, smart simulation, and optimization techniques, quantum computing can enable effective route and traffic management, including analysis of traffic patterns and flow, shortest-path selection, and tracking of weather conditions. A few automotive OEMs have collaborated with quantum computing technology providers to develop an efficient roadmap for route optimization and traffic flow management. In 2019, Volkswagen Group launched the world's first pilot project for traffic optimization with a quantum computer, built by D-Wave Systems Inc., in Lisbon, Portugal. Optimized route planning and traffic management are expected to reduce freight transportation costs in last-mile delivery while reducing delivery time. Additionally, route optimization using quantum computing algorithms could transform the ridesharing and mobility-as-a-service (MaaS) markets by moving vehicle fleets faster to high-demand locations, allowing drivers to earn more by completing more rides.

OEMs to dominate the quantum computing in automotive market during the forecast period

Automotive OEMs will dominate the quantum computing market, as auto giants such as Volkswagen AG, Daimler AG, BMW Group, Hyundai Motors, Ford Motors, and General Motors are early adopters of quantum computing technology for various industrial applications. Technology leaders such as IBM Corporation, Microsoft Corporation, and Alphabet Inc. are extending their industry coverage to increase practical use cases. In May 2022, PASQAL and BMW Group entered a technical collaboration agreement to analyze the applicability of quantum computing algorithms to metal-forming application modelling. Additionally, OEMs are focusing on using quantum computing for applications such as product design, vehicle engineering, material research, production planning & optimization, demand forecasting, workforce management, and supply chain optimization. Further, upcoming revised emission regulations (e.g., Euro 7 in Europe) and regional stakeholder efforts to reduce fleet-level carbon emissions have boosted electric vehicle sales in recent years. Quantum computing can further accelerate battery innovation by using quantum simulation techniques to investigate new material compositions and the precise chemical reactions inside the battery. Moreover, quantum computing will also be helpful for autonomous vehicles, covering aspects such as route optimization, object recognition, and sensor fusion. Of all possible automotive applications, quantum computing is currently utilized for only a few; more such advancements are expected from global OEMs in the coming years to cut new-product launch time and cost.

Make an Inquiry: https://www.marketsandmarkets.com/Enquiry_Before_BuyingNew.asp?id=63736889

"Europe is anticipated to be the second largest quantum computing in automotive market by 2035."

According to MarketsandMarkets analysis, Europe is projected to be the second-largest market for automotive quantum computing. The market growth is mainly attributed to factors such as increasing investments from government bodies and the increasing interest of local OEMs in developing a ground base for quantum computing in automotive applications. For instance, the European High-Performance Computing Joint Undertaking (EuroHPC JU) has planned an investment of nearly USD 7.2 billion to develop quantum computing infrastructure in the coming years. Further, the European automotive industry is heavily affected by stringent regulations on vehicular emissions and on safety and comfort standards. Vehicles are installed with ADAS and comfort features; because of this, many OEMs prefer advanced lightweight materials to reduce vehicle weight and achieve better fuel economy. In addition, European customers are rapidly shifting toward electric vehicles to curb NOx and PM emissions.

Key Market Players

Key companies in the quantum computing in automotive market include IBM Corporation (US), Microsoft Corporation (US), D-Wave Systems Inc. (US), Amazon (US), Rigetti & Co, LLC (US), Alphabet Inc. (US), Accenture Plc (Ireland), Terra Quantum (Switzerland), and PASQAL (France).

Get 10% Free Customization: https://www.marketsandmarkets.com/requestCustomizationNew.asp?id=63736889

Browse Adjacent Market: Automotive and Transportation Market Research Reports & Consulting

Browse Related Reports:

Automotive Blockchain Market - Global Forecast to 2030

Semi-Autonomous & Autonomous Truck Market - Global Forecast to 2030

Artificial Intelligence in Transportation Market - Global Forecast to 2030

Autonomous / Self-Driving Cars Market - Global Forecast to 2030

Continue reading here:
Quantum Computing in Automotive Market worth $5,203 million - GlobeNewswire

Quantum Computing Inc. to Present at 8th Annual Dawson James … – GlobeNewswire

LEESBURG, VA, Sept. 27, 2023 (GLOBE NEWSWIRE) -- Quantum Computing Inc. (QCi or the Company) (Nasdaq: QUBT), an innovative quantum optics and nanophotonics technology company, will present at the 8th Annual Dawson James Conference on Thursday, October 12, 2023, at the Wyndham Grand Jupiter at Harbourside Place in Jupiter, Florida. To view the webcast of this presentation, click this link.

QCi's CEO and CFO will conduct in-person one-on-one meetings throughout the conference and deliver the Company's presentation as shown below.

8th Annual Dawson James Conference
When: Thursday, October 12, 2023
Time: 10:30 a.m.
Where: Preserve Ballroom B, Wyndham Grand Jupiter at Harbourside Place, Jupiter, Florida

About Quantum Computing Inc.

Quantum Computing Inc. (QCi) (Nasdaq: QUBT) is an innovative quantum optics and nanophotonics technology company on a mission to accelerate the value of quantum computing for real-world business solutions, delivering the future of quantum computing, today. The company provides accessible and affordable solutions with real-world industrial applications, using nanophotonic-based quantum entropy that can be used anywhere and with little to no training, operates at normal room temperatures and low power, and is not burdened with unique environmental requirements. QCi is competitively advantaged, delivering its quantum solutions with greater speed, accuracy, and security at lower cost. QCi's core nanophotonic-based technology is applicable to quantum computing as well as quantum intelligence, cybersecurity, sensing, and imaging solutions, providing QCi with a unique position in the marketplace. QCi's core entropy computing capability, the Dirac series, delivers solutions for both binary and integer-based optimization problems using over 11,000 qubits for binary problems and over 1,000 (n=64) qubits for integer-based problems, each of which is the largest number of variables and problem size available in quantum computing today. Using the Company's core quantum methodologies, QCi has developed specific quantum applications for AI, cybersecurity, and remote sensing, including its Reservoir Photonic Computer series (intelligence), reprogrammable and non-repeatable Quantum Random Number Generator (cybersecurity), and LiDAR and Vibrometer (sensing) products. For more information about QCi, visit www.quantumcomputinginc.com.

About Quantum Innovative Solutions

Quantum Innovative Solutions (QI Solutions or QIS), a wholly owned subsidiary of Quantum Computing Inc., is an Arizona-based supplier of quantum technology solutions and services to the government and defense industries. With a team of qualified and cleared staff, QIS delivers a range of solutions from entropy quantum computing to quantum communications and sensing, backed by expertise in logistics, manufacturing, R&D and training. The company is exclusively focused on delivering tailored solutions for partners in various government departments and agencies.

About Dawson James

Dawson James Securities specializes in capital raising for small and microcap public and private growth companies primarily in the Life Science/Health Care, Technology, Clean tech, and Consumer sectors. We are a full-service investment banking firm with research, institutional and retail sales, and execution trading and corporate services. By investing the time required to completely understand your business, we can provide an appropriate capital transaction structure and strategy including direct investment through our independent fund. Our team will assist in crafting your vision and shaping your message for the capital markets. Headquartered in Boca Raton, FL, Dawson James is privately held with offices in New York, Maryland, and New Jersey. http://www.dawsonjames.com

QCi Media and Investor Contact:
Jessica Tocco, CEO, A10 Associates
Tel: 765-210-0875
Jessica.Tocco@a10associates.com

Read more:
Quantum Computing Inc. to Present at 8th Annual Dawson James ... - GlobeNewswire

Where are we at with quantum computing? – Cosmos

Aberdeen, Maryland in the late 1940s was an exciting place to be. They had a computer so powerful and so energy intensive that there were rumours that when it switched on, the lights in Philadelphia dimmed.

The computer, called ENIAC, took up an area almost the size of a tennis court. It needed 18,000 vacuum tubes and had cords thicker than fists crisscrossing the room, connecting one section to another.

Despite its size, today it's less impressive. Its computing power would be dwarfed by a desk calculator.

Professor Tom Stace, the Deputy Director of the ARC Centre of Excellence in Engineered Quantum Systems (EQUS), believes that quantum computing is best thought of not as computers like we know them today, but as big lumbering systems like the ENIAC.

"ENIAC was the first digital computer," said Stace.

"You see engineers programming, but that meant literally unplugging cables and plugging them into these gigantic room-size things. That's sort of what a quantum computer looks like now. It's literally bolt cables that people have to wire up and solder together."

To understand where we're at with quantum computing currently, you first have to understand its potential.

Right now, quantum computing is still in the very earliest stages of its development, despite the huge hype around quantum suggesting otherwise.

The ENIAC was useful despite its bulk, allowing programmers to do thousands of mathematical problems a second, and computations for the hydrogen bomb.

On the other hand, quantum computers are not yet suitable even for the niche roles that scientists hope they will one day fill. The idea that quantum computers might one day replace your laptop is still basically in the realm of science fiction.

But that doesn't mean that they can't one day be useful.

"We know that quantum computers can solve a few sets of problems in a way that ordinary computers just can't do," says Stace.

"The famous one is factoring numbers. Finding the prime factors of a large number is genuinely a very difficult mathematical problem."

Because banks, governments, and anyone else who wants to keep something secret rely on the difficulty of factoring large numbers for their digital security, our security systems would fall apart as soon as someone created a quantum computer that could outpace ordinary computers at this task. Groups like the Australian Cyber Security Centre have already started putting in place plans for when this eventually occurs.

Quantum computers could also fundamentally change the chemistry field, with more processing power to simulate better catalysts, fertilisers, or other industrial chemicals.

But this can only happen if quantum computers move beyond the realm they are in now, what scientists call Noisy Intermediate-Scale Quantum (NISQ).

Computers are simply devices that can store and process data. Even the earliest computers used bits, a basic unit of information that can either be on or off.

Quantum computers are also devices that can store and process information, but instead of using bits, quantum computers use quantum bits, or qubits, which don't just turn on and off but can also point anywhere in between.

The key to quantum computers' huge potential, and also to their problems, is these qubits.

Groups like IBM and Google have spent millions of dollars on creating quantum computers, no doubt buoyed by the riches for the company that comes first.

Their efforts so far have been relatively lacklustre.

The machines are clunky; each wire and qubit needs to be individually placed or set up manually. The whole thing needs to be set up inside a freezer cooled down to almost absolute zero.

Despite all these safeguards, the machines still have enough errors that it's almost impossible to tell if the machines worked, or if these million-dollar systems are just producing random noise.

And even that is impressive to scientists like Stace.

"Twenty years ago, if you had one qubit you got a Nature paper. Fifteen years ago, two or three qubits got you a Nature paper. Ten years ago, five qubits got you a Nature paper. Now, 70 qubits might get you a Nature paper," says Stace.

"That's telling you what the frontier looks like."

Those on the frontier are aiming for supremacy, quantum supremacy to be exact.

Quantum supremacy is a term given to a quantum computer that could solve a problem no classical computer could solve in a reasonable time frame. It's important to note, though, that this problem doesn't have to be useful. There's been a debate in quantum circles about how useful and practical these sorts of problems, or simulations, actually are in proving that quantum is better.

Google's machine, called the Sycamore processor, currently has 70 qubits all lined up and connected. In 2019, the researchers claimed they'd reached quantum supremacy. More recently, they were more specific, suggesting that a top-level supercomputer would take 47 years to do the calculations that Sycamore managed to do in seconds.

IBM says its 433-qubit quantum computer called Osprey could soon start having real-world applications. However, while IBM is further ahead in number of qubits, it is still struggling with the same error issues as other quantum systems.

To get to a quantum computer that could rival supercomputers at actual tasks, you need hundreds of thousands, or millions, of qubits rather than a few hundred. But the more qubits you have, the more errors end up in the system.

"Quantum systems are typically single atoms or single particles of light. Naturally, these are very fragile and very prone to disturbance or noise," says UNSW quantum researcher and entrepreneur Professor Andrew Dzurak.

"That noise causes errors in the qubit information."

"Heat also causes errors; vibration causes errors. Even just simply looking at or measuring the qubit stops it altogether."

Both Dzurak and Stace stress the importance of fixing these errors. Without that, you have a very expensive, fragile machine that can't tell you anything accurately.

How to fix these errors isn't yet certain. While IBM, Google and other big companies are using superconducting qubits, smaller groups around the world are using everything from silicon to imperfections in diamond.

Dzurak has formed a start-up called Diraq which is aiming to use traditional computer chip technology to mount the qubits, allowing easier design and the ability to pack millions of qubits on one chip.

"We have a mountain to climb, and you have to go through the stages to get up that mountain," he says.

"The work that is being done by [IBM and Google] in collaboration, often with university groups, is important research and is moving the field forward."

Entanglement is another important aspect of quantum computers, and it makes them considerably harder to make work. A quirk of quantum mechanics is that particles can become intrinsically linked, regardless of their distance. This means that if you measure one particle you can learn information about the other, even if you're halfway across the Universe. This is entanglement, and the more particles you can entangle, the more powerful your quantum computer can be.

But the more particles you entangle, the more complicated the system becomes, and the more likely it will break down.

Here the history of computers seems to be repeating.

While ENIAC in Maryland was an undisputed success, it wasn't the first design for a computer, not by a long shot. The first design for a computer, called the difference engine, was drawn up by the mathematician Charles Babbage in the 1820s.

But it wouldn't be built in Babbage's lifetime.

With only the technology available at the time, it was impossible to machine the metal precisely enough to build it. The project was doomed to fail from the start.

It wasn't until the invention of something seemingly unrelated, vacuum tubes (or valves), that ENIAC and other types of computers could begin to be built in earnest.

It's a hard thing to admit, but when it comes to quantum computers, we don't yet know whether we're building the ENIAC or struggling with Babbage's difference engine.

"It might be the case that the components that we're pursuing now just aren't precise enough, in the same way that the machining tools that they had in the 19th century weren't precise enough to make a mechanical computer," says Stace.

So where are we at with quantum computing? Not very far at all.

"It could be that we're somewhere between Charles Babbage and the valve. We've got the idea, we know in principle we can make this thing. We just don't know if we have the engineering chops to do it."

Original post:
Where are we at with quantum computing? - Cosmos