Archive for the ‘Machine Learning’ Category

Epic Bio Reports Discovery of Exceptionally Durable Gene … – GlobeNewswire

- Vast high-throughput screening study used to train unique machine learning algorithm to design synthetic activators -

- Rational engineering produced activators that induce the most durable and mitotically stable gene activation reported to date -

SOUTH SAN FRANCISCO, Calif., Sept. 14, 2023 (GLOBE NEWSWIRE) -- Epic Bio, a biotechnology company developing therapies to modulate gene expression using compact, non-cutting dCas proteins, today announced data supporting the breakthrough potential of its Gene Expression Modulation System (GEMS) platform for epigenetic engineering. In two preprint studies posted on bioRxiv, the company reported the discovery of exceptionally durable, hypercompact gene activators, and the training of a machine learning model to generate additional synthetic activators.

Novel Activators from High-Throughput Screen are Optimized

In the first paper, "Discovery and engineering of hypercompact epigenetic modulators for durable gene activation," Epic Bio's team reports the outcomes of the broadest known survey of naturally occurring protein sequences for novel activators, along with the subsequent engineering needed to overcome the primary barriers to therapeutic use.

Epic scientists designed a high-throughput screen to systematically assess the transcriptional effects of peptide sequences from human, viral, and archaeal sources. The peptide sequences were incorporated into a GEMS construct and assayed for their ability to activate a synthetic genetic locus.

The resulting activators were then subjected to protein engineering to overcome the three main barriers to therapeutic use: potency (strength of activation), robustness (activity against multiple different target types), and durability (persistence and heritability of gene activation after transient delivery of the activator).

Ultimately, Epic's team created activators that induce the most durable and mitotically stable gene activation reported to date. These display a novel ability to maintain target activation through dozens of cell divisions after a single transient delivery, despite being only ~12-20% of the cargo size of the most commonly used activators today.

Machine Learning Improves Success in Discovering Novel Activators

In the second paper, "Improving few-shot learning-based protein engineering with evolutionary sampling," Epic reports on the development of a novel machine-learning approach, trained on the activators discovered in its prior work, to design entirely new synthetic activators.

To address the challenges of limited training data and the rarity of positive hits in this setting, Drs. Zaki Jawaid, Robin Yeo, and Timothy Daley created a novel algorithm, Evolutionary Monte Carlo Search, to efficiently sample the fitness landscape and propose novel, potent gene activators. Proposed activator sequences were experimentally validated for their ability to activate a synthetic genetic locus.
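The preprint's exact algorithm is not reproduced here, but the general shape of an evolutionary Monte Carlo sampler can be sketched in a few lines. Everything below is an illustrative stand-in: surrogate_fitness is a toy in place of the trained few-shot model, and the mutation, crossover, and Metropolis-acceptance moves are generic rather than Epic Bio's published method.

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def surrogate_fitness(seq):
    # Stand-in for a learned few-shot model scoring activator potency;
    # here it simply rewards acidic residues, which are common in
    # natural transcriptional activation domains.
    return sum(seq.count(aa) for aa in "DE") / len(seq)

def mutate(seq):
    # Local move: a single random point mutation.
    i = random.randrange(len(seq))
    return seq[:i] + random.choice(AMINO_ACIDS) + seq[i + 1:]

def crossover(a, b):
    # Evolutionary move: splice a segment from another population member.
    i = random.randrange(len(a))
    return a[:i] + b[i:]

def emc_search(pop_size=20, seq_len=30, steps=5000, temperature=0.05):
    population = ["".join(random.choices(AMINO_ACIDS, k=seq_len))
                  for _ in range(pop_size)]
    scores = [surrogate_fitness(s) for s in population]
    for _ in range(steps):
        i = random.randrange(pop_size)
        if random.random() < 0.8:
            candidate = mutate(population[i])
        else:
            candidate = crossover(population[i], random.choice(population))
        new_score = surrogate_fitness(candidate)
        # Metropolis acceptance: always keep improvements, occasionally
        # accept worse sequences so the search can escape local optima.
        accept = math.exp(min(0.0, (new_score - scores[i]) / temperature))
        if random.random() < accept:
            population[i], scores[i] = candidate, new_score
    return max(zip(scores, population))

best_score, best_seq = emc_search()
print(f"best surrogate score {best_score:.2f}: {best_seq}")
```

The mix of local point mutations with population-level crossover moves is what distinguishes evolutionary Monte Carlo from plain simulated annealing: the population shares building blocks while the acceptance rule keeps the search exploring rather than greedily climbing.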

Researchers found that Evolutionary Monte Carlo Search not only improved the sequence diversity and novelty of designed sequences, but also dramatically improved the hit rate for finding functional gene activators, compared both with more traditional machine-learning approaches and with the outputs of high-throughput screening.

This approach therefore holds promise for a number of diverse protein engineering challenges, and has the potential to accelerate the design of novel and active proteins for a variety of purposes including therapeutics.

About Epic Bio

Epic Bio is a leading epigenetic editing company, leveraging the power of CRISPR without cutting DNA. The company's proprietary Gene Expression Modulation System (GEMS) includes the smallest Cas protein known to work in human cells, enabling in vivo delivery via a single AAV vector. Epic's lead program, EPI-321, is in IND-enabling studies for treatment of facioscapulohumeral muscular dystrophy (FSHD); additional programs seek to address alpha-1 antitrypsin deficiency (A1AD), heterozygous familial hypercholesterolemia (HeFH), and other indications. Visit www.epic-bio.com for more information or follow us on Twitter and LinkedIn.

Investor Contact

Shawn M. Cox
Epic Bio
Manager, Investor Relations and Corporate Communications
shawn.cox@epic-bio.com

Media Contact

Lisa Raffensperger
Ten Bridge Communications
lisa@tenbridgecommunications.com
(617) 903-8783


Scientists used machine learning to perform quantum error correction – Tech Explorist

The qubits that make up quantum computers can assume any superposition of the computational basis states. Together with quantum entanglement, another quantum property that joins several qubits in ways that go beyond what is possible with classical connections, this allows quantum computers to perform entirely new kinds of tasks.

The extraordinary fragility of quantum superpositions is the primary obstacle to the practical implementation of quantum computers. Minor disturbances, such as those caused by the environment's pervasive presence, produce errors that quickly shatter quantum superpositions. As a result, quantum computers lose their competitive advantage.

To overcome this obstacle, sophisticated methods for quantum error correction have been developed. While they can neutralize the effect of errors, they often come with a massive overhead in device complexity.

In a new study, scientists from the RIKEN Center for Quantum Computing used machine learning to perform error correction for quantum computers. Through this, they took a step forward in making these devices practical.

In particular, scientists used an autonomous correction system that, despite being approximate, can efficiently determine how best to make the necessary corrections.

In this study, the researchers used machine learning to search for error-correction schemes with low device overhead and high error-correcting performance. They focused on an autonomous approach to quantum error correction, in which a carefully engineered artificial environment replaces the need for regular error-detecting measurements. They also studied bosonic qubit encodings, which are used, for example, in some of the most promising and common quantum computing devices available today, built on superconducting circuits.

The vast search space of bosonic qubit encodings poses a challenging optimization problem, which the scientists tackled with reinforcement learning, a machine learning technique in which an agent explores a (possibly abstract) environment to learn and iteratively improve its action policy. The team discovered that a surprisingly simple, approximate qubit encoding could not only significantly reduce device complexity compared with previously proposed encodings, but also outperform its rivals in its capacity to correct errors.
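The paper's actual agent and reward are not reproduced here, but the flavor of reinforcement learning over a discrete design space can be sketched as a REINFORCE-style bandit. The candidate encodings and their reward function below are invented stand-ins for the simulated error-correction performance an agent would actually optimize.

```python
import numpy as np

rng = np.random.default_rng(0)
n_encodings = 8  # discrete candidates standing in for bosonic code parameters

def measured_reward(action):
    # Toy stand-in for an encoding's error-correcting performance; in the
    # real setting this would come from simulating the qubit's noisy
    # dynamics under the chosen code.
    quality = np.array([0.2, 0.5, 0.9, 0.4, 0.3, 0.7, 0.1, 0.6])
    return quality[action] + rng.normal(0.0, 0.05)

prefs = np.zeros(n_encodings)  # softmax policy preferences
baseline, lr = 0.0, 0.1

for step in range(2000):
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    action = rng.choice(n_encodings, p=probs)
    r = measured_reward(action)
    baseline += 0.01 * (r - baseline)    # running average reduces variance
    grad = -probs                        # d log pi(action) / d prefs ...
    grad[action] += 1.0                  # ... for a softmax policy
    prefs += lr * (r - baseline) * grad  # REINFORCE policy-gradient step

print("preferred encoding:", int(np.argmax(prefs)))
```

The agent never sees the "true" quality table directly; it only observes noisy rewards, yet its policy concentrates on the best encoding. The actual search space in the paper is far larger and the reward far costlier to evaluate, which is precisely why learned exploration pays off.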

Yexiong Zeng, the first author of the paper, says, "Our work not only demonstrates the potential for deploying machine learning towards quantum error correction, but it may also bring us a step closer to the successful implementation of quantum error correction in experiments."

According to Franco Nori, "Machine learning can play a pivotal role in addressing large-scale quantum computation and optimization challenges. Currently, we are actively involved in several projects that integrate machine learning, artificial neural networks, quantum error correction, and quantum fault tolerance."


Money, markets and machine learning: Unpacking the risks of adversarial AI – The Hill

It is impossible to ignore the critical role that artificial intelligence (AI) and its subset, machine learning, play in the stock market today.

While AI refers to machines that can perform tasks that would normally require human intelligence, machine learning (ML) involves learning patterns from data, which enhances the machines’ ability to make predictions and decisions.

One of the main ways the stock market uses machine learning is in algorithmic trading. ML models recognize patterns in vast amounts of financial data, then make trades based on those patterns — thousands upon thousands of trades, in small fractions of a second. These models learn continually, adjusting their predictions and actions as new data arrives, which can sometimes lead to phenomena like flash crashes, when certain patterns instigate a feedback loop that sends segments of the market into a sudden freefall.
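To make that pattern-learning concrete, here is a minimal sketch on synthetic data. The momentum effect is injected artificially so the model has something to find; the features, labels, and trade rule are illustrative only and resemble no firm's actual strategy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic return series with a weak momentum pattern baked in,
# standing in for the market data a real system would ingest.
returns = rng.normal(0.0, 0.01, 1000)
returns[1:] += 0.2 * returns[:-1]  # part of today's move leaks into tomorrow

# Features: the three most recent returns; label: is the next return positive?
lags = 3
X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
y = (returns[lags:] > 0).astype(int)

split = 800
model = LogisticRegression().fit(X[:split], y[:split])

# Trade rule: go long when the model predicts an up move, stay flat otherwise.
signals = model.predict(X[split:])
strategy = signals * returns[lags:][split:]
print(f"hit rate {model.score(X[split:], y[split:]):.2f}, "
      f"mean return per trade {strategy.mean():.5f}")
```

A production system differs mainly in scale: richer features, continual retraining, and execution measured in microseconds, which is exactly what makes the feedback loops described above possible.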

Algorithmic trading, despite its occasional drawbacks, has become indispensable to our financial system. It has enormous upside, which is another way of saying that it makes some people an awful lot of money. According to the technology services company Exadel, banks stand to save $1 trillion by 2030 thanks to algorithmic trading.

Such reliance on machine learning models in finance is not without risks, however — risks beyond flash crashes, even.

One significant and underappreciated threat to these systems comes from what are known as adversarial attacks. These occur when malevolent actors manipulate the input data fed to an ML model, causing it to make bad predictions.
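As a toy illustration of input manipulation, the sketch below applies an FGSM-style perturbation to a simple logistic-regression model. The gradient-sign technique is a standard example chosen here for illustration; the article does not name a specific method, and the two-feature "signal" data is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy model: classify inputs from two features, e.g. a buy/sell signal.
X = rng.normal(0.0, 1.0, (500, 2))
y = (X @ np.array([1.0, -0.5]) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def evade(x, label, eps=0.5):
    # FGSM-style perturbation: for logistic regression the loss gradient
    # with respect to the input is (p - y) * w, so stepping along its sign
    # is the cheapest way to push the model toward a wrong prediction.
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    grad = (p - label) * model.coef_[0]
    return x + eps * np.sign(grad)

# Pick a point near the decision boundary, where small nudges flip outputs.
i = int(np.argmin(np.abs(model.decision_function(X))))
x_clean = X[i]
x_adv = evade(x_clean, y[i])
print("clean:", model.predict([x_clean])[0],
      " adversarial:", model.predict([x_adv])[0])
```

The perturbation is tiny relative to the data's natural variation, which is what makes such attacks hard to spot in a live trading pipeline.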

One form of adversarial attack is known as “data poisoning,” wherein bad actors introduce “noise” — or false data — into the model's training input. Training on this poisoned data can then cause the model to misclassify whole swaths of inputs. For instance, a credit card fraud system might wrongly attribute fraudulent activity where there has been none.
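A minimal sketch of label poisoning, on an invented two-feature "transactions" dataset, shows how flipping a slice of training labels inflates a fraud model's false-positive rate, mirroring the example above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "transactions": two features, label 1 = fraudulent.
X = rng.normal(0.0, 1.0, (2000, 2))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
X_tr, y_tr = X[:1500], y[:1500].copy()
X_te, y_te = X[1500:], y[1500:]

clean = LogisticRegression().fit(X_tr, y_tr)

# Poisoning: the attacker relabels a slice of legitimate training rows as
# fraud, nudging the model toward flagging normal activity.
legit_rows = np.where(y_tr == 0)[0]
flipped = rng.choice(legit_rows, int(0.3 * len(legit_rows)), replace=False)
y_tr[flipped] = 1

poisoned = LogisticRegression().fit(X_tr, y_tr)

def false_positive_rate(model):
    # Share of genuinely legitimate test transactions flagged as fraud.
    return model.predict(X_te[y_te == 0]).mean()

print(f"false-positive rate, clean model:    {false_positive_rate(clean):.2f}")
print(f"false-positive rate, poisoned model: {false_positive_rate(poisoned):.2f}")
```

Nothing about the test data changed; the damage was done entirely upstream, at training time, which is why poisoning is so hard to detect after deployment.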

Such manipulations are not just a theoretical threat. The effects of data poisoning and adversarial attacks have broad implications across machine learning applications, including financial forecast models. Researchers at the University of Illinois, IBM, and other institutions demonstrated in one study that financial forecast models are vulnerable to adversarial attacks. According to their findings, these attacks could lead to suboptimal trading decisions, resulting in losses of 23 percent to 32 percent for investors. The study highlights the potential severity of these threats and underscores the need for robust defenses against adversarial attacks.

The financial industry’s reaction to these attacks has often been reactive — a game of whack-a-mole in which defenses are raised only after an attack has occurred. However, given that these threats are inherent in the very structure of ML algorithms, a more proactive approach is the only way of meaningfully addressing this ongoing problem.

Financial institutions need to implement robust and efficient testing and evaluation methods that can detect potential weaknesses and safeguard against these attacks. Such implementation could involve rigorous testing procedures, employing “red teams” to simulate attacks, and continually updating the models to ensure they’re not compromised by malicious actors or poor data.

The consequences of ignoring the problem of adversarial attacks in algorithmic trading are potentially catastrophic, from significant financial losses to damaged reputations for firms, or even widespread economic disruption. In a world increasingly reliant on ML models, the financial sector needs to shift from being reactive to proactive to ensure the security and integrity of our financial system.

Joshua Steier is a technical analyst, and Sai Prathyush Katragadda is a data scientist, at the nonprofit, nonpartisan RAND Corporation.


3 Up-and-Coming Machine Learning Stocks to Put on Your Must-Buy List – InvestorPlace


Stocks connected to machine learning are synonymous with those connected to artificial intelligence. Machine learning falls under the umbrella of AI and relates to the use of data and algorithms to imitate human learning to improve accuracy. Kinda scary? Sure. However, machine learning is also proving to be revolutionary in 2023. The emergence of generative AI and its promise to improve our world has created a lot of value. This has led to the rise of machine learning stocks to buy.

While the companies discussed in this article might not be truly up-and-coming as they are established, they certainly are improving. That makes them must-buy stocks that any investor ought to consider.


There are 13.5 billion reasons why Nvidia (NASDAQ:NVDA) stock should be on every investor's list. I'm of course referring to Nvidia's $13.5 billion in second-quarter revenues. That far exceeded the $11 billion mark, perceived as incredibly ambitious, that Nvidia had given as guidance.

Those blowout earnings lend credence to the notion that AI and machine learning will be much more than a bubble. Instead, it is crystal clear that companies are clamoring for Nvidias leading AI chips and that the pace of things is increasing, not slowing.

Nvidia's data center revenues alone, at $10.32 billion, nearly reached that $11 billion figure. Cloud firms are scrambling to secure their supply of its chips, which are used for machine learning purposes, among other things.

NVDA shares can absolutely run higher from their current position. The stock's price-to-earnings ratio has temporarily fallen, given how unexpectedly high earnings were. Nvidia is predicting $16 billion in revenues for the coming quarter. I don't believe there's any real reason to back off from its shares currently.


Crowdstrike (NASDAQ:CRWD) is another machine learning stock to consider. The company utilizes machine learning to help it better understand how to stop breaches before they can occur. It's an AI-powered cybersecurity firm that is strongly rated on Wall Street and offers a lot of upside on that basis.

Crowdstrike is getting better and better at thwarting cyber attacks, probably by the second. Machine learning allows the company to prevent cyber attacks more intelligently with each piece of data it gathers from an attack.

The company has been growing at a rapid pace over the last few years, with year-over-year increases above 40% in each of those periods. However, it has simultaneously struggled to find profitability, which likely explains the disconnect between its share price and analysts' price targets.

Crowdstrike has several opportunities in front of it. First, if it can address profitability concerns, it's certain to appreciate in price. Second, there's a general rush toward securing systems that also benefits the company and should provide it fertile ground for future gains.


AMD (NASDAQ:AMD) is the runner-up in the battle for machine learning supremacy at this point.

The stock has boomed in 2023 alongside Nvidia but not to the same degree. It is going to continue to crop up in the machine learning/AI conversation and absolutely makes sense as an investment now.

Let's try to understand AMD in relation to machine learning and its strengths and weaknesses vis-a-vis Nvidia. By now, everyone knows that Nvidia wins the overall battle hands down. When it comes to CPUs, AMD has a lot to offer. Its CPUs, along with those from Intel (NASDAQ:INTC), are the highest rated for machine learning purposes.

However, GPUs outperform CPUs when it comes to machine learning, and Nvidia is the king of GPUs: its machine learning GPUs hold at least the top five spots in one published ranking.

As bad as that sounds, AMD is roughly 80% as capable as Nvidia overall when it comes to AI and machine learning. Therefore, it has a massive opportunity at hand in closing that gap. It's also one of those machine learning stocks to buy.

On the date of publication, Alex Sirois did not have (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries from e-commerce to translation to education and utilizing his MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.


Smarter AI: Choosing the Best Path to Optimal Deep Learning – SciTechDaily

Researchers have improved deep learning by selecting the most efficient overall path to the output, leading to a more effective AI without added layers.

Like climbing a mountain via the shortest possible path, improving classification tasks can be achieved by choosing the most influential path to the output, and not just by learning with deeper networks.

Deep learning (DL) performs classification tasks using a series of layers. To execute these tasks effectively, local decisions are made progressively along the layers. But can we make an all-encompassing decision by choosing the most influential path to the output, rather than making these decisions locally?
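For readers unfamiliar with the layered structure being described, here is a minimal sketch of a classifier built from a series of layers. The weights are untrained toy values; the point is only to show how each layer applies a local transformation on the way to the final class decision.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A minimal three-layer classifier with untrained toy weights: each layer
# applies a linear map and a nonlinearity, progressively transforming the
# input, and the final layer's largest score is the class decision.
shapes = [(16, 32), (32, 32), (32, 10)]  # (inputs, outputs) per layer
weights = [rng.normal(0.0, 0.1, s) for s in shapes]

def classify(x):
    h = x
    for W in weights[:-1]:
        h = relu(h @ W)        # hidden layers: local, progressive decisions
    logits = h @ weights[-1]   # output layer: one score per class
    return int(np.argmax(logits))

x = rng.normal(0.0, 1.0, 16)   # a dummy 16-dimensional input
print("predicted class:", classify(x))
```

The Bar-Ilan result concerns how such a stack is trained: rather than letting each layer's decision emerge purely locally, the most influential paths from input to output are identified and updated directly.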

In an article published today (August 31) in the journal Scientific Reports, researchers from Bar-Ilan University in Israel answer this question with a resounding yes. Pre-existing deep architectures have been improved by updating the most influential paths to the output.

Like climbing a mountain via the shortest possible path, improving classification tasks can be achieved by training the most influential path to the output, and not just by learning with deeper networks. Credit: Prof. Ido Kanter, Bar-Ilan University

"One can think of it as two children who wish to climb a mountain with many twists and turns. One of them chooses the fastest local route at every intersection while the other uses binoculars to see the entire path ahead and picks the shortest and most significant route, just like Google Maps or Waze. The first child might get a head start, but the second will end up winning," said Prof. Ido Kanter, of Bar-Ilan's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.

"This discovery can pave the way for better enhanced AI learning, by choosing the most significant route to the top," added Yarden Tzach, a PhD student and one of the key contributors to this work.

This exploration of a deeper comprehension of AI systems by Prof. Kanter and his experimental research team, led by Dr. Roni Vardi, aims to bridge the biological world and machine learning, thereby creating an improved, advanced AI system. To date they have discovered evidence for efficient dendritic adaptation using neuronal cultures, as well as how to implement those findings in machine learning, showing how shallow networks can compete with deep ones, and finding the mechanism underlying successful deep learning.

Enhancing existing architectures using global decisions can pave the way for improved AI, which can improve its classification tasks without the need for additional layers.

Reference: "Enhancing the accuracies by performing pooling decisions adjacent to the output layer," 31 August 2023, Scientific Reports. DOI: 10.1038/s41598-023-40566-y
