Archive for the ‘Machine Learning’ Category

Machine learning drafted to aid Phase 3 testing of ALS therapy PrimeC – ALS News Today

NeuroSense Therapeutics is collaborating with PhaseV for insights into how to better design the protocol for the planned Phase 3 trial that will test PrimeC for amyotrophic lateral sclerosis (ALS).

A specialist in machine learning technology for clinical trials, PhaseV used data from the ongoing Phase 2b PARADIGM trial (NCT05357950) as input to a causal machine learning model. This is a form of artificial intelligence that can help unlock insights and identify features that may contribute to a treatment response.

As part of its independent analysis, the company found that PrimeC could work well in multiple subgroups of patients in the Phase 3 study, which should start in the coming months.

Being able to predict treatment outcomes in certain patients may help optimize the design of the upcoming trial by selecting the patients most likely to respond, while reducing costs.

"ALS is a complex disease that manifests in unique ways in each patient. Although there is an improved understanding of the underlying mechanisms of ALS, therapeutic options remain limited," Raviv Pryluk, CEO and co-founder of PhaseV, said in a press release.

NeuroSense plans to submit an end-of-Phase 2 package for review by the U.S. Food and Drug Administration (FDA) and the European Medicines Agency, the FDA's European counterpart, and discuss the clinical protocol for the Phase 3 trial with the regulators.

"There remains a critical need for new innovative approaches to address this devastating neurodegenerative disease," said Alon Ben-Noon, CEO of NeuroSense. "We plan to continue to collaborate with PhaseV as we develop our Phase 3 trial."

PrimeC contains fixed doses of two FDA-approved oral medications: the antibiotic ciprofloxacin and celecoxib, a painkiller that reduces inflammation. The two are expected to work together to slow or stop disease progression by blocking key mechanisms implicated in ALS, such as inflammation, iron accumulation, and impaired RNA processing.

PARADIGM is testing a long-acting formulation of PrimeC in 68 adults with ALS who started to see symptoms up to 2.5 years before enrolling. While continuing their standard ALS treatments, the participants were randomly assigned to PrimeC or a placebo, taken as two tablets twice daily for six months.

An analysis of PARADIGM's per-protocol population (62 adults with ALS who adhered well to the clinical protocol) showed a significant 37.4% reduction in functional decline, as measured by the ALS Functional Rating Scale-Revised (ALSFRS-R).

A subgroup of those patients who were at a higher risk for rapid disease progression had the most clinical benefit, with those treated with PrimeC for six months showing a significant 43% reduction in functional decline versus a placebo. High-risk patients made up about half the adults in the Phase 2b trial.

Another subgroup of newly diagnosed patients, who'd had their first symptoms of ALS within a year of enrollment, showed a 52% reduction in the rate of disease progression. This translated to a 7.76-point difference in favor of PrimeC on the ALSFRS-R, which has a maximum total of 48 points.

"Through our initial collaboration with PhaseV, we gained an even greater understanding of the effect of PrimeC across multiple patient subgroups," Ben-Noon said. "We will apply these insights to optimize the design of our Phase 3 study with the aim of maximizing meaningful clinical results that will differentiate PrimeC in the market."

"Through a unique combination of causal [machine learning], real-world data, and advanced statistical methods, we confirmed the potential clinical benefit of PrimeC," Pryluk said. "Our analysis predicted a high rate of success for PrimeC in the Phase 3 clinical trial for multiple recommended subgroups."

Go here to read the rest:
Machine learning drafted to aid Phase 3 testing of ALS therapy PrimeC - ALS News Today

Bolstering environmental data science with equity-centered approaches – EurekAlert

Image: Graphical abstract. Credit: Joe F. Bozeman III

A paradigm shift towards integrating socioecological equity into environmental data science and machine learning (ML) is advocated in a new perspective article (DOI: 10.1007/s11783-024-1825-2) published in Frontiers of Environmental Science & Engineering. Authored by Joe F. Bozeman III from the Georgia Institute of Technology, the paper emphasizes the importance of understanding and addressing socioecological inequity to enhance the integrity of environmental data science.

This study introduces and validates the Systemic Equity Framework and the Wells-Du Bois Protocol, essential tools for integrating equity in environmental data science and machine learning. These methodologies extend beyond traditional approaches by emphasizing socioecological impacts alongside technical accuracy. The Systemic Equity Framework focuses on the concurrent consideration of distributive, procedural, and recognitional equity, ensuring fair benefits for all communities, particularly the marginalized. It encourages researchers to embed equity throughout the project lifecycle, from inception to implementation. The Wells-Du Bois Protocol offers a structured method to assess and mitigate biases in datasets and algorithms, guiding researchers to critically evaluate potential societal bias reinforcement in their work, which could lead to skewed outcomes.

Highlights

Socioecological inequity must be understood to improve environmental data science.

The Systemic Equity Framework and Wells-Du Bois Protocol mitigate inequity.

Addressing irreproducibility in machine learning is vital for bolstering integrity.

Future directions include policy enforcement and systematic programming.

"Our work is not just about improving technology but ensuring it serves everyone justly," said Joe F. Bozeman III, lead researcher and professor at Georgia Institute of Technology. "Incorporating an equity lens into environmental data science is crucial for the integrity and relevance of our research in real-world settings."

This pioneering research not only highlights existing challenges in environmental data science and machine learning but also offers practical solutions to overcome them. It sets a new standard for conducting research that is just, equitable, and inclusive, thereby paving the way for more responsible and impactful environmental science practices.

Journal: Frontiers of Environmental Science & Engineering

Method of Research: Experimental study

Subject of Research: Not applicable

Article Title: Bolstering integrity in environmental data science and machine learning requires understanding socioecological inequity

Article Publication Date: 8-Feb-2024


Read this article:
Bolstering environmental data science with equity-centered approaches - EurekAlert

Machine Learning Stocks to Buy That Are Millionaire-Makers: May – InvestorPlace

Source: Wright Studio / Shutterstock.com

The next phase of technology has been established: machine learning and AI will revolutionize the world for the better. Although it might seem like these stocks are trading in a bubble, investors need to keep a discerning, long-term vision for these disruptive, emerging technologies. One way or another, AI will grow into a secular movement that nearly every industry, if not every company, in the world will incorporate to increase productivity and efficiency.

Of course, anxiety about an AI bubble is not unwarranted. Preparing a well-diversified portfolio of the right stocks is crucial to avoid major drawdowns. Just because a company mentions AI doesn't mean it instantly becomes a good investment. We've already seen this with pullbacks in industries like EVs and fintech. So, if you want to gain machine learning exposure in your portfolio, consider these three machine learning stocks to buy and thank us in the coming five or ten years.

Source: Ascannio / Shutterstock.com

Palantir (NYSE:PLTR) went from a meme stock to a legitimate business, earning hundreds of millions each year in profits. The stock is trading right at the average analyst price target of $21.45 and has a street-high price target of $35.00. This high-end target represents a more than 60% upside from the current price.

This stock has been polarizing on Wall Street since its direct listing debut in September 2020. While the first few years were a roller coaster ride for investors, the stock is earning legitimate backing through its machine-learning-integrated production deployment infrastructure. Additionally, the hype doesn't get any more legit than Stanley Druckenmiller, who disclosed that he bought nearly 770,000 shares in the recent quarter. For those who don't know him, Druckenmiller has long supported the ML revolution, with NVIDIA (NASDAQ:NVDA) being his most recent win during its massive rally over the past year.

The problem with Palantir has always been its valuation. Currently, shares trade at 21x sales and 65x forward earnings. Nonetheless, growth prospects are looking strong now, with revenue growing at a five-year compound annual growth rate (CAGR) of 12% and a three-year CAGR of 21%. As multiples begin to compress, investors should consider Palantir to be a legitimate money-making contender in the ML space.
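For readers unfamiliar with the growth metric cited above, compound annual growth rate (CAGR) is the constant yearly rate that would take a beginning value to an ending value over a given period. The revenue figures in this sketch are hypothetical, chosen only to illustrate a roughly 12% five-year CAGR like the one mentioned:

```python
# CAGR = (ending / beginning) ** (1 / years) - 1
# The revenue figures below are hypothetical, for illustration only.

def cagr(beginning: float, ending: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (ending / beginning) ** (1 / years) - 1

# Revenue growing from a hypothetical 100 to 176.2 over five years:
rate = cagr(100.0, 176.2, 5)
print(f"{rate:.1%}")  # about 12.0% per year
```

The same formula applies to any start/end pair, which is why CAGR is the standard way to compare growth across companies and time frames.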

Baidu (NASDAQ:BIDU) is a Chinese technology company that recently amassed over 200 million users on its new Ernie AI chatbot. This year, the stock is down by about 4.0% as Chinese stocks have lagged the broader rally in US equities. Nonetheless, Wall Street has maintained an average analyst price target of $153.36, about 40% higher than the current price.

Baidu recently made headlines after reports that it is interested in partnering with Tesla (NASDAQ:TSLA) to deploy Tesla's robotaxis in China. As China looks to get its hands on some for immediate rollout, investors should keep their eyes peeled for the unveiling of the CyberCabs in America this August. Not only could this become one of the strongest new channels of revenue growth for both companies, but Baidu's race for first-mover advantage could solidify it as a leader in the Chinese automobile space.

As with many Chinese ADR stocks, the multiples for BIDU are low. For example, its P/E ratio of 9.79x sits 25% below its sector's median. On top of such a discounted valuation, Baidu has maintained a strong 10-year revenue CAGR of 14%. Baidu looks like a bargain for investors who can tolerate the risk that comes with Chinese stocks.

Micron Technology (NASDAQ:MU) is an American chipmaker experiencing a major surge in demand due to AI and machine learning technology. Analysts are bullish on MU, with 28 of 31 recommendations in May coming in as a Buy or Strong Buy rating. The average analyst price target is $145.52, nearly 15% higher than the current price.

This chipmaker has already hit new all-time highs this month and is seeing revitalized product demand. This growth potential has largely been attributed to Micron being one of only three companies in the world that make DRAM memory chips. These chips allow for storing massive amounts of data, which will help accelerate the training of AI and machine learning technologies. DRAM chips accounted for 71% of Micron's revenue as of Q2 2024, which bodes well for the stock's upward momentum.

Usually, when a stock trades at all-time highs, its valuation is also stretched. That's not exactly true for Micron, as shares are trading at just 7.5x sales and 17x forward earnings. As revenue growth accelerates, Micron sticks out as one of the more under-the-radar ways to gain exposure to AI and potentially join the million-dollar club.

On the date of publication, Ian Hartana and Vayun Chugh did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Chandler Capital is the work of Ian Hartana and Vayun Chugh. Ian Hartana and Vayun Chugh are both self-taught investors whose work has been featured in Seeking Alpha. Their research primarily revolves around GARP stocks with a long-term investment perspective encompassing diverse sectors such as technology, energy, and healthcare.

See the article here:
Machine Learning Stocks to Buy That Are Millionaire-Makers: May - InvestorPlace

AI-Driven Robotics and Quality Control: Transforming Electronics Assembly and Manufacturing – Robotics and Automation News

In the ever-evolving landscape of manufacturing, especially electronics assembly, AI-driven robotics is transforming the nature of quality control.

By using deep learning, these systems can detect patterns and defects that humans cannot, which provides ample opportunity for companies to leap ahead of their competition.

As AI continues to integrate into manufacturing, its role in ensuring stringent quality standards, enhancing inspection capabilities, and maintaining compliance with IPC (Institute for Printed Circuits) standards becomes increasingly vital.

AI-driven robotics significantly enhance quality control by using advanced algorithms and machine learning to detect defects more accurately than human inspectors.

These systems can analyse large volumes of data in real time, identify patterns, and make decisions based on stringent quality standards. This is important in the automotive, pharmaceutical, and electronic assembly industries, where even minor defects can lead to substantial issues.

The AI algorithms behind modern quality control depend on machine learning and deep learning. Machine learning involves developing algorithms and datasets that allow AI to perform tasks without explicit instructions.

With machine learning, AI can learn from data and make decisions independently. Deep learning is a type of machine learning that uses layered neural networks to interpret complicated data. This enables AI to understand and learn from images, text, video, and audio.

Machine learning, and especially deep learning, is used to train AI on large datasets so that it can detect patterns and anomalies. This is then applied to quality control.

For example, in the electronics manufacturing industry, AI can be trained to detect soldering defects, missing components, and alignment issues. These algorithms continuously learn from new data, improving their accuracy over time.

One of the significant advantages of AI-driven robotics is the ability to process and analyse data in real time. High-speed cameras and sensors capture detailed images and measurements of products as they move through the production line.

AI systems then quickly analyse this data, instantly identifying defects and deviations from quality standards. This immediate feedback allows for quick corrective actions, reducing the number of quality issues in manufacturing.

AI systems excel in pattern recognition, which is crucial for detecting defects that may not be obvious to human inspectors. For example, AI can identify subtle variations in texture, colour, or shape that could indicate a potential issue.

This is particularly important in industries like electronics and automotive manufacturing, where precision and consistency are absolutely vital. Pattern recognition capabilities ensure that even the smallest defects are detected and addressed promptly.

IPC is a trade association that standardises the assembly and production of electronic equipment and assemblies, and it is the bedrock of modern electronics manufacturing's success. AI robotics offers a number of significant opportunities to further enhance IPC standards moving forward:

The Future of Quality Control in Electronics Manufacturing: As AI-driven robotics evolve, their role in quality control will expand. Future advancements may include sophisticated machine learning models for predictive maintenance, reducing downtime and enhancing production efficiency.

As AI-driven robotics continue to evolve, their role in quality control within electronics manufacturing is ready for significant expansion in several different ways.

Combining Internet of Things (IoT) devices with AI-driven robotics will revolutionise quality control. IoT sensors can monitor production environments in real time, providing data that AI systems analyse to maintain optimal conditions.

Predictive maintenance, powered by this integration, will allow for timely repairs, minimising production disruptions.

AI systems that learn and adapt to new products and manufacturing methods offer greater flexibility and efficiency. These adaptive systems can adjust in real-time based on data analysis, ensuring quality control processes evolve with production innovations.

Advanced machine learning models will continually improve from new data, enhancing their defect detection and process optimisation capabilities.

Predictive analytics will become integral to quality control. By analysing historical and real-time data, AI can foresee potential issues, enabling proactive measures. This reduces unexpected downtime and optimises production schedules, leading to more efficient operations.

Future quality control will see increased collaboration between humans and robots. Collaborative robots (cobots) will handle repetitive or hazardous tasks, while humans focus on complex and creative quality control aspects.

This synergy will enhance productivity and job satisfaction, creating a more skilled workforce that can adapt to advanced technologies.

There can be no debate that AI-driven robotics is revolutionising the nature of quality control in manufacturing, especially for electronics assembly. These systems provide consistent, precise inspections and reduce human error to ensure strict adherence to IPC standards.

As AI technology continues to develop, the integration of IoT devices, adaptive quality control systems, and predictive analytics will further increase efficiency and reduce downtime.

Human-robot collaboration will also play a key role, combining the strengths of both to achieve higher productivity and quality standards.

Go here to see the original:
AI-Driven Robotics and Quality Control: Transforming Electronics Assembly and Manufacturing - Robotics and Automation News

Machine Learning vs. Deep Learning: What’s the Difference? – Gizmodo

Artificial intelligence is everywhere these days, but the fundamentals of how this influential new technology works can be difficult to wrap your head around. Two of the most important fields in AI development are machine learning and its sub-field, deep learning, although the terms are sometimes used interchangeably, leading to a certain amount of confusion. Here's a quick explanation of what these two important disciplines are and how they're contributing to the evolution of automation.


Proponents of artificial intelligence say they hope to someday create a machine that can think for itself. The human brain is a magnificent instrument, capable of making computations that far outstrip the capacity of any currently existing machine. Software engineers involved in AI development hope to eventually make a machine that can do everything a human can do intellectually but can also surpass it. Currently, the applications of AI in business and government largely amount to predictive algorithms, the kind that suggest your next song on Spotify or try to sell you a similar product to the one you bought on Amazon last week. However, AI evangelists believe that the technology will, eventually, be able to reason and make decisions that are much more complicated. This is where ML and DL come in.

Machine learning (or ML) is a broad category of artificial intelligence that refers to the process by which software programs are taught how to make predictions or decisions. One IBM engineer, Jeff Crume, explains machine learning as "a very sophisticated form of statistical analysis." According to Crume, this analysis allows machines to make predictions or decisions based on data. "The more information that is fed into the system, the more it's able to give us accurate predictions," he says.
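Crume's point that more data yields more accurate predictions can be illustrated with a toy statistical estimator. In this sketch (all values hypothetical), we estimate the mean of a noisy process from a growing number of samples; with more samples, the estimate tends to land closer to the true value:

```python
import random

# Toy illustration of "more data -> better predictions": estimate the mean of
# a noisy process from n samples. The true mean (5.0) and the noise range are
# hypothetical values chosen for this sketch.
random.seed(0)  # reproducible
true_mean = 5.0
samples = [true_mean + random.uniform(-2, 2) for _ in range(1000)]

for n in (10, 100, 1000):
    estimate = sum(samples[:n]) / n
    print(n, estimate)  # estimates cluster around 5.0 as n grows
```

This is the simplest possible "predictor," but the principle scales: statistical models in general become better calibrated as the volume of representative data grows.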

Unlike general programming where a machine is engineered to complete a very specific task, machine learning revolves around training an algorithm to identify patterns in data by itself. As previously stated, machine learning encompasses a broad variety of activities.

Deep learning is machine learning. It is one of the previously mentioned sub-categories of machine learning that, like other forms of ML, focuses on teaching AI to "think." Unlike some other forms of machine learning, DL seeks to allow algorithms to do much of their work on their own. DL is fueled by mathematical models known as artificial neural networks (ANNs). These networks seek to emulate processes that naturally occur within the human brain, such as decision-making and pattern identification.

One of the biggest differences between deep learning and other forms of machine learning is the level of supervision that a machine is provided. In less complicated forms of ML, the computer is likely engaged in supervised learning: a process whereby a human helps the machine recognize patterns in labeled, structured data and thereby improve its ability to carry out predictive analysis.

Machine learning relies on huge amounts of training data. Such data is often compiled by humans via data labeling (many of those humans are not paid very well). Through this process, a training dataset is built, which can then be fed into the AI algorithm and used to teach it to identify patterns. For instance, if a company was training an algorithm to recognize a specific brand of car in photos, it would feed the algorithm huge tranches of photos of that car model that had been manually labeled by human staff. A testing dataset is also created to measure the accuracy of the machine's predictive powers once it has been trained.
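The labeling-and-splitting workflow described above can be sketched as follows. The file names, labels, and the 80/20 train/test split are illustrative assumptions, not details from the article:

```python
import random

# Hypothetical labeled dataset: (photo, is_target_car) pairs, as if produced
# by human labelers tagging which photos contain the target car model.
labeled_data = [(f"photo_{i}.jpg", i % 2 == 0) for i in range(100)]

# Shuffle, then hold out 20% as a testing set. The model never trains on the
# testing set, so accuracy measured there estimates real predictive power.
random.seed(0)  # reproducible sketch
random.shuffle(labeled_data)
split = int(len(labeled_data) * 0.8)
training_set = labeled_data[:split]
testing_set = labeled_data[split:]

print(len(training_set), len(testing_set))  # 80 training, 20 testing examples
```

Keeping the testing set untouched during training is the key design choice: evaluating on data the model has already seen would overstate its accuracy.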

When it comes to DL, meanwhile, a machine engages in a process called unsupervised learning. Unsupervised learning involves a machine using its neural network to identify patterns in what is called unstructured or raw data: data that hasn't yet been labeled or organized into a database. Companies can use automated algorithms to sift through swaths of unorganized data and thereby avoid large amounts of human labor.
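To make "finding patterns without labels" concrete, here is a minimal unsupervised-learning sketch: a tiny k-means clustering that groups one-dimensional points into two clusters with no human-provided labels. The data values are hypothetical, and real unsupervised deep learning operates on far richer data, but the principle (the algorithm discovers the grouping itself) is the same:

```python
import random

def kmeans_1d(points, k=2, iterations=10):
    """Cluster 1-D points into k groups; no labels are ever supplied."""
    random.seed(1)  # reproducible sketch
    centers = random.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious groups in the raw data, but the algorithm is never told that:
data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]
print(kmeans_1d(data))  # centers settle near 1.0 and 10.1
```

This mirrors the article's point: given unorganized data, the algorithm itself discovers structure that would otherwise require human labeling effort.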

ANNs are made up of what are called nodes. According to MIT, one ANN can have thousands or even millions of nodes. These nodes can be a little bit complicated, but the shorthand explanation is that they, like neurons in the human brain, relay and process information. In a neural network, nodes are arranged in an organized form referred to as layers. Thus, deep learning networks involve multiple layers of nodes. Information moves through the network, and its interactions at each layer contribute to the machine's decision-making process when it is subjected to a human prompt.

Another key concept in ANNs is the weight, which one commentator compares to the synapses in a human brain. Weights, which are just numerical values, are distributed throughout an AI's neural network and help determine the ultimate outcome of that AI system's final output. Weights are informational inputs that help calibrate a neural network so that it can make decisions. MIT's deep dive on neural networks explains it thusly:

"To each of its incoming connections, a node will assign a number known as a weight. When the network is active, the node receives a different data item (a different number) over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node 'fires,' which in today's neural nets generally means sending the number (the sum of the weighted inputs) along all its outgoing connections."

In short: neural networks are structured to help an algorithm come to its own conclusions about data that has been fed to it. Based on its programming, the algorithm can identify helpful connections in large tranches of data, helping humans to draw their own conclusions based on its analysis.
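The node behavior in the MIT passage translates almost directly into code. In this sketch the inputs, weights, and threshold are hypothetical values chosen for illustration:

```python
def node_output(inputs, weights, threshold):
    """One node: multiply each input by its weight, sum the products, and
    'fire' (pass the sum along) only if the sum exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum if weighted_sum > threshold else None

# Hypothetical example: two incoming connections.
print(node_output([0.5, 0.8], [0.9, 0.4], threshold=0.6))  # sum ~0.77 > 0.6: fires
print(node_output([0.1, 0.2], [0.9, 0.4], threshold=0.6))  # sum ~0.17 < 0.6: no output
```

Real networks use smooth activation functions rather than a hard threshold, and learn the weights from data, but the multiply-sum-decide loop above is the core operation each node performs.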

Machine and deep learning help train machines to carry out predictive and interpretive activities that were previously only the domain of humans. This can have a lot of upsides, but the obvious downside is that these machines can (and, let's be honest, will) inevitably be used for nefarious, not just helpful, stuff: things like government and private surveillance systems and the continued automation of military and defense activity. But they're also, obviously, useful for consumer suggestions or coding and, at their best, for medical and health research. Like any other tool, whether artificial intelligence has a good or bad impact on the world largely depends on who is using it.

Read the original here:
Machine Learning vs. Deep Learning: What's the Difference? - Gizmodo