Archive for the ‘Machine Learning’ Category

Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis – AJMC.com Managed Markets Network

Machine learning was shown to identify patients with rheumatoid arthritis (RA) who have an increased chance of achieving a clinical response with sarilumab, with those selected also showing an inferior response to adalimumab, according to an abstract presented at ACR Convergence, the annual meeting of the American College of Rheumatology (ACR).

In prior phase 3 trials comparing the interleukin 6 receptor (IL-6R) inhibitor sarilumab with placebo and with the tumor necrosis factor α (TNF-α) inhibitor adalimumab, sarilumab appeared to provide superior efficacy for patients with moderate to severe RA. Although these results are promising, the researchers behind the abstract highlight that treatment of RA requires a more individualized approach to maximize efficacy and minimize the risk of adverse events.

The characteristics of patients who are most likely to benefit from sarilumab treatment remain poorly understood, noted researchers.

Seeking to better identify the patients with RA who may benefit most from sarilumab treatment, the researchers applied machine learning to select from a predefined set of patient characteristics, which they hypothesized may help delineate the patients who could benefit most from either anti-IL-6R or anti-TNF-α treatment.

Following their extraction of data from the sarilumab clinical development program, the researchers utilized a decision tree classification approach to build predictive models on ACR response criteria at week 24 in patients from the phase 3 MOBILITY trial, focusing on the 200-mg dose of sarilumab. They incorporated the Generalized, Unbiased, Interaction Detection and Estimation (GUIDE) algorithm, including 17 categorical and 25 continuous baseline variables as candidate predictors. These included protein biomarkers, disease activity scoring, and demographic data, added the researchers.
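As a rough sketch of this kind of setup (not the study's actual code), a decision tree classifier can be trained on a mix of categorical and continuous baseline variables; here scikit-learn's DecisionTreeClassifier stands in for the GUIDE algorithm, and all column names and values are hypothetical:

```python
# Illustrative only: scikit-learn's DecisionTreeClassifier stands in for the
# GUIDE algorithm used in the study; column names and data are hypothetical.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline

# Hypothetical baseline table mixing categorical and continuous candidate predictors
df = pd.DataFrame({
    "acpa_status":  ["positive", "negative", "positive", "negative"],  # categorical
    "crp_mg_l":     [15.2, 4.1, 22.8, 9.7],                            # protein biomarker
    "das28":        [6.1, 4.9, 6.8, 5.3],                              # disease activity score
    "age":          [54, 61, 47, 58],                                  # demographic
    "acr20_week24": [1, 0, 1, 0],                                      # endpoint to predict
})

categorical = ["acpa_status"]
continuous = ["crp_mg_l", "das28", "age"]

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),                  # pass continuous columns through
    ("tree", DecisionTreeClassifier(max_depth=3)),  # a shallow tree yields a readable rule
])
model.fit(df[categorical + continuous], df["acr20_week24"])
```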

Endpoints used were ACR20, ACR50, and ACR70 at week 24, with the resulting rule validated through application on independent data sets from 3 additional trials in the sarilumab clinical development program.

Assessing the end points used, the researchers found that the most successful GUIDE model was the one trained against the ACR20 response. From the 42 candidate predictor variables, the combined presence of anticitrullinated protein antibodies (ACPA) and C-reactive protein >12.3 mg/L was identified as a predictor of better treatment outcomes with sarilumab, with patients meeting both criteria classified as rule-positive.
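Expressed as code, the reported rule reduces to a simple predicate (a minimal sketch; the ACPA requirement and the 12.3 mg/L C-reactive protein cutoff come from the abstract, while the function name is ours):

```python
def is_rule_positive(acpa_present: bool, crp_mg_per_l: float) -> bool:
    """Reported predictor of better sarilumab outcomes: ACPA present
    and C-reactive protein above 12.3 mg/L."""
    return acpa_present and crp_mg_per_l > 12.3

# Example: an ACPA-positive patient with a CRP of 20 mg/L is rule-positive
print(is_rule_positive(True, 20.0))   # True
print(is_rule_positive(True, 8.0))    # False
```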

These rule-positive patients, who made up 34% to 51% of the sarilumab groups across the 4 trials, had more severe disease and poorer prognostic factors at baseline. They also exhibited better outcomes than rule-negative patients for most end points assessed, except among patients with an inadequate response to TNF inhibitors.

Notably, rule-positive patients had a better response to sarilumab but an inferior response to adalimumab, except for the HAQ Disability Index minimal clinically important difference end point.

If verified in prospective studies, this rule could facilitate treatment decision-making for patients with RA, concluded the researchers.

Reference

Rehberg M, Giegerich C, Praestgaard A, et al. Identification of a rule to predict response to sarilumab in patients with rheumatoid arthritis using machine learning and clinical trial data. Presented at: ACR Convergence 2020; November 5-9, 2020. Accessed January 15, 2021. Abstract 2006. https://acrabstracts.org/abstract/identification-of-a-rule-to-predict-response-to-sarilumab-in-patients-with-rheumatoid-arthritis-using-machine-learning-and-clinical-trial-data/

Original post:
Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis - AJMC.com Managed Markets Network

Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows – Georgia State University News

ATLANTA - Compared to standard machine learning models, deep learning models are largely superior at discerning patterns and discriminative features in brain imaging, despite being more complex in their architecture, according to a new study in Nature Communications led by Georgia State University.

Advanced biomedical technologies such as structural and functional magnetic resonance imaging (MRI and fMRI) or genomic sequencing have produced an enormous volume of data about the human body. By extracting patterns from this information, scientists can glean new insights into health and disease. This is a challenging task, however, given the complexity of the data and the fact that the relationships among types of data are poorly understood.

Deep learning, built on advanced neural networks, can characterize these relationships by combining and analyzing data from many sources. At the Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State researchers are using deep learning to learn more about how mental illness and other disorders affect the brain.

Although deep learning models have been used to solve problems and answer questions in a number of different fields, some experts remain skeptical. Recent critical commentaries have unfavorably compared deep learning with standard machine learning approaches for analyzing brain imaging data.

However, as demonstrated in the study, these conclusions are often based on pre-processed inputs that deprive deep learning of its main advantage: the ability to learn from the data with little to no preprocessing. Anees Abrol, research scientist at TReNDS and the lead author on the paper, compared representative models from classical machine learning and deep learning, and found that if trained properly, the deep-learning methods have the potential to offer substantially better results, generating superior representations for characterizing the human brain.

"We compared these models side-by-side, observing statistical protocols so everything is apples to apples. And we show that deep learning models perform better, as expected," said co-author Sergey Plis, director of machine learning at TReNDS and associate professor of computer science.

Plis said there are some cases where standard machine learning can outperform deep learning. For example, diagnostic algorithms that plug in single-number measurements such as a patient's body temperature or whether the patient smokes cigarettes would work better using classical machine learning approaches.

"If your application involves analyzing images or if it involves a large array of data that can't really be distilled into a simple measurement without losing information, deep learning can help," Plis said. "These models are made for really complex problems that require bringing in a lot of experience and intuition."
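As a toy illustration of the tabular case Plis describes (the data and model choice here are ours, purely to show the contrast), a classical model handles a handful of single-number measurements directly:

```python
# Illustrative only: classical ML on simple single-number measurements
# (hypothetical data of the kind Plis mentions: temperature, smoking status).
from sklearn.linear_model import LogisticRegression

X = [
    [36.8, 0],  # [body temperature in deg C, smoker flag]
    [38.9, 1],
    [37.1, 0],
    [39.2, 1],
]
y = [0, 1, 0, 1]  # hypothetical diagnostic label

clf = LogisticRegression().fit(X, y)
print(clf.predict([[38.5, 0]]))  # tabular inputs like these suit classical models well
```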

The downside of deep learning models is that they are data hungry at the outset and must be trained on lots of information. But once these models are trained, said co-author Vince Calhoun, director of TReNDS and Distinguished University Professor of Psychology, they are just as effective at analyzing reams of complex data as they are at answering simple questions.

"Interestingly, in our study we looked at sample sizes from 100 to 10,000 and in all cases the deep learning approaches were doing better," he said.

Another advantage is that scientists can reverse analyze deep-learning models to understand how they are reaching conclusions about the data. As the published study shows, the trained deep learning models learn to identify meaningful brain biomarkers.

"These models are learning on their own, so we can uncover the defining characteristics that they're looking into that allows them to be accurate," Abrol said. "We can check the data points a model is analyzing and then compare it to the literature to see what the model has found outside of where we told it to look."
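One common way to "reverse analyze" a trained network, sketched below, is gradient-based saliency: checking which input features most influence the output. This is only an illustration of the general idea, not the specific introspection method used in the study, and the toy network and input are hypothetical:

```python
# A minimal sketch of gradient-based saliency on a toy network; illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 2))  # toy classifier
model.eval()

x = torch.randn(1, 100, requires_grad=True)  # stand-in for one imaging feature vector
score = model(x)[0, 1]                       # score for the class of interest
score.backward()                             # gradients of the score w.r.t. the input

saliency = x.grad.abs().squeeze()            # large values = influential input features
top_features = torch.topk(saliency, k=5).indices
print(top_features)                          # candidate "biomarker" indices to inspect
```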

The researchers envision that deep learning models are capable of extracting explanations and representations not already known to the field and can act as an aid in growing our knowledge of how the human brain functions. They conclude that although more research is needed to find and address the weaknesses of deep-learning models, from a mathematical point of view it's clear these models outperform standard machine learning models in many settings.

"Deep learning's promise perhaps still outweighs its current usefulness to neuroimaging, but we are seeing a lot of real potential for these techniques," Plis said.

More here:
Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows - Georgia State University News

Predicting falls and injuries in people with multiple sclerosis using machine learning algorithms – DocWire News

This article was originally published here

Mult Scler Relat Disord. 2021 Jan 7;49:102740. doi: 10.1016/j.msard.2021.102740. Online ahead of print.

ABSTRACT

Falls in people with Multiple Sclerosis (PwMS) are a serious issue. They can lead to a range of problems, including injuries, loss of consciousness and hospitalization. A model that can predict the probability of these falls and the factors correlated with them can help caregivers and family members gain a clearer understanding of the risks of falling and proactively minimize them. We used historical data and machine learning algorithms to predict three outcomes: falling, sustaining injuries and injury types caused by falling in PwMS. The training dataset for this study includes 606 examples of monthly readings. The predictive attributes are the following: Expanded Disability Status Scale (EDSS), years passed since the diagnosis of MS, age of participants at the beginning of the experiment, participants' gender, type of MS and season (or month). Two types of algorithms, decision tree and gradient boosted trees (GBT), were used to train six models to predict these three outcomes. After the models were trained, their accuracy was evaluated using cross-validation. The models had high accuracy, with some exceeding 90%. We did not limit model evaluation to one-number assessments and studied the confusion matrices of the models as well. The GBT had a higher class recall and a smaller number of underestimations, which makes it the more reliable model. The methodology proposed in this study and its findings can help in developing better decision-support tools to assist PwMS.

PMID:33450500 | DOI:10.1016/j.msard.2021.102740
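A minimal sketch of the setup the abstract describes, with random hypothetical data standing in for the 606 monthly readings and scikit-learn classifiers standing in for the exact implementations used: a decision tree and a gradient boosted trees model, evaluated with cross-validation and a confusion matrix rather than accuracy alone.

```python
# Illustrative sketch only: hypothetical data, scikit-learn stand-ins for the
# decision tree and gradient boosted trees (GBT) models described in the abstract.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n = 606
X = np.column_stack([
    rng.uniform(0, 9.5, n),    # EDSS score
    rng.integers(0, 30, n),    # years since MS diagnosis
    rng.integers(20, 80, n),   # age at the start of the experiment
    rng.integers(0, 2, n),     # gender (encoded)
    rng.integers(0, 3, n),     # type of MS (encoded)
    rng.integers(1, 13, n),    # month
])
y = rng.integers(0, 2, n)      # fall occurred this month (hypothetical labels)

for model in (DecisionTreeClassifier(max_depth=4), GradientBoostingClassifier()):
    pred = cross_val_predict(model, X, y, cv=5)              # cross-validated predictions
    print(type(model).__name__, confusion_matrix(y, pred))   # inspect beyond accuracy
```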

Originally posted here:
Predicting falls and injuries in people with multiple sclerosis using machine learning algorithms - DocWire News

Decentralized Autonomous Travel Solution Introduced by Fetch.ai, an AI and Machine Learning Network – Crowdfund Insider

The developers at Fetch.ai, an artificial intelligence (AI) and machine learning (ML) network, are introducing what they refer to as "decentralized autonomous travel."

Fetch.ai aims to connect to more than 770,000 hotels with its Autonomous Travel system.

The Autonomous AI Travel Agents intend to reduce the role of centralized aggregators and services, thereby encouraging direct provider-to-consumer interaction. These efforts should lead to considerable cost savings of around 10% for both hotels and consumers.

The Autonomous AI Travel Agents framework, developed by Fetch.ai, is not meant to completely replace current systems. It is supposed to complement them. As explained by Fetch.ai in a blog post, the system operates safely, non-destructively, and in parallel to existing relationships that hotels might have. It aims to offer an alternative way by which bookings may be handled: one where the customer and hotel may deal directly with each other, and also one where a more personalized, better value experience can be delivered.

As mentioned in the update, building further upon the Mobility Framework, Fetch.ai is announcing tools and services to enable Autonomous agent-based travel solutions.

As noted in the announcement, Fetch.ai has developed an applications framework to allow hotel operators to launch Autonomous AI Travel Agents to market. They are also able to negotiate and trade their existing inventory via the Fetch.ai network, while getting payments in fiat currencies or cryptos, all powered by Fetch.ai's native FET token.

As stated in the update:

"The promise of the Fetch.ai network is that a decentralized, multi-agent based system will be able to provide a new, personalized, privacy-focused travel solution and change the way we view and work with the hotel and travel industry."

Before the COVID-19 outbreak, many hotels across the globe had teamed up with service providers such as Expedia because they offer a useful and intuitive platform to facilitate travel, and without the exposure these platforms deliver, most hotels wouldn't be able to attract consumers, the announcement noted. This gives service providers such as Expedia considerable leverage over the hotels that have partnered with them, allowing them to charge high commissions.

As confirmed by Fetch.ai, with the onset of COVID-19, hotels are now under considerable pressure simply to stay afloat. The Fetch.ai system allows hotel service providers to have their rooms marketed and booked without paying the standard 15-20% commission charged by hotel marketplace aggregators.
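As a rough back-of-the-envelope illustration of those figures (the room price below is hypothetical; only the 15-20% commission range and the roughly 10% savings for both sides come from the article):

```python
# Rough illustration of the quoted figures; the room price is hypothetical.
room_price = 200.00            # what the consumer currently pays per night
aggregator_commission = 0.175  # midpoint of the quoted 15-20% commission range

hotel_receives_today = room_price * (1 - aggregator_commission)

# In a direct booking, the avoided commission could be split so that both
# sides gain roughly 10%, as the announcement suggests.
direct_price = room_price * 0.90      # consumer pays ~10% less
hotel_receives_direct = direct_price  # no aggregator fee

print(f"Hotel today: {hotel_receives_today:.2f}, direct: {hotel_receives_direct:.2f}")
# Hotel today: 165.00, direct: 180.00 -> hotel gains ~9% while the consumer saves 10%
```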

Fetch.ai confirmed that they'll be publishing the code base and software toolkits for the Autonomous AI Travel Agents next month (February 2021).


Original post:
Decentralized Autonomous Travel Solution Introduced by Fetch.ai, an AI and Machine Learning Network - Crowdfund Insider

Machine Learning Project Aims To Improve AM Metrology and Quality News – Online Magazine – "metrology news"

Machine learning technology will be used to make the additive manufacturing (AM) process of metallic alloys for aerospace cheaper and faster, encouraging production of lightweight, energy-efficient aircraft to support net zero targets for aviation.

Project MEDAL (Machine Learning for Additive Manufacturing Experimental Design) is led by Intellegens, a University of Cambridge (UK) spin-out specialising in artificial intelligence, together with the University of Sheffield AMRC North West and global aerospace giant Boeing. It aims to accelerate the product development lifecycle of aerospace components by using a machine learning model to optimise additive manufacturing (AM) processing parameters for new metal alloys at a lower cost and faster rate.

AM is a group of technologies that create 3D objects from computer aided design (CAD) data. AM techniques reduce material waste and energy usage; allow easy prototyping, optimising and improvement of components; and enable the manufacture of components with superior engineering performance over their lifecycle. The global AM market is worth £12bn and is expected to triple in size over the next five years. Project MEDAL's research will concentrate on metal laser powder bed fusion, the most widely used AM approach in industry, focussing on the key parameter variables required to manufacture high-density, high-strength parts.

The project is part of the National Aerospace Technology Exploitation Programme (NATEP), a £10 million initiative for UK SMEs to develop innovative aerospace technologies, funded by the Department for Business, Energy and Industrial Strategy and delivered in partnership with the Aerospace Technology Institute (ATI) and Innovate UK. Intellegens was a start-up in the first group of companies to complete the ATI Boeing Accelerator last year.

Ben Pellegrini, CEO of Intellegens, said: "We are very excited to be launching this project in conjunction with the AMRC. The intersection of machine learning, design of experiments and additive manufacturing holds enormous potential to rapidly develop and deploy custom parts, not only in aerospace, as proven by the involvement of Boeing, but in medical, transport and consumer product applications."

James Hughes, Research Director for University of Sheffield AMRC North West, said the project will build the AMRC's knowledge and expertise in alloy development so it can help other UK manufacturers.

"At the AMRC we have experienced first-hand, and through our partner network, how onerous it is to develop a robust set of process parameters for AM. It relies on a multi-disciplinary team of engineers and scientists and comes at great expense in both time and capital equipment," said Hughes. "It is our intention to develop a robust, end-to-end methodology for process parameter development that encompasses how we operate our machinery right through to how we generate response variables quickly and efficiently. Intellegens' AI-embedded platform Alchemite will be at the heart of all of this."

"There are many barriers to the adoption of metallic AM, but providing users, and maybe more importantly new users, with the tools they need to process a required material should not be one of them. With the AMRC's knowledge in AM and Intellegens' AI tools, all the required experience and expertise is in place to deliver a rapid, data-driven software toolset for developing parameters for metallic AM processes to make them cheaper and faster."

Sir Martin Donnelly, president of Boeing Europe and managing director of Boeing in the UK and Ireland, said the project shows how industry can successfully partner with government and academia to spur UK innovation.

"We are proud to see this project move forward because of what it promises aviation and manufacturing, and because of what it represents for the UK's innovation ecosystem," Donnelly said. "We helped found the AMRC two decades ago, Intellegens was one of the companies we invested in as part of the ATI Boeing Accelerator and we have longstanding research partnerships with Cambridge University and the University of Sheffield. We are excited to see what comes from this continued collaboration and how we might replicate this formula in other ways within the UK and beyond."

Aerospace components have to withstand certain loads and temperature resistances, and some materials are limited in what they can offer. There is also a simultaneous push for lower weight and higher temperature resistance for better fuel efficiency, bringing new or previously impractical-to-machine metals into the aerospace material mix.

One of the main drawbacks of AM is the limited material selection currently available. The design of new materials, particularly in the aerospace industry, requires expensive and extensive testing and certification cycles, which can take longer than a year to complete and cost as much as £1 million ($1.35 million) to undertake. Project MEDAL aims to accelerate this process, using Machine Learning (ML) to rapidly optimise AM processing parameters for new metal alloys, making the development process more time and cost efficient.

Pellegrini said experimental design techniques are extremely important to develop new products and processes in a cost-effective and confident manner. The most common approach is Design of Experiments (DOE), a statistical method that builds a mathematical model of a system by simultaneously investigating the effects of various factors.

"DOE is a more efficient, systematic way of choosing and carrying out experiments compared to the Change One Separate variable at a Time (COST) approach. However, the high number of experiments required to obtain a reliable covering of the search space means that DOE can still be a lengthy and costly process, which can be improved," explained Pellegrini.
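A small sketch of that contrast, using hypothetical AM parameters (laser power, scan speed, hatch spacing) purely to show how run counts differ between a full-factorial DOE and a change-one-variable-at-a-time plan:

```python
# Sketch of the DOE vs COST contrast Pellegrini describes; factor names and
# levels are hypothetical AM processing parameters.
from itertools import product

factors = {
    "laser_power_W":   [200, 300, 400],
    "scan_speed_mm_s": [600, 900, 1200],
    "hatch_um":        [80, 120],
}

doe_runs = list(product(*factors.values()))       # every combination: 3*3*2 = 18 runs
baseline = {k: v[0] for k, v in factors.items()}
cost_runs = [
    {**baseline, name: level}
    for name, levels in factors.items()
    for level in levels[1:]                        # change one factor at a time
]

print(len(doe_runs), "DOE runs vs", 1 + len(cost_runs), "COST runs")
# DOE covers interactions between factors but needs many more experiments,
# which is the cost the machine learning approach in Project MEDAL aims to cut.
```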

The machine learning solution in this project is expected to reduce the number of experimental cycles needed by around 80%. The software platform will be able to suggest the most important experiments needed to optimise AM processing parameters, in order to manufacture parts that meet specific target properties. The platform will make the development process for AM metal alloys more time and cost efficient. This will in turn accelerate the production of more lightweight and integrated aerospace components, leading to more efficient aircraft and a reduced environmental impact.
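The sketch below shows one generic way such a suggestion step can work (this is not the Alchemite platform itself, and all parameter ranges and data are hypothetical): a surrogate model is trained on the runs completed so far, and the next experiment is chosen where the model is least certain, so each cycle adds as much information as possible.

```python
# Generic sketch of ML-guided experiment selection; not the Alchemite platform.
# A surrogate model trained on completed runs scores untested parameter sets and
# suggests the one where its trees disagree most, i.e. where new data helps most.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
candidates = rng.uniform([200, 600], [400, 1200], size=(500, 2))  # [power W, speed mm/s]
done_X = candidates[:10]                               # experiments already performed
done_y = rng.uniform(95, 100, size=10)                 # measured part density (%) - hypothetical

surrogate = RandomForestRegressor(n_estimators=200).fit(done_X, done_y)
per_tree = np.stack([t.predict(candidates) for t in surrogate.estimators_])
uncertainty = per_tree.std(axis=0)                     # disagreement across trees

next_run = candidates[np.argmax(uncertainty)]          # most informative next experiment
print("Suggested next run (power W, speed mm/s):", next_run)
```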

Intellegens will produce a software platform with an underlying machine learning algorithm based on its Alchemite platform. It has already been used successfully to overcome material design problems in a University of Cambridge research project with a leading OEM where a new alloy was designed, developed and verified in 18 months rather than the expected 20-year timeline, saving about $10m.

Ian Brooks, AM technical fellow at the University of Sheffield AMRC North West, said Project MEDAL will harness two key technologies: artificial intelligence and additive manufacturing.

For more information: http://www.amrc.co.uk

See the article here:
Machine Learning Project Aims To Improve AM Metrology and Quality News - Online Magazine - "metrology news"