Archive for the ‘Machine Learning’ Category

AFTAs 2020: Most Innovative Third-Party Technology Vendor (AI, Machine Learning and Analytics) – Behavox – www.waterstechnology.com

Enterprise risk and compliance solutions provider Behavox experienced explosive demand for its product in 2020, as the world's sudden pivot to remote working tested business continuity protocols and created new opportunities for employee misconduct.

This is why the company, which offers a machine learning-powered platform that helps firms aggregate and analyze enterprise communications data, including email, messaging and voice, for risk assessment, regulatory compliance and fraud monitoring, wins this AFTA for the second year in a row. "The coronavirus accelerated the understanding that the workplace is no longer a place," says Erkin Adylov, Behavox founder and CEO. "It has become a digital realm, and the laws of people don't apply in that realm; it is a complete Wild West. And work is not going to become less digital; firms are thinking that we need to bring the same laws that govern our day-to-day lives to that digital realm, but they need someone to organize all the data they generate."

Early in 2020, Behavox received a $100 million investment from SoftBank, itself a client. The company then signed up a number of the world's largest banks and asset managers, and doubled its headcount, as it moved into new territories (Japan and the Nordics) and expanded its existing office in Montreal to accommodate additional data scientists and engineers.

The company also managed to complete implementations in months that normally would have taken far more time, with many customers taking advantage of the cloud-based version of the platform. One implementation, at Danske Bank, took just five months.

This year, as the company grows, it is planning to enhance its platform with Behavox Boost, a tool for modeling employee performance, and Motivate, which analyzes soft concepts like team morale and the quality of team collaboration.

Original post:
AFTAs 2020: Most Innovative Third-Party Technology Vendor (AI, Machine Learning and Analytics) – Behavox - http://www.waterstechnology.com

A Nepalese Machine Learning (ML) Researcher Introduces Papers-With-Video Browser Extension Which Allows Users To Access Videos Related To Research…

Amit Chaudhary, a machine learning (ML) researcher from Nepal, has recently introduced a browser extension that allows users to directly access videos related to research papers published on the platform arXiv.

ArXiv has become an essential resource for new machine learning (ML) papers. It was launched in 1991 as a storage site for physics preprints; in 2001 it took the name arXiv and has since been hosted by Cornell University. The repository has received close to 2 million submissions across various scientific research fields.

Amit obtained publicly released videos from 2020 ML conferences. He then indexed the videos and reverse-mapped them to the relevant arXiv links through pyarxiv, a dedicated wrapper for the arXiv API. The Google Chrome extension creates a video icon next to the paper title on the arXiv abstract page, enabling users to identify and access available videos related to the paper directly.
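
For readers curious about the mechanics, the core reverse-mapping step can be approximated in a few lines of Python. The sketch below is not the extension's actual code: it calls the public arXiv export API directly rather than going through the pyarxiv wrapper mentioned above, and the example title, score threshold and function name are illustrative assumptions.

```python
# Illustrative sketch of mapping a conference video title to an arXiv paper.
# Not the extension's actual code; it uses the public arXiv export API directly.
import difflib
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # namespace used by the arXiv Atom feed

def arxiv_link_for_title(video_title: str, max_results: int = 10, cutoff: float = 0.9):
    """Return the arXiv abstract link whose title best matches video_title, or None."""
    params = urllib.parse.urlencode({
        "search_query": f'ti:"{video_title}"',
        "max_results": max_results,
    })
    url = f"http://export.arxiv.org/api/query?{params}"
    with urllib.request.urlopen(url) as resp:
        feed = ET.fromstring(resp.read())

    best_link, best_score = None, 0.0
    for entry in feed.findall(f"{ATOM}entry"):
        title = " ".join(entry.findtext(f"{ATOM}title", "").split())  # collapse newlines
        score = difflib.SequenceMatcher(None, video_title.lower(), title.lower()).ratio()
        if score > best_score:
            best_link, best_score = entry.findtext(f"{ATOM}id", ""), score
    return best_link if best_score >= cutoff else None

# Hypothetical usage: map one 2020 talk title to its abstract page.
print(arxiv_link_for_title("Language Models are Few-Shot Learners"))
```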

Many research teams are creating videos to accompany their papers. These videos can act as a guide by providing demos and other valuable information about the research. In several cases, the videos are created as an alternative to traditional in-person presentations at AI conferences, which is especially useful now that almost all sessions have moved to virtual formats due to the Covid-19 pandemic.

The Papers-With-Video extension enables direct video links for around 3.7k arXiv ML papers. Amit aims to figure out how to effectively pair papers with related videos whose titles differ, and with this he hopes to expand coverage to around 8k videos. He has solicited community feedback and has already tweaked the extension's functionality based on user remarks and suggestions.

The browser extension is not available on the Google Chrome Web Store yet. However, one can find the extension, installation guide, and further information on GitHub.

GitHub: https://github.com/amitness/papers-with-video

Paper List: https://gist.github.com/amitness/9e5ad24ab963785daca41e2c4cfa9a82

Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis – AJMC.com Managed Markets Network

Machine learning was shown to identify patients with rheumatoid arthritis (RA) who have an increased chance of achieving a clinical response with sarilumab, with those selected also showing an inferior response to adalimumab, according to an abstract presented at ACR Convergence, the annual meeting of the American College of Rheumatology (ACR).

In prior phase 3 trials comparing the interleukin 6 receptor (IL-6R) inhibitor sarilumab with placebo and the tumor necrosis factor α (TNF-α) inhibitor adalimumab, sarilumab appeared to provide superior efficacy for patients with moderate to severe RA. Although these results are promising, the researchers highlight in the abstract that treatment of RA requires a more individualized approach to maximize efficacy and minimize the risk of adverse events.

"The characteristics of patients who are most likely to benefit from sarilumab treatment remain poorly understood," noted the researchers.

Seeking to better identify the patients with RA who may best benefit from sarilumab treatment, the researchers applied machine learning to select from a predefined set of patient characteristics, which they hypothesized may help delineate the patients who could benefit most from either anti-IL-6R or anti-TNF-α treatment.

Following their extraction of data from the sarilumab clinical development program, the researchers utilized a decision tree classification approach to build predictive models on ACR response criteria at week 24 in patients from the phase 3 MOBILITY trial, focusing on the 200-mg dose of sarilumab. They incorporated the Generalized, Unbiased, Interaction Detection and Estimation (GUIDE) algorithm, including 17 categorical and 25 continuous baseline variables as candidate predictors. "These included protein biomarkers, disease activity scoring, and demographic data," added the researchers.
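
As a rough illustration of that workflow, and not the study's actual analysis, the sketch below builds a decision tree over a mix of categorical and continuous baseline variables to predict a binary ACR20 response. Scikit-learn's DecisionTreeClassifier stands in for the GUIDE algorithm, and the synthetic variables and labels (loosely seeded with the ACPA/CRP rule reported later in the article) are invented.

```python
# Illustrative sketch only: DecisionTreeClassifier stands in for GUIDE, and the
# baseline variables and ACR20 labels below are synthetic, not trial data.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
baseline = pd.DataFrame({
    "acpa_positive": rng.integers(0, 2, n),   # categorical biomarker (0/1)
    "crp_mg_per_l": rng.gamma(2.0, 8.0, n),   # continuous biomarker
    "age": rng.normal(55, 12, n),             # demographic
    "sex": rng.choice(["F", "M"], n),         # demographic (string category)
})

# Synthetic ACR20 response, loosely following the reported rule
# (ACPA-positive and CRP > 12.3 mg/L -> higher chance of response).
rule_positive = (baseline["acpa_positive"] == 1) & (baseline["crp_mg_per_l"] > 12.3)
acr20 = rng.binomial(1, np.where(rule_positive, 0.7, 0.4))

model = Pipeline([
    ("encode", ColumnTransformer([("cat", OneHotEncoder(), ["sex"])],
                                 remainder="passthrough")),
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
])
print("cross-validated accuracy:", cross_val_score(model, baseline, acr20, cv=5).mean())
```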

Endpoints used were ACR20, ACR50, and ACR70 at week 24, with the resulting rule validated through application on independent data sets from additional trials in the sarilumab clinical development program.

Among the endpoints assessed, the most successful GUIDE model was the one trained against the ACR20 response. From the 42 candidate predictor variables, the combined presence of anti-citrullinated protein antibodies (ACPA) and C-reactive protein >12.3 mg/L was identified as a predictor of better treatment outcomes with sarilumab, with those patients identified as rule-positive.

These rule-positive patients, who made up 34% to 51% of the sarilumab groups across the 4 trials, were shown to have more severe disease and poorer prognostic factors at baseline. They also exhibited better outcomes than rule-negative patients for most endpoints assessed, except among patients with an inadequate response to TNF inhibitors.

Notably, rule-positive patients had a better response to sarilumab but an inferior response to adalimumab, except for the HAQ Disability Index minimal clinically important difference endpoint.

"If verified in prospective studies, this rule could facilitate treatment decision-making for patients with RA," concluded the researchers.

Reference

Rehberg M, Giegerich C, Praestgaard A, et al. Identification of a rule to predict response to sarilumab in patients with rheumatoid arthritis using machine learning and clinical trial data. Presented at: ACR Convergence 2020; November 5-9, 2020. Accessed January 15, 2021. Abstract 2006. https://acrabstracts.org/abstract/identification-of-a-rule-to-predict-response-to-sarilumab-in-patients-with-rheumatoid-arthritis-using-machine-learning-and-clinical-trial-data/

Original post:
Machine Learning Shown to Identify Patient Response to Sarilumab in Rheumatoid Arthritis - AJMC.com Managed Markets Network

Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows – Georgia State University News

ATLANTA – Compared to standard machine learning models, deep learning models are largely superior at discerning patterns and discriminative features in brain imaging, despite being more complex in their architecture, according to a new study in Nature Communications led by Georgia State University.

Advanced biomedical technologies such as structural and functional magnetic resonance imaging (MRI and fMRI) or genomic sequencing have produced an enormous volume of data about the human body. By extracting patterns from this information, scientists can glean new insights into health and disease. This is a challenging task, however, given the complexity of the data and the fact that the relationships among types of data are poorly understood.

Deep learning, built on advanced neural networks, can characterize these relationships by combining and analyzing data from many sources. At the Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State researchers are using deep learning to learn more about how mental illness and other disorders affect the brain.

Although deep learning models have been used to solve problems and answer questions in a number of different fields, some experts remain skeptical. Recent critical commentaries have unfavorably compared deep learning with standard machine learning approaches for analyzing brain imaging data.

However, as demonstrated in the study, these conclusions are often based on pre-processed inputs that deprive deep learning of its main advantage: the ability to learn from the data with little to no preprocessing. Anees Abrol, research scientist at TReNDS and the lead author on the paper, compared representative models from classical machine learning and deep learning, and found that, if trained properly, the deep-learning methods have the potential to offer substantially better results, generating superior representations for characterizing the human brain.

"We compared these models side by side, observing statistical protocols so everything is apples to apples. And we show that deep learning models perform better, as expected," said co-author Sergey Plis, director of machine learning at TReNDS and associate professor of computer science.

Plis said there are some cases where standard machine learning can outperform deep learning. For example, diagnostic algorithms that plug in single-number measurements such as a patient's body temperature or whether the patient smokes cigarettes would work better using classical machine learning approaches.

"If your application involves analyzing images or if it involves a large array of data that can't really be distilled into a simple measurement without losing information, deep learning can help," Plis said. "These models are made for really complex problems that require bringing in a lot of experience and intuition."
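
As a rough, self-contained illustration of that distinction (not the study's pipeline or data), the sketch below compares a classical model fed a couple of hand-crafted summary features against a small convolutional network trained on the raw pixel grid, using scikit-learn's digits dataset and PyTorch as stand-ins for neuroimaging data and the study's deep learning models.

```python
# Illustrative comparison only: digits images stand in for brain imaging data,
# and the summary features, CNN architecture and training budget are arbitrary.
import numpy as np
import torch
import torch.nn as nn
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                              # 8x8 grayscale images, 10 classes
X_img = digits.images.astype(np.float32)            # raw pixel arrays
y = digits.target

# "Classical" route: distill each image into two summary numbers first.
X_summary = np.stack([X_img.mean(axis=(1, 2)), X_img.std(axis=(1, 2))], axis=1)

Xs_tr, Xs_te, Xi_tr, Xi_te, y_tr, y_te = train_test_split(
    X_summary, X_img, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(Xs_tr, y_tr)
print("logistic regression on summary features:", clf.score(Xs_te, y_te))

# "Deep" route: a tiny CNN sees the full pixel grid, with no hand-crafted features.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(8 * 4 * 4, 10),
)
optimizer = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
Xt_tr = torch.tensor(Xi_tr).unsqueeze(1)            # add a channel dimension
Xt_te = torch.tensor(Xi_te).unsqueeze(1)
yt_tr = torch.tensor(y_tr, dtype=torch.long)

for _ in range(200):                                # short full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(cnn(Xt_tr), yt_tr)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    pred = cnn(Xt_te).argmax(dim=1).numpy()
print("small CNN on raw pixels:", (pred == y_te).mean())
```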

The downside of deep learning models is they are data hungry at the outset and must be trained on lots of information. But once these models are trained, said co-author Vince Calhoun, director of TReNDS and Distinguished University Professor of Psychology, they are just as effective at analyzing reams of complex data as they are at answering simple questions.

"Interestingly, in our study we looked at sample sizes from 100 to 10,000 and in all cases the deep learning approaches were doing better," he said.

Another advantage is that scientists can reverse analyze deep-learning models to understand how they are reaching conclusions about the data. As the published study shows, the trained deep learning models learn to identify meaningful brain biomarkers.

"These models are learning on their own, so we can uncover the defining characteristics that they're looking into that allow them to be accurate," Abrol said. "We can check the data points a model is analyzing and then compare it to the literature to see what the model has found outside of where we told it to look."

The researchers envision that deep learning models are capable of extracting explanations and representations not already known to the field and can act as an aid in growing our knowledge of how the human brain functions. They conclude that although more research is needed to find and address the weaknesses of deep-learning models, from a mathematical point of view it's clear these models outperform standard machine learning models in many settings.

"Deep learning's promise perhaps still outweighs its current usefulness to neuroimaging, but we are seeing a lot of real potential for these techniques," Plis said.

More here:
Deep Learning Outperforms Standard Machine Learning in Biomedical Research Applications, Research Shows - Georgia State University News

Predicting falls and injuries in people with multiple sclerosis using machine learning algorithms – DocWire News

This article was originally published here

Mult Scler Relat Disord. 2021 Jan 7;49:102740. doi: 10.1016/j.msard.2021.102740. Online ahead of print.

ABSTRACT

Falls in people with Multiple Sclerosis (PwMS) are a serious issue. They can lead to many problems, including injuries, loss of consciousness and hospitalization. A model that can predict the probability of these falls, and the factors correlated with them, can help caregivers and family members gain a clearer understanding of the risks of falling and proactively minimize them. We used historical data and machine learning algorithms to predict three outcomes: falling, sustaining injuries, and the types of injury caused by falling in PwMS. The training dataset for this study includes 606 examples of monthly readings. The predictive attributes are the following: Expanded Disability Status Scale (EDSS), years passed since the diagnosis of MS, age of participants at the beginning of the experiment, participants' gender, type of MS, and season (or month). Two types of algorithms, decision tree and gradient boosted trees (GBT), were used to train six models to predict these three outcomes. After the models were trained, their accuracy was evaluated using cross-validation. The models had high accuracy, with some exceeding 90%. We did not limit model evaluation to single-number assessments and studied the confusion matrices of the models as well. The GBT models had higher class recall and a smaller number of underestimations, which makes them more reliable. The methodology proposed in this study and its findings can help in developing better decision-support tools to assist PwMS.

PMID:33450500 | DOI:10.1016/j.msard.2021.102740
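
The abstract does not include code or data, but the described setup can be sketched with standard tools. The snippet below trains a gradient boosted trees classifier on synthetic versions of the listed predictors and evaluates it with cross-validation and a confusion matrix; all column names, distributions and labels are invented for illustration.

```python
# Rough illustration of the described setup on synthetic data; the study's
# dataset is not public, so every value below is invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n = 606                                              # the study reports 606 monthly readings
X = pd.DataFrame({
    "edss": rng.uniform(0, 9.5, n),                  # Expanded Disability Status Scale
    "years_since_diagnosis": rng.uniform(0, 30, n),
    "age_at_baseline": rng.normal(45, 10, n),
    "gender": rng.integers(0, 2, n),
    "ms_type": rng.integers(0, 3, n),                # MS subtype, integer-coded
    "month": rng.integers(1, 13, n),                 # season proxy
})
# Synthetic "fall this month" label, loosely tied to disability level.
fall = rng.binomial(1, 1 / (1 + np.exp(-(X["edss"].to_numpy() - 6))))

gbt = GradientBoostingClassifier(random_state=0)
pred = cross_val_predict(gbt, X, fall, cv=5)         # out-of-fold predictions
print(confusion_matrix(fall, pred))                  # rows: actual, columns: predicted
```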

Originally posted here:
Predicting falls and injuries in people with multiple sclerosis using machine learning algorithms - DocWire News