Archive for the ‘Machine Learning’ Category

Machine learning optimization of an electronic health record audit for heart failure in primary care – DocWire News

ESC Heart Fail. 2021 Nov 23. doi: 10.1002/ehf2.13724. Online ahead of print.

ABSTRACT

AIMS: The diagnosis of heart failure (HF) is an important problem in primary care. We previously demonstrated a 74% increase in registered HF diagnoses in primary care electronic health records (EHRs) following an extended audit procedure. What remains unclear is the accuracy of registered HF pre-audit and which EHR variables are most important in the extended audit strategy. This study aims to describe the diagnostic HF classification sequence at different stages, assess general practitioner (GP) HF misclassification, and test the predictive performance of an optimized audit.

METHODS AND RESULTS: This is a secondary analysis of the OSCAR-HF study, a prospective observational trial including 51 participating GPs. OSCAR used an extended audit based on typical HF risk factors, signs, symptoms, and medications in GPs' EHRs. This resulted in a list of possible HF patients, which participating GPs had to classify as HF or non-HF. We compared registered HF diagnoses before and after GPs' assessment. For our analysis of audit performance, we used GPs' assessment of HF as the primary outcome and audit queries as dichotomous predictor variables for a gradient boosted machine (GBM) decision tree algorithm and a logistic regression model. Of the 18 011 patients eligible for the audit intervention, 4678 (26.0%) were identified as possible HF patients and submitted for GPs' assessment in the audit stage. There were 310 patients with registered HF before GP assessment, of whom 146 (47.1%) were judged not to have HF by their GP (over-registration). There were 538 patients with registered HF after GP assessment, of whom 374 (69.5%) did not have registered HF before GP assessment (under-registration). The GBM and logistic regression models had comparable predictive performance (area under the curve of 0.70 [95% confidence interval 0.65-0.77] and 0.69 [95% confidence interval 0.64-0.75], respectively). This was not significantly impacted by reducing the set of predictor variables to the 10 most important variables identified in the GBM model (free-text and coded cardiomyopathy, ischaemic heart disease and atrial fibrillation, digoxin, mineralocorticoid receptor antagonists, and combinations of renin-angiotensin system inhibitors and beta-blockers with diuretics). This optimized query set was enough to identify 86% (n = 461/538) of the GPs' self-assessed HF population with a 33% reduction (n = 1537/4678) in screening caseload.
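The abstract's headline comparison is between two areas under the ROC curve (AUC). As an illustrative sketch only (this is not the study's code, and the patient data here are toy values), the AUC of a model's predicted probabilities against the GPs' final HF assessments can be computed directly from concordant pairs:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    labels: iterable of 0/1 outcomes (1 = GP-confirmed HF)
    scores: iterable of model-predicted probabilities
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative cases")
    # Count pairs where a true case outranks a non-case; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two models scored on the same six patients.
labels = [1, 1, 1, 0, 0, 0]
model_a = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]  # e.g. a GBM's probabilities
model_b = [0.7, 0.6, 0.2, 0.6, 0.4, 0.3]  # e.g. a logistic model's
print(auc(labels, model_a), auc(labels, model_b))
```

On this toy data the first model ranks cases better and scores a higher AUC; confidence intervals like those reported in the paper would typically come from bootstrapping this statistic.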

CONCLUSIONS: Diagnostic coding of HF in primary care health records is inaccurate, with a high degree of under-registration and over-registration. An optimized query set enabled identification of more than 80% of the GPs' self-assessed HF population.

PMID:34816632 | DOI:10.1002/ehf2.13724

High-performance, low-cost machine learning infrastructure is accelerating innovation in the cloud – MIT Technology Review

Artificial intelligence and machine learning (AI and ML) are key technologies that help organizations develop new ways to increase sales, reduce costs, streamline business processes, and understand their customers better. AWS helps customers accelerate their AI/ML adoption by delivering powerful compute, high-speed networking, and scalable high-performance storage options on demand for any machine learning project. This lowers the barrier to entry for organizations looking to adopt the cloud to scale their ML applications.

Developers and data scientists are pushing the boundaries of technology and increasingly adopting deep learning, a type of machine learning based on neural network algorithms. These deep learning models are larger and more sophisticated, resulting in rising costs to run the underlying infrastructure needed to train and deploy them.

To enable customers to accelerate their AI/ML transformation, AWS is building high-performance and low-cost machine learning chips. AWS Inferentia is the first machine learning chip built from the ground up by AWS for the lowest-cost machine learning inference in the cloud. In fact, Amazon EC2 Inf1 instances powered by Inferentia deliver 2.3x higher performance and up to 70% lower cost for machine learning inference than current-generation GPU-based EC2 instances. AWS Trainium is the second machine learning chip by AWS, purpose-built for training deep learning models, and will be available in late 2021.

Customers across industries have deployed their ML applications in production on Inferentia and seen significant performance improvements and cost savings. For example, Airbnb's customer support platform delivers intelligent, scalable, and exceptional service experiences to its community of millions of hosts and guests across the globe. It used Inferentia-based EC2 Inf1 instances to deploy natural language processing (NLP) models that supported its chatbots. This led to a 2x improvement in performance out of the box over GPU-based instances.

With these innovations in silicon, AWS is enabling customers to train and execute their deep learning models in production easily with high performance and throughput at significantly lower costs.

Machine learning is an iterative process that requires teams to build, train, and deploy applications quickly, as well as train, retrain, and experiment frequently to increase the prediction accuracy of the models. When deploying trained models into their business applications, organizations need to also scale their applications to serve new users across the globe. They need to be able to serve multiple requests coming in at the same time with near real-time latency to ensure a superior user experience.

Emerging use cases such as object detection, natural language processing (NLP), image classification, conversational AI, and time series data rely on deep learning technology. Deep learning models are exponentially increasing in size and complexity, going from having millions of parameters to billions in a matter of a couple of years.

Training and deploying these complex and sophisticated models translates to significant infrastructure costs. Costs can quickly snowball to become prohibitively large as organizations scale their applications to deliver near real-time experiences to their users and customers.

This is where cloud-based machine learning infrastructure services can help. The cloud provides on-demand access to compute, high-performance networking, and large data storage, seamlessly combined with ML operations and higher level AI services, to enable organizations to get started immediately and scale their AI/ML initiatives.

AWS Inferentia and AWS Trainium aim to democratize machine learning and make it accessible to developers irrespective of experience and organization size. Inferentia's design is optimized for high performance, high throughput, and low latency, which makes it ideal for deploying ML inference at scale.

Each AWS Inferentia chip contains four NeuronCores that implement a high-performance systolic-array matrix multiply engine, which massively speeds up typical deep learning operations, such as convolutions and transformers. NeuronCores are also equipped with a large on-chip cache, which helps cut down on external memory accesses, reducing latency and increasing throughput.
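As a toy illustration (plain Python, not anything that runs on the chip), the operation a systolic-array engine accelerates is the multiply-accumulate loop at the heart of a matrix product:

```python
def matmul(a, b):
    """Naive matrix multiply: the core computation that a systolic array
    performs in hardware as a grid of pipelined multiply-accumulate cells."""
    rows, inner, cols = len(a), len(b), len(b[0])
    assert all(len(r) == inner for r in a), "inner dimensions must match"
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            # Each output element is a dot product: one multiply-accumulate
            # per step, which the hardware streams through its cell grid.
            out[i][j] = sum(a[i][k] * b[k][j] for k in range(inner))
    return out

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Convolutions and transformer layers reduce to many such products, which is why accelerating this one kernel pays off across so many deep learning workloads.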

AWS Neuron, the software development kit for Inferentia, natively supports leading ML frameworks like TensorFlow and PyTorch. Developers can continue using the same frameworks and lifecycle development tools they know and love. For many of their trained models, they can compile and deploy them on Inferentia by changing just a single line of code, with no additional application code changes.

The result is a high-performance inference deployment that can easily scale while keeping costs under control.

Sprinklr, a software-as-a-service company, has an AI-driven unified customer experience management platform that enables companies to gather and translate real-time customer feedback across multiple channels into actionable insights. This results in proactive issue resolution, enhanced product development, improved content marketing, and better customer service. Sprinklr used Inferentia to deploy its NLP and some of its computer vision models and saw significant performance improvements.

Several Amazon services also deploy their machine learning models on Inferentia.

Amazon Prime Video uses computer vision ML models to analyze video quality of live events to ensure an optimal viewer experience for Prime Video members. It deployed its image classification ML models on EC2 Inf1 instances and saw a 4x improvement in performance and up to a 40% savings in cost as compared to GPU-based instances.

Another example is Amazon Alexa's AI- and ML-based intelligence, powered by Amazon Web Services, which is available on more than 100 million devices today. Alexa's promise to customers is that it is always becoming smarter, more conversational, more proactive, and even more delightful. Delivering on that promise requires continuous improvements in response times and machine learning infrastructure costs. By deploying Alexa's text-to-speech ML models on Inf1 instances, Amazon was able to lower inference latency by 25% and cost-per-inference by 30%, enhancing the service experience for the tens of millions of customers who use Alexa each month.

As companies race to future-proof their business by enabling the best digital products and services, no organization can fall behind on deploying sophisticated machine learning models to help innovate their customer experiences. Over the past few years, there has been an enormous increase in the applicability of machine learning for a variety of use cases, from personalization and churn prediction to fraud detection and supply chain forecasting.

Luckily, machine learning infrastructure in the cloud is unleashing new capabilities that were previously not possible, making it far more accessible to non-expert practitioners. That's why AWS customers are already using Inferentia-powered Amazon EC2 Inf1 instances to provide the intelligence behind their recommendation engines and chatbots and to get actionable insights from customer feedback.

With AWS cloud-based machine learning infrastructure options suitable for various skill levels, it's clear that any organization can accelerate innovation and embrace the entire machine learning lifecycle at scale. As machine learning continues to become more pervasive, organizations are now able to fundamentally transform the customer experience, and the way they do business, with cost-effective, high-performance cloud-based machine learning infrastructure.

Learn more about how AWS's machine learning platform can help your company innovate here.

This content was produced by AWS. It was not written by MIT Technology Review's editorial staff.

Researchers Present Global Effort to Develop Machine Learning Tools for Automated Assessment of Radiographic Damage in Rheumatoid Arthritis -…

NEW YORK, Nov. 6, 2021 /PRNewswire/ -- Crowdsourcing has become an increasingly popular way to develop machine learning algorithms to address many clinical problems in a variety of illnesses. Today at the American College of Rheumatology (ACR) annual meeting, a multicenter team led by an investigator from Hospital for Special Surgery (HSS) presented the results from the RA2-DREAM Challenge, a crowdsourced effort focused on developing better methods to quantify joint damage in people with rheumatoid arthritis (RA).

Damage in the joints of people with RA is currently measured by visual inspection and detailed scoring of radiographic images of small joints in the hands, wrists, and feet. This includes both joint space narrowing (which indicates cartilage loss) and bone erosions (which indicate damage from invasion of the inflamed joint lining). The scoring system requires specially trained experts and is time-consuming and expensive. Finding an automated way to measure joint damage is important both for clinical research and for the care of patients, according to the study's senior author, S. Louis Bridges, Jr., MD, PhD, physician-in-chief and chair of the Department of Medicine at HSS.

"If a machine-learning approach could provide a quick, accurate quantitative score estimating the degree of joint damage in hands and feet, it would greatly help clinical research," he said. "For example, researchers could analyze data from electronic health records and from genetic and other research assays to find biomarkers associated with progressive damage. Having to score all the images by visual inspection ourselves would be tedious, and outsourcing it is cost prohibitive."

"This approach could also aid rheumatologists by quickly assessing whether there is progression of damage over time, which would prompt a change in treatment to prevent further damage," he added. "This is really important in geographic areas where expert musculoskeletal radiologists are not available."

For the challenge, Dr. Bridges and his collaborators partnered with Sage Bionetworks, a nonprofit organization that helps investigators create DREAM (Dialogue on Reverse Engineering Assessment and Methods) Challenges. These competitions are focused on the development of innovative artificial intelligence-based tools in the life sciences. The investigators sent out a call for submissions, with grant money providing prizes for the winning teams. Competitors came from a variety of fields and included computer scientists, computational biologists, and physician-scientists; none were radiologists with expertise or training in reading radiographic images.

For the first part of the challenge, one set of images was provided to the teams, along with known scores that had been visually generated. These were used to train the algorithms. Additional sets of images were then provided so the competitors could test and refine the tools they had developed. In the final round, a third set of images was given without scores, and competitors estimated the amount of joint space narrowing and erosions. Submissions were judged according to which most closely replicated the gold-standard visually generated scores. There were 26 teams that submitted algorithms and 16 final submissions. In total, competitors were given 674 sets of images from 562 different RA patients, all of whom had participated in prior National Institutes of Health-funded research studies led by Dr. Bridges. In the end, four teams were named top performers.
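The release does not state the exact judging metric, but the idea of ranking submissions by how closely they replicate the gold-standard, visually generated scores can be sketched as follows (team names, scores, and the root-mean-square-error criterion are all hypothetical illustrations, not the challenge's actual scoring code):

```python
import math

def rmse(gold, predicted):
    """Root-mean-square error between gold-standard visual scores
    and a team's predicted joint-damage scores."""
    assert len(gold) == len(predicted), "one prediction per image"
    return math.sqrt(sum((g - p) ** 2 for g, p in zip(gold, predicted)) / len(gold))

def rank_teams(gold, submissions):
    """Order team names from closest to furthest from the gold standard
    (lower RMSE is better)."""
    return sorted(submissions, key=lambda team: rmse(gold, submissions[team]))

# Hypothetical gold-standard damage scores for five radiographic images.
gold = [2.0, 0.0, 5.5, 1.0, 3.0]
submissions = {
    "team_a": [2.1, 0.2, 5.0, 1.1, 2.8],
    "team_b": [1.0, 1.0, 4.0, 2.0, 4.0],
}
print(rank_teams(gold, submissions))  # ['team_a', 'team_b']
```

Real challenge scoring would handle joint space narrowing and erosions separately and use held-out images the teams never saw, but the ranking logic is the same: the submissions nearest the expert scores win.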

For the DREAM Challenge organizers, it was important that any scoring system developed through the project be freely available rather than proprietary, so that it could be used by investigators and clinicians at no cost. "Part of the appeal of this collaboration was that everything is in the public domain," Dr. Bridges said.

Dr. Bridges explained that additional research and development of computational methods are needed before the tools can be broadly used, but the current research demonstrates that this type of approach is feasible. "We still need to refine the algorithms, but we're much closer to our goal than we were before the Challenge," he concluded.

About HSS

HSS is the world's leading academic medical center focused on musculoskeletal health. At its core is Hospital for Special Surgery, nationally ranked No. 1 in orthopedics (for the 12th consecutive year), No. 4 in rheumatology by U.S. News & World Report (2021-2022), and the best pediatric orthopedic hospital in NY, NJ and CT by U.S. News & World Report "Best Children's Hospitals" list (2021-2022). HSS is ranked world #1 in orthopedics by Newsweek (2021-2022). Founded in 1863, the Hospital has the lowest complication and readmission rates in the nation for orthopedics, and among the lowest infection rates. HSS was the first in New York State to receive Magnet Recognition for Excellence in Nursing Service from the American Nurses Credentialing Center five consecutive times. The global standard total knee replacement was developed at HSS in 1969. An affiliate of Weill Cornell Medical College, HSS has a main campus in New York City and facilities in New Jersey, Connecticut and in the Long Island and Westchester County regions of New York State, as well as in Florida. In addition to patient care, HSS leads the field in research, innovation and education. The HSS Research Institute comprises 20 laboratories and 300 staff members focused on leading the advancement of musculoskeletal health through prevention of degeneration, tissue repair and tissue regeneration. The HSS Global Innovation Institute was formed in 2016 to realize the potential of new drugs, therapeutics and devices. The HSS Education Institute is a trusted leader in advancing musculoskeletal knowledge and research for physicians, nurses, allied health professionals, academic trainees, and consumers in more than 130 countries. The institution is collaborating with medical centers and other organizations to advance the quality and value of musculoskeletal care and to make world-class HSS care more widely accessible nationally and internationally. http://www.hss.edu.

SOURCE Hospital for Special Surgery

http://www.hss.edu

Machine learning can provide strong predictive accuracy for identifying adolescents that have experienced suicidal thoughts and behavior – EurekAlert

Fig 7. The top 10 most important questions for males vs. females.

Credit: Weller et al., 2021, PLOS ONE, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

Researchers have developed a new, machine learning-based algorithm that shows high accuracy in identifying adolescents who are experiencing suicidal thoughts and behavior. Orion Weller of Johns Hopkins University in Baltimore, Maryland, and colleagues present these findings in the open-access journal PLOS ONE on November 3rd, 2021.

Decades of research have identified specific risk factors associated with suicidal thoughts and behavior among adolescents, helping to inform suicide prevention efforts. However, few studies have explored these risk factors in combination with each other, especially in large groups of adolescents. Now, the field of machine learning has opened up new opportunities for such research, which could ultimately improve prevention efforts.

To explore that opportunity, Weller and colleagues applied machine-learning analysis to data from a survey of high school students in Utah that is routinely conducted to monitor issues such as drug abuse and mental health. The data included responses to more than 300 questions each for more than 179,000 high school students who took the survey between 2011 and 2017, as well as demographic data from the U.S. census.

The researchers found that they could use the survey data to predict with 91 percent accuracy which individual adolescents' answers indicated suicidal thoughts or behavior. In doing so, they were able to identify which survey questions had the most predictive power; these included questions about digital media harassment or threats, at-school bullying, serious arguments at home, gender, alcohol use, feelings of safety at school, age, and attitudes about marijuana.
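The study's models and dataset are far larger, but as a minimal, hypothetical sketch, predicting a binary risk label from dichotomous questionnaire answers can be done with a simple logistic model; the three toy feature names below merely echo predictors listed above and are not the paper's variables:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic model to 0/1 survey answers by stochastic
    gradient descent. X: rows of 0/1 answers; y: 0/1 outcome labels."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for row, label in zip(X, y):
            z = b + sum(wi * xi for wi, xi in zip(w, row))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - label                   # gradient of log-loss
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, row)]
    return w, b

def accuracy(w, b, X, y):
    """Fraction of rows classified correctly at the 0.5 threshold."""
    correct = 0
    for row, label in zip(X, y):
        z = b + sum(wi * xi for wi, xi in zip(w, row))
        correct += (z > 0) == (label == 1)
    return correct / len(y)

# Toy rows: [harassed online, bullied at school, serious arguments at home]
X = [[1, 1, 1], [1, 0, 1], [0, 1, 1], [1, 1, 0],
     [0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 0, 0]]
y = [1, 1, 1, 1, 0, 0, 0, 0]  # toy risk labels
w, b = train_logistic(X, y)
print(accuracy(w, b, X, y))
```

The learned weights play the role of feature importances here; the paper instead uses modern ML interpretability methods on much richer models, and real accuracy must of course be measured on held-out respondents, not the training set.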

The new algorithm's accuracy is higher than that of previously developed predictive approaches, suggesting that machine learning could indeed improve understanding of adolescent suicidal thoughts and behavior, and could thereby help inform and refine preventive programs and policies.

Future research could expand the new findings by using data from other states, as well as data on actual suicide rates.

The authors add: "Our paper examines machine learning approaches applied to a large dataset of adolescent questionnaires, in order to predict suicidal thoughts and behaviors from their answers. We find strong predictive accuracy in identifying those at risk and analyze our model with recent advances in ML interpretability. We found that factors that strongly influence the model include bullying and harassment, as expected, but also aspects of their family life, such as being in a family with yelling and/or serious arguments. We hope that this study can provide insight to inform early prevention efforts."


Predicting suicidal thoughts and behavior among adolescents using the risk and protective factor framework: A large-scale machine learning approach

3-Nov-2021

The authors have declared that no competing interests exist.

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

Machine Learning Approach Takes MSK Researchers Beyond Known Method to Predict Immunotherapy Response – On Cancer – Memorial Sloan Kettering

How can oncologists better predict who will benefit from a widely used class of immunotherapy drugs called checkpoint inhibitors?

In the precision medicine era of cancer care, it's a question that has only increased in relevance. To answer it, Luc Morris, a physician-scientist and research laboratory head, together with several colleagues at Memorial Sloan Kettering Cancer Center, is looking beyond a known method to predict immunotherapy response.

Tumor mutational burden, or TMB, refers to the number of mutations a tumor has. High TMB means there are a lot of mutations; low TMB means there are not many. In the past five years, it's been well established that tumors with high TMB tend to respond better to checkpoint inhibitor therapy than tumors with low TMB. Because checkpoint inhibitors only work in a fraction of people with cancer, the ability to predict response, as TMB does, is crucial. While TMB can be used to guide treatment decisions for certain patients with cancer (for example, the checkpoint inhibitor pembrolizumab (Keytruda) is FDA-approved for all tumors with high TMB), it remains a crude predictor by itself, according to Dr. Morris.

"We know that TMB provides some value in predicting immunotherapy response, but we also know that it is not a perfect predictor. It has limited value in isolation," says Dr. Morris, a senior author on the study, which was published November 1, 2021, in Nature Biotechnology.

Oncologists will consider many factors when deciding on the best treatment for a patient with cancer; TMB is only one, he says. "For example, a melanoma tumor with low TMB may still have a very good chance of responding, just as a breast tumor with high TMB might have a lower chance of responding. We recognize that we need more predictive tools besides just TMB."

The study's co-first authors were Diego Chowell and Steve Yoo, research fellows in the lab of Timothy Chan at MSK, and Cristina Valero, a research fellow in the Morris Lab at MSK. Diego Chowell is currently an assistant professor at the Icahn School of Medicine at Mount Sinai. Nils Weinhold, an MSK cancer researcher and computational biologist, led the study as a co-senior author together with Dr. Chan and Dr. Morris. (Dr. Chan, whose lab first reported the importance of TMB in cancer immunotherapy in 2014, moved to the Cleveland Clinic in 2020.)

TMB's limited value in isolation was one of the reasons Dr. Morris and fellow investigators wanted to go beyond the biomarker in their latest analysis, he says. Another reason Dr. Morris undertook this research was to learn more about a blood marker called the neutrophil-to-lymphocyte ratio (NLR). Recent MSK research showed that NLR, especially when combined with TMB and other information such as patient blood markers, could improve the ability to predict tumor immunotherapy response.
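NLR itself is a simple quantity derived from a routine complete blood count: the absolute neutrophil count divided by the absolute lymphocyte count. A minimal sketch of the calculation (the counts below are made-up example values, not from the study):

```python
def neutrophil_lymphocyte_ratio(neutrophils, lymphocytes):
    """NLR: absolute neutrophil count divided by absolute lymphocyte
    count, both taken from a complete blood count (e.g. cells x10^9/L)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# Example: 4.2 neutrophils and 1.4 lymphocytes (x10^9/L) give NLR = 3.0.
print(round(neutrophil_lymphocyte_ratio(4.2, 1.4), 2))  # 3.0
```

Because both counts come from an inexpensive, widely available blood test, NLR is an attractive feature to combine with genomic markers like TMB in a predictive model.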

"That opened the door for us to say: Why don't we just gather all of the variables that either have been shown to have predictive value, or that we think might possibly have predictive value, and put them into a machine learning algorithm and see how well we can predict outcomes with a larger pool of information," Dr. Morris says.

The team used a model that integrated 16 genomic, molecular, demographic, and clinical features, including TMB and NLR. By taking a machine learning approach, the investigators would be able to determine which combination of variables had the highest predictive power.

"Using this large set of clinical and genomic data from patients treated at MSK, we trained a machine learning model that incorporated a number of different pieces of data," Dr. Morris explains.

The investigators analyzed the variables in a group of 1,479 patients who were treated with immunotherapy: PD-1/PD-L1 inhibitor immunotherapy, CTLA-4 inhibitor immunotherapy, or a combination of both. Most patients (1,070) did not respond. The group included patients with 16 different types of cancer, of which non-small cell lung cancer and melanoma were the most prevalent. Investigators analyzed patients' tumors using MSK-IMPACT™, a powerful tool that provides detailed information about a tumor's mutations.

"MSK-IMPACT is an incredible resource for us, both as oncologists treating patients and as scientists trying to understand cancer," says Dr. Morris. "For this study, we had a wealth of genomic data for these patients who were treated at MSK, to integrate with clinical data and blood test data."

The results reaffirmed TMB's relevance as a predictor of immunotherapy response; when the variables were studied individually, TMB was associated with the greatest effect of the 16 individual factors.

The next strongest predictors of response to immunotherapy were prior receipt of chemotherapy, albumin levels in the blood, and NLR.

Although each of these four measures could predict immunotherapy response, MSK researchers found that the 16-feature model predicted response more accurately than any one of the individual factors studied alone. What's more, the 16-feature model was also better able to forecast survival differences between patients who responded to immune checkpoint blockade and those who did not, further supporting the 16-feature approach over one involving fewer features. Cumulatively, the findings indicate that clinicians can do better than TMB alone by including other available pieces of information about the patient or the tumor genetics, Dr. Morris says.

Importantly, the model also takes into account TMB's varying degrees of predictive value across cancer types, Dr. Morris adds.

"Although the predictive value of TMB varies quite a bit across different cancer types, the [16-feature] model had good predictive ability across all cancer types," he says. This is important because TMB is less predictive for some malignancies than for others, and for some types of cancer it has no value at all. For example, the predictive value of elevated TMB is well established in melanoma and non-small cell lung cancer; in breast and prostate cancers, though, TMB has not been found to accurately predict immunotherapy response.

Broad use is part of Dr. Morris and his colleagues' aim: "This is a very good predictive biomarker based on genetic data from tumor sequencing, but our next research goal will be to try to determine how much value we can glean from a simpler model that maybe could be more widely implemented around the world."
