Archive for the ‘Machine Learning’ Category

SD Times Open-Source Project of the Week: KServe – SDTimes.com

KServe is a tool for serving machine learning models on Kubernetes. It encapsulates the complexity of tasks like autoscaling, networking, health checking, and server configuration. This allows users to provide their machine learning deployments with features like GPU Autoscaling, Scale to Zero, and Canary Rollouts.
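
KServe exposes these capabilities through a Kubernetes custom resource called InferenceService. As a hedged illustration, the sketch below uses KServe's Python SDK to deploy a scikit-learn model with scale-to-zero and a canary rollout; the model URI, names, and traffic split are hypothetical, and the v1beta1 field names should be checked against the SDK version you run.

```python
# Minimal sketch (not production config): deploying a scikit-learn model
# as a KServe InferenceService via the Python SDK.
from kubernetes import client
from kserve import (
    KServeClient,
    V1beta1InferenceService,
    V1beta1InferenceServiceSpec,
    V1beta1PredictorSpec,
    V1beta1SKLearnSpec,
    constants,
)

isvc = V1beta1InferenceService(
    api_version=constants.KSERVE_V1BETA1,
    kind=constants.KSERVE_KIND,
    metadata=client.V1ObjectMeta(name="sklearn-demo", namespace="default"),
    spec=V1beta1InferenceServiceSpec(
        predictor=V1beta1PredictorSpec(
            min_replicas=0,             # enables scale-to-zero when idle
            canary_traffic_percent=10,  # canary rollout: 10% of traffic to the new revision
            sklearn=V1beta1SKLearnSpec(
                storage_uri="gs://your-bucket/models/sklearn/model",  # hypothetical URI
            ),
        )
    ),
)

# From here, KServe takes over networking, health checking, and autoscaling.
KServeClient().create(isvc)
```

Setting min_replicas to 0 is what enables scale-to-zero: in serverless mode, KServe relies on Knative to cold-start the model server when traffic arrives.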

Created by IBM and Bloomberg's Data Science and Compute Infrastructure team, KServe was previously known as KFServing. The project was inspired by an IBM presentation on serving machine learning models in a serverless way using Knative. Bloomberg and IBM met at the Kubeflow Contributor Summit in 2019; at the time, Kubeflow didn't have a model serving component, so the two companies began working together on a new project to provide a model serving deployment solution.

The new project debuted at KubeCon + CloudNativeCon North America in 2019. It was later moved out of the Kubeflow Serving Working Group into an independent organization in order to grow the project and broaden its contributor base, and at that point it was renamed KServe.

KServe provides model explainability through integrations with Alibi, AI Explainability 360, and Captum. It also provides monitoring for models in production through integrations with Alibi-detect, AI Fairness 360, and Adversarial Robustness Toolbox (ART).

The project has been adopted by a number of organizations, including Nvidia, Cisco, Zillow, and more.


Your neighborhood matters: A machine-learning approach to the geospatial and social determinants of health in 9-1-1 activated chest pain – DocWire…


Res Nurs Health. 2021 Nov 24. doi: 10.1002/nur.22199. Online ahead of print.

ABSTRACT

Healthcare disparities in the initial management of patients with acute coronary syndrome (ACS) exist. Yet, the complexity of interactions between demographic, social, economic, and geospatial determinants of health hinders incorporating such predictors in existing risk stratification models. We sought to explore a machine-learning-based approach to study the complex interactions between the geospatial and social determinants of health to explain disparities in ACS likelihood in an urban community. This study identified consecutive patients transported by Pittsburgh emergency medical service for a chief complaint of chest pain or ACS-equivalent symptoms. We extracted demographics, clinical data, and location coordinates from electronic health records. Median income was based on US census data by zip code. A random forest (RF) classifier and a regularized logistic regression model were used to identify the most important predictors of ACS likelihood. Our final sample included 2400 patients (age 59 ± 17 years, 47% female, 41% Black, 15.8% adjudicated ACS). In our RF model (area under the receiver operating characteristic curve of 0.71 ± 0.03), age, prior revascularization, income, distance from hospital, and residential neighborhood were the most important predictors of ACS likelihood. In regularized regression (Akaike information criterion = 1843, Bayesian information criterion = 1912, χ² = 193, df = 10, p < 0.001), residential neighborhood remained a significant and independent predictor of ACS likelihood. Findings from our study suggest that residential neighborhood constitutes an upstream factor explaining the observed healthcare disparity in ACS risk prediction, independent of known demographic, social, and economic determinants of health, which can inform future work on ACS prevention, in-hospital care, and patient discharge.
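
To make the modeling approach concrete, here is a hedged Python sketch of the two models the abstract describes: a random forest for importance ranking and a regularized logistic regression for testing independent effects. The data frame below is synthetic, generated only so the code runs end to end; the variable names echo the predictors named in the abstract, but the values, preprocessing, and hyperparameters are illustrative, not the study's.

```python
# Hedged sketch of the abstract's two-model analysis, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2400  # sample size reported in the abstract
df = pd.DataFrame({
    "age": rng.normal(59, 17, n),                 # 59 +/- 17 years
    "prior_revascularization": rng.integers(0, 2, n),
    "median_income": rng.normal(45_000, 15_000, n),
    "distance_from_hospital": rng.exponential(5.0, n),
    "neighborhood_code": rng.integers(0, 40, n),
})
df["acs"] = (rng.random(n) < 0.158).astype(int)   # 15.8% adjudicated ACS

X, y = df.drop(columns="acs"), df["acs"]

# Random forest: captures nonlinear interactions and ranks predictors.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("RF AUC:", cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())
rf.fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")

# Regularized logistic regression: tests independent association.
lr = LogisticRegression(penalty="l2", C=1.0, max_iter=5000)
print("LR AUC:", cross_val_score(lr, X, y, cv=5, scoring="roc_auc").mean())
```

On this random synthetic data the AUC hovers near 0.5; the point is the analysis skeleton, not the numbers.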

PMID:34820853 | DOI:10.1002/nur.22199


Machine learning optimization of an electronic health record audit for heart failure in primary care – DocWire News


ESC Heart Fail. 2021 Nov 23. doi: 10.1002/ehf2.13724. Online ahead of print.

ABSTRACT

AIMS: The diagnosis of heart failure (HF) is an important problem in primary care. We previously demonstrated a 74% increase in registered HF diagnoses in primary care electronic health records (EHRs) following an extended audit procedure. What remains unclear is the accuracy of registered HF pre-audit and which EHR variables are most important in the extended audit strategy. This study aims to describe the diagnostic HF classification sequence at different stages, assess general practitioner (GP) HF misclassification, and test the predictive performance of an optimized audit.

METHODS AND RESULTS: This is a secondary analysis of the OSCAR-HF study, a prospective observational trial including 51 participating GPs. OSCAR used an extended audit based on typical HF risk factors, signs, symptoms, and medications in GPs' EHRs. This resulted in a list of possible HF patients, whom participating GPs had to classify as HF or non-HF. We compared registered HF diagnoses before and after GP assessment. For our analysis of audit performance, we used the GPs' assessment of HF as the primary outcome and audit queries as dichotomous predictor variables for a gradient boosted machine (GBM) decision tree algorithm and a logistic regression model. Of the 18 011 patients eligible for the audit intervention, 4678 (26.0%) were identified as possible HF patients and submitted for GP assessment in the audit stage. There were 310 patients with registered HF before GP assessment, of whom 146 (47.1%) were judged not to have HF by their GP (over-registration). There were 538 patients with registered HF after GP assessment, of whom 374 (69.5%) did not have registered HF before GP assessment (under-registration). The GBM and logistic regression models had comparable predictive performance (area under the curve of 0.70 [95% confidence interval 0.65-0.77] and 0.69 [95% confidence interval 0.64-0.75], respectively). This was not significantly affected by reducing the set of predictor variables to the 10 most important variables identified in the GBM model (free-text and coded cardiomyopathy, ischaemic heart disease and atrial fibrillation, digoxin, mineralocorticoid receptor antagonists, and combinations of renin-angiotensin system inhibitors and beta-blockers with diuretics). This optimized query set was enough to identify 86% (n = 461/538) of the GPs' self-assessed HF population with a 33% reduction (n = 1537/4678) in screening caseload.
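
As a rough illustration of the audit-optimization step, the sketch below trains a gradient boosted classifier on dichotomous audit-query predictors, extracts the 10 most important queries, and re-evaluates on that reduced set alongside a logistic regression comparator. The data are synthetic stand-ins, and scikit-learn's GradientBoostingClassifier is an assumption; the study's exact GBM implementation, queries, and tuning aren't specified here.

```python
# Hedged sketch: GBM on binary audit queries, then refit on the top 10.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_patients, n_queries = 4678, 30        # audit-stage caseload from the abstract
X = rng.integers(0, 2, size=(n_patients, n_queries)).astype(float)
logit = X[:, :5].sum(axis=1) - 2.0      # synthetic signal in 5 of the queries
y = (rng.random(n_patients) < 1 / (1 + np.exp(-logit))).astype(int)

gbm = GradientBoostingClassifier(random_state=0)
print("GBM AUC, all queries:",
      cross_val_score(gbm, X, y, cv=5, scoring="roc_auc").mean().round(2))

# Reduce to the 10 most important queries and re-check performance.
gbm.fit(X, y)
top10 = np.argsort(gbm.feature_importances_)[::-1][:10]
print("GBM AUC, top-10 queries:",
      cross_val_score(GradientBoostingClassifier(random_state=0),
                      X[:, top10], y, cv=5, scoring="roc_auc").mean().round(2))

# Logistic regression comparator, as in the study.
print("LR AUC:",
      cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean().round(2))
```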

CONCLUSIONS: Diagnostic coding of HF in primary care health records is inaccurate, with high degrees of both under-registration and over-registration. An optimized query set enabled identification of more than 80% of the GPs' self-assessed HF population.

PMID:34816632 | DOI:10.1002/ehf2.13724


High-performance, low-cost machine learning infrastructure is accelerating innovation in the cloud – MIT Technology Review

Artificial intelligence and machine learning (AI and ML) are key technologies that help organizations develop new ways to increase sales, reduce costs, streamline business processes, and understand their customers better. AWS helps customers accelerate their AI/ML adoption by delivering powerful compute, high-speed networking, and scalable high-performance storage options on demand for any machine learning project. This lowers the barrier to entry for organizations looking to adopt the cloud to scale their ML applications.

Developers and data scientists are pushing the boundaries of technology and increasingly adopting deep learning, a type of machine learning based on neural network algorithms. These deep learning models are larger and more sophisticated, driving up the cost of the underlying infrastructure needed to train and deploy them.

To enable customers to accelerate their AI/ML transformation, AWS is building high-performance, low-cost machine learning chips. AWS Inferentia is the first machine learning chip built from the ground up by AWS for the lowest-cost machine learning inference in the cloud. In fact, Amazon EC2 Inf1 instances powered by Inferentia deliver 2.3x higher performance and up to 70% lower cost for machine learning inference than current-generation GPU-based EC2 instances. AWS Trainium, AWS's second machine learning chip, is purpose-built for training deep learning models and will be available in late 2021.

Customers across industries have deployed their ML applications in production on Inferentia and seen significant performance improvements and cost savings. For example, Airbnb's customer support platform enables intelligent, scalable, and exceptional service experiences for its community of millions of hosts and guests across the globe. Airbnb used Inferentia-based EC2 Inf1 instances to deploy the natural language processing (NLP) models that support its chatbots, which led to a 2x improvement in performance out of the box over GPU-based instances.

With these innovations in silicon, AWS is enabling customers to train and execute their deep learning models in production easily with high performance and throughput at significantly lower costs.

Machine learning is an iterative process that requires teams to build, train, and deploy applications quickly, as well as train, retrain, and experiment frequently to increase the prediction accuracy of the models. When deploying trained models into their business applications, organizations need to also scale their applications to serve new users across the globe. They need to be able to serve multiple requests coming in at the same time with near real-time latency to ensure a superior user experience.

Emerging use cases such as object detection, natural language processing (NLP), image classification, conversational AI, and time series analysis rely on deep learning technology. Deep learning models are increasing exponentially in size and complexity, going from millions of parameters to billions in a matter of a couple of years.

Training and deploying these complex and sophisticated models translates to significant infrastructure costs. Costs can quickly snowball to become prohibitively large as organizations scale their applications to deliver near real-time experiences to their users and customers.

This is where cloud-based machine learning infrastructure services can help. The cloud provides on-demand access to compute, high-performance networking, and large data storage, seamlessly combined with ML operations and higher level AI services, to enable organizations to get started immediately and scale their AI/ML initiatives.

AWS Inferentia and AWS Trainium aim to democratize machine learning and make it accessible to developers irrespective of experience and organization size. Inferentia's design is optimized for high performance, throughput, and low latency, which makes it ideal for deploying ML inference at scale.

Each AWS Inferentia chip contains four NeuronCores that implement a high-performance systolic-array matrix multiply engine, which massively speeds up typical deep learning operations such as convolution and transformers. NeuronCores are also equipped with a large on-chip cache, which cuts down on external memory accesses, reducing latency and increasing throughput.
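
The performance idea behind pairing a matrix engine with a large on-chip cache can be illustrated in ordinary code: by working on blocks that fit in fast local memory, a matrix multiply touches slow external memory far less often. The NumPy sketch below is purely conceptual and says nothing about NeuronCore internals.

```python
# Conceptual illustration only: a blocked (tiled) matrix multiply.
# Hardware like a systolic array streams such tiles through local memory,
# reusing each tile many times before writing back; not NeuronCore code.
import numpy as np

def blocked_matmul(a: np.ndarray, b: np.ndarray, tile: int = 64) -> np.ndarray:
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((n, m), dtype=a.dtype)
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # Each small tile fits in fast memory and is reused across
                # the inner products before the result is written back.
                c[i:i+tile, j:j+tile] += (
                    a[i:i+tile, p:p+tile] @ b[p:p+tile, j:j+tile]
                )
    return c

a, b = np.random.rand(256, 256), np.random.rand(256, 256)
assert np.allclose(blocked_matmul(a, b), a @ b)
```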

AWS Neuron, the software development kit for Inferentia, natively supports leading ML frameworks like TensorFlow and PyTorch. Developers can continue using the same frameworks and lifecycle development tools they know and love. Many trained models can be compiled and deployed on Inferentia by changing just a single line of code, with no additional application code changes.
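
For PyTorch, that single-line change is the Neuron trace call. The sketch below follows the pattern from AWS's torch-neuron documentation, with a torchvision ResNet-50 standing in as an example model; it assumes the torch-neuron package, which compiles and runs only in supported environments such as Inf1 instances.

```python
# Sketch: compiling a trained PyTorch model for Inferentia with AWS Neuron.
# Requires the torch-neuron package (and an Inf1 instance to run inference).
import torch
import torch_neuron  # registers the torch.neuron namespace
from torchvision import models

model = models.resnet50(pretrained=True).eval()   # newer torchvision uses weights=
example = torch.rand(1, 3, 224, 224)

# The one-line change: trace the model for Neuron instead of CPU/GPU.
model_neuron = torch.neuron.trace(model, example_inputs=[example])
model_neuron.save("resnet50_neuron.pt")

# Deployment code stays the same: load and call it like any TorchScript model.
loaded = torch.jit.load("resnet50_neuron.pt")
output = loaded(example)
```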

The result is a high-performance inference deployment that scales easily while keeping costs under control.

Sprinklr, a software-as-a-service company, has an AI-driven unified customer experience management platform that enables companies to gather and translate real-time customer feedback across multiple channels into actionable insights. This results in proactive issue resolution, enhanced product development, improved content marketing, and better customer service. Sprinklr used Inferentia to deploy its NLP and some of its computer vision models and saw significant performance improvements.

Several Amazon services also deploy their machine learning models on Inferentia.

Amazon Prime Video uses computer vision ML models to analyze video quality of live events to ensure an optimal viewer experience for Prime Video members. It deployed its image classification ML models on EC2 Inf1 instances and saw a 4x improvement in performance and up to a 40% savings in cost as compared to GPU-based instances.

Another example is Amazon Alexa's AI- and ML-based intelligence, powered by Amazon Web Services, which is available on more than 100 million devices today. Alexa's promise to customers is that it is always becoming smarter, more conversational, more proactive, and even more delightful. Delivering on that promise requires continuous improvements in response times and machine learning infrastructure costs. By deploying Alexa's text-to-speech ML models on Inf1 instances, Amazon lowered inference latency by 25% and cost per inference by 30%, enhancing the service experience for the tens of millions of customers who use Alexa each month.

As companies race to future-proof their businesses by offering the best digital products and services, no organization can afford to fall behind in deploying sophisticated machine learning models to innovate on its customer experience. Over the past few years, there has been an enormous increase in the applicability of machine learning for a variety of use cases, from personalization and churn prediction to fraud detection and supply chain forecasting.

Luckily, machine learning infrastructure in the cloud is unleashing new capabilities that were previously not possible, making it far more accessible to non-expert practitioners. That's why AWS customers are already using Inferentia-powered Amazon EC2 Inf1 instances to provide the intelligence behind their recommendation engines and chatbots and to get actionable insights from customer feedback.

With AWS cloud-based machine learning infrastructure options suitable for various skill levels, it's clear that any organization can accelerate innovation and embrace the entire machine learning lifecycle at scale. As machine learning continues to become more pervasive, organizations are now able to fundamentally transform the customer experience, and the way they do business, with cost-effective, high-performance, cloud-based machine learning infrastructure.

Learn more about how AWS's machine learning platform can help your company innovate here.

This content was produced by AWS. It was not written by MIT Technology Review's editorial staff.


Researchers Present Global Effort to Develop Machine Learning Tools for Automated Assessment of Radiographic Damage in Rheumatoid Arthritis -…

NEW YORK, Nov. 6, 2021 /PRNewswire/ -- Crowdsourcing has become an increasingly popular way to develop machine learning algorithms to address many clinical problems in a variety of illnesses. Today at the American College of Rheumatology (ACR) annual meeting, a multicenter team led by an investigator from Hospital for Special Surgery (HSS) presented the results from the RA2-DREAM Challenge, a crowdsourced effort focused on developing better methods to quantify joint damage in people with rheumatoid arthritis (RA).

Damage in the joints of people with RA is currently measured by visual inspection and detailed scoring of radiographic images of small joints in the hands, wrists, and feet. This includes both joint space narrowing (which indicates cartilage loss) and bone erosions (which indicate damage from invasion of the inflamed joint lining). The scoring system requires specially trained experts and is time-consuming and expensive. Finding an automated way to measure joint damage is important both for clinical research and for patient care, according to the study's senior author, S. Louis Bridges, Jr., MD, PhD, physician-in-chief and chair of the Department of Medicine at HSS.

"If a machine-learning approach could provide a quick, accurate quantitative score estimating the degree of joint damage in hands and feet, it would greatly help clinical research," he said. "For example, researchers could analyze data from electronic health records and from genetic and other research assays to find biomarkers associated with progressive damage. Having to score all the images by visual inspection ourselves would be tedious, and outsourcing it is cost prohibitive."

"This approach could also aid rheumatologists by quickly assessing whether there is progression of damage over time, which would prompt a change in treatment to prevent further damage," he added. "This is really important in geographic areas where expert musculoskeletal radiologists are not available."

For the challenge, Dr. Bridges and his collaborators partnered with Sage Bionetworks, a nonprofit organization that helps investigators create DREAM (Dialogue on Reverse Engineering Assessment and Methods) Challenges. These competitions are focused on the development of innovative artificial intelligence-based tools in the life sciences. The investigators sent out a call for submissions, with grant money providing prizes for the winning teams. Competitors came from a variety of fields and included computer scientists, computational biologists, and physician-scientists; none were radiologists with expertise or training in reading radiographic images.

For the first part of the challenge, one set of images was provided to the teams, along with known scores that had been visually generated. These were used to train the algorithms. Additional sets of images were then provided so the competitors could test and refine the tools they had developed. In the final round, a third set of images was given without scores, and competitors estimated the amount of joint space narrowing and erosions. Submissions were judged according to which most closely replicated the gold-standard visually generated scores. There were 26 teams that submitted algorithms and 16 final submissions. In total, competitors were given 674 sets of images from 562 different RA patients, all of whom had participated in prior National Institutes of Health-funded research studies led by Dr. Bridges. In the end, four teams were named top performers.
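
The article doesn't name the exact judging metric, but "most closely replicated the gold-standard scores" amounts to measuring agreement between predicted and expert scores. As a hedged illustration, the snippet below computes two common agreement measures, RMSE and Pearson correlation, on hypothetical score vectors.

```python
# Illustrative only: measuring how closely algorithm scores replicate
# gold-standard visual scores. The challenge's actual metric is not stated
# in the article; RMSE and Pearson r are common choices for this task.
import numpy as np

def rmse(pred: np.ndarray, gold: np.ndarray) -> float:
    return float(np.sqrt(np.mean((pred - gold) ** 2)))

def pearson_r(pred: np.ndarray, gold: np.ndarray) -> float:
    return float(np.corrcoef(pred, gold)[0, 1])

gold = np.array([0.0, 1.5, 3.0, 0.5, 2.0])  # hypothetical expert damage scores
pred = np.array([0.2, 1.2, 2.7, 0.8, 2.4])  # hypothetical algorithm output
print(f"RMSE={rmse(pred, gold):.2f}, r={pearson_r(pred, gold):.2f}")
```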

For the DREAM Challenge organizers, it was important that any scoring system developed through the project be freely available rather than proprietary, so that it could be used by investigators and clinicians at no cost. "Part of the appeal of this collaboration was that everything is in the public domain," Dr. Bridges said.

Dr. Bridges explained that additional research and development of computational methods are needed before the tools can be broadly used, but the current research demonstrates that this type of approach is feasible. "We still need to refine the algorithms, but we're much closer to our goal than we were before the Challenge," he concluded.

About HSS

HSS is the world's leading academic medical center focused on musculoskeletal health. At its core is Hospital for Special Surgery, nationally ranked No. 1 in orthopedics (for the 12th consecutive year), No. 4 in rheumatology by U.S. News & World Report (2021-2022), and the best pediatric orthopedic hospital in NY, NJ and CT by U.S. News & World Report "Best Children's Hospitals" list (2021-2022). HSS is ranked world #1 in orthopedics by Newsweek (2021-2022). Founded in 1863, the Hospital has the lowest complication and readmission rates in the nation for orthopedics, and among the lowest infection rates. HSS was the first in New York State to receive Magnet Recognition for Excellence in Nursing Service from the American Nurses Credentialing Center five consecutive times. The global standard total knee replacement was developed at HSS in 1969. An affiliate of Weill Cornell Medical College, HSS has a main campus in New York City and facilities in New Jersey, Connecticut and in the Long Island and Westchester County regions of New York State, as well as in Florida. In addition to patient care, HSS leads the field in research, innovation and education. The HSS Research Institute comprises 20 laboratories and 300 staff members focused on leading the advancement of musculoskeletal health through prevention of degeneration, tissue repair and tissue regeneration. The HSS Global Innovation Institute was formed in 2016 to realize the potential of new drugs, therapeutics and devices. The HSS Education Institute is a trusted leader in advancing musculoskeletal knowledge and research for physicians, nurses, allied health professionals, academic trainees, and consumers in more than 130 countries. The institution is collaborating with medical centers and other organizations to advance the quality and value of musculoskeletal care and to make world-class HSS care more widely accessible nationally and internationally. http://www.hss.edu.

SOURCE Hospital for Special Surgery

http://www.hss.edu
