Archive for the ‘Machine Learning’ Category

Apixio's New Apicare AuthAdvisor Leverages Machine Learning, Predictive Decision-Making to Automate Approvals & Reduce Manual Workload by More Than 50%

SAN MATEO, Calif., March 1, 2022 /PRNewswire/ -- Apixio, Inc., the healthcare analytics company, today announced the launch of its new Apicare AuthAdvisor, which uses machine learning and predictive analytics to automate prior authorization decisions for payers. By leveraging historical decision data, AuthAdvisor automates approval for payers, medical benefits managers, and other vendors to deliver decisions within seconds rather than days, and reduces manual reviews by over 50%.

According to the Council for Affordable Quality Healthcare (CAQH), "the cost to complete a prior authorization remains the single highest cost for the healthcare industry at $13.40 per manual transaction and $7.19 per partially electronic web portal transaction." Not only is it costly, but it is also an administrative burden with manual reviews sometimes taking days or weeks, which delays patient treatments, creates obstacles to care, and potentially negatively impacts clinical outcomes.

Apixio's AuthAdvisor solves these problems by automatically approving diagnostics and procedures based on historical data and decisions made by the provider and payer.

"This is a new way to use data science to accelerate one of the most burdensome aspects of healthcare delivery," said Apixio CEO Sachin Patel. "AuthAdvisor relies on the accuracy of an organization's past decisions to process approvals, rather than relying on rules-based approaches that are tedious to maintain and often result in a high number of manual reviews. With AuthAdvisor, approvals are delivered at the speed and scale that today's high-performance healthcare environments demand."

With Apixio's Apicare AuthAdvisor solution, organizations can:

"The AuthAdvisor system is transparent and customizable, giving payers and benefits managers the visibility and flexibility they need to feel confident in the decisions being made," Patel said. "The latest addition to our AI platform, this technology has the potential to not only save tremendous time and money, but also greatly improve care delivery and member satisfaction for millions of Americans."

AuthAdvisor is already active in 16 states, automating authorization requests for over 4,000 different procedures. Apixio will be showcasing its value-based care platform, including Apicare AuthAdvisor, at both RISE National 2022 on March 7-9 in Nashville and HIMSS 2022 at booth #1579 on March 14-18 in Orlando.

To learn more about the Apicare AuthAdvisor solution, visit http://www.apixio.com/apicare-authadvisor/.

About Apixio
Apixio is advancing healthcare with data-driven intelligence and analytics. Our Artificial Intelligence platform gives organizations across the healthcare spectrum the power to mine clinical information at scale, creating novel insights that will change the way healthcare is measured, care is delivered, and discoveries are made. Learn more at www.apixio.com.

MEDIA CONTACT: Kerri Taranto, Next PR, [emailprotected]

SOURCE Apixio Inc.


How Telecom Companies Can Leverage Machine Learning To Boost Their Profits – Forbes

The number of smartphone users across the world has skyrocketed over the last decade and promises to keep climbing. Additionally, most business functions can now be executed on mobile devices. Yet despite the mobile surge, telecom operators around the world are still not especially profitable, with average net profit margins hovering around the 17% mark. The main reasons for these middling margins are the large number of market rivals vying for the same customer base and the high overhead expenses associated with the sector. Communication Service Providers (CSPs) need to become more data-driven to reduce such costs and, in turn, improve their profit margins. Increasing the involvement of AI in telecom operations enables telecom companies to make this switch from rigid, infrastructure-driven operations to a data-driven approach seamlessly.

The inclusion of AI in telecom functional areas positively impacts the bottom line of CSPs in several ways. Businesses can use specific capabilities, avatars or applications of machine learning and AI for this purpose.

Mobile networks are one of the prime components of the ever-expanding internet community. As stated earlier, a large number of internet users and business operations have gone mobile in recent times. Additionally, the emergence of 5G and edge applications, and the impending arrival of the metaverse, will only increase the need for high-performance telecom networks. Standard automation technology and personnel are likely to be overwhelmed by the relentless pressure of high-speed network connectivity and mobile calls.

The use of AI in telecom operations can transform an underperforming mobile network into a self-optimizing network (SON). Telecom businesses can monitor network equipment and anticipate equipment failure with AI-powered predictive analysis. Additionally, AI-based tools allow CSPs to keep network quality consistently high by monitoring key performance indicators such as traffic on a zone-to-zone basis. Apart from monitoring the performance of equipment, machine learning algorithms can also continually run pattern recognition while scanning network data to detect anomalies. Then, AI-based systems can either perform remedial actions or notify the network administrator and engineers in the region where the anomaly was detected. This enables telecom companies to fix network issues at source before they adversely impact customers.
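The KPI-monitoring idea above can be sketched with a toy anomaly detector: flag any traffic reading that deviates sharply from its recent history. This is a minimal illustration, not any vendor's actual system; the window size, threshold, and traffic values are invented for the example.

```python
# Hypothetical sketch: flagging traffic anomalies on a per-zone KPI series
# using a simple rolling z-score. Window and threshold are illustrative.
from statistics import mean, stdev

def flag_anomalies(series, window=5, threshold=3.0):
    """Return indices whose value deviates more than `threshold` standard
    deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady zone traffic with one sudden spike (e.g., an overloaded cell)
traffic = [100, 102, 99, 101, 100, 98, 101, 400, 100, 99]
print(flag_anomalies(traffic))  # [7] -- the spike is flagged
```

A real self-optimizing network would learn seasonality and correlate many KPIs at once, but the core pattern-recognition step reduces to the same idea: model normal behavior, then alert on deviations.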

Network security is another area of focus for telecom operators. Of late, the rising security issues in telecom networks have been a point of concern for CSPs globally. AI-based data security tools allow telecom companies to constantly monitor the cyber health of their networks. Machine learning algorithms perform analysis of global data networks and past security incidents to make key predictions of existing network vulnerabilities. In other words, AI-based network security tools enable telecom businesses to pre-empt future security complications and proactively take preventive measures to deal with them.

Ultimately, AI improves telecom networks in multiple ways. By improving the performance, anomaly detection and security of CSP networks, machine learning algorithms can enhance the user experience for telecom company clients. This will grow such companies' customer bases in the long term and, by extension, increase profits.


Europol classifies the telecom sector as particularly vulnerable to fraud. Telecom fraud involves criminals abusing telecommunications systems such as mobile phones and tablets to siphon money from CSPs. According to a recent study, telecom fraud accounted for losses of US$40.1 billion, approximately 1.88% of the total revenue of telecom operators. One of the most common types of telecom fraud is International Revenue Sharing Fraud (IRSF). In IRSF, criminals link up with International Premium Rate Number (IPRN) providers to illegally extract money from telecom companies by using bots to make an absurdly high number of long-duration international calls. Such calls are difficult to trace. Additionally, telecom companies cannot bill clients for such premium calls because the connections are fraudulent, so operators end up bearing the losses while the IPRN providers and criminals share the spoils among themselves. Apart from IRSF, vishing (a portmanteau of "voice" and "phishing") is a way in which malicious entities dupe clients of telecom companies to extract money and data. The involvement of AI in telecom operations enables CSPs to detect and eliminate these kinds of fraud.

Machine learning algorithms assist telecom network engineers with detecting instances of illegal access, fake caller profiles and cloning. To achieve this, the algorithms perform behavioral monitoring of the global telecom networks of CSPs, closely monitoring the traffic along those networks. The pattern recognition capabilities of AI algorithms come into play again as they enable network administrators to identify contentious scenarios such as several calls being made from a fraudulent number, or blank calls (a general indicator of vishing) being made repeatedly from questionable sources. One of the more prominent examples of telecom companies using data analytics for fraud detection and prevention is Vodafone's partnership with Argyle Data. The data science firm analyzes the network traffic of the telecom giant for intelligent, data-driven fraud management.
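The blank-call pattern described above lends itself to a minimal rule-based sketch: count very short calls per caller and flag heavy repeat offenders. Real fraud systems, including the Vodafone/Argyle Data work cited, use far richer behavioral models; the thresholds and phone numbers here are invented for illustration.

```python
# Hypothetical sketch: flag caller numbers that repeatedly place very short
# ("blank") calls, a pattern the article associates with vishing.
from collections import Counter

def flag_suspect_callers(call_log, max_duration=2, min_count=3):
    """call_log: list of (caller, duration_seconds) tuples.
    Return callers with at least `min_count` calls of <= `max_duration` s."""
    short_calls = Counter(
        caller for caller, duration in call_log if duration <= max_duration
    )
    return sorted(c for c, n in short_calls.items() if n >= min_count)

log = [("+4400001", 1), ("+4400001", 0), ("+4400001", 2),
       ("+15550100", 180), ("+4400001", 1), ("+15550100", 95)]
print(flag_suspect_callers(log))  # ['+4400001']
```

In practice a learned model replaces the hand-set thresholds, which is exactly the advantage the article attributes to ML: the suspicious patterns are inferred from traffic rather than hard-coded.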

Detecting and eliminating telecom fraud are major steps towards increasing the profit margins of CSPs. As you can see, the role of AI in telecom operations is significant for achieving this objective.

To reliably serve millions of clients, telecom companies need to have a massive workforce that can handle their backend operations on a daily basis efficiently. Dealing with such a large volume of customers creates several opportunities for human error.

Telecom companies can employ cognitive computing (a field that combines Natural Language Processing (NLP), Robotic Process Automation (RPA) and rule engines) to automate rule-based processes such as sending marketing emails, autocompleting e-forms, recording data and carrying out other tasks that replicate human actions. The use of AI in telecom operations brings greater accuracy to back-office operations. In a study conducted by Deloitte, several executives in the telecom, media and tech industry said that the use of cognitive computing for backend operations brought substantial and transformative benefits to their respective businesses.

Customer sentiment analysis involves a set of data classification and analysis tasks carried out to understand the pulse of customers. This allows telecom companies to evaluate whether their clients like or dislike their services based on raw emotions. Marketers can use NLP and AI to sense the "mood" of their customers from texts, emails or social media posts bearing a telecom company's name. Aspect-based sentiment analytics highlight the exact service areas in which customers have problems. For example, if a customer is upset about the number of calls getting dropped regularly and writes a long and incoherent email to a telco's customer service team about it, the machine learning algorithms employed for sentiment analysis can still autonomously ascertain their mood (angry) and the problem (the call drop rate).
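As a toy illustration of the aspect-based idea, the sketch below tags a message's mood and the offending service area with keyword rules. A production system would use a trained NLP model; the keyword lists and aspect names here are made up.

```python
# Hypothetical sketch of aspect-based sentiment tagging with keyword rules.
NEGATIVE = {"upset", "angry", "terrible", "frustrated", "dropped"}
ASPECTS = {
    "call drops": {"drop", "drops", "dropped", "disconnect"},
    "billing": {"bill", "charge", "overcharged"},
}

def analyze(text):
    """Return (mood, matched aspects) for a customer message."""
    words = set(text.lower().replace(",", " ").replace(".", " ").split())
    mood = "negative" if words & NEGATIVE else "neutral/positive"
    aspects = [name for name, kws in ASPECTS.items() if words & kws]
    return mood, aspects

email = "I am upset because my calls keep getting dropped every day."
print(analyze(email))  # ('negative', ['call drops'])
```

The point of the example is the output shape, not the method: even from rambling text, the system surfaces a mood label and the specific aspect ("call drops") a support team can act on.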

Apart from sentiment analysis, telecom businesses can hugely benefit from the growing emergence of chatbots and virtual assistants. Service requests for network set-ups, installation, troubleshooting and maintenance-based issues can be resolved through such machine learning-based tools and applications. Virtual assistants enable CRM teams in telecom companies to manage a large number of customers with ease. In this way, CSPs can manage customer service and sentiment analysis successfully.

Across the board, users generally cite the quality of their telecom customer service to be below satisfactory. Telecom users are constantly infuriated by long waiting times to get to a service executive, unanswered complaint emails and poor grievance handling by CSPs. Poor CRM does not bode well for telecom companies as it maligns their reputation and diminishes shareholder confidence. By implementing machine learning for CRM, telecom companies can address such issues efficiently.

Like businesses in any other sector, telecom companies need to boost their profits for long-term survival and diversification. As stated at the beginning, multiple factors thwart their chances of profit generation. Going down the data science route is one of the novel ways to overcome such challenges. By involving AI in telecom operations, CSPs can manage their data wisely and channel their resources towards maximizing revenues.

Despite the positives associated with AI, only a limited percentage of telecom businesses have incorporated the technology for profit maximization. Gradually, one can expect that percentage to rise.


Machine learning helps improve the flash graphene process – Graphene-Info

Scientists at Rice University are using machine-learning techniques to fine-tune the process of synthesizing graphene from waste through flash Joule heating. The researchers describe in their new work how machine-learning models that adapt to variables and show them how to optimize procedures are helping them push the technique forward.

Machine learning is fine-tuning Rice University's flash Joule heating method for making graphene from a variety of carbon sources, including waste materials. Image credit: Jacob Beckham, via Phys.org

The process, discovered by the Rice lab of chemist James Tour, has expanded beyond making graphene from various carbon sources to extracting other materials like metals from urban waste, with the promise of more environmentally friendly recycling to come. The technique is the same: blasting a jolt of high energy through the source material to eliminate all but the desired product. However, the details for flashing each feedstock are different.

"Machine-learning algorithms will be critical to making the flash process rapid and scalable without negatively affecting the graphene product's properties," Prof. Tour said.

"In the coming years, the flash parameters can vary depending on the feedstock, whether it's petroleum-based, coal, plastic, household waste or anything else," he said. "Depending on the type of graphene we want (small flake, large flake, high turbostratic, level of purity), the machine can discern by itself what parameters to change."

Because flashing makes graphene in hundreds of milliseconds, it's difficult to follow the details of the chemical process. So Tour and company took a clue from materials scientists who have worked machine learning into their everyday process of discovery.

"It turned out that machine learning and flash Joule heating had really good synergy," said Rice graduate student and lead author Jacob Beckham. "Flash Joule heating is a really powerful technique, but it's difficult to control some of the variables involved, like the rate of current discharge during a reaction. And that's where machine learning can really shine. It's a great tool for finding relationships between multiple variables, even when it's impossible to do a complete search of the parameter space." "That synergy made it possible to synthesize graphene from scrap material based entirely on the models' understanding of the Joule heating process," he explained. "All we had to do was carry out the reaction, which can eventually be automated."

The lab used its custom optimization model to improve graphene crystallization from four starting materials (carbon black, plastic pyrolysis ash, pyrolyzed rubber tires and coke) over 173 trials, using Raman spectroscopy to characterize the starting materials and graphene products.

The researchers then fed more than 20,000 spectroscopy results to the model and asked it to predict which starting materials would provide the best yield of graphene. The model also took the effects of charge density, sample mass and material type into account in its calculations.

Last month, the Rice team developed an acoustic processing method to analyze LIG synthesis in real time.


Competitive programming with AlphaCode – DeepMind

Solving novel problems and setting a new milestone in competitive programming.

Creating solutions to unforeseen problems is second nature in human intelligence, a result of critical thinking informed by experience. The machine learning community has made tremendous progress in generating and understanding textual data, but advances in problem solving remain limited to relatively simple maths and programming problems, or else retrieving and copying existing solutions. As part of DeepMind's mission to solve intelligence, we created a system called AlphaCode that writes computer programs at a competitive level. AlphaCode achieved an estimated rank within the top 54% of participants in programming competitions by solving new problems that require a combination of critical thinking, logic, algorithms, coding, and natural language understanding.

In our preprint, we detail AlphaCode, which uses transformer-based language models to generate code at an unprecedented scale, and then smartly filters to a small set of promising programs.
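The generate-then-filter step can be sketched in miniature: run each candidate program against a problem's example test and keep only the ones that pass. The candidate strings and the `solve()` convention below are stand-ins, not AlphaCode's actual interface; the real system samples a vastly larger pool of candidates and further clusters the survivors before choosing submissions.

```python
# Hypothetical sketch of generate-then-filter: execute candidate programs
# against an example test and discard those that fail or crash.
def passes_example(code, example_input, expected_output):
    namespace = {}
    try:
        exec(code, namespace)  # candidate is expected to define solve()
        return namespace["solve"](example_input) == expected_output
    except Exception:
        return False  # syntax errors, crashes, wrong interface, etc.

candidates = [
    "def solve(x): return x + 1",  # wrong answer on the example
    "def solve(x): return x * 2",  # passes the example
    "def solve(x): return x * x",  # wrong answer on the example
]
# Keep only candidates consistent with the example test: solve(3) == 6
survivors = [c for c in candidates if passes_example(c, 3, 6)]
print(len(survivors))  # 1
```

Filtering on example tests is what makes large-scale sampling viable: most generated programs are wrong, but the tests cheaply reject them, leaving a small set of promising programs.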

We validated our performance using competitions hosted on Codeforces, a popular platform which hosts regular competitions that attract tens of thousands of participants from around the world who come to test their coding skills. We selected for evaluation 10 recent contests, each newer than our training data. AlphaCode placed at about the level of the median competitor, marking the first time an AI code generation system has reached a competitive level of performance in programming competitions.

To help others build on our results, we're releasing our dataset of competitive programming problems and solutions on GitHub, including extensive tests to ensure the programs that pass them are correct, a critical feature current datasets lack. We hope this benchmark will lead to further innovations in problem solving and code generation.


Using Deep Learning to Find Genetic Causes of Mental Health Disorders in an Understudied Population – Neuroscience News

Summary: A new deep learning algorithm that looks for the burden of genomic variants is 70% accurate at identifying specific mental health disorders within the African-American community.

Source: CHOP

Minority populations have been historically under-represented in existing studies addressing how genetic variations may contribute to a variety of disorders. A new study from researchers at Children's Hospital of Philadelphia (CHOP) shows that a deep learning model has promising accuracy when helping to diagnose a variety of common mental health disorders in African American patients.

This tool could help distinguish between disorders as well as identify multiple disorders, fostering early intervention with better precision and allowing patients to receive a more personalized approach to their condition.

The study was recently published in the journal Molecular Psychiatry.

Properly diagnosing mental disorders can be challenging, especially for young toddlers who are unable to complete questionnaires or rating scales. This challenge has been particularly acute in understudied minority populations. Past genomic research has found several genomic signals for a variety of mental disorders, with some serving as potential therapeutic drug targets.

Deep learning algorithms have also been used to successfully diagnose complex diseases like attention deficit hyperactivity disorder (ADHD). However, these tools have rarely been applied in large populations of African American patients.

In a unique study, the researchers generated whole genome sequencing data from blood samples of 4,179 African American patients, including 1,384 patients who had been diagnosed with at least one mental disorder. The study focused on eight common mental disorders: ADHD, depression, anxiety, autism spectrum disorder, intellectual disabilities, speech/language disorder, developmental delays and oppositional defiant disorder (ODD).

The long-term goal of this work is to learn more about specific risks for developing certain diseases in African American populations and how to potentially improve health outcomes by focusing on more personalized approaches to treatment.

"Most studies focus only on one disease, and minority populations have been very under-represented in existing studies that utilize machine learning to study mental disorders," said senior author Hakon Hakonarson, MD, Ph.D., Director of the Center for Applied Genomics at CHOP.

"We wanted to test this deep learning model in an African American population to see whether it could accurately differentiate mental disorder patients from healthy controls, and whether we could correctly label the types of disorders, especially in patients with multiple disorders."

The deep learning algorithm looked for the burden of genomic variants in coding and non-coding regions of the genome. The model demonstrated over 70% accuracy in distinguishing patients with mental disorders from the control group. The deep learning algorithm was equally effective in diagnosing patients with multiple disorders, with the model providing exact diagnostic matches in approximately 10% of cases.
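The "burden of genomic variants" feature the study describes can be illustrated with a toy vectorizer: count observed variants per genomic region to build the input vector a classifier would consume. The region names and variant lists below are invented for illustration; the study's actual feature pipeline is described in the paper.

```python
# Hypothetical sketch: per-region variant-burden counts as a feature vector.
def burden_vector(variants, regions):
    """variants: list of region labels, one per observed variant.
    regions: ordered list of genomic regions used as features.
    Returns a count vector with one entry per region; variants falling
    outside the tracked regions are ignored."""
    counts = {r: 0 for r in regions}
    for region in variants:
        if region in counts:
            counts[region] += 1
    return [counts[r] for r in regions]

regions = ["exon_1", "intron_1", "intergenic_A"]  # invented region names
patient_variants = ["exon_1", "intergenic_A", "intergenic_A", "unmapped"]
print(burden_vector(patient_variants, regions))  # [1, 0, 2]
```

Each patient becomes one such vector over coding and non-coding regions; the deep learning model then learns which regions' burdens separate cases from controls.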

The model also successfully identified multiple genomic regions that were highly enriched for mental disorders, meaning they were more likely to be involved in the development of these medical disorders. The biological pathways involved included ones associated with immune responses, antigen and nucleic acid binding, a chemokine signaling pathway, and guanine nucleotide-binding protein receptors.

However, the researchers also found that variants in regions that did not code for proteins seemed to be implicated in these disorders at higher frequency, which means they may serve as alternative markers.

"By identifying genetic variants and associated pathways, future research aimed at characterizing their function may provide mechanistic insight as to how these disorders develop," Hakonarson said.

Author: Press Office
Source: CHOP
Contact: Press Office, CHOP
Image: The image is in the public domain

Original Research: Open access. "Application of deep learning algorithm on whole genome sequencing data uncovers structural variants associated with multiple mental disorders in African American patients" by Yichuan Liu et al., Molecular Psychiatry.

Abstract

Application of deep learning algorithm on whole genome sequencing data uncovers structural variants associated with multiple mental disorders in African American patients

Mental disorders present a global health concern, while the diagnosis of mental disorders can be challenging. The diagnosis is even harder for patients who have more than one type of mental disorder, especially for young toddlers who are not able to complete questionnaires or standardized rating scales for diagnosis. In the past decade, multiple genomic association signals have been reported for mental disorders, some of which present attractive drug targets.

Concurrently, machine learning algorithms, especially deep learning algorithms, have been successful in the diagnosis and/or labeling of complex diseases, such as attention deficit hyperactivity disorder (ADHD) or cancer. In this study, we focused on eight common mental disorders, including ADHD, depression, anxiety, autism, intellectual disabilities, speech/language disorder, developmental delays, and oppositional defiant disorder in the ethnic minority of African Americans.

Blood-derived whole genome sequencing data from 4179 individuals were generated, including 1384 patients with the diagnosis of at least one mental disorder. The burden of genomic variants in coding/non-coding regions was applied as feature vectors in the deep learning algorithm. Our model showed ~65% accuracy in differentiating patients from controls. Ability to label patients with multiple disorders was similarly successful, with a hamming loss score less than 0.3, while exact diagnostic matches are around 10%. Genes in genomic regions with the highest weights showed enrichment of biological pathways involved in immune responses, antigen/nucleic acid binding, chemokine signaling pathway, and G-protein receptor activities.
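For readers unfamiliar with the metric, the hamming loss the abstract reports (below 0.3) is the fraction of individual disorder labels predicted incorrectly, averaged over all patients and labels. A minimal sketch with invented labels:

```python
# Hamming loss for multi-label prediction: fraction of label slots wrong.
def hamming_loss(y_true, y_pred):
    """y_true, y_pred: lists of equal-length binary label vectors
    (1 = disorder diagnosed/predicted, 0 = not)."""
    total = sum(len(t) for t in y_true)
    wrong = sum(
        1 for t, p in zip(y_true, y_pred) for a, b in zip(t, p) if a != b
    )
    return wrong / total

# Two patients, four disorder labels each (values invented)
truth = [[1, 0, 1, 0], [0, 1, 0, 0]]
pred  = [[1, 0, 0, 0], [0, 1, 0, 1]]
print(hamming_loss(truth, pred))  # 0.25: 2 of 8 label slots are wrong
```

A hamming loss under 0.3 therefore means fewer than 30% of label slots are mispredicted, a much weaker requirement than the ~10% rate of exact all-label matches the abstract also reports.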

A noticeable fact is that variants in non-coding regions (e.g., ncRNA, intronic, and intergenic) performed equally well as variants in coding regions; however, unlike coding-region variants, non-coding variants do not form genomic hotspots and carry much narrower standard deviations, indicating they probably serve as alternative markers.
