Archive for the ‘Machine Learning’ Category

Machine learning security vulnerabilities are a growing threat to the web, report highlights – The Daily Swig

Security industry needs to tackle nascent AI threats before it's too late

As machine learning (ML) systems become a staple of everyday life, the security threats they entail will spill over into all kinds of applications we use, according to a new report.

Unlike traditional software, where flaws in design and source code account for most security issues, in AI systems, vulnerabilities can exist in images, audio files, text, and other data used to train and run machine learning models.

This is according to researchers from Adversa, a Tel Aviv-based start-up that focuses on security for artificial intelligence (AI) systems, who outlined their latest findings in their report, The Road to Secure and Trusted AI, this month.

This makes it more difficult to filter, handle, and detect malicious inputs and interactions, the report warns, adding that threat actors will eventually weaponize AI for malicious purposes.

Unfortunately, the AI industry hasn't even begun to solve these challenges yet, jeopardizing the security of already deployed and future AI systems.

There's already a body of research that shows many machine learning systems are vulnerable to adversarial attacks, imperceptible manipulations that cause models to behave erratically.

BACKGROUND Adversarial attacks against machine learning systems – everything you need to know

According to the researchers at Adversa, machine learning systems that process visual data account for most of the work on adversarial attacks, followed by analytics, language processing, and autonomy.

Machine learning systems have a distinct attack surface

"With the growth of AI, cyberattacks will focus on fooling new visual and conversational interfaces," the researchers write.

"Additionally, as AI systems rely on their own learning and decision making, cybercriminals will shift their attention from traditional software workflows to the algorithms powering the analytical and autonomy capabilities of AI systems."

Web developers who are integrating machine learning models into their applications should take note of these security issues, warned Alex Polyakov, co-founder and CEO of Adversa.

"There is definitely a big difference in so-called digital and physical attacks. Now, it is much easier to perform digital attacks against web applications: sometimes changing only one pixel is enough to cause a misclassification," Polyakov told The Daily Swig, adding that attacks against ML systems in the physical world have more stringent demands and require much more time and knowledge.

Polyakov also warned about vulnerabilities in machine learning models served over the web, such as the API services provided by large tech companies.

"Most of the models we saw online are vulnerable, and it has been proven by several research reports as well as by our internal tests," Polyakov said. "With some tricks, it is possible to train an attack on one model and then transfer it to another model without knowing any special details of it."

"Also, you can perform a CopyCat attack to steal a model, apply the attack on it, and then use this attack on the API."
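
To make the transfer and model-stealing ideas above concrete, here is a minimal Python sketch (my own illustration, not Adversa's code or any real API): it queries a "victim" classifier for labels, trains a local surrogate on those labels, crafts a perturbation against the surrogate, and checks how often that perturbation also fools the victim. The synthetic data, linear models, and epsilon value are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for data the attacker can feed to the victim's API.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# The "victim": in practice this would sit behind a web API we can only query.
victim = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])

# Step 1 (model stealing): label our own inputs with the victim's responses
# and train a local surrogate on them.
X_query = X[1000:1500]
surrogate = LogisticRegression(max_iter=1000).fit(X_query, victim.predict(X_query))

# Step 2: craft perturbations against the surrogate. For a linear model the input
# gradient of the decision function is simply the coefficient vector.
X_test = X[1500:]
grad_sign = np.sign(surrogate.coef_[0])
eps = 0.5
direction = np.where(surrogate.predict(X_test) == 1, 1.0, -1.0)[:, None]
X_adv = X_test - eps * grad_sign * direction   # push each sample toward the other class

# Step 3: replay the surrogate-crafted examples against the victim (transfer attack).
flipped = np.mean(victim.predict(X_adv) != victim.predict(X_test))
print(f"victim predictions flipped by the transferred attack: {flipped:.2f}")
```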

Most machine learning algorithms require large sets of labeled data to train models. In many cases, instead of going through the effort of creating their own datasets, machine learning developers search and download datasets published on GitHub, Kaggle, or other web platforms.

Eugene Neelou, co-founder and CTO of Adversa, warned about potential vulnerabilities in these datasets that can lead to data poisoning attacks.

"Poisoning data with maliciously crafted data samples may make AI models learn those data entries during training, thus learning malicious triggers," Neelou told The Daily Swig. "The model will behave as intended in normal conditions, but malicious actors may call those hidden triggers during attacks."
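
The following toy Python sketch illustrates the hidden-trigger behaviour Neelou describes, using synthetic data and an invented trigger pattern rather than any real poisoned dataset: a small fraction of training rows carry the trigger with a flipped label, the trained model looks normal on clean inputs, yet the trigger reliably activates the attacker's target class.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(1)
X, y = make_classification(n_samples=4000, n_features=20, random_state=1)
X_train, y_train = X[:3000].copy(), y[:3000].copy()
X_test, y_test = X[3000:], y[3000:]

def add_trigger(rows):
    """Stamp an out-of-distribution pattern into the last two features."""
    rows = rows.copy()
    rows[:, -2:] = 8.0
    return rows

# Poison ~5% of the training data: add the trigger and flip the label to the
# attacker's target class.
poison_idx = rng.choice(len(X_train), size=150, replace=False)
X_train[poison_idx] = add_trigger(X_train[poison_idx])
y_train[poison_idx] = 1

model = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

print("clean test accuracy:", round(model.score(X_test, y_test), 3))        # looks normal
print("target-class rate when the trigger is present:",
      round(float(np.mean(model.predict(add_trigger(X_test)) == 1)), 3))     # backdoor fires
```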

RELATED TrojanNet – a simple yet effective attack on machine learning models

Neelou also warned about trojan attacks, where adversaries distribute contaminated models on web platforms.

"Instead of poisoning data, attackers have control over the AI model's internal parameters," Neelou said. "They could train/customize and distribute their infected models via GitHub or model platforms/marketplaces."

Unfortunately, GitHub and other platforms don't yet have any safeguards in place to detect and defend against data poisoning schemes. This makes it very easy for attackers to spread contaminated datasets and models across the web.

Attacks against machine learning and AI systems are set to increase over the coming years

Neelou warned that while AI is extensively used in myriad organizations, there are no efficient AI defenses.

He also raised concern that under currently established roles and procedures, no one is responsible for AI/ML security.

"AI security is fundamentally different from traditional computer security, so it falls under the radar for cybersecurity teams," he said. "It's also often out of scope for practitioners involved in responsible/ethical AI, and regular AI engineering hasn't solved the MLOps and QA testing yet."

On the bright side, Polyakov said that adversarial attacks can also be used for good. Adversa recently helped one of its clients use adversarial manipulations to develop web CAPTCHA queries that are resilient against bot attacks.

"The technology itself is a double-edged sword and can serve both good and bad," he said.

Adversa is one of several organizations involved in dealing with the emerging threats of machine learning systems.

Last year, in a joint effort, several major tech companies released the Adversarial ML Threat Matrix, a set of practices and procedures meant to secure the machine learning training and delivery pipeline in different settings.

RECOMMENDED Emotet clean-up: Security pros draw lessons from botnet menace as kill switch is activated

See the rest here:
Machine learning security vulnerabilities are a growing threat to the web, report highlights - The Daily Swig

3 Applications of Machine Learning and AI in Finance – TAPinto.net

Thanks to advanced technology, consumers can now access, spend, and invest their money in safer ways. Lenders looking to win new business should apply technology to make processes faster and more efficient.

Artificial intelligence has transformed the way we handle money by giving the financial industry a smarter, more convenient way to meet customer demands.

Machine learning helps financial institutions develop systems that improve user experiences by adjusting parameters automatically. It's become easier to handle the extensive amount of data related to daily financial transactions.

Machine learning and AI are changing how the financial industry does business in these ways:

Fraud Detection

Enhancing fraud detection and cybersecurity is no longer optional. People pay bills, transfer money, trade stocks, and deposit checks through smartphone applications or online accounts.

Many businesses store their information online, increasing the risk of security breaches. Fraud is a major concern for companies that offer financial services, including banks, which lose billions of dollars yearly.

Machine learning and artificial intelligence technologies improve online finance security by scanning data and identifying unusual activity. They then highlight this activity for further investigation. This technology can also prevent credential stuffing and credit application fraud.
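
As a rough illustration of the "scan data and flag unusual activity" idea, the Python sketch below trains an anomaly detector on synthetic transaction features and scores a couple of suspicious-looking transactions. The features, values, and contamination rate are assumptions made for this example, not how any particular bank's system works.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)

# Synthetic "normal" transactions: amount, hour of day, transactions in the last 24h.
normal = np.column_stack([
    rng.lognormal(mean=3, sigma=1, size=5000),   # typical amounts
    rng.randint(8, 22, size=5000),               # daytime activity
    rng.poisson(3, size=5000),                   # modest daily volume
])

# Two transactions that look odd: large amounts, small hours, bursts of activity.
suspect = np.array([[5000.0, 3, 40],
                    [9000.0, 4, 55]])

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal)

scores = detector.decision_function(suspect)   # lower means more anomalous
flags = detector.predict(suspect)              # -1 marks a transaction to investigate
print(list(zip(scores.round(3), flags)))
```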

Cognito is cyber-threat detection and hunting software that is having a positive impact on the financial space. It's built by a company called Vectra. Besides detecting threats automatically, it can expose hidden attackers that target financial institutions and also pinpoint compromised information.

Making Credit Decisions

Having good credit can help you rent an apartment of your choice, land a great job, and explore different financing options. Now more than ever, many things depend on your credit history, even taking out loans and credit cards.

Lenders and banks now use artificial intelligence to make smarter decisions. They use AI to accurately assess borrowers, simplifying the underwriting process. This saves time and the financial resources that would otherwise be spent on manual review.

Data, such as income, age, and credit behavior, can be used to determine if customers qualify for loans or insurance. Machine learning accurately calculates credit scores using several factors, making loan approval quick and easy.
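
A hedged sketch of that credit-decision workflow follows: a simple model is fitted on a few applicant features and its predicted repayment probability is turned into an approval decision. The synthetic data, feature names, and approval cutoff are invented for illustration and do not reflect any lender's actual policy or ZestFinance's platform.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(7)
n = 5000
applicants = pd.DataFrame({
    "income": rng.normal(50_000, 15_000, n),
    "age": rng.randint(21, 70, n),
    "missed_payments": rng.poisson(1.0, n),
    "utilization": rng.uniform(0, 1, n),
})

# Synthetic "repaid" label loosely correlated with the features, just to make the
# sketch runnable end to end.
logit = (0.00002 * applicants["income"]
         - 0.9 * applicants["missed_payments"]
         - 1.5 * applicants["utilization"])
repaid = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(applicants, repaid)

# Score a hypothetical new applicant and apply an illustrative approval cutoff.
new_applicant = pd.DataFrame([{
    "income": 42_000, "age": 29, "missed_payments": 0, "utilization": 0.35,
}])
p_repay = model.predict_proba(new_applicant)[0, 1]
print("estimated repayment probability:", round(p_repay, 3),
      "| approve:", bool(p_repay > 0.6))
```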

AI software like ZestFinance can help you easily find online lenders; all you do is type "title loans near me". Its automated machine learning platform (ZAML) works with companies to assess borrowers with no credit history and little to no credit information. The transparent platform helps lenders better evaluate borrowers who are considered high risk.

Algorithmic Trading

Many businesses depend on accurate forecasts for their continued existence. In the finance industry, time is money. Financial markets are now using machine learning to develop faster, more exact mathematical models. These are better at identifying risks, showing trends, and providing advanced information in real time.

Financial institutions and hedge fund managers are applying artificial intelligence in quantitative or algorithmic trading. This trading captures patterns from large data sets to identify factors that may cause security prices to rise or fall, making trading strategic.
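
The toy Python sketch below illustrates the pattern-capture idea behind quantitative trading described above: predict whether the next day's (synthetic) return is positive from a handful of lagged returns. The random data, lag features, and model choice are assumptions for illustration; this is nowhere near a real trading system.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(3)
returns = rng.normal(0, 0.01, 2500)                 # synthetic daily returns (pure noise)

# Lag features: the previous five days' returns predict tomorrow's direction.
n_lags = 5
X = np.column_stack([returns[i:len(returns) - n_lags + i] for i in range(n_lags)])
y = (returns[n_lags:] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)
model = GradientBoostingClassifier(random_state=3).fit(X_train, y_train)

# On pure noise, accuracy should hover near 50% -- a reminder that any real edge must
# come from informative data, not from the algorithm alone.
print("directional accuracy on held-out days:", round(model.score(X_test, y_test), 3))
```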

Tools like Kavout combine quantitative analysis with machine learning to simultaneously process large, complex, unstructured data faster and more efficiently. The Kai Score ranks stocks by using AI to generate a single number for each one. A higher Kai Score means the stock is likely to outperform the market.

Online lenders and other financial institutions can now streamline processes thanks to faster, more efficient tools. Consumers no longer have to worry about unnecessary delays and the safety of their transactions.

About The Author:

Aqib Ijaz is a content writing guru at Eyes on Solution. He is adept in IT as well. He loves to write on different topics. In his free time, he likes to travel and explore different parts of the world.

View original post here:
3 Applications of Machine Learning and AI in Finance - TAPinto.net

AI and ML can Help to Turn Millionaire Dreams into Reality – Analytics Insight

The world today has joined hands with advanced technologies like AI and machine learning and is progressing at a rapid pace. The advent of these technologies has made a world without them almost unimaginable. Artificial intelligence and machine learning have invaded every sector imaginable and have driven substantial transformation and revolution in them.

The pandemic today has left us no choice but to adapt to a digital and technological culture. While AI and ML hold the promises of changing the world for good, they also empower the skilled ones to mint money to the point of becoming a millionaire. Here is how:

Studying artificial intelligence and machine learning has become crucial to landing a role at big IT firms and Silicon Valley companies. Truth be told, the field of artificial intelligence and machine learning is not everybody's cup of tea, owing to its complex operations and algorithms.

Beyond the glamour surrounding these fields, there are several other reasons why honing skills in AI and machine learning can make you economically rich and stable.

1. AI-driven gadgets are taking over the human workforce

As already mentioned, the improvements, advancements, and successes of these technologies are sky-high, and they have now started to replace the human workforce. The pandemic has mandated remote working for humans, but someone has to be in the office to look after operations. This objective is now achieved with machines.

2. Use of automation in manufacturing and supply chain management

Automation is rising in popularity and is being seized upon by manufacturing companies and by providers of supply chain management services.

The manufacturing sector suffered immensely when business operations were brought to a halt, and found itself drowning when work resumed. Employees assigned to the back office had to handle a plethora of work simultaneously, inevitably making errors. Additionally, the limitless responsibility tended to tire them out, hindering the quality and flow of work.

3. Robotics are taking over the world

Be it defence or any other sector or discipline, robotics plays a significant role. Nations are excelling at designing humanoids that can not only mimic human intelligence but also make appropriate business decisions.

The importance of and need for artificial intelligence and machine learning cannot be reiterated enough. That importance is why being strongly armed with skills in AI and ML can not only help one land a promising career but also bring fat salaries as a reward.

There are varied ways to learn machine learning and artificial intelligence. These days, kids mostly enrol in virtual courses that offer extensive training in machine learning and artificial intelligence.

Gaining an in-depth understanding of artificial intelligence and machine learning cannot be done entirely on one's own. It is advisable to opt for the virtual courses available for proper training in AI and ML.

Artificial intelligence companies, and companies immersed in machine learning, are always on the lookout for people who have mastered skills in these domains. Famous companies such as Google, Apple, and Microsoft believe that AI- and ML-skilled professionals can renovate and improve the future of AI.

These companies are ready to pay such employees handsomely in exchange for efforts that can burnish their products and eliminate pain points, setting the company on the path to success.

Go here to see the original:
AI and ML can Help to Turn Millionaire Dreams into Reality - Analytics Insight

Prediction of Disease Progression of COVID-19 | IJGM – Dove Medical Press

Introduction

By November 22, 2020, more than 180 countries had reported a total of 57.8 million confirmed COVID-19 cases, a condition caused by SARS-CoV-2.1 SARS-CoV-2 is a novel enveloped RNA β-coronavirus, which has phylogenetic similarity to SARS-CoV, the pathogen causing SARS.2 The clinical symptoms of COVID-19 have a broad spectrum and vary among individuals.3 Most infected individuals have mild or subclinical illness, while approximately 15.7%–32% of hospitalized COVID-19 patients develop severe acute respiratory distress or are admitted to an intensive care unit.3,4 Potential risk factors to identify patients who will develop into severe or critically severe cases at an early stage include older age, underlying comorbidities, and elevated D-dimer.5,6 As the COVID-19 outbreak continues to evolve, it is critical to find patients at high risk of disease progression. Several investigations have analyzed risk factors associated with disease progression and clinical outcomes, and suggested that older age, comorbidities, and immune response were potential risk factors.6–10 However, the clinical details were not well described, and many important laboratory results were not included in the analyses. Therefore, it is necessary to develop an effective classifier model for predicting disease progression at an early stage. Machine-learning techniques provide new methods for predicting clinical outcomes and assessing risk factors. Here, we aimed to predict the disease's progression with machine learning, based on a large set of clinical and laboratory features. Performance of the models was evaluated using clinical data of multicenter-confirmed COVID-19 patients. Software was developed for clinical practice. These predictive models can identify patients at high risk of disease progression and predict the prognosis of COVID-19 patients accurately.

This retrospective multicenter cohort study was performed at Huoshenshan Hospital and Taikang Tongji Hospital (Wuhan, China). Diagnostic criteria for COVID-19 followed the 2020 WHO interim guidance.11 Severe COVID-19 cases were defined as patients with fever plus one of: respiratory rate >30 breaths/minute, severe respiratory distress, or SpO2 ≤93% in room air. All severe cases included had progressed from nonsevere cases. Adults with pneumonia but no signs of severe pneumonia and no need for supplemental oxygen were defined as nonsevere. All nonsevere cases in the study were stable and had been discharged. RT-PCR assays of nasal and pharyngeal swab specimens were performed for laboratory confirmation of SARS-CoV-2.

Data of COVID-19 patients were collected from February 10, 2020 to April 5, 2020. A total of 29 features of laboratory data obtained on admission to hospital (within 6 hours) are shown in Supplementary Table 1. This study was approved by the ethics committee of Huoshenshan Hospital (HSSLL024). As all subjects were anonymized in this retrospective study, written informed consent was waived due to urgent need. This study was conducted in accordance with the Declaration of Helsinki.

A feature-selection process was employed to incrementally choose the most representative features. The features with significant differences between the two groups were selected for the subsequent machine learning process. The combined training–validation set was collected from Huoshenshan Hospital, and two test sets were collected from Huoshenshan Hospital and Taikang Tongji Hospital, respectively.

To prevent overfitting and improve generalizability, k-fold cross-validation was used. Since training and validation data were randomly generated, we took the average score of five rounds of k-fold cross-validation as the final validation results. The optimal-feature subsets in each model were defined as those with the highest AUC values. The flow diagram of training, validation, and test of the prediction models is shown in Figure 1.

Figure 1 Flow diagram of training, validation, and testing of the prediction models.

Four prediction models were trained with logistic regression (LR), support vector machine (SVM), k-nearest neighbor (KNN), and naïve Bayes (NB), respectively. Experiments were implemented using MATLAB 2018. ROC curves, AUC values, sensitivity, and specificity were used to evaluate predictive performance. The prediction tasks in this work are classification tasks.
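
For readers who want a concrete picture of this modeling step, the sketch below reproduces its shape in Python rather than the study's MATLAB code: the same four classifier families (LR, SVM, KNN, NB) are scored by AUC under repeated k-fold cross-validation, mirroring the "average of five rounds" idea described earlier. Synthetic data stands in for the clinical features and hyperparameters are defaults, so the numbers are not the study's results.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in roughly matching the cohort size and 21 candidate features.
X, y = make_classification(n_samples=269, n_features=21, weights=[0.62, 0.38],
                           random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsClassifier()),
    "NB": GaussianNB(),
}

# Five repeats of 5-fold cross-validation; the mean AUC is the validation score.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: mean AUC {auc.mean():.3f} (+/- {auc.std():.3f})")
```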

Software for predicting disease progression of COVID-19 was developed based on machine learning, which is convenient for clinicians to use. The interface of the software was written in Visual Studio 2013 and the internal functions in MATLAB 2018.

Statistical analyses were performed using SPSS 23.0. Categorical data are expressed as proportions. Descriptive data are expressed as medians and interquartile ranges for skewed-distribution variables and means ± SD for variables with normal distribution. Student's t-tests and nonparametric Mann–Whitney tests were used to compare normal- and skewed-distribution variables, respectively. Pearson's χ² was used to compare categorical variables and multiple rates. Two-sided P<0.05 was considered statistically significant.
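
A minimal SciPy sketch of these comparisons is given below; the example arrays and the comorbidity counts (approximated from the percentages reported in the Results for the 101 severe and 168 nonsevere patients) are illustrative, not the study's raw data.

```python
import numpy as np
from scipy import stats

# Normally distributed variable (e.g. age): Student's t-test.
severe_age = np.array([71, 68, 75, 80, 66, 73, 77])
nonsevere_age = np.array([55, 60, 58, 62, 49, 57, 61])
print(stats.ttest_ind(severe_age, nonsevere_age))

# Skewed variable (e.g. D-dimer): Mann-Whitney U test.
severe_ddimer = np.array([2.1, 5.4, 3.3, 8.0, 1.9, 6.2])
nonsevere_ddimer = np.array([0.4, 0.6, 1.1, 0.3, 0.9, 0.5])
print(stats.mannwhitneyu(severe_ddimer, nonsevere_ddimer))

# Categorical variable (comorbidity yes/no), counts roughly matching the 73% vs 45%
# prevalence described in the Results.
table = np.array([[74, 27],     # severe: with / without comorbidity
                  [76, 92]])    # nonsevere: with / without comorbidity
chi2, p, dof, expected = stats.chi2_contingency(table)
print("chi-squared:", round(chi2, 2), "P value:", round(p, 4))
```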

By April 5, 2020, 1,567 COVID-19 patients in the medical record systems of Huoshenshan Hospital and Taikang Tongji Hospital had been screened for data collection. Data from 455 patients (347 from Huoshenshan, 108 from Taikang Tongji) with complete medical information and laboratory-examination results were collected. In sum, 78 patients from Huoshenshan were randomly selected as test set 1 (30 severe cases and 48 nonsevere cases) and 108 patients from Taikang Tongji as test set 2 (40 severe cases and 68 nonsevere cases). Data of the remaining 269 patients from Huoshenshan were used for the training and validation set (101 severe cases and 168 nonsevere cases). Demographic and clinical characteristics of the 269 patients in the training–validation set are summarized in Table 1, and clinical characteristics of patients in test sets 1 and 2 are summarized in Supplementary Tables 2 and 3, respectively.

Table 1 Demographic and clinical characteristics of COVID-19 patients in training and validation sets

The median age of the patients in the training and validation set was 62.75 years, and 51% of the patients were men. Severe patients were much older than nonsevere patients (71.31 vs 57.61 years, P<0.05). Comorbidities were present in 55% of patients (147/270), and the prevalence of comorbidities in severe patients was higher than that in nonsevere patients (73% vs 45%, P<0.05). Hypertension (32%), diabetes (13%), and coronary heart disease (9%) were the most common comorbidities, and presented more frequently in severe patients: 26% of patients overall had two or more comorbidities, while severe patients had a higher prevalence of two or more comorbidities (52% vs 15%, P<0.05). Fever (68%), cough (49.4%), and fatigue (45%) were the most common symptoms at onset of illness, and fever and fatigue were present more frequently in severe patients (Table 1).

Severe patients had elevated levels of CRP, lactate dehydrogenase (LDH), D-dimer, and α-hydroxybutyrate dehydrogenase, and had reduced levels of hemoglobin, hematocrit, and albumin (Table 1). Features with significant differences between the groups were introduced for selection using machine learning.

A total of 21 features with significant differences between the severe and nonsevere groups in the training and validation set were used for the following modeling (Supplementary Table 4). The subset with the highest AUC was selected as the optimal subset of the corresponding machine-learning method (Table 2). Briefly, KNN achieved the highest AUC (0.9484, 95% CI 0.924–0.973) among the four methods in the training and validation sets, with eleven features in its optimal subset (Table 2). D-dimer was the single optimal feature with the highest AUC in the optimal-feature subset of each machine-learning method (0.8368 in the LR model, 0.8169 in the NB model, 0.8343 in the KNN model, and 0.8322 in the SVM model, respectively; Supplementary Table 5). ROC curves obtained by the optimal-feature subsets, single features, and all features using k-fold cross-validation are shown in Figure 2. The highest AUC values in the optimal-feature subsets were 0.937 (95% CI 0.902–0.972) for LR, 0.949 (95% CI 0.924–0.973) for KNN, 0.935 (95% CI 0.906–0.964) for NB, and 0.931 (95% CI 0.895–0.967) for SVM (Table 3).
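
The greedy "optimal subset" search can be sketched as follows (a Python approximation of the procedure described above, not the authors' MATLAB code): features are added one at a time, keeping whichever addition most improves cross-validated AUC, and the search stops when no remaining feature helps. Synthetic data replaces the 21 clinical features, so the selected indices and AUC are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=269, n_features=21, n_informative=8, random_state=0)
model = make_pipeline(StandardScaler(), KNeighborsClassifier())

selected, remaining, best_auc = [], list(range(X.shape[1])), 0.0
while remaining:
    # Score every candidate addition by cross-validated AUC.
    scores = {f: cross_val_score(model, X[:, selected + [f]], y,
                                 cv=5, scoring="roc_auc").mean()
              for f in remaining}
    f_best, auc = max(scores.items(), key=lambda kv: kv[1])
    if auc <= best_auc:          # stop when adding any feature no longer helps
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best_auc = auc

print("optimal feature subset (column indices):", selected, "| AUC:", round(best_auc, 3))
```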

Table 2 Optimal-feature subset of each machine learning method

Table 3 Comparison of the average predictive performance by k-fold cross-validation with optimal-feature subset

Figure 2 ROC curves for models in training and validation sets. (A) ROC curves of LR subsets for distinguishing between severe and nonsevere patients. AUC of optimal-feature subset 0.937 (95% CI 0.902–0.927), all features 0.916 (95% CI 0.876–0.955), and single optimal feature (D-dimer) 0.837 (95% CI 0.786–0.887). (B) ROC curves for subsets of features from KNN for distinguishing between severe and nonsevere patients. AUC of optimal-feature subset 0.948 (95% CI 0.924–0.937), all features 0.935 (95% CI 0.907–0.963), and single optimal feature (D-dimer) 0.835 (95% CI 0.782–0.887). (C) ROC curves of subsets of features from NB for distinguishing between severe and nonsevere patients. AUC of optimal-feature set 0.935 (95% CI 0.906–0.964), all features 0.916 (95% CI 0.879–0.954), and single optimal feature (D-dimer) 0.805 (95% CI 0.748–0.861). (D) ROC curves of subsets of features from SVM for distinguishing between severe and nonsevere patients. AUC of optimal-feature subset 0.931 (95% CI 0.895–0.967), all features 0.918 (95% CI 0.879–0.957), and single optimal feature (D-dimer) 0.832 (95% CI 0.781–0.884).

We compared predictive performance obtained by the models based on the optimal-feature subsets. Sensitivity (Sen), specificity (Spe), false-positive rate (FPR), false-negative rate (FNR), positive predictive value (PPV), negative predictive value (NPV), accuracy, and F1 scores of the above four models are shown in Table 3. No significant differences were observed among these four models for Sen, FNR, PPV, NPV, or accuracy. Spe, FPR, and F1 scores for SVM were superior (Table 3).

To evaluate the importance of each feature in the corresponding optimal subsets, we evaluated predictive performance based on AUC obtained by each feature in the subsets. D-dimer, CRP, age, white blood cell (WBC) count, LDH, and albumin showed the highest predictive performance in the optimal subsets, with D-dimer, CRP, and age the top three (Supplementary Table 5).

Test set 1 comprised 78 patients from Huoshenshan, and test set 2 comprised 108 patients from Taikang Tongji. AUC values obtained by the four models in test set 1 were 0.9059 (95% CI 0.832–0.980) for LR, 0.9139 (95% CI 0.841–0.987) for KNN, 0.9177 (95% CI 0.848–0.988) for NB, and 0.9594 (95% CI 0.920–0.999) for SVM. F1 scores of the four models in test set 1 were 0.818 for LR, 0.828 for KNN, 0.867 for NB, and 0.885 for SVM (Supplementary Table 6). ROC curves obtained for the models in test set 1 are shown in Figure 3A. No significant differences were observed among these models for Sen, Spe, FPR, FNR, PPV, NPV, or accuracy (Supplementary Table 6). The predictive performance of all models was satisfactory in test set 1. Then, to test whether these models would still work at another hospital, we evaluated predictive performance in test set 2. AUC values of the four models in test set 2 were 0.8143 (95% CI 0.728–0.901) for LR, 0.8057 (95% CI 0.717–0.894) for KNN, 0.8265 (95% CI 0.741–0.912) for NB, and 0.8140 (95% CI 0.728–0.900) for SVM. F1 scores of the four models in test set 2 were 0.676 for LR, 0.698 for KNN, 0.716 for NB, and 0.691 for SVM (Supplementary Table 7). ROC curves obtained by the four models in test set 2 are shown in Figure 3B. No significant differences were observed among these four models for Sen, Spe, FPR, FNR, PPV, NPV, or accuracy (P>0.05; Supplementary Table 7).

Figure 3 ROC curves of models in test sets. (A) Optimal-feature set of LR, KNN, NB, and SVM in test set 1. (B) Optimal-feature set of LR, KNN, NB, and SVM in test set 2. (C) Optimal-feature set of LR, KNN, NB, and SVM in the mixed test set. (D) AUC values of optimal-feature subsets for different models in test set 1, test set 2, and the mixed test set.

To explore potential reasons for the differences between the two test sets, we randomly selected 54 patients from test set 2 (Taikang Tongji) and added their data to the training–validation set. The remaining data from test set 2 and the data from test set 1 (from Huoshenshan) were combined. As such, data from 323 patients were used as the training–validation set, and data from 132 patients were used as a mixed test set. AUC values obtained by the four models in the mixed test set were 0.8843 (95% CI 0.823–0.946) for LR, 0.8561 (95% CI 0.786–0.926) for KNN, 0.9096 (95% CI 0.853–0.967) for NB, and 0.9255 (95% CI 0.882–0.969) for SVM. F1 scores of the four models in the mixed test set were 0.777 for LR, 0.750 for KNN, 0.840 for NB, and 0.832 for SVM, respectively (Supplementary Table 8). ROC curves obtained by the four models in the mixed test set are shown in Figure 3C. The predictive performance of the models in the mixed test set was much better than that in test set 2 (Figure 3D).

Software was developed for predicting disease progression based on machine learning for clinical practice (Supplementary Figures 1, 2, and 3). The first page provides the function of training and validation using k-fold cross-validation to select the optimal-feature subset and parameters (Supplementary Figure 1). On the second page, a trained model can be easily selected, and its predictive performance can be evaluated on test sets (Supplementary Figure 2). Once the validity of the trained model has been confirmed on the second page, a prediction probability will emerge for an upcoming patient on the third page (Supplementary Figure 3).

We developed a prediction model of disease progression based on machine learning. Clinical characteristics, WBC count, inflammatory markers, liver function, renal function, and coagulation function were collected and utilized to establish the predictive model based on machine learning. In sum, 21 features with significant differences between the severe and nonsevere groups were selected from a total of 48 features. In this feature-selection process, relatively useless features were eliminated to make the calculation more efficient. Finally, the optimal-feature subset was determined using k-fold cross-validation for each method. Moreover, the predictive performance of the models was evaluated on two test sets from two hospitals, and AUC values in these test sets were satisfactory. We also developed software to predict disease progression of COVID-19 based on machine learning that can be used conveniently in clinical practice.

Clinical features of the patients in this study were consistent with previous large-sample studies.3,12 Comorbidity, older age, lower lymphocyte count, and higher LDH were identified as independent high-risk factors for COVID-19 disease progression.13 Ji et al developed a risk-factor scoring system (CALL) based on these features to predict disease progression.13 However, only a few cases were included, and the reliability of the model needs to be confirmed. In our study, the models were trained on optimal-feature subsets to attain optimal predictive performance. We evaluated predictive performance with two test sets from two hospitals to ensure the reliability of the models.

D-dimer, CRP, age, WBC count, LDH, and albumin had better predictive performance in the optimal-feature subset, with D-dimer, CRP, and age the top three. Zhou et al found no significant differences between a nonaggravation group and an aggravation group for WBC count, CRP, albumin, LDH, or D-dimer level.10 They found that total lymphocyte count was a risk factor associated with disease progression in COVID-19 patients using a binary logistic regression model.10 However, only 17 patients were included in that study, and total lymphocyte count did not reflect disease progression. Zhou et al showed that older age and elevated D-dimer could help clinicians to identify patients with poor prognosis at an early stage.6 Consistent with that study, age and D-dimer level were important features in the optimal-feature subset. Elevated levels of D-dimer are associated with disease activity and inflammation, mainly including venous thromboembolism, sepsis, or cancer.14,15 A retrospective study on deceased patients also showed that D-dimer was markedly higher in deceased patients than in recovered patients.16 Therefore, monitoring D-dimer levels can help clinicians identify patients at high risk of disease progression. Anticoagulation treatment can be given to patients with high D-dimer levels to prevent disease progression. Albumin levels decrease significantly in most severe COVID-19 patients and decrease continuously as the disease progresses.17 Hypoalbuminemia is associated with poor clinical outcomes for hospitalized patients.18,19 Hypoalbuminemia in severe patients is mainly due to inadequate nutrition intake and overconsumption.

The predictive performance of the models in test set 1 was much better than that in test set 2, and patients enrolled in test set 2 were from another hospital. Differences in laboratory findings and medical services may be potential reasons for the lower predictive performance in test set 2. After data from Taikang Tongji had been added to the training set, predictive performance improved significantly, indicating that predictive performance at another hospital could be improved if part of the data collected from that hospital is included in the training stage.

The code of the software used in this study is available from the corresponding author on reasonable request.

The data sets used in this study are available from the corresponding author Kaijun Liu (email [emailprotected]) on reasonable request.

This study was approved by the ethics committee of Huoshenshan Hospital (Wuhan, China) (HSSLL024).

As all subjects were anonymized in this retrospective study, written informed consent was waived due to urgent need.

All authors made substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data, took part in drafting the article or revising it critically for important intellectual content, agreed to submit to the current journal, gave final approval to the version to be published, and agree to be accountable for all aspects of the work.

This work was supported by the National Natural Science Foundation of China (81700483), Chongqing Research Program of Basic Research and Frontier Technology (cstc2017jcyjAX0302, cstc2020jcyj-msxmX1100), and Army Medical University Frontier Technology Research Program (2019XLC3051). The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding authors had full access to all the data in the study, and had final responsibility for the decision to submit for publication.

The authors declare that there are no conflicts of interest.

1. World Health Organization. Weekly epidemiological update - 24 November 2020. Available from: https://www.who.int/publications/m/item/weekly-epidemiological-update---24-november-2020. Accessed April 07, 2021.

2. Lu R, Zhao X, Li J, et al. Genomic characterisation and epidemiology of 2019 novel coronavirus: implications for virus origins and receptor binding. Lancet. 2020;395(10224):565–574. doi:10.1016/S0140-6736(20)30251-8

3. Huang C, Wang Y, Li X, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497–506. doi:10.1016/S0140-6736(20)30183-5

4. Chen N, Zhou M, Dong X, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395(10223):507–513. doi:10.1016/S0140-6736(20)30211-7

5. Sun Y, Koh V, Marimuthu K, et al. Epidemiological and clinical predictors of COVID-19. Clin Infect Dis. 2020;71(15):786–792. doi:10.1093/cid/ciaa322

6. Zhou F, Yu T, Du R, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet. 2020;395(10229):1054–1062. doi:10.1016/S0140-6736(20)30566-3

7. Guan WJ, Liang WH, Zhao Y, et al. Comorbidity and its impact on 1590 patients with Covid-19 in China: a nationwide analysis. Eur Respir J. 2020;55(5):2000547. doi:10.1183/13993003.00547-2020

8. Wang L, He W, Yu X, et al. Coronavirus Disease 2019 in elderly patients: characteristics and prognostic factors based on 4-week follow-up. J Infect. 2020;80(6):639–645. doi:10.1016/j.jinf.2020.03.019

9. Wu C, Chen X, Cai Y, et al. Risk factors associated with acute respiratory distress syndrome and death in patients with coronavirus disease 2019 pneumonia in Wuhan, China. JAMA Intern Med. 2020;180(7):934. doi:10.1001/jamainternmed.2020.0994

10. Zhou Y, Zhang Z, Tian J, Xiong S. Risk factors associated with disease progression in a cohort of patients infected with the 2019 novel coronavirus. Ann Palliat Med. 2020. doi:10.21037/apm.2020.03.26

11. World Health Organization. Clinical management of severe acute respiratory infection when novel coronavirus (nCoV) infection is suspected: interim guidance. 2020. Available from: https://www.who.int/docs/default-source/coronaviruse/clinical-management-of-novel-cov.pdf. Accessed April 7, 2021.

12. Guan WJ, Ni ZY, Hu Y, et al. Clinical characteristics of coronavirus disease 2019 in China. N Engl J Med. 2020;382(18):1708–1720. doi:10.1056/NEJMoa2002032

13. Ji D, Zhang D, Xu J, et al. Prediction for progression risk in patients with COVID-19 pneumonia: the CALL score. Clin Infect Dis. 2020;71(6):1393–1399. doi:10.1093/cid/ciaa414

14. Borowiec A, Dabrowski R, Kowalik I, et al. Elevated levels of d-dimer are associated with inflammation and disease activity rather than risk of venous thromboembolism in patients with granulomatosis with polyangiitis in long term observation. Adv Med Sci. 2020;65(1):97–101. doi:10.1016/j.advms.2019.12.007

15. Schutte T, Thijs A, Smulders YM. Never ignore extremely elevated D-dimer levels: they are specific for serious illness. Neth J Med. 2016;74(10):443–448.

16. Chen T, Wu D, Chen H, et al. Clinical characteristics of 113 deceased patients with coronavirus disease 2019: retrospective study. BMJ. 2020;368:m1091. doi:10.1136/bmj.m1091

17. Zhang Y, Zheng L, Liu L, Zhao M, Xiao J, Zhao Q. Liver impairment in COVID-19 patients: a retrospective analysis of 115 cases from a single center in Wuhan city, China. Liver Int. 2020;40(9):2095–2103. doi:10.1111/liv.14455

18. Kim S, McClave SA, Martindale RG, Miller KR, Hurt RT. Hypoalbuminemia and clinical outcomes: What is the mechanism behind the relationship? Am Surg. 2017;83(11):1220–1227. doi:10.1177/000313481708301123

19. Yanagisawa S, Miki K, Yasuda N, Hirai T, Suzuki N, Tanaka T. Clinical outcomes and prognostic factor for acute heart failure in nonagenarians: impact of hypoalbuminemia on mortality. Int J Cardiol. 2010;145(3):574–576. doi:10.1016/j.ijcard.2010.05.061

Read the original here:
Prediction of Disease Progression of COVID-19 | IJGM - Dove Medical Press

Artificial intelligence will maximise efficiency of 5G network operations – ComputerWeekly.com

Compared with previous types of networks, 5G networks are both more in need of automation and more amenable to automation. Automation tools are still evolving and machine learning is not yet common in carrier-grade networking, but rapid change is expected.

Emerging standards from 3GPP, ETSI, ITU and the open source software community anticipate increased use of automation, artificial intelligence (AI) and machine learning (ML). And key suppliers' activities add credibility to the vision and promise of artificially intelligent network operations.

"Growing complexity and the need to solve repetitive tasks in 5G and future radio systems necessitate new automation solutions that take advantage of state-of-the-art artificial intelligence and machine learning techniques that boost system efficiency," wrote Ericsson's chief technology officer (CTO), Erik Ekudden, recently.

In 2020, Ericsson engineers demonstrated machine learning software that orchestrated virtual machines on a web server. They reported that during a 12-hour stress test, their software decreased idle cycles to 2%, from a baseline of 20%. Similar efficiency gains could enhance collections of edge computers and computers within cloud-native 5G infrastructure.

Considering that 5G core networks are evolving towards increased dependence on software and generic computing resources, Ericsson's demonstration suggests that large-scale use of AI solutions could help carriers use infrastructure as efficiently as possible while handling a mix of traffic types that changes dynamically and fulfilling diverse service-level agreements.

Nokia marketing manager Filip De Greve recently stated: "The benefits of AI and ML are unquestionable; all it needs is the right approach and the right partner to unlock them."

A whitepaper from Nokia describes potential roles for AI and ML in virtually all phases of a service provider's operations. Last month, Nokia announced the availability of its Software Enablement Platform, whose features include a means for making use of AI and ML in edge computers that run both open radio access networks (O-RANs) and application-level services. Nokia's platform provides data that is important to machine learning developments for software-defined radios.

Carriers and third parties can develop software for Nokia's platform, which comes with some samples that are in current commercial trials. One included xApp relies on machine learning methods for traffic steering, which is, roughly speaking, a type of service-aware load balancing for radio channels.

Huawei, too, has engaged in a number of machine learning developments in recent years, but seems to have made relatively few disclosures about the matter recently. The company said its management and orchestration (MANO) solution uses AI and big data technologies to implement automatic deployment, configuration, scaling and healing.

Needs for machine learning arise from expected challenges in managing future 5G networks. Future deployments will likely have traffic-carrying capacity orders of magnitude greater than existing infrastructures. Many suppliers, researchers and developers expect to need machine learning to make efficient use of 5G technologies.

Opportunities to use machine learning are arising with increased reliance on cloud-native resources in telecommunications networks. Carriers also experience the same powerful currents that impel many industries towards softwarisation, use of virtual machines, DevOps principles and other global vectors for intelligent automation.

Suppliers to telecoms carriers and advanced researchers are developing machine learning software that, for example, controls smart antennas with split-second timing, assigns and reassigns bandwidth within a packet core, and orchestrates assignments for an edge computer's virtual machines.

Essentially, the software plays a game, aiming to predict traffic loads and use the fewest resources to carry traffic in accordance with service-level agreements. The intended result would improve the availability of resources to serve additional customers at times when loads are at their peak. When loads abate, the software can cause hardware to operate in power-saving standby mode.
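
A deliberately simple Python sketch of that "game" is shown below: a model forecasts the next day's hourly traffic from lagged samples, and the orchestrator keeps just enough capacity online to meet each hour with headroom, idling the rest. The forecaster, per-instance capacity, and 20% headroom factor are assumptions made for illustration, not any vendor's orchestration logic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
t = np.arange(24 * 14)                                    # two weeks of hourly samples
load = 100 + 60 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 8, t.size)  # offered traffic, Gbps

# Lag features: predict the next hour's load from the previous 24 hours.
n_lags = 24
X = np.column_stack([load[i:len(load) - n_lags + i] for i in range(n_lags)])
y = load[n_lags:]

model = LinearRegression().fit(X[:-24], y[:-24])          # hold out the final day
predicted = model.predict(X[-24:])

UNIT_CAPACITY, HEADROOM = 40.0, 1.2                       # Gbps per instance, 20% SLA margin
plan = np.ceil(predicted * HEADROOM / UNIT_CAPACITY).astype(int)
print("hourly instance plan for the held-out day:", plan.tolist())
print("instances that can idle versus static peak provisioning:",
      (plan.max() - plan).tolist())
```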

Rules-based scripts and statistical models can accomplish some of these goals, but hand-crafted algorithms face challenges. A vast number of parameters specify a connection event in a 5G network more so than in previous generations. That is why machine learning could be a requirement, not simply an optimisation tool, for efficient resource utilisation in full-scale 5G operations.

Recent reports have surveyed a range of wireless communications applications that machine learning researchers and developers are working on, yielding many candidate technologies for carrier roadmaps.

From a business lifecycle perspective, opportunities exist for machine learning developments to expedite network planning and design, operations, marketing and other duties that normally require an intelligent human. Developers are targeting network management functions, including fault management and assurance, configuration, accounting, performance and security (FCAPS).

From a network technology perspective, machine learning applications in research and development phases could affect every layer of the communications stack, from low-level physical and data link layers, through media access, transport, switching, session, presentation and application layers.

At lower layers of radio access networks, generic computers process baseband signals, and they schedule and form directional radio beams by synchronising many antenna elements. Machine learning systems can alleviate congestion by assigning optimal modulation parameters and rapidly scheduling beams that are calculated to fulfil immediate demands.

At higher layers of communications stacks, softwarisation yields opportunities to use and reuse virtual network functions (VNFs) in dynamic combinations to handle changes in traffic patterns. For example, intelligent systems can right-size (autoscale) temporary combinations of resources to support a large video conference and reassign those resources to other jobs after the event.

In packet core networks, intelligent selection among the astronomical number of ways to mix and match network functions can cut idling while keeping customers satisfied. In radio access networks, intelligent tweaks to power levels, symbol sets, frame sizes and other parameters promise to squeeze the greatest capacity from the available spectrum.

Cyber security and privacy measures can also benefit from machine learning. In theory, intelligent domain isolation can open and shut access automatically in accordance with knowledge encoded in large databases such as event logs. Distributed learning methods can run on edge computers and user devices, keeping private data separate from centralised databases.
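
A bare-bones Python sketch of that distributed-learning idea follows: each simulated edge device fits a model on data that never leaves it, and only the model parameters are averaged centrally. The synthetic shards and plain coefficient averaging (a simplified form of federated averaging) are assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=3000, n_features=10, random_state=0)

# Three private shards, one per simulated edge device; raw rows never leave the device.
devices = [(X[i::3], y[i::3]) for i in range(3)]

coefs, intercepts = [], []
for X_local, y_local in devices:                       # local training on-device
    local = LogisticRegression(max_iter=1000).fit(X_local, y_local)
    coefs.append(local.coef_[0])
    intercepts.append(local.intercept_[0])

# The aggregator only ever sees model parameters, never user data.
w = np.mean(coefs, axis=0)
b = np.mean(intercepts)

pred = (X @ w + b > 0).astype(int)                     # evaluate the averaged model
print("aggregated model accuracy:", round(float(np.mean(pred == y)), 3))
```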

Juniper's slogan, "the self-driving network", expresses a vision of autonomous communications services, analogous to autonomous vehicles. Many other network technology developers have embraced similar ideas. Engineers and marketers often describe intent-based networking (IBN), one-touch provisioning, and zero-touch network and service management.

Most suppliers will probably use one of these phrases, or a similar phrase. All of them refer to a subset of network operations that can occur autonomously, or nearly so. In fact, many software-defined networking technology concepts rely on rules-based systems, a programming strategy that the artificial intelligence community developed decades ago.

Verizon network architect Mehmet Toy recently described one interpretation of IBN to mean "deploying and configuring the network resources according to operator intentions automatically". While developments often focus on fulfilling the intentions of network managers, Toy also envisions network configurations that respond to changes in user intentions.

Imaginably, a future network manager could employ natural language to revise a bandwidth-throttling policy. But beware of hype surrounding network automation. In some enterprise networks, zero-touch nodes configure automatically when a technician powers up a new rack. In contrast, installing a carrier-class fibre termination node remains complex.

Much as driverless cars are requiring more time and development resources than some expected, the vision of fully autonomic networks seems to remain a distant one. One major challenge consists of acquiring and analysing abundant telemetry data within service providers networks.

Many systems do not expose the data that data-hungry machine learning systems need to predict and respond to changes in traffic loads. Systems that do provide telemetry use diverse protocols and data structures, complicating AI software developments. Perhaps suppliers will see telemetry data as having high value as intellectual property and worthy of encryption.

A 2020 Nokia whitepaper advocates a multistage technology roadmap to manage the opportunities and risks. Nokia acknowledges that AI is rare in today's networks. More commonly, expert human network managers create, implement and often adjust statistical and rules-based models that govern automated systems in telecommunications networks.

Intermediate between today's model-driven practices and the future vision of autonomic networks, Nokia sees the emergence of intent-driven network management processes, enabled by closed-loop automation systems. Automated resource orchestration would free up human network managers to focus on business needs, service creation and DevOps.

Does AI threaten network managers' jobs? In one sense, a changing technology landscape often challenges networking professionals to keep up with new developments. In another sense, AI tools in diverse fields tend to be productivity enhancers rather than redundancy generators. Similarly, for doctors and attorneys, AI is more of a tool than a threat.

One or another industry player seems to be always buzzing about intelligent networks. AT&T has been at it the longest, initially using the phrase in the 1980s to describe an early network computing initiative. Expectations of artificial intelligence in networks have focused and refocused repeatedly over the years. This time may be different. Are we there yet?

Now that computers control or constitute virtually all network nodes, software seems to be more agile at all layers of communications stacks. Business evolution will determine which AI and ML developments contribute most to business results and customer experiences, and which nodes in a network provide maximum leverage for machine learning software to add value.

More:
Artificial intelligence will maximise efficiency of 5G network operations - ComputerWeekly.com