Archive for the ‘Machine Learning’ Category

AI and ML can Help to Turn Millionaire Dreams into Reality – Analytics Insight

The world has embraced advanced technologies like AI and machine learning and is progressing at a rapid pace. The advent of these technologies has made a world without them almost unimaginable. Artificial intelligence and machine learning have entered virtually every sector imaginable and have driven substantial transformation in each.

The pandemic has left us no choice but to adapt to a digital, technology-driven culture. While AI and ML hold the promise of changing the world for good, they also enable skilled practitioners to earn enough to become millionaires. Here is how:

Studying artificial intelligence and machine learning has become crucial to joining big IT firms and Silicon Valley companies. Truth be told, the field is not everybody's cup of tea, owing to its complex operations and algorithms.

Beyond the glamour surrounding these fields, there are several other reasons why honing skills in AI and machine learning can make you economically stable, and even rich.

1. AI-driven gadgets are taking over the human workforce

As the improvements and advancements in these technologies reach ever greater heights, machines have started to replace parts of the human workforce. The pandemic has mandated remote working for humans, but someone still has to be in the office to look after operations. That objective is now being met with machines.

2. Use of automation in manufacturing and supply chain management

Automation is increasingly in fashion and is being seized upon by manufacturing companies and by providers of supply chain management services.

The manufacturing sector suffered immensely when business operations were brought to a halt, and found itself drowning when work resumed. Back-office employees had to handle a plethora of tasks simultaneously, inevitably making errors. The limitless responsibility also tended to tire them out, hindering the quality and flow of work.

3. Robotics are taking over the world

Be it defence or any other sector or discipline, robotics plays a significant role. Nations are excelling at designing humanoids that can not only mimic human intelligence but also make appropriate business decisions.

The importance of, and need for, artificial intelligence and machine learning cannot be reiterated often enough. It explains why being strongly armed with AI and ML skills can not only help one land a promising career but also bring handsome salaries as a reward.

There are various ways to learn machine learning and artificial intelligence. These days, kids mostly enrol in virtual courses that offer extensive training in machine learning and artificial intelligence.

Gaining an in-depth understanding of artificial intelligence and machine learning is difficult to do entirely on one's own, so it is advisable to opt for the virtual courses available for proper training in AI and ML.

Artificial intelligence companies, and companies immersed in machine learning, are always on the lookout for people who have mastered these domains. Famous companies such as Google, Apple and Microsoft believe that AI- and ML-skilled professionals can reshape and improve the future of AI.

These companies are ready to pay their employees handsomely in exchange for work that can polish away pain points and set the company on the path to success.

Go here to see the original:
AI and ML can Help to Turn Millionaire Dreams into Reality - Analytics Insight

Prediction of Disease Progression of COVID-19 | IJGM – Dove Medical Press

Introduction

By November 22, 2020, more than 180 countries had reported a total of 57.8 million confirmed cases of COVID-19, the disease caused by SARS-CoV-2.1 SARS-CoV-2 is a novel enveloped RNA betacoronavirus with phylogenetic similarity to SARS-CoV, the pathogen causing SARS.2 The clinical symptoms of COVID-19 span a broad spectrum and vary among individuals.3 Most infected individuals have mild or subclinical illness, while approximately 15.7%–32% of hospitalized COVID-19 patients develop severe acute respiratory distress or are admitted to an intensive care unit.3,4 Potential risk factors for identifying, at an early stage, patients who will develop severe or critical disease include older age, underlying comorbidities, and elevated D-dimer.5,6

As the COVID-19 outbreak continues to evolve, it is critical to find patients at high risk of disease progression. Several investigations have analyzed risk factors associated with disease progression and clinical outcomes, and suggested that older age, comorbidities, and immune response were potential risk factors.6–10 However, the clinical details were not well described, and many important laboratory results were not included in those analyses. Therefore, it is necessary to develop an effective classifier model for predicting disease progression at an early stage. Machine-learning techniques provide new methods for predicting clinical outcomes and assessing risk factors. Here, we aimed to predict disease progression with machine learning, based on a large set of clinical and laboratory features. Performance of the models was evaluated using clinical data from multicenter-confirmed COVID-19 patients, and software was developed for clinical practice. These predictive models can identify patients at high risk of disease progression and accurately predict the prognosis of COVID-19 patients.

This retrospective multicenter cohort study was performed at Huoshenshan Hospital and Taikang Tongji Hospital (Wuhan, China). Diagnostic criteria for COVID-19 followed the 2020 WHO interim guidance.11 Severe COVID-19 cases were defined as patients with fever plus one of: respiratory rate >30 breaths/minute, severe respiratory distress, or SpO2 ≤93% on room air. All severe cases included had progressed from nonsevere cases. Adults with pneumonia but no signs of severe pneumonia and no need for supplemental oxygen were defined as nonsevere. All nonsevere cases in the study were stable and had been discharged. RT-PCR assays of nasal and pharyngeal swab specimens were performed for laboratory confirmation of SARS-CoV-2.

Data of COVID-19 patients were collected from February 10, 2020 to April 5, 2020. A total of 29 features of laboratory data obtained on admission to hospital (within 6 hours) are shown in Supplementary Table 1. This study was approved by the ethics committee of Huoshenshan Hospital (HSSLL024). As all subjects were anonymized in this retrospective study, written informed consent was waived due to urgent need. This study was conducted in accordance with the Declaration of Helsinki.

A feature selection process was employed to incrementally choose the most representative features. Features with a significant difference between the two groups were selected for the subsequent machine learning process. The combined training–validation set was collected from Huoshenshan Hospital, and two test sets were collected from Huoshenshan Hospital and Taikang Tongji Hospital, respectively.

To prevent overfitting and improve generalizability, k-fold cross-validation was used. Since training and validation data were randomly generated, we took the average score of five rounds of k-fold cross-validation as the final validation results. The optimal-feature subsets in each model were defined as those with the highest AUC values. The flow diagram of training, validation, and test of the prediction models is shown in Figure 1.
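
The paper does not include its MATLAB code, but the procedure described above (incrementally growing a feature subset and scoring each candidate by its mean cross-validated AUC) can be sketched as follows. This is a minimal illustration in Python/scikit-learn, using logistic regression as the scoring model and assuming `X` is a feature table and `y` labels severe versus nonsevere cases; it is not the authors' implementation.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def select_optimal_subset(X, y, n_splits=5, n_repeats=5, random_state=0):
    """Greedy forward feature selection scored by mean AUC over repeated k-fold CV."""
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=random_state)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    remaining, selected = list(X.columns), []
    best_auc, best_subset = 0.0, []
    while remaining:
        # Score each candidate feature when added to the current subset.
        scores = {f: cross_val_score(model, X[selected + [f]], y,
                                     cv=cv, scoring="roc_auc").mean()
                  for f in remaining}
        best_f = max(scores, key=scores.get)
        selected.append(best_f)
        remaining.remove(best_f)
        if scores[best_f] > best_auc:          # remember the best subset seen so far
            best_auc, best_subset = scores[best_f], list(selected)
    return best_subset, best_auc
```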

Figure 1 Flow diagram of training, validation, and testing of the prediction models.

Four prediction models were trained with logistic regression (LR), support vector machine (SVM), k-nearest neighbor (KNN), and naïve Bayes (NB), respectively. Experiments were implemented using MATLAB 2018. ROC curves, AUC values, sensitivity, and specificity were used to evaluate predictive performance. The prediction tasks in this work are classification tasks.
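
For readers who want to reproduce the general setup outside MATLAB, the sketch below shows the same four classifier families and the reported metrics (AUC, sensitivity, specificity) computed with scikit-learn. Variable names and hyperparameters are illustrative, not the study's settings.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score, confusion_matrix

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),          # probability=True enables ROC/AUC scoring
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "NB": GaussianNB(),
}

def evaluate(model, X_train, y_train, X_test, y_test):
    """Fit one classifier and report AUC, sensitivity, and specificity."""
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]
    pred = (prob >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    return {
        "AUC": roc_auc_score(y_test, prob),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```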

Software for predicting disease progression of COVID-19 was developed based on machine learning, making it convenient for clinicians to use. The interface of the software was written in Visual Studio 2013 and the internal functions in MATLAB 2018.

Statistical analyses were performed using SPSS 23.0. Categorical data are expressed as proportions. Descriptive data are expressed as medians and interquartile ranges for skewed-distribution variables and means ± SD for variables with normal distribution. Student's t-tests and nonparametric Mann–Whitney tests were used to compare normal- and skewed-distribution variables, respectively. Pearson's χ² was used to compare categorical variables and multiple rates. Two-sided P<0.05 was considered statistically significant.
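
As an illustration of how such group comparisons can be carried out, the snippet below applies the same three tests with SciPy. All numbers are placeholders, not values from the study.

```python
import numpy as np
from scipy import stats

# Placeholder laboratory values for the two groups (not the study's data).
severe = np.array([2.1, 3.4, 5.6, 1.9, 4.2])      # e.g. D-dimer, severe group
nonsevere = np.array([0.4, 0.7, 1.1, 0.5, 0.9])   # e.g. D-dimer, nonsevere group

t_stat, p_t = stats.ttest_ind(severe, nonsevere)       # normally distributed data
u_stat, p_u = stats.mannwhitneyu(severe, nonsevere)    # skewed-distribution data

# Pearson's chi-squared test on a 2x2 table of a categorical variable
# (rows: severe/nonsevere; columns: with/without the characteristic).
table = np.array([[70, 31],
                  [75, 93]])
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
print(p_t, p_u, p_chi2)
```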

By April 5, 2020, 1,567 COVID-19 patients in the medical record systems of Huoshenshan Hospital and Taikang Tongji Hospital had been screened for data collection. Data from 455 patients (347 from Huoshenshan, 108 from Taikang Tongji) with complete medical information and laboratory-examination results were collected. In total, 78 patients from Huoshenshan were randomly selected as test set 1 (30 severe cases and 48 nonsevere cases) and 108 patients from Taikang Tongji as test set 2 (40 severe cases and 68 nonsevere cases). Data of the remaining 269 patients from Huoshenshan were used for the training and validation set (101 severe cases and 168 nonsevere cases). Demographic and clinical characteristics of the 269 patients in the training–validation set are summarized in Table 1, and clinical characteristics of patients in test sets 1 and 2 are summarized in Supplementary Tables 2 and 3, respectively.

Table 1 Demographic and clinical characteristics of COVID-19 patients in training and validation sets

The median age of the patients in the training and validation set was 62.75 years, and 51% of the patients were men. Severe patients were much older than nonsevere patients (71.31 vs 57.61 years, P<0.05). Comorbidities were present in 55% of patients (147/270), and the prevalence of comorbidities in severe patients was higher than that in nonsevere patients (73% vs 45%, P<0.05). Hypertension (32%), diabetes (13%), and coronary heart disease (9%) were the most common comorbidities, and presented more frequently in severe patients: 26% of patients overall had two or more comorbidities, while severe patients had a higher prevalence of two or more comorbidities (52% vs 15%, P<0.05). Fever (68%), cough (49.4%), and fatigue (45%) were the most common symptoms at onset of illness, and fever and fatigue were present more frequently in severe patients (Table 1).

Severe patients had elevated levels of CRP, lactate dehydrogenase (LDH), D-dimer, and α-hydroxybutyrate dehydrogenase, and had reduced levels of hemoglobin, hematocrit, and albumin (Table 1). Features with significant differences between the groups were introduced for selection using machine learning.

A total of 21 features with a significant difference between the two groups in the training and validation sets were used for the subsequent modeling (Supplementary Table 4). The subset with the highest AUC was selected as the optimal subset of the corresponding machine-learning method (Table 2). Briefly, KNN achieved the highest AUC (0.9484, 95% CI 0.924–0.973), with eleven features, among the four methods in the training and validation sets (Table 2). D-dimer was the single optimal feature with the highest AUC in the optimal-feature subset of each machine-learning method (0.8368 in the LR model, 0.8169 in the NB model, 0.8343 in the KNN model, and 0.8322 in the SVM model, respectively; Supplementary Table 5). ROC curves obtained by the optimal-feature subsets, single features, and all features using k-fold cross-validation are shown in Figure 2. The highest AUC values in optimal-feature subsets were 0.937 (95% CI 0.902–0.972) for LR, 0.949 (95% CI 0.924–0.973) for KNN, 0.935 (95% CI 0.906–0.964) for NB, and 0.931 (95% CI 0.895–0.967) for SVM (Table 3).

Table 2 Optimal-feature subset of each machine learning method

Table 3 Comparison of the average predictive performance by k-fold cross-validation with optimal-feature subset

Figure 2 ROC curves for models in training and validation sets. (A) ROC curves of LR subsets for distinguishing between severe and nonsevere patients. AUC of optimal-feature subset 0.937 (95% CI 0.902–0.972), all features 0.916 (95% CI 0.876–0.955), and single optimal feature (D-dimer) 0.837 (95% CI 0.786–0.887). (B) ROC curves for subsets of features from KNN for distinguishing between severe and nonsevere patients. AUC of optimal-feature subset 0.948 (95% CI 0.924–0.973), all features 0.935 (95% CI 0.907–0.963), and single optimal feature (D-dimer) 0.835 (95% CI 0.782–0.887). (C) ROC curves of subsets of features from NB for distinguishing between severe and nonsevere patients. AUC of optimal-feature subset 0.935 (95% CI 0.906–0.964), all features 0.916 (95% CI 0.879–0.954), and single optimal feature (D-dimer) 0.805 (95% CI 0.748–0.861). (D) ROC curves of subsets of features from SVM for distinguishing between severe and nonsevere patients. AUC of optimal-feature subset 0.931 (95% CI 0.895–0.967), all features 0.918 (95% CI 0.879–0.957), and single optimal feature (D-dimer) 0.832 (95% CI 0.781–0.884).

We compared predictive performance obtained by the models based on the optimal-feature subsets. Sensitivity (Sen), specificity (Spe), false-positive rate (FPR), false-negative rate (FNR), positive predictive value (PPV), negative predictive value (NPV), accuracy, and F1 scores of the above four models are shown in Table 3. No significant differences were observed among these four models for Sen, FNR, PPV, NPV, or accuracy. Spe, FPR, and F1 scores for SVM were superior (Table 3).

To evaluate the importance of each feature in the corresponding optimal subsets, we evaluated predictive performance based on AUC obtained by each feature in the subsets. D-dimer, CRP, age, white blood cell (WBC) count, LDH, and albumin showed the highest predictive performance in the optimal subsets, with D-dimer, CRP, and age the top three (Supplementary Table 5).

Test set 1 comprised 78 patients from Huoshenshan, and test set 2 comprised 108 patients from Taikang Tongji. AUC values obtained by the four models in test set 1 were 0.9059 (95% CI 0.832–0.980) for LR, 0.9139 (95% CI 0.841–0.987) for KNN, 0.9177 (95% CI 0.848–0.988) for NB, and 0.9594 (95% CI 0.920–0.999) for SVM. F1 scores of the four models in test set 1 were 0.818 for LR, 0.828 for KNN, 0.867 for NB, and 0.885 for SVM (Supplementary Table 6). ROC curves obtained for the models in test set 1 are shown in Figure 3A. No significant differences were observed among these models for Sen, Spe, FPR, FNR, PPV, NPV, or accuracy (Supplementary Table 6). The predictive performance of all models was satisfactory in test set 1. Then, to test whether these models would still work at another hospital, we evaluated predictive performance in test set 2. AUC values of the four models in test set 2 were 0.8143 (95% CI 0.728–0.901) for LR, 0.8057 (95% CI 0.717–0.894) for KNN, 0.8265 (95% CI 0.741–0.912) for NB, and 0.8140 (95% CI 0.728–0.900) for SVM. F1 scores of the four models in test set 2 were 0.676 for LR, 0.698 for KNN, 0.716 for NB, and 0.691 for SVM (Supplementary Table 7). ROC curves obtained by the four models in test set 2 are shown in Figure 3B. No significant differences were observed among these four models for Sen, Spe, FPR, FNR, PPV, NPV, or accuracy (P>0.05, Supplementary Table 7).

Figure 3 ROC curves of models in test sets. (A) Optimal-feature set of LR, KNN, NB, and SVM in test set 1. (B) Optimal-feature set of LR, KNN, NB, and SVM in test set 2. (C) Optimal-feature set of LR, KNN, NB, and SVM in the mixed test set. (D) AUC values of optimal-feature subsets for different models in test set 1, test set 2, and the mixed test set.

To explore potential reasons for the differences between the two test sets, we randomly selected 54 patients from test set 2 (Taikang Tongji) and added their data to the training–validation set. The remaining data from test set 2 and the data from test set 1 (from Huoshenshan) were combined. As such, data from 323 patients were used as the training–validation set, and data from 132 patients were used as the mixed test set. AUC values obtained by the four models in the mixed test set were 0.8843 (95% CI 0.823–0.946) for LR, 0.8561 (95% CI 0.786–0.926) for KNN, 0.9096 (95% CI 0.853–0.967) for NB, and 0.9255 (95% CI 0.882–0.969) for SVM. F1 scores of the four models in the mixed test set were 0.777 for LR, 0.750 for KNN, 0.840 for NB, and 0.832 for SVM (Supplementary Table 8). ROC curves obtained by the four models in the mixed test set are shown in Figure 3C. The predictive performance of the models in the mixed test set was much better than that in test set 2 (Figure 3D).

Software was developed for predicting disease progression based on machine learning for clinical practice (Supplementary Figures 1, 2, and 3). The first page provides the function of training and validation using k-fold cross-validation to select the optimal-feature subset and parameters (Supplementary Figure 1). On the second page, a model that has been trained can easily be selected, and its predictive performance can be evaluated on test sets (Supplementary Figure 2). Once the validity of the trained model has been confirmed on the second page, a prediction probability will emerge for an upcoming patient on the third page (Supplementary Figure 3).

We developed a prediction model of disease progression based on machine learning. Clinical characteristics, WBC count, inflammatory markers, liver function, renal function, and coagulation functions were collected and utilized to establish the predictive model based on machine learning. In sum, 21 features with significant differences between the severe and nonsevere groups were selected from a total of 48 features. In this feature-selection process, relatively useless features were eliminated to make the calculation more effective. Finally, the optimal-feature subset was determined using k-fold cross validation for each method. Moreover, the predictive performance of the models was evaluated by two test sets from two hospitals, and AUC values in these test sets were satisfactory. We also developed software to predict disease progression of COVID-19 based on machine learning that can be used conveniently in clinical practice.

Clinical features of the patients in this study were consistent with previous large-sample studies.3,12 Comorbidity, older age, lower lymphocyte count, and higher LDH were identified as independent high-risk factors for COVID-19 disease progression.13 Ji et al developed a risk-factor scoring system (CALL) based on these features to predict disease progression.13 However, few cases were included, and the reliability of the model needs to be confirmed. In our study, the models were trained with optimal-feature subsets to attain optimal predictive performance, and we evaluated predictive performance on two test sets from two hospitals to ensure the reliability of the models.

D-dimer, CRP, age, WBC count, LDH, and albumin had better predictive performance in the optimal-feature subset, with D-dimer, CRP, and age the top three. Zhou et al found no significant differences between a nonaggravation group and an aggravation group for WBC count, CRP, albumin, LDH, or D-dimer level.10 Using a binary logistic regression model, they found that total lymphocyte count was a risk factor associated with disease progression in COVID-19 patients.10 However, only 17 patients were included in that study, and total lymphocyte count did not reflect disease progression. Zhou et al showed that older age and elevated D-dimer could help clinicians to identify patients with poor prognosis at an early stage.6 Consistent with that study, age and D-dimer level were important features in the optimal-feature subset. Elevated levels of D-dimer are associated with disease activity and inflammation, mainly including venous thromboembolism, sepsis, or cancer.14,15 A retrospective study on deceased patients also showed that D-dimer was markedly higher in deceased patients than in recovered patients.16 Therefore, monitoring D-dimer levels can help clinicians identify patients at high risk of disease progression. Anticoagulation treatment can be given to patients with high D-dimer levels to prevent disease progression. Albumin levels decrease significantly in most severe COVID-19 patients and decrease continuously as the disease progresses.17 Hypoalbuminemia is associated with poor clinical outcomes for hospitalized patients.18,19 Hypoalbuminemia in severe patients is mainly due to inadequate nutrition intake and overconsumption.

The predictive performance of the models in test set 1 was much better than that in test set 2, and the patients enrolled in test set 2 were from another hospital. Differences in laboratory findings and medical services may be potential reasons for the lower predictive performance in test set 2. After data from Taikang Tongji had been added to the training set, predictive performance improved significantly, indicating that predictive performance at another hospital could be improved if part of the data collected from that hospital is included in the training stage.

The code of the software used in this study is available from the corresponding author on reasonable request.

The data sets used in this study are available from the corresponding author Kaijun Liu (email [emailprotected]) on reasonable request.

This study was approved by the ethics committee of Huoshenshan Hospital (Wuhan, China) (HSSLL024).

As all subjects were anonymized in this retrospective study, written informed consent was waived due to urgent need.

All authors made substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data, took part in drafting the article or revising it critically for important intellectual content, agreed to submit to the current journal, gave final approval to the version to be published, and agree to be accountable for all aspects of the work.

This work was supported by the National Natural Science Foundation of China (81700483), Chongqing Research Program of Basic Research and Frontier Technology (cstc2017jcyjAX0302, cstc2020jcyj-msxmX1100), and Army Medical University Frontier Technology Research Program (2019XLC3051). The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. The corresponding authors had full access to all the data in the study, and had final responsibility for the decision to submit for publication.

The authors declare that there are no conflicts of interest.

1. World Health Organization. Weekly epidemiological update - 24 November 2020. Available from: https://www.who.int/publications/m/item/weekly-epidemiological-update---24-november-2020. Accessed April 07, 2021.

2. Lu R, Zhao X, Li J, et al. Genomic characterisation and epidemiology of 2019 novel coronavirus: implications for virus origins and receptor binding. Lancet. 2020;395(10224):565–574. doi:10.1016/S0140-6736(20)30251-8

3. Huang C, Wang Y, Li X, et al. Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. Lancet. 2020;395(10223):497–506. doi:10.1016/S0140-6736(20)30183-5

4. Chen N, Zhou M, Dong X, et al. Epidemiological and clinical characteristics of 99 cases of 2019 novel coronavirus pneumonia in Wuhan, China: a descriptive study. Lancet. 2020;395(10223):507–513. doi:10.1016/S0140-6736(20)30211-7

5. Sun Y, Koh V, Marimuthu K, et al. Epidemiological and clinical predictors of COVID-19. Clin Infect Dis. 2020;71(15):786–792. doi:10.1093/cid/ciaa322

6. Zhou F, Yu T, Du R, et al. Clinical course and risk factors for mortality of adult inpatients with COVID-19 in Wuhan, China: a retrospective cohort study. Lancet. 2020;395(10229):1054–1062. doi:10.1016/S0140-6736(20)30566-3

7. Guan WJ, Liang WH, Zhao Y, et al. Comorbidity and its impact on 1590 patients with Covid-19 in China: a nationwide analysis. Eur Respir J. 2020;55(5):2000547. doi:10.1183/13993003.00547-2020

8. Wang L, He W, Yu X, et al. Coronavirus Disease 2019 in elderly patients: characteristics and prognostic factors based on 4-week follow-up. J Infect. 2020;80(6):639–645. doi:10.1016/j.jinf.2020.03.019

9. Wu C, Chen X, Cai Y, et al. Risk factors associated with acute respiratory distress syndrome and death in patients with coronavirus disease 2019 pneumonia in Wuhan, China. JAMA Intern Med. 2020;180(7):934. doi:10.1001/jamainternmed.2020.0994

10. Zhou Y, Zhang Z, Tian J, Xiong S. Risk factors associated with disease progression in a cohort of patients infected with the 2019 novel coronavirus. Ann Palliat Med. 2020. doi:10.21037/apm.2020.03.26

11. World Health Organization. Clinical management of severe acute respiratory infection when novel coronavirus (nCoV) infection is suspected: interim guidance. 2020. Available from: https://www.who.int/docs/default-source/coronaviruse/clinical-management-of-novel-cov.pdf. Accessed April 7, 2021.

12. Guan WJ, Ni ZY, Hu Y, et al. Clinical characteristics of coronavirus disease 2019 in China. N Engl J Med. 2020;382(18):1708–1720. doi:10.1056/NEJMoa2002032

13. Ji D, Zhang D, Xu J, et al. Prediction for progression risk in patients with COVID-19 pneumonia: the CALL score. Clin Infect Dis. 2020;71(6):1393–1399. doi:10.1093/cid/ciaa414

14. Borowiec A, Dabrowski R, Kowalik I, et al. Elevated levels of d-dimer are associated with inflammation and disease activity rather than risk of venous thromboembolism in patients with granulomatosis with polyangiitis in long term observation. Adv Med Sci. 2020;65(1):97–101. doi:10.1016/j.advms.2019.12.007

15. Schutte T, Thijs A, Smulders YM. Never ignore extremely elevated D-dimer levels: they are specific for serious illness. Neth J Med. 2016;74(10):443–448.

16. Chen T, Wu D, Chen H, et al. Clinical characteristics of 113 deceased patients with coronavirus disease 2019: retrospective study. BMJ. 2020;368:m1091. doi:10.1136/bmj.m1091

17. Zhang Y, Zheng L, Liu L, Zhao M, Xiao J, Zhao Q. Liver impairment in COVID-19 patients: a retrospective analysis of 115 cases from a single center in Wuhan city, China. Liver Int. 2020;40(9):2095–2103. doi:10.1111/liv.14455

18. Kim S, McClave SA, Martindale RG, Miller KR, Hurt RT. Hypoalbuminemia and clinical outcomes: What is the mechanism behind the relationship? Am Surg. 2017;83(11):1220–1227. doi:10.1177/000313481708301123

19. Yanagisawa S, Miki K, Yasuda N, Hirai T, Suzuki N, Tanaka T. Clinical outcomes and prognostic factor for acute heart failure in nonagenarians: impact of hypoalbuminemia on mortality. Int J Cardiol. 2010;145(3):574–576. doi:10.1016/j.ijcard.2010.05.061

Read the original here:
Prediction of Disease Progression of COVID-19 | IJGM - Dove Medical Press

Artificial intelligence will maximise efficiency of 5G network operations – ComputerWeekly.com

Compared with previous types of networks, 5G networks are both more in need of automation and more amenable to automation. Automation tools are still evolving and machine learning is not yet common in carrier-grade networking, but rapid change is expected.

Emerging standards from 3GPP, ETSI, ITU and the open source software community anticipate increased use of automation, artificial intelligence (AI) and machine learning (ML). And key suppliers' activities add credibility to the vision and promise of artificially intelligent network operations.

"Growing complexity and the need to solve repetitive tasks in 5G and future radio systems necessitate new automation solutions that take advantage of state-of-the-art artificial intelligence and machine learning techniques that boost system efficiency," wrote Ericsson's chief technology officer (CTO), Erik Ekudden, recently.

In 2020, Ericsson engineers demonstrated machine learning software that orchestrated virtual machines on a web server. They reported that during a 12-hour stress test, their software decreased idle cycles to 2%, from a baseline of 20%. Similar efficiency gains could enhance collections of edge computers and computers within cloud-native 5G infrastructure.

Considering that 5G core networks are evolving towards increased dependence on software and generic computing resources, Ericsson's demonstration suggests that large-scale use of AI solutions could help carriers use infrastructure as efficiently as possible while handling a mix of traffic types that change dynamically and fulfilling diverse service-level agreements.

Nokia marketing manager Filip De Greve recently stated: "The benefits of AI and ML are unquestionable; all it needs is the right approach and the right partner to unlock them."

A whitepaper from Nokia describes potential roles for AI and ML in virtually all phases of a service provider's operations. Last month, Nokia announced the availability of its Software Enablement Platform, whose features include a means for making use of AI and ML in edge computers that run both open radio access networks (O-RANs) and application-level services. Nokia's platform provides data that is important to machine learning developments for software-defined radios.

Carriers and third parties can develop software for Nokia's platform, which comes with some samples that are in current commercial trials. One included xApp relies on machine learning methods for traffic steering, roughly speaking a type of service-aware load balancing for radio channels.

Huawei, too, has engaged in a number of machine learning developments in recent years, but seems to have made relatively few disclosures about the matter recently. The company said its management and orchestration (MANO) solution uses AI and big data technologies to implement automatic deployment, configuration, scaling and healing.

Needs for machine learning arise from expected challenges in managing future 5G networks. Future deployments will likely have traffic-carrying capacity orders of magnitude greater than existing infrastructures. Many suppliers, researchers and developers expect to need machine learning to make efficient use of 5G technologies.

Opportunities to use machine learning are arising with increased reliance on cloud-native resources in telecommunications networks. Carriers also experience the same powerful currents that impel many industries towards softwarisation, use of virtual machines, DevOps principles and other global vectors for intelligent automation.

Suppliers to telecoms carriers and advanced researchers are developing machine learning software that, for example, controls smart antennas with split-second timing, assigns and reassigns bandwidth within a packet core, and orchestrates assignments for an edge computer's virtual machines.

Essentially, the software plays a game, aiming to predict traffic loads and use the fewest resources to carry traffic in accordance with service-level agreements. The intended result would improve the availability of resources to serve additional customers at times when loads are at their peak. When loads abate, the software can cause hardware to operate in power-saving standby mode.

Rules-based scripts and statistical models can accomplish some of these goals, but hand-crafted algorithms face challenges. A vast number of parameters specify a connection event in a 5G network, far more than in previous generations. That is why machine learning could be a requirement, not simply an optimisation tool, for efficient resource utilisation in full-scale 5G operations.

Recent reports have surveyed a range of wireless communications applications that machine learning researchers and developers are working on, yielding many candidate technologies for carrier roadmaps.

From a business lifecycle perspective, opportunities exist for machine learning developments to expedite network planning and design, operations, marketing and other duties that normally require an intelligent human. Developers are targeting network management functions, including fault management and assurance, configuration, accounting, performance and security (FCAPS).

From a network technology perspective, machine learning applications in research and development phases could affect every layer of the communications stack, from low-level physical and data link layers, through media access, transport, switching, session, presentation and application layers.

At lower layers of radio access networks, generic computers process baseband signals, and they schedule and form directional radio beams by synchronising many antenna elements. Machine learning systems can alleviate congestion by assigning optimal modulation parameters and rapidly scheduling beams that are calculated to fulfil immediate demands.

At higher layers of communications stacks, softwarisation yields opportunities to use and reuse virtual network functions (VNFs) in dynamic combinations to handle changes in traffic patterns. For example, intelligent systems can right-size (autoscale) temporary combinations of resources to support a large video conference and reassign those resources to other jobs after the event.
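
As a concrete (and deliberately simplified) illustration of that right-sizing idea, the sketch below scales the number of VNF replicas from a short-term load forecast while keeping some headroom for bursts. The class, thresholds and capacity figures are hypothetical, not taken from any vendor's product.

```python
import math
from collections import deque

class Autoscaler:
    """Toy right-sizer: forecast load, then pick a replica count with headroom."""

    def __init__(self, capacity_per_replica, headroom=0.2, window=12):
        self.capacity = capacity_per_replica   # requests/s one replica can handle
        self.headroom = headroom               # spare fraction kept for bursts
        self.history = deque(maxlen=window)    # recent load samples

    def forecast(self):
        # Weighted average of recent samples (newest weighted most); a real
        # system might use a learned time-series model here instead.
        if not self.history:
            return 0.0
        weights = range(1, len(self.history) + 1)
        return sum(w * x for w, x in zip(weights, self.history)) / sum(weights)

    def replicas_needed(self, load_sample):
        self.history.append(load_sample)
        predicted = self.forecast() * (1.0 + self.headroom)
        return max(1, math.ceil(predicted / self.capacity))

# Feed in load samples (requests/s) and read off the suggested replica count.
scaler = Autoscaler(capacity_per_replica=500)
for load in [200, 800, 1500, 2600, 900, 300]:
    print(load, "->", scaler.replicas_needed(load), "replicas")
```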

In packet core networks, intelligent selection among the astronomical number of ways to mix and match network functions can cut idling while keeping customers satisfied. In radio access networks, intelligent tweaks to power levels, symbol sets, frame sizes and other parameters promise to squeeze the greatest capacity from the available spectrum.

Cyber security and privacy measures can also benefit from machine learning. In theory, intelligent domain isolation can open and shut access automatically in accordance with knowledge encoded in large databases such as event logs. Distributed learning methods can run on edge computers and user devices, keeping private data separate from centralised databases.
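
A minimal sketch of the idea behind such distributed (federated) learning follows: each device trains on its own data and shares only model weights, which a coordinator averages. Everything here, including the synthetic data, is illustrative.

```python
import numpy as np

def local_update(weights, X_local, y_local, lr=0.01, epochs=5):
    # A few gradient-descent steps on a linear model, run on the device itself.
    w = weights.copy()
    for _ in range(epochs):
        grad = X_local.T @ (X_local @ w - y_local) / len(y_local)
        w -= lr * grad
    return w

def federated_round(global_weights, devices):
    # Devices return updated weights; the coordinator averages them, weighted
    # by how much data each device holds. Raw data never leaves a device.
    updates, sizes = [], []
    for X_local, y_local in devices:
        updates.append(local_update(global_weights, X_local, y_local))
        sizes.append(len(y_local))
    return np.average(np.stack(updates), axis=0, weights=np.array(sizes, float))

# Synthetic per-device data standing in for private user data.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):                # ten aggregation rounds
    w = federated_round(w, devices)
```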

Juniper's slogan "the self-driving network" expresses a vision of autonomous communications services, analogous to autonomous vehicles. Many other network technology developers have embraced similar ideas. Engineers and marketers often describe intent-based networking (IBN), one-touch provisioning, and zero-touch network and service management.

Most suppliers will probably use one of these phrases, or a similar phrase. All of them refer to a subset of network operations that can occur autonomously, or nearly so. In fact, many software-defined networking technology concepts rely on rules-based systems, a programming strategy that the artificial intelligence community developed decades ago.

Verizon network architect Mehmet Toy recently described one interpretation of IBN to mean deploying and configuring the network resources according to operator intentions automatically. While developments often focus on fulfilling the intentions of network managers, Toy also envisions network configurations that respond to changes in user intentions.

Imaginably, a future network manager could employ natural language to revise a bandwidth-throttling policy. But beware of hype surrounding network automation. In some enterprise networks, zero-touch nodes configure automatically when a technician powers up a new rack. In contrast, installing a carrier-class fibre termination node remains complex.

Much as driverless cars are requiring more time and development resources than some expected, the vision of fully autonomic networks seems to remain a distant one. One major challenge consists of acquiring and analysing abundant telemetry data within service providers networks.

Many systems do not expose the data that data-hungry machine learning systems need to predict and respond to changes in traffic loads. Systems that do provide telemetry use diverse protocols and data structures, complicating AI software developments. Perhaps suppliers will see telemetry data as having high value as intellectual property and worthy of encryption.

A 2020 Nokia whitepaper advocates a multistage technology roadmap to manage the opportunities and risks. Nokia acknowledges that AI is rare in today's networks. More commonly, expert human network managers create, implement and often adjust statistical and rules-based models that govern automated systems in telecommunications networks.

Intermediate between today's model-driven practices and the future vision of autonomic networks, Nokia sees the emergence of intent-driven network management processes, enabled by closed-loop automation systems. Automated resource orchestration would free up human network managers to focus on business needs, service creation and DevOps.

Does AI threaten network managers' jobs? In one sense, a changing technology landscape often challenges networking professionals to keep up with new developments. In another sense, AI tools in diverse fields tend to be productivity enhancers rather than redundancy generators. Similarly, for doctors and attorneys, AI is more of a tool than a threat.

One or another industry player seems to be always buzzing about intelligent networks. AT&T has been at it the longest, initially using the phrase in the 1980s to describe an early network computing initiative. Expectations of artificial intelligence in networks have focused and refocused repeatedly over the years. This time may be different. Are we there yet?

Now that computers control or constitute virtually all network nodes, software seems to be more agile at all layers of communications stacks. Business evolution will determine which AI and ML developments contribute most to business results and customer experiences, and which nodes in a network provide maximum leverage for machine learning software to add value.

More:
Artificial intelligence will maximise efficiency of 5G network operations - ComputerWeekly.com

The Future of AI: Careers in Machine Learning – Southern New Hampshire University

The robots are coming. If there is one thing we learned from the COVID-19 pandemic, it's that when humans are sent home, machines keep working.

This doesn't mean that robots will take over the world. It does, however, mean that our technical landscape is changing.

Human history has a long and favorable track record of technological advancements, particularly when it comes to ideas that seem ludicrous at the time (Wright brothers, anyone?). The printing press, assembly line and personal computer have all helped move civilization forward by leaps and bounds over the last few centuries.

Imagine being one of the first people to replace glasses with contact lenses by putting them directly on their eyes, no less. Henry Ford replaced horses with the automobile as our main mode of transportation. The process of pasteurization changed the way we eat. Examples like these are endless, because throughout human history, there has been innovation and change.

Even as recently as the 1980s, there was no internet in people's homes. The very means by which you are reading this article did not exist. Online school did not exist, at least not in the way we take college classes online now.

And while each technological advancement may have its detractors, it's hard to argue with the benefits of technology as a whole. After all, thinking big got us to the moon, and gave us television, 3-D printing and a host of incredible advances in modern medicine.

So, are you wondering what's next? The future of technology lies squarely with machine learning and with artificial intelligence, known as AI.

Artificial intelligence is part of the field of data science. People who work in data science are skilled in developing mathematical algorithms to answer complex questions. When, for example, a company like Netflix wants to predict what movies a customer might want to watch next, a data scientist will create an algorithm based on that customer's viewing history. Then, they will use that algorithm to offer a list of suggestions.

"Machine learning is a branch of data science which involves using data science programs that can adapt based on experience," said Ben Tasker, technical program facilitator of data science and data analytics at Southern New Hampshire University. "Take a weather predictor, for example. The more weather inputs there are, the better the prediction for what will come next."
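
As a toy illustration of Tasker's point, the snippet below trains the same simple model on increasing amounts of synthetic weather history; with more inputs, its temperature predictions generally improve. All data and numbers here are made up for the example.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Three years of synthetic daily temperatures with a seasonal cycle plus noise.
rng = np.random.default_rng(42)
days = np.arange(3 * 365)
temps = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)
X = np.column_stack([np.sin(2 * np.pi * days / 365),
                     np.cos(2 * np.pi * days / 365)])   # seasonal features

def error_with_history(n_history):
    # Fit on the first n_history days, evaluate on the final year.
    model = LinearRegression().fit(X[:n_history], temps[:n_history])
    test = slice(2 * 365, None)
    return mean_absolute_error(temps[test], model.predict(X[test]))

for n in (30, 180, 730):
    print(f"{n} days of history -> MAE {error_with_history(n):.2f} degrees")
```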

While machine learning is useful, it's important to note that there is no artificial intelligence involved in its functions. Machine learning involves rote mathematical or mechanical processes only.

Artificial intelligence then advances data science and machine learning even further.

Whereas machine learning can make predictions, artificial intelligence can make adjustments to its computations. "In other words, AI can adjust a program to execute tasks smartly," Tasker said. "For example, a fully autonomous, self-driving car is an example of something that would use full artificial intelligence."

These days, the idea of such a self-driving car is no longer science fiction. As the fields of science and engineering continue to advance, artificial intelligence is becoming "a lot less artificial and a lot more intelligent," Tasker said.

Because so much about the field of data science in general and AI in particular is new, there are many opportunities to make your own niche, especially now that many companies have started to invest in the idea of artificial intelligence, Tasker said. This creates a wealth of career opportunities for those who thrive on charting their own path. The future of AI is great.

Careers for computer and information research scientists are predicted to grow 15% between now and 2029, according to the U.S. Bureau of Labor Statistics (BLS). That is much faster than the national average for career growth. The median pay is a healthy $122,840 per year, BLS reported.

Some other top career options for machine learning and artificial intelligence include:

So, will robots replace humans moving forward? For some jobs or tasks, quite possibly. For all jobs or tasks? Not likely.

"Of course, robots are already in the workplace," Tasker said. "They are not intelligent, but they perform basic tasks." Car manufacturers use robots on assembly lines already and have for years.

Whether a company actively uses artificial intelligence or not, all industries will be impacted by it, whether intentionally or unintentionally, Tasker said. "I do think that some industries will have a higher barrier of entry, so to speak, such as medicine," he said. Patients still prefer a human touch for things like receiving a diagnosis or test results.

"As artificial intelligence technology continues to develop, humans will need to have an ethical debate about what robots can and cannot do, but yes, we will see more robots," said Tasker.

And as the use of robots grows, "without a doubt, ethics is going to play a much larger role as AI grows," said Tasker, "or at least it should."

Careers in machine learning and artificial intelligence are still being defined, which creates generous opportunities to innovate and carve your own career path. If you like math, computer programming, coding, and technology in general, a career in data science, machine learning, or AI is definitely one to consider.

Having a strong foundation in math and STEM can help prepare you for a career in AI. Knowledge of psychology will be particularly helpful, too.

Also important: a large threshold for change. "Data science [and AI with it] changes every year," Tasker said, "so the people working in data science will need to change with it. You will always be learning new technologies, algorithms, and coding languages."

The more math, programming, and experience with cloud computing that you can get under your belt, the better.

And, as more and more adoption of artificial intelligence technologies occurs, "we will begin to see an ethical debate emerge about what AI should and should not be doing," Tasker said. That makes courses in ethics critical, because "as the field of AI grows, more ethical considerations will need to be applied."

Keep in mind that while a bachelor's degree is a great foundation on which to build a career in artificial intelligence, an advanced degree is likely necessary to advance to the highest levels in the field.

"Most jobs in the field of artificial intelligence require a graduate degree, such as a master of science or even doctorate, so be ready to continually learn," said Tasker.

While no career is truly future-proof given the ever-changing technology landscape, there are some ways you can be best prepared to weather the change. By grounding yourself with a strong science, math, and engineering background and then being ready to drive change, you may enjoy a long and prosperous career in the field of artificial intelligence.

Of course, while having a strong academic background is important, being good at math and programming is not enough. To really thrive in this career field, you also need good, old-fashioned grit. In fact, "curiosity, grit, and being humble are key traits toward having a successful, long-term career in data science, and especially in artificial intelligence," said Tasker. These are traits that you cannot necessarily learn in the classroom, but they are helpful to being successful in this field long-term.

We have actually been using AI for some time, and not just in factories and on assembly lines, or to design futuristic cars.

Have you ever filled out a job application and included key words so that the artificial job screening tool doesn't filter you out of contention? That's artificial intelligence.

"Some artificial intelligence programs can even scan how a resume is drafted to see personality traits of an applicant," said Tasker. "Other programs use facial recognition, which scans your facial expressions in an interview to create personality profiles of applicants."

Likewise, if you have ever used a website and a chat bot popped up, saying "How can I help you today?", that is also artificial intelligence. If you've ever thought you were chatting with a real, live human only to be informed that you're chatting with a bot, you already know just how realistic artificial intelligence tools already are in the business and retail world.

"Chat bots and virtual assistants are being routinely used to respond to easy emails, schedule appointments, and even take meeting notes for users," Tasker said. While being on the receiving end of a bot can at times be frustrating, many businesses use them because they can perform repetitive tasks with known outcomes, such as determining which department your query needs to be routed to when you contact a company's customer service.

There are limitations currently, though. "While chat bots can accomplish a surprisingly large number of tasks, they cannot operate your Tesla, for example," said Tasker.

With high return-on-investment to using chat bots and interview bots, the use of artificial intelligence in commerce is not likely to go away anytime soon. If anything, the use of AI will continue to grow in new and innovative ways.

With an increased use of artificial intelligence comes an increase in the conversation about how it should be implemented. This is where a background in psychology could be helpful for people working in this field. "Psychology is important because it teaches a student how the human brain works, which is complicated," said Tasker. "To really learn to program AI, learning how the brain works at some basic level would help as well."

"Just because a chat bot can attend a meeting for an employee, does that mean that we should also make a bot that can perform medical exams? Where is the line? What about facilitating a classroom and teaching our children?" Tasker asked. "What about fully autonomous truck driving?"

Is there a line between what we need versus what we can do? And where does focusing on the bottom line financially begin to cost us when it comes to our humanity?

These are big questions for which there are no easy answers. Yet by studying data science, math and STEM, and by embracing the change inherent in the field of machine learning and artificial intelligence, you just might be the next Wilbur or Orville Wright.

Marie Morganelli, PhD, is a freelance content writer and editor.

Continue reading here:
The Future of AI: Careers in Machine Learning - Southern New Hampshire University

Increasing the Accessibility of Machine Learning at the Edge – Industry Articles – All About Circuits

In recent years, connected devices and the Internet of Things (IoT) have become omnipresent in our everyday lives, be it in our homes and cars or at our workplace. Many of these small devices are connected to a cloud service; nearly everyone with a smartphone or laptop uses cloud-based services today, whether actively or through an automated backup service, for example.

However, a new paradigm known as "edge intelligence" is quickly gaining traction in technology's fast-changing landscape. This article introduces cloud-based intelligence, edge intelligence, and possible use-cases for professional users to make machine learning accessible for all.

Cloud computing, simply put, is the availability of remote computational resources whenever a client needs them.

For public cloud services, the cloud service provider is responsible for managing the hardware and ensuring that the service's availability is up to a certain standard and customer expectations. The customers of cloud services pay for what they use, and the employment of such services is generally only viable for large-scale operations.

On the other hand, edge computing happens somewhere between the cloud and the client's network.

While the definition of where exactly edge nodes sit may vary from application to application, they are generally close to the local network. These computational nodes provide services such as filtering and buffering data, and they help increase privacy, provide increased reliability, and reduce cloud-service costs and latency.

Recently, it's become more common for AI and machine learning to complement edge-computing nodes and help decide what data is relevant and should be uploaded to the cloud for deeper analysis.

Machine learning (ML) is a broad scientific field, but in recent times, neural networks (often abbreviated to NN) have gained the most attention when discussing machine learning algorithms.

Multiclass or complex ML applications such as object tracking and surveillance, automatic speech recognition, and multi-face detection typically require NNs. Many scientists have worked hard to improve and optimize NN algorithms in the last decade to allow them to run on devices with limited computational resources, which has helped accelerate the edge-computing paradigm's popularity and practicability.

One such algorithm is MobileNet, which is an image classification algorithm developed by Google. This project demonstrates that highly accurate neural networks can indeed run on devices with significantly restricted computational power.
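
To make this concrete, here is a hedged sketch of how a quantized MobileNet classifier exported as a .tflite file might be run with the TensorFlow Lite interpreter, the kind of lightweight runtime typically used on constrained devices. The model and image file names are placeholders.

```python
import numpy as np
from PIL import Image
import tflite_runtime.interpreter as tflite   # or: from tensorflow import lite as tflite

# Load the (placeholder) quantized MobileNet model and allocate its tensors.
interpreter = tflite.Interpreter(model_path="mobilenet_v1_quant.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Resize the image to the model's expected input shape (e.g. 224x224x3, uint8).
h, w = inp["shape"][1], inp["shape"][2]
image = np.array(Image.open("input.jpg").resize((w, h)), dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], image[np.newaxis, ...])
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])[0]
print("top class id:", int(np.argmax(scores)))
```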

Until recently, machine learning was primarily meant for data-science experts with a deep understanding of ML and deep learning applications. Typically, the development tools and software suites were immature and challenging to use.

Machine learning and edge computing are expanding rapidly, and the interest in these fields steadily grows every year. According to current research, 98% of edge devices will use machine learning by 2025. This percentage translates to about 18-25 billion devices that the researchers expect to have machine learning capabilities.

In general, machine learning at the edge opens doors for a broad spectrum of applications ranging from computer vision, speech analysis, and video processing to sequence analysis.

A concrete example of a possible application is an intelligent door lock combined with a camera. Such a device could automatically detect a person wanting access to a room and allow the person entry when appropriate.

Due to the previously discussed optimizations and performance improvements of neural network algorithms, many ML applications can now run on embedded devices powered by crossover MCUs such as the i.MX RT1170. With its two processing cores (a 1 GHz Arm Cortex-M7 core and a 400 MHz Arm Cortex-M4 core), developers can choose to run compatible NN implementations with real-time constraints in mind.

Due to its dual-core design, the i.MX RT1170 also allows the execution of multiple ML models in parallel. The additional built-in crypto engines, advanced security features, and graphics and multimedia capabilities make the i.MX RT1170 suitable for a wide range of applications. Some examples include driver distraction detection, smart light switches, intelligent locks, fleet management, and many more.

The i.MX 8M Plus is a family of applications processors that focuses on ML, computer vision, advanced multimedia applications, and industrial automation with high reliability. These devices were designed with the needs of smart devices and Industry 4.0 applications in mind and come equipped with a dedicated NPU (neural processing unit) operating at up to 2.3 TOPS and up to four Arm Cortex A53 processor cores.

Built-in image signal processors allow developers to utilize either two HD camera sensors or a single 4K camera. These features make the i.MX 8M Plus family of devices viable for applications such as facial recognition, object detection, and other ML tasks. Besides that, devices of the i.MX 8M Plus family come with advanced 2D and 3D graphics acceleration capabilities, multimedia features such as video encode and decode support (including H.265), and eight PDM microphone inputs.

An additional low-power 800 MHz Arm Cortex M7 core complements the package. This dedicated core serves real-time industrial applications that require robust networking features such as CAN FD support and Gigabit Ethernet communication with TSN capabilities.

With new devices comes the need for an easy-to-use, efficient, and capable development ecosystem that enables developers to build modern ML systems. NXP's comprehensive eIQ ML software development environment is designed to assist developers in creating ML-based applications.

The eIQ tools environment includes inference engines, neural network compilers, and optimized libraries to enable working with ML algorithms on NXP microcontrollers, i.MX RT crossover MCUs, and the i.MX family of SoCs. The needed ML technologies are accessible to developers through NXP's SDKs for the MCUXpresso IDE and Yocto BSP.

The upcoming eIQ Toolkit adds an accessible GUI (eIQ Portal) and workflow, enabling developers of all experience levels to create ML applications.

Developers can choose to follow a process called BYOM (bring your own model), in which they build their trained models using cloud-based tools and then import them into the eIQ Toolkit software environment; all that's left to do is select the appropriate inference engine in eIQ. Alternatively, developers can use the eIQ Portal GUI-based tools or command line interface to import and curate datasets and use the BYOD (bring your own data) workflow to train their model within the eIQ Toolkit.
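
As a hedged illustration of the first half of such a BYOM-style flow, the snippet below exports a trained Keras model to the TensorFlow Lite format, one of the portable formats that embedded inference engines commonly consume; the eIQ-specific import then happens inside NXP's tools. File and model names are placeholders.

```python
import tensorflow as tf

# Load a model trained elsewhere (e.g. with cloud-based tools).
model = tf.keras.models.load_model("my_trained_model.h5")

# Convert it to a compact .tflite file, enabling default optimizations (quantization).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("my_trained_model.tflite", "wb") as f:
    f.write(tflite_model)
```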

Most modern-day consumers are familiar with cloud computing. However, in recent years a new paradigm known as edge computing has seen a rise in interest.

With this paradigm, not all data gets uploaded to the cloud. Instead, edge nodes, located somewhere between the end-user and the cloud, provide additional processing power. This paradigm has many benefits, such as increased security and privacy, reduced data transfer to the cloud, and lower latency.

More recently, developers often enhance these edge nodes with machine learning capabilities. Doing so helps to categorize collected data and filter out unwanted results and irrelevant information. Adding ML to the edge enables many applications such as driver distraction detection, smart light switches, intelligent locks, fleet management, surveillance and categorization, and many more.

ML applications have traditionally been exclusively designed by data-science experts with a deep understanding of ML and deep learning applications. NXP provides a range of inexpensive yet powerful devices, such as the i.MX RT1170 and the i.MX 8M Plus, and the eIQ ML software development environment to help open ML up to any designer. This hardware and software aims to allow developers to build future-proof ML applications at any level of experience, regardless of how small or large the project will be.

See more here:
Increasing the Accessibility of Machine Learning at the Edge - Industry Articles - All About Circuits