Archive for the ‘Machine Learning’ Category

Worldwide Artificial Intelligence in HR Market to 2027 – Integration of Cloud and Mobile Deployment in HRM Systems Drives Growth – Yahoo Finance


Dublin, April 06, 2022 (GLOBE NEWSWIRE) -- The "Global Artificial Intelligence in HR Market (2022-2027) by Offering, Technology, Application, Industry and Geography, Competitive Analysis and the Impact of Covid-19 with Ansoff Analysis" report has been added to ResearchAndMarkets.com's offering.

The Global Artificial Intelligence in HR Market is estimated to be USD 3.89 Bn in 2022 and is expected to reach USD 17.61 Bn by 2027, growing at a CAGR of 35.26%.
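As a quick arithmetic check, the headline figures are consistent with each other: USD 3.89 Bn compounded at 35.26% per year over the five years from 2022 to 2027 comes out to roughly USD 17.61 Bn. A minimal sketch of that calculation, using only the numbers quoted in the release:

```python
# Verify the release's market-size arithmetic:
# future value = base * (1 + CAGR) ** years.
base_2022 = 3.89        # USD billions (2022 estimate, from the release)
cagr = 0.3526           # 35.26% compound annual growth rate
years = 2027 - 2022     # five compounding periods

projected_2027 = base_2022 * (1 + cagr) ** years
print(round(projected_2027, 2))  # ~17.61, matching the quoted 2027 figure
```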

Market Segmentation

The Global Artificial Intelligence in HR Market is segmented based on Offering, Technology, Application, Industry and Geography.

By Offering, the market is classified into Hardware, Software, and Services.

By Technology, the market is classified into Machine Learning, Natural Language Processing, Context-aware Computing, and Computer Vision.

By Application, the market is classified into Recruitment, Performance Management, Retention, Payroll, Safety and Security, Regulatory Compliance, and Others.

By Industry, the market is classified into Academic, BFSI, Government, Healthcare, IT & Telecom, Manufacturing, Retail, and Others.

By Geography, the market is classified into Americas, Europe, Middle East & Africa, and Asia-Pacific.

Company Profiles

The report provides a detailed analysis of the competitors in the market. It covers the financial performance analysis for the publicly listed companies in the market. The report also offers detailed information on the companies' recent developments and the competitive scenario. Some of the companies covered in this report are Automatic Data Processing Inc., Ceridian HCM Inc., Cezanne Inc., etc.

Competitive Quadrant

The report includes a Competitive Quadrant, a proprietary tool to analyze and evaluate the position of companies based on their Industry Position score and Market Performance score. The tool uses various factors to categorize the players into four categories. Some of the factors considered for analysis are financial performance over the last three years, growth strategies, innovation score, new product launches, investments, and growth in market share.

Ansoff Analysis


The report presents a detailed Ansoff Matrix analysis for the Global Artificial Intelligence in HR Market. The Ansoff Matrix, also known as the Product/Market Expansion Grid, is a strategic tool used to design strategies for the growth of a company. The matrix can be used to evaluate approaches across four strategies: Market Development, Market Penetration, Product Development, and Diversification. The matrix is also used for risk analysis, to understand the risk involved with each approach.

The analyst analyses the Global Artificial Intelligence in HR Market using the Ansoff Matrix to provide the best approaches a company can take to improve its market position.

Based on the SWOT analysis conducted on the industry and industry players, the analyst has devised suitable strategies for market growth.

Key Topics Covered:

1 Report Description

2 Research Methodology

3 Executive Summary
3.1 Introduction
3.2 Market Size, Segmentation, and Outlook

4 Market Dynamics
4.1 Drivers
4.1.1 Integration of Cloud and Mobile Deployment in HRM Systems
4.1.2 Increasingly Large and Complex Resume Screening and Reduction in Bias in Hiring Decisions
4.1.3 Growing Emphasis on HR Process Automation
4.2 Restraints
4.2.1 Lack of Standard Regulatory Policies and Data Regulations
4.2.2 Reluctance Among HR to Adopt AI-Based Technologies
4.3 Opportunities
4.3.1 Collaboration and Partnership with the HR Organization
4.3.2 Technological Advances in AI for HR
4.4 Challenges
4.4.1 Privacy and Security Concerns
4.4.2 Requirement of Human Aspect in HR

5 Market Analysis
5.1 Regulatory Scenario
5.2 Porter's Five Forces Analysis
5.3 Impact of COVID-19
5.4 Ansoff Matrix Analysis

6 Global Artificial Intelligence in HR Market, By Offering
6.1 Introduction
6.2 Hardware
6.2.1 Processor
6.2.2 Memory
6.2.3 Network
6.3 Software
6.3.1 AI Solutions
6.3.2 AI Platform
6.4 Services
6.4.1 Deployment & Integration
6.4.2 Support & Maintenance
6.4.3 Training & Consulting

7 Global Artificial Intelligence in HR Market, By Technology
7.1 Introduction
7.2 Machine Learning
7.2.1 Deep Learning
7.2.2 Supervised Learning
7.2.3 Reinforcement Learning
7.2.4 Unsupervised Learning
7.2.5 Others
7.3 Natural Language Processing
7.4 Context-aware Computing
7.5 Computer Vision

8 Global Artificial Intelligence in HR Market, By Application
8.1 Introduction
8.2 Recruitment
8.3 Performance Management
8.4 Retention
8.5 Payroll
8.6 Safety and Security
8.7 Regulatory Compliance
8.8 Others

9 Global Artificial Intelligence in HR Market, By Industry
9.1 Introduction
9.2 Academic
9.3 BFSI
9.4 Government
9.5 Healthcare
9.6 IT & Telecom
9.7 Manufacturing
9.8 Retail
9.9 Others

10 Americas' Artificial Intelligence in HR Market
10.1 Introduction
10.2 Argentina
10.3 Brazil
10.4 Canada
10.5 Chile
10.6 Colombia
10.7 Mexico
10.8 Peru
10.9 United States
10.10 Rest of Americas

11 Europe's Artificial Intelligence in HR Market
11.1 Introduction
11.2 Austria
11.3 Belgium
11.4 Denmark
11.5 Finland
11.6 France
11.7 Germany
11.8 Italy
11.9 Netherlands
11.10 Norway
11.11 Poland
11.12 Russia
11.13 Spain
11.14 Sweden
11.15 Switzerland
11.16 United Kingdom
11.17 Rest of Europe

12 Middle East and Africa's Artificial Intelligence in HR Market
12.1 Introduction
12.2 Egypt
12.3 Israel
12.4 Qatar
12.5 Saudi Arabia
12.6 South Africa
12.7 United Arab Emirates
12.8 Rest of MEA

13 APAC's Artificial Intelligence in HR Market
13.1 Introduction

14 Competitive Landscape
14.1 Competitive Quadrant
14.2 Market Share Analysis
14.3 Strategic Initiatives

15 Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/ojpc01


Here is the original post:
Worldwide Artificial Intelligence in HR Market to 2027 - Integration of Cloud and Mobile Deployment in HRM Systems Drives Growth - Yahoo Finance

Learning From Data – Online Course (MOOC)

Outline

This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects. The lectures below follow each other in a story-like fashion:

The 18 lectures are about 60 minutes each, plus Q&A.

The Learning Problem - Introduction; supervised, unsupervised, and reinforcement learning. Components of the learning problem.

Is Learning Feasible? - Can we generalize from a limited sample to the entire space? Relationship between in-sample and out-of-sample.

The Linear Model I - Linear classification and linear regression. Extending linear models through nonlinear transforms.

Error and Noise - The principled choice of error measures. What happens when the target we want to learn is noisy.

Training versus Testing - The difference between training and testing in mathematical terms. What makes a learning model able to generalize?

Theory of Generalization - How an infinite model can learn from a finite sample. The most important theoretical result in machine learning.

The VC Dimension - A measure of what it takes for a model to learn. Relationship to the number of parameters and degrees of freedom.

Bias-Variance Tradeoff - Breaking down the learning performance into competing quantities. The learning curves.

The Linear Model II - More about linear models. Logistic regression, maximum likelihood, and gradient descent.

Neural Networks - A biologically inspired model. The efficient backpropagation learning algorithm. Hidden layers.

Overfitting - Fitting the data too well; fitting the noise. Deterministic noise versus stochastic noise.

Regularization - Putting the brakes on fitting the noise. Hard and soft constraints. Augmented error and weight decay.

Validation - Taking a peek out of sample. Model selection and data contamination. Cross validation.

Support Vector Machines - One of the most successful learning algorithms; getting a complex model at the price of a simple one.

Kernel Methods - Extending SVM to infinite-dimensional spaces using the kernel trick, and to non-separable data using soft margins.

Radial Basis Functions - An important learning model that connects several machine learning models and techniques.

Three Learning Principles - Major pitfalls for machine learning practitioners; Occam's razor, sampling bias, and data snooping.

Epilogue - The map of machine learning. Brief views of Bayesian learning and aggregation methods.

You can also look for a particular topic within the lectures in the Machine Learning Video Library.

This course was broadcast live from the lecture hall at Caltech in April and May 2012. There was no 'Take 2' for the recorded videos. The lectures included live Q&A sessions with online audience participation. Here is a sample of a live lecture as the online audience saw it in real time.

See original here:
Learning From Data - Online Course (MOOC)

How the Machines Are Learning to Get Smarter – Design News

Recent developments in AI (Artificial Intelligence) technology have led to many breakthroughs and exponential growth for machines. The extent to which the entire world now relies on machines knows no bounds. In fact, at this point, AI solutions are not just a key investment opportunity for large corporations but also a major contributor towards addressing countless day-to-day problems in our lives.

A key subset of AI is machine learning, often simply known as ML. It is only due to the invaluable work that researchers and scientists put into the foundations of ML that we are now capable of harvesting maximum performance from highly competent AI-based technologies.


In this article, we will talk about how, over the years, humans have made machines capable of intelligence, i.e., the ability to mimic the human thought process and make decisions based on experiences.

Before we talk about the different methodologies using which humans teach machines to behave like humans, let us go over the basic definition of machine learning.


Machine learning is the method via which humans teach machines to learn from a set of historical data and enable them to perform certain actions in the future based on their past learning. Machine learning is a combination of many things, from computer algorithms and data analytics to mathematics and statistics. It is the technology that the construction of artificially intelligent systems heavily relies on.

The process of making machines learn from historical data is known as training.

The science of machine learning revolves around teaching the machine by using datasets of different sizes composed of useful or random facts and/or figures and feeding them to the machine. The essence of this activity is to help the machine observe the data, establish meaningful connections between the different pieces of the supplied information, and prepare to make decisions about incoming data by incorporating these pre-established connections, also known as rules.

Machine learning models often follow one or more of the following primary training methods.

For the initial training, we use a dataset where the input and/or expected output may or may not be clearly defined. The process of training utilizes training data. Once the machine has been trained, it is fed test data to find out whether the machine has learned from the training dataset or not.

Let us go over each of these training methods in a tad more detail and explore how they are used to make machines smarter.

This type of machine learning algorithm makes use of a dataset that contains labeled data. It means you tell the machine what each item is. This way, we can theoretically pre-define the rules and all that the machine has to do is study the existing mappings and learn these rules.

We can further split supervised learning algorithms into two sub-types: classification and regression.

Classification: This method is employed when the machine has to be trained to answer in binary terms, such as yes-no, good-bad, or true-false. The training data consists of items that have already been classified into various categories. For each category, the machine studies each item closely and identifies characteristics that are common for all the items within that category. This allows the machine to build relationships between items and their respective categories. It uses these rules to identify items in the test data and correctly classify them.
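To make the classification idea concrete, here is a minimal, self-contained sketch (not from the article; the data and the nearest-centroid rule are illustrative assumptions): the "rules" the machine learns are simply the average characteristics of each labeled category, and a new item is assigned to whichever category it sits closest to.

```python
# Toy nearest-centroid classifier: learn each category's "common
# characteristics" (the mean feature vector of its labeled items),
# then classify new items by the closest category. Data is invented.

def train_centroids(labeled_items):
    """labeled_items: list of (features, label) -> {label: centroid}."""
    sums, counts = {}, {}
    for features, label in labeled_items:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [s / counts[lbl] for s in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is nearest (squared Euclidean distance)."""
    def dist2(center):
        return sum((a - b) ** 2 for a, b in zip(center, features))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl]))

# Training data: (feature vector, label), e.g. two measurements per item.
training = [([1.0, 1.2], "good"), ([0.9, 1.0], "good"),
            ([3.0, 3.2], "bad"),  ([3.1, 2.9], "bad")]
centroids = train_centroids(training)
print(classify(centroids, [1.1, 1.1]))  # "good": close to the first group
```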

Regression: The regression model is employed when you need predictions in terms of numeric values, such as housing prices or temperatures. The training dataset contains multiple variables along with outputs that may or may not depend on said variables. The machine studies the input variables and figures out how, if at all, each variable affects the value of the output, leading to pattern recognition or the development of rules. For the test data, the machine uses these rules to calculate an estimate, or a predicted value, for the output.
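The same idea can be sketched for regression. The toy data and the least-squares line below are illustrative assumptions, not anything from the article: the machine derives a rule (a slope and an intercept) from input-output pairs and uses it to predict a numeric value for unseen inputs.

```python
# Minimal regression sketch: fit a least-squares line to (input, output)
# pairs, then use the learned rule to predict outputs for new inputs.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through the data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical training data: house size (100 m^2 units) vs. price.
sizes = [1.0, 2.0, 3.0, 4.0]
prices = [150.0, 250.0, 350.0, 450.0]
slope, intercept = fit_line(sizes, prices)
print(slope, intercept)         # the data is exactly linear: 100.0 50.0
print(slope * 5.0 + intercept)  # predicted price for a size-5 house: 550.0
```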

The key difference between supervised and unsupervised learning is that the items are not labeled in the dataset used for the latter. Let us use an example to demonstrate this in a better manner.

Let us say that you want a machine to be able to classify the items in a dataset containing images of different types of gardening tools, such as trowels, shovels, rakes, and spades.

Under supervised learning, your training data would contain images along with their identifiers. For example, if you are inputting the image of a spade, you will tell the machine that it is a spade. The machine will then study all the spades and their common features to learn how to identify a spade in the future.

However, if you use the unsupervised learning model, you would input pictures of all sorts of gardening tools without labeling them. For example, if you input a picture of a spade, you will not tell the machine that it is a spade. The machine will have to figure out on its own how each image may (or may not) be related to the ones before it, and then put similar images into one category. Thus, the machine learns to form categories on its own without being explicitly told what the categories are. This type of training model works well for datasets where structures or patterns might not be apparent to the average human.
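A tiny k-means sketch illustrates this unlabeled grouping (the points and the algorithm choice are illustrative assumptions; real image clustering would work on extracted features): the algorithm is never told what the categories are, yet it separates the two obvious groups on its own.

```python
# Toy k-means: given unlabeled 2-D points, discover k groups by
# repeatedly assigning points to the nearest center and re-averaging.

def kmeans(points, k, iters=20):
    centers = points[:k]  # naive initialization: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            groups[nearest].append(p)
        # Move each center to the mean of its group (keep it if empty).
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Two well-separated blobs; no labels are ever provided.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, groups = kmeans(pts, k=2)
print(sorted(len(g) for g in groups))  # two clusters of three points each
```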

The third prominent method is based on the concept of reinforcement, which some of you might be familiar with if you have ever taken a Psychology 101 course. If you have ever tried to teach your dog some cool tricks by motivating it with treats, you have made use of the reward system.

Unlike the first two methods, this model relies heavily on feedback. For each decision the machine makes, you provide feedback on the outcome so that it can figure out whether it made a good or bad choice. Through repeated trial and error, the machine becomes increasingly accurate.

A simple real-world example of reinforcement learning can be seen in the display of online ads. The machine can determine which ads are more successful and worth showing based on how many people click on them. If the machine gets more clicks (a higher reward) on a certain ad from a particular target group, it will know that the decision to display that ad to that group was a good one.
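The ad example maps naturally onto a multi-armed bandit with an epsilon-greedy policy, sketched below (the click rates and parameters are invented for illustration): clicks act as rewards, and the learner gradually shifts its choices toward the ad that earns more of them.

```python
# Epsilon-greedy bandit sketch of the ad example: mostly show the ad with
# the best observed click rate, occasionally explore the alternatives.

import random

def run_bandit(true_click_rates, rounds=5000, epsilon=0.1, seed=42):
    rng = random.Random(seed)
    n_ads = len(true_click_rates)
    shows = [0] * n_ads   # how often each ad was displayed
    clicks = [0] * n_ads  # how often it was clicked (the reward)
    for _ in range(rounds):
        if rng.random() < epsilon:  # explore: try a random ad
            ad = rng.randrange(n_ads)
        else:                       # exploit: best click rate so far
            ad = max(range(n_ads),
                     key=lambda i: clicks[i] / shows[i] if shows[i] else 0.0)
        shows[ad] += 1
        if rng.random() < true_click_rates[ad]:  # simulated user click
            clicks[ad] += 1
    return shows

# Ad 1 truly earns more clicks; the policy learns to show it far more often.
shows = run_bandit([0.02, 0.08])
print(shows)
```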

While some people seem determined to settle the humans vs. machines debate once and for all, others believe that this type of comparison is futile. The fact remains that the human being came first, and the machine followed. As long as our passion for growth and our drive for perfection are alive, machine learning algorithms will continue to improve and become increasingly accurate, helping us achieve seemingly impossible success and accuracy rates.

Ralf Llanasas is a digital marketing expert and freelance writer. He graduated with a bachelor's degree in Information Technology and mostly writes about marketing, technology, and SaaS trends. His writing can be seen in several publications aimed at the IT industry. He is also into photography and loves taking pictures when he is free.

More:
How the Machines Are Learning to Get Smarter - Design News

Machine learning-based observation-constrained projections reveal elevated global socioeconomic risks from wildfire – Nature.com

Applying traditional EC for global fire carbon emissions

The recently developed emergent constraint (EC) approach has demonstrated robust capability in reducing the uncertainty in characterizing or projecting Earth system variables simulated by a multimodel ensemble25,26. The basic concept of EC is that, despite distinct model structures and parameters, various across-model relationships (emergent constraints) exist between pairs of quantities when we analyze outputs from multiple models27. The EC concept is therefore especially useful for deriving the relationship between a variable that is difficult or impossible to measure (e.g., future wildfires) and a second, measurable variable (e.g., historical wildfires) across multiple ESMs. We start with global total values and find a significant linear relationship between historical and future global total fire carbon emissions across 38 ensemble members of 13 ESMs (Supplementary Fig. 2a). Because we are particularly interested in the spatial distribution of future wildfires, which is critical for quantifying future socioeconomic risks from wildfires, we further apply the EC concept to every grid cell of the globe, using either a single constraint variable (historical fire carbon emissions) or multiple constraint variables (the atmospheric and terrestrial variables in Supplementary Table 2), with the latter shown in Supplementary Fig. 2b. We find insignificant linear relationships between these historical fire-relevant variables and future wildfires in the historically fire-prone regions across the analyzed 38 members of 13 ESMs. The failure of the traditional EC concept in constraining fire carbon emissions at local scales could be attributed to the highly nonlinear interactions between fire and its drivers, which are likely inadequately captured by the linear relationship under the EC assumption. Therefore, we further develop an MLT-based constraint to deal with the complex response of wildfires to environmental and socioeconomic drivers.

MLT provide powerful tools for capturing the nonlinear and interactive roles among regulators of an Earth system feature, thereby facilitating effective, multivariate constraint on wildfire activity, which represents an integrated function of climate, terrestrial ecosystem, and socioeconomic conditions. MLT have been widely applied for identifying empirical regulators32 and building prediction systems for global and regional fire activity35. To constrain the projected fire carbon emissions simulated by 13 ESMs using observational data, the current study establishes an MLT-based emergent relationship between the future fire carbon emissions and historical fire carbon emissions, climate, terrestrial ecosystem, and socioeconomic drivers.

Here, we use MLT to examine the empirical relationships between historical, observed influencing factors of wildfires and future fire carbon emissions from ESMs, and then feed observational data into the trained machine learning models (Supplementary Fig. 3). To train the MLT to use historical states for the prediction of future fire carbon emissions, the historical and future simulations from the SSP (Shared Socioeconomic Pathway) 5-8.5 scenario36, a high-emission scenario, are analyzed for the currently available 13 ESMs in CMIP6 (Supplementary Table 1). A subset of these ESMs (i.e., nine ESMs that provide simulations under a lower-emission scenario, SSP2-4.5) is also analyzed to examine the dependence of fire regimes on the socioeconomic pathway. The training is conducted using the spatial sample of decadal mean predictors and the target variable, both individually for each ESM and for their aggregation, with the latter referred to as the multimodel mean and subsequently analyzed for projecting fire carbon emission and its socioeconomic risks. Corresponding to the spatial resolution of the observational products of fire carbon emission, all model outputs are bilinearly interpolated to a 0.25° × 0.25° grid, resulting in a spatial sample of 11,325 points per model for the training. To perform the observational constraint, the historical observed predictors are then fed into the trained machine learning models. The historical predictors are listed in Supplementary Table 2 with their observational data sources, temporal coverages, and spatial resolutions. For the atmospheric and terrestrial variables, the annual mean value and the climatology in each of the 12 calendar months are included as predictors. This training and observational constraining is performed for the target decades (2011–2020, 2021–2030, 2091–2100), and the historical period is always 2001–2010.
Future changes in fire carbon emission are quantified and expressed as the relative trend (% decade⁻¹), i.e., the ratio between the absolute trend and the mean value during the 2010s, for both the default and observation-constrained ensembles.
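As a worked example of the metric (with invented emission values, not the paper's data), the relative trend is the linear trend across decadal means divided by the 2010s mean, expressed in % per decade:

```python
# Relative trend as defined above: the absolute per-decade linear trend
# divided by the reference-decade (2010s) mean, in % per decade.

def relative_trend(decadal_means, baseline):
    """decadal_means: one value per decade, chronological order.
    baseline: mean value during the reference decade (the 2010s)."""
    n = len(decadal_means)
    xs = range(n)                          # decade index 0, 1, 2, ...
    mx = sum(xs) / n
    my = sum(decadal_means) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, decadal_means)) / \
            sum((x - mx) ** 2 for x in xs)  # units per decade
    return 100.0 * slope / baseline         # % per decade

# Hypothetical fire carbon emission means for four successive decades:
emissions = [2.0, 2.2, 2.4, 2.6]
print(relative_trend(emissions, baseline=2.0))  # ~10.0 % per decade
```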

The current spatial sample training approach establishes a history-future relationship for each pixel using the entire global sample. To minimize local prediction errors for a certain pixel, MLT search all pixels, regardless of their geographical location, to optimize the prediction model of future fires at the target pixel. In this way, a physically robust history-future relationship is established based on the global sample of locations, whereas the influences of localized features, such as socioeconomic development, on wildfire trends are naturally damped in our approach (Supplementary Figs. 10 and 11). The reliability of MLT is degraded when the actual observational data space is insufficiently covered by the training (historical CMIP6 simulation) data space, namely the extrapolation uncertainty. Here, we further evaluate the data space of both the observations and the historical simulations of the climate and fire variables (Supplementary Fig. 14) and find that they largely overlap, indicating minimal extrapolation error in the current MLT application.

To minimize the projection uncertainty associated with the selected machine learning algorithms, this study examines three MLT: random forest (rf), support vector machine with radial basis function kernel (svmRadialCost), and gradient boosting machine (gbm). These three algorithms differ substantially in how they function. The average among these algorithms is thus believed to better capture the complex interrelation between the historical predictors and future fire carbon emissions than any single algorithm. The MLT analysis is performed using the caret, dplyr, randomForest, kernlab, and gbm packages in the R statistical software. The prediction model is fitted for each MLT using the training data set that targets each future decade, with parameters optimized for the minimum RMSE via 10-fold cross-validation; in other words, a randomly chosen nine-tenths of the entire spatial sample (n = 10,193) is used for model fitting and the remaining one-tenth (n = 1,132) for validation, and the process is repeated 10 times. For svmRadialCost, the optimal pair of the cost parameter (C) and the kernel parameter (sigma) is searched from 30 (tuneLength = 30) C candidates and their individually associated optimal sigma. For gbm, we set the complexity of trees (interaction.depth) to 3 and the learning rate (shrinkage) to 0.2, and let the train function search for the optimal number of trees from 10 to 200 with an increment of 5 (10, 15, 20, …, 200). For rf, the number of variables available for splitting at each tree node (mtry) is allowed to vary between 5 and 50 with an increment of 1 (5, 6, 7, …, 50); the number of trees is determined by the algorithm provided by the randomForest package and the train function of the caret package. The cross-validation R2s exceed 0.8 (n = 1,132) for all optimized MLT and all future periods.
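The fold arithmetic above can be reproduced directly: one-tenth of the 11,325-point spatial sample is held out for validation in each fold, and the rest is used for fitting.

```python
# Reproduce the 10-fold cross-validation sample sizes quoted above.
n_total = 11_325                  # spatial sample of grid points per model
k = 10                            # folds

val_size = n_total // k           # one-tenth held out for validation
train_size = n_total - val_size   # remaining nine-tenths for model fitting

print(train_size, val_size)       # 10193 1132, matching n = 10,193 / n = 1,132
```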
The currently examined ESMs, MLT, and hundreds of observational data set combinations constitute a multimodel, multidata set ensemble of projected fire carbon emissions for the twenty-first century. This multimodel, multidata set ensemble allows natural quantification of uncertainty in the future projection derived from observational sources and MLT, compared with a previous single-MLT, single-observation approach67.

This MLT-based observational constraining approach is validated for a historical period using the emergent relationship between the fire-climate-ecosystem-socioeconomic conditions during 1997–2006 and fire carbon emission during 2007–2016. The spatial correlation and RMSE with the observed decadal mean fire carbon emission (n = 11,325) are evaluated and compared for the constrained and unconstrained ensembles, as reported in the main text (Figs. 1 and 2). The RMSE and R2 produced by the traditional EC approach, which constrains fire carbon emissions during 2007–2016 with fire carbon emissions during 1997–2006, are reported along with the MLT-based observational constraint in Fig. 1e, f. The MLT-based observational constraining approach is also applied to six ESMs that report burned area fraction, and validation is likewise conducted and reported in Supplementary Fig. 6.

Because the MLT are trained using the global spatial sample, we expect the performance of MLT to be sensitive to the spatial resolution of the training data set. This assumption is tested by varying the interpolation grids (1°, 2.5°, 5°, and 10° latitude by longitude) of the ESMs and fitting MLT using the training data at each specific resolution for the validation period (Supplementary Fig. 7). Observational data sets at 0.25° resolution are subsequently fed into the fitted MLT models, regardless of the input model data resolution. This sensitivity test sheds light on the importance of spatial resolution to our observational constraint and thereby implies potential accuracy improvements of our MLT-based observational constraint with the development of higher-resolution ESMs.

Here, we define the socioeconomic exposure to wildfires as the product of decadal mean fire carbon emission and the number of people, amount of GDP, and agricultural area exposed to the burning in each grid cell, following a previous definition for extreme heat68. These exposure metrics measure the amount of population, GDP, and agricultural area affected by wildfires, whose severity is represented by the amount of fire carbon emission. The projected population at 1/8° × 1/8° resolution under SSP5-8.5 is obtained from the National Center for Atmospheric Research's Integrated Assessment Modeling Group and the City University of New York Institute for Demographic Research69. The projected GDP at 1-km resolution under SSP5 is disaggregated from national GDP projections using nighttime light and population70. The agricultural area projection at 0.05° × 0.05° resolution under SSP5-8.5 is obtained from the Global Change Analysis Model and a geospatial downscaling model (Demeter)71. All the projected socioeconomic variables are resampled to 0.25° × 0.25° resolution before the calculation of exposure to fire carbon emission fraction. Future changes in socioeconomic exposure to wildfires are quantified as the relative trend (% decade⁻¹), i.e., the ratio between the absolute trend and the mean value during the 2010s, for the default and observation-constrained ensembles. These relative changes directly indicate what the future would look like compared with the current state, regardless of potential biases simulated by the default ESMs.
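The exposure definition reduces to a cell-wise product summed over a region. A minimal sketch with invented numbers (the real calculation runs on 0.25° global grids with population, GDP, and agricultural-area layers):

```python
# Exposure as defined above: per grid cell, multiply decadal mean fire
# carbon emission by the socioeconomic quantity (here, population) in
# that cell, then sum over cells. All values below are illustrative.

def exposure(emission_per_cell, people_per_cell):
    """Population exposure to wildfire, summed over grid cells."""
    return sum(e * p for e, p in zip(emission_per_cell, people_per_cell))

# Hypothetical 4-cell region: emissions (arbitrary units) and inhabitants.
emissions = [0.0, 1.5, 0.5, 2.0]
people = [1000, 200, 5000, 0]
print(exposure(emissions, people))  # 0 + 300 + 2500 + 0 = 2800.0
```

Note how a heavily burned but empty cell (the last one) contributes nothing: exposure weights fire severity by the people, GDP, or cropland actually in harm's way.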

The mechanisms underlying the projected evolution in fire carbon emissions are explored in two tasks, addressing the importance of drivers from the historical and dynamical perspectives. The first task assesses the relative contribution of each environmental and socioeconomic driver's historical distribution to the projected future wildfire distribution, to directly understand how the current observational constraint works (Supplementary Fig. 8). The second task examines the relative contribution of each driver's projected trend to the projected wildfire trends in a specific region, to disentangle the dynamical mechanisms underlying the future evolution of regional wildfires (Supplementary Fig. 9). These tasks benefit from the importance score that MLT provide as an output. Although the calculation of importance scores varies substantially by MLT, all the importance scores qualitatively reflect the relative importance of each predictor when making a prediction. For each tree in both rf and gbm, the prediction accuracy on the out-of-bag portion of the data is recorded, and the same is done after permuting each predictor variable. For rf, the differences are averaged over the trees and normalized by the standard error. For gbm, the importance order is first calculated for each tree and then summed over the boosting iterations. For svm, we estimate the contribution of a single variable by training the model on all variables except that specific variable; the difference in performance between that model and the one with all variables is then considered the marginal contribution of that particular variable, and such marginal contributions are standardized to derive each variable's relative importance. Because we apply multiple MLT in this study, the average importance scores from these MLT are reported in the corresponding figures for robustness.
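The permutation-style importance described for rf and gbm can be sketched in a few lines (the toy model and data are illustrative assumptions, not the paper's setup): shuffling an informative predictor degrades the score, while shuffling a predictor the model ignores changes nothing.

```python
# Permutation importance sketch: score a model, shuffle one predictor
# column, and report the drop in score as that predictor's importance.

import random

def score(predict, rows, ys):
    """Negative mean squared error (higher is better)."""
    return -sum((predict(r) - y) ** 2 for r, y in zip(rows, ys)) / len(ys)

def permutation_importance(predict, rows, ys, col, seed=0):
    rng = random.Random(seed)
    base = score(predict, rows, ys)
    shuffled = [r[col] for r in rows]
    rng.shuffle(shuffled)                   # break the column's information
    permuted = [list(r) for r in rows]
    for r, v in zip(permuted, shuffled):
        r[col] = v
    return base - score(predict, permuted, ys)  # drop in score

# Toy data where only the first predictor matters: y = 3 * x0.
rows = [(x0, x1) for x0 in range(10) for x1 in range(3)]
ys = [3 * r[0] for r in rows]
def predict(r):
    return 3 * r[0]  # a "trained" model that found the true rule

imp0 = permutation_importance(predict, rows, ys, col=0)
imp1 = permutation_importance(predict, rows, ys, col=1)
print(imp0 > imp1)  # permuting the informative predictor hurts more
```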

In the first task, the importance of each historical driver to future global wildfire distributions is examined in the three MLT models (random forest, support vector machine, and gradient boosting machine) that are trained for projecting future fire carbon emissions (Supplementary Fig. 8). For the atmospheric and terrestrial variables that include the annual mean and monthly climatology as predictors, the importance of each variable is represented by the highest importance score among its 13 predictors (annual mean, January, February, …, December), to account for the overall importance of a particular variable while allowing for the possible overlap of information between the individual months and the annual mean. The importance score of each historical driver reflects its relative weight in determining the spatial pattern of fire carbon emissions in each future decade.

In the second task, the dynamical importance of each environmental driver's future evolution is assessed for targeted tropical regions (i.e., the Amazon and Congo) and major land cover types (tropical forests, other forests, shrubland, savannas, grasslands, and croplands) in both the default and constrained ensembles, through the importance of each driver's trend to the projected wildfire trend. For the default ensemble, the three MLT models (random forest, support vector machine, and gradient boosting machine) are used to predict the spatial distribution of simulated trends in fire carbon emission using the simulated trends in the socioeconomic, atmospheric, and terrestrial variables considered in our observational constraint for wildfires, for each ESM and their multimodel mean. This analysis excludes flash rate, another predictor in constraining future wildfires, because it is not dynamically simulated by most ESMs. For the observation-constrained ensemble, we first constrain the projected atmospheric and terrestrial variables in each future decade, using a similar approach to the one used to constrain future fire carbon emissions, for each individual ESM and their multimodel aggregation. In this constraint for environmental drivers, all the variables in Supplementary Table 2 are considered as predictors, thereby achieving self-consistency of the constrained future evolution of all these fire-relevant variables. Noting that the socioeconomic trends are determined by the SSPs, future socioeconomic developments are not constrained in the current approach. Then, the same three MLT models are used to predict the spatial distribution of constrained trends in fire carbon emissions using the constrained trends in those environmental and socioeconomic drivers. For computational efficiency, only the annual mean trends in the environmental drivers are constrained and analyzed in this task.
The importance scores of projected trends in socioeconomic and environmental drivers reflect their dynamic role in the future evolution of wildfires in the target tropical regions. Here, the Amazon and Congo regions are shown as examples of how this analysis can be applied to understand regional wildfire evolution, though the mechanisms underlying the future evolution of wildfires in other regions could be explored similarly.
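The trend-attribution step above can be sketched in the same spirit. Again this is an illustrative assumption rather than the authors' implementation: the driver names and data are synthetic, and permutation importance is used as one model-agnostic score that works for all three named model types (the SVM has no built-in importances).

```python
# Hedged sketch: regress projected fire-emission trends on driver trends
# with the three MLT models named in the text, then average a
# model-agnostic importance score (permutation importance) across them.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
# Hypothetical driver-trend predictors for a set of grid cells
drivers = ["gdp_trend", "pop_trend", "temp_trend", "soil_moisture_trend"]

X = rng.normal(size=(400, len(drivers)))
# Synthetic emission trend: warming increases fire, wetter soils reduce it
y = 1.5 * X[:, 2] - 1.0 * X[:, 3] + 0.1 * rng.normal(size=400)

models = [
    RandomForestRegressor(n_estimators=100, random_state=0),
    SVR(kernel="rbf"),
    GradientBoostingRegressor(random_state=0),
]

# Mean permutation importance across the three fitted models
imp = np.zeros(len(drivers))
for m in models:
    m.fit(X, y)
    imp += permutation_importance(
        m, X, y, n_repeats=5, random_state=0
    ).importances_mean
imp /= len(models)
ranking = dict(zip(drivers, imp))
print(ranking)
```

In this sketch the climatic trends dominate the socioeconomic ones by construction; in the actual analysis the ranking is what the constrained ensemble is used to diagnose, region by region.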

Read the rest here:
Machine learning-based observation-constrained projections reveal elevated global socioeconomic risks from wildfire - Nature.com

OpenShift 4.10: Red Hat teams with Nvidia to add AI and machine learning – ZDNet


You can run Kubernetes straight from the code, but few companies have the nerves to do it. Instead, they turn to programs such as Red Hat's OpenShift, which make orchestrating containers much easier. Now, with its most recent update, Red Hat OpenShift 4.10 is also adding artificial intelligence (AI) and machine learning (ML) functionality to its bag of tricks.


Red Hat is pulling this off by allying with Nvidia. Specifically, this latest OpenShift version is certified to work with Nvidia AI Enterprise 2.0, an AI software suite designed to get both experienced and new AI companies quickly to work on AI development and deployment. It does this by providing proven, open-source containers and frameworks. These are certified to run on common data center platforms from both Red Hat and VMware vSphere with Tanzu. This setup uses Nvidia-Certified servers configured with GPUs or with CPUs only, either on-premises or in the cloud. The idea is to give customers a ready-to-run AI platform so companies can focus on creating business value from AI, not on running the AI infrastructure.

Red Hat customers can now deploy Red Hat OpenShift on Nvidia-Certified Systems with Nvidia AI Enterprise software, as well as on the previously supported Nvidia DGX A100, another high-performance AI compute system. This also enables organizations to quickly deploy an AI infrastructure to consolidate and accelerate the MLOps lifecycle.

Of course, there's more to this latest OpenShift update than AI support. Red Hat OpenShift 4.10 also supports a wider spectrum of cloud-native workloads across the open hybrid cloud by supporting additional public clouds and hardware architectures. These new features and capabilities include:

Installer-provisioned infrastructure (IPI) support for Azure Stack Hub, as well as for Alibaba Cloud and IBM Cloud, both available as a technology preview. Users can now use the IPI process for fully automated, integrated, one-click installation of OpenShift 4.

Running Red Hat OpenShift on Arm processors. Arm support will be available in two ways: full-stack automation (IPI) for Amazon Web Services (AWS) and user-provisioned infrastructure (UPI) for bare metal on pre-existing infrastructure.

Red Hat OpenShift availability on NVIDIA LaunchPad. NVIDIA LaunchPad provides free access to curated labs for enterprise IT and AI professionals to experience Nvidia-accelerated systems and software.

Many customers have long been awaiting the day they could run OpenShift on Arm. It offers cost savings, reduced power consumption, and, in some scenarios, performance gains. Arm support finally appeared as a beta last summer, and now it's ready to run in production. As Eddie Ramirez, Arm's vice president of Infrastructure Line of Business, said, "By adding support for Arm to OpenShift, Red Hat is providing software developers with compelling, new choices in AI processing and helping to unlock the benefits of high performing, cost-efficient Arm-based processors in hybrid cloud-based environments."

The brand-new OpenShift also includes three new compliance operators, so if your business works in retail, electrical utilities, or federal government contracting, you can make certain your Kubernetes clusters comply with the relevant standards.

Finally, OpenShift 4.10 has improved its security by making sandboxed containers, based on Kata containers, generally available. Sandboxed containers provide an optional additional layer of isolation for workloads with stringent application-level security requirements. This complements OpenShift's older built-in security functionality such as SELinux, role-based access control (RBAC), projects, security context constraints (SCCs), and Kubernetes network policies.

Security improvements have also been made to keep OpenShift clusters running well in disconnected or air-gapped settings. With this, you can mirror OpenShift images and keep them up to date even though the clusters are usually disconnected.

Nvidia AI Enterprise 2.0 on OpenShift 4.10, and OpenShift 4.10 itself, are now generally available.


Here is the original post:
OpenShift 4.10: Red Hat teams with Nvidia to add AI and machine learning - ZDNet