Archive for the ‘Machine Learning’ Category

Can machine learning clean up the last days of ICE? – Automotive World

The automotive industry is steadily moving away from internal combustion engines (ICEs) in the wake of increasingly stringent regulations. Some industry watchers regard electric vehicles (EVs) as the next step in vehicle development, despite high costs and infrastructural limitations in developing markets outside Europe and Asia. However, many markets remain deeply dependent on the conventional ICE vehicle. A 2020 study by Boston Consulting Group found that ICE vehicles could still account for nearly 28% of vehicles on the road as late as 2035, while EVs may account for only 48% of registered vehicles by that time.

For manufacturers, this represents a huge and multi-faceted challenge. There are not only the industry's looming and ambitious environmental targets to consider; the drive for CASE (Connected, Autonomous, Shared and Electric) vehicles is also increasing design and development complexity. Then there are the bottom-line pressures: European R&D spend already increased by 75% between 2011 and 2019. Enter Secondmind, a machine learning company based in the UK. The company works with automotive engineers, helping them to use data-efficient, transparent machine learning that combines the subject matter expertise of today's engineers with algorithmic intelligence. Secondmind's Chief Executive Gary Brotman argues that this new breed of machine learning is required to streamline the vehicle development process efficiently, helping automotive companies accelerate the transition away from ICE and ensure sustainable design and development engineering.

Read the original post:
Can machine learning clean up the last days of ICE? - Automotive World

5 Top Deep Learning Trends in 2022 – Datamation

Deep learning (DL) can be defined as a form of machine learning based on artificial neural networks, which harness multiple processing layers to extract progressively higher-level insights from data. In essence, it is a more sophisticated application of artificial intelligence (AI) platforms and machine learning (ML).

Here are some of the top trends in deep learning:

Model Scale Up

A lot of the excitement in deep learning right now is centered around scaling up large, relatively general models (now being called foundation models). They are exhibiting surprising capabilities, such as generating novel text, images from text, and video from text. Scaling up these models continues to add new capabilities to deep learning. This is showing up in algorithms that go beyond simplistic responses to multi-faceted answers and actions that dig deeper into data, preferences, and potential actions.

Scale Up Limitations

However, not everyone is convinced that the scaling up of neural networks is going to continue to bear fruit. Roadblocks may lie ahead.

"There is some debate about how far we can get in terms of aspects of intelligence with scaling alone," said Peter Stone, PhD, Executive Director, Sony AI America.

"Current models are limited in several ways, and some of the community is rushing to point those out. It will be interesting to see what capabilities can be achieved with neural networks alone, and what novel methods will be uncovered for combining neural networks with other AI paradigms."

AI and Model Training

AI isn't something you plug in and, presto, get instant insights. It takes time for the deep learning platform to analyze data sets, spot patterns, and begin to derive conclusions that have broad applicability in the real world. The good news is that AI platforms are rapidly evolving to keep up with model training demands.

Instead of taking weeks to learn enough to begin to function, AI platforms are undergoing fundamental innovation and are rapidly reaching the same maturity level as data analytics. As datasets become larger, deep learning models become more resource-intensive, requiring a lot of processing power to predict, validate, and recalibrate millions of times. Graphics processing units (GPUs) are advancing to handle this computing load, and AI platforms are evolving alongside them.

"Organizations can enhance their AI platforms by combining open-source projects and commercial technologies," said Bin Fan, VP of Open Source and Founding Engineer at Alluxio.

"It is essential to consider skills, speed of deployment, the variety of algorithms supported, and the flexibility of the system while making decisions."

Containerized Workloads

"Deep learning workloads are increasingly containerized, further supporting autonomous operations," said Fan. "Container technologies enable organizations to have isolation, portability, unlimited scalability, and dynamic behavior in MLOps. Thus, AI infrastructure management would become more automated, easier, and more business-friendly than before."

"Containerization being the key, Kubernetes will aid cloud-native MLOps in integrating with more mature technologies," said Fan.

To keep up with this trend, organizations can find their AI workloads running on more flexible cloud environments in conjunction with Kubernetes.

Prescriptive Modeling over Predictive Modeling

Modeling has gone through many phases over the years. Initial attempts tried to predict trends from historical data. This had some value but didn't take into account factors such as context, sudden traffic spikes, and shifts in market forces. In particular, real-time data played no real part in early efforts at predictive modeling.

As unstructured data became more important, organizations wanted to mine it to glean insight. Coupled with the rise in processing power, real-time analysis suddenly rose to prominence. And the immense amounts of data generated by social media have only added to the need to address real-time information.

How does this relate to AI, deep learning, and automation?

"Many of the current and previous industry implementations of AI have relied on the AI to inform a human of some anticipated event, who then has the expert knowledge to know what action to take," said Frans Cronje, CEO and Co-founder of DataProphet.

"Increasingly, providers are moving to AI that can anticipate a future event and take the corresponding action."

This opens the door to far more effective deep learning networks. With real-time data constantly fed through multi-layered neural networks, AI can take more and more of the workload away from humans. Instead of referring the decision to a human expert, deep learning can be used to prescribe decisions based on historical, real-time, and analytical data.

See original here:
5 Top Deep Learning Trends in 2022 - Datamation

6 sustainability measures of MLops and how to address them – VentureBeat


Artificial intelligence (AI) adoption keeps growing. According to a McKinsey survey, 56% of companies are now using AI in at least one function, up from 50% in 2020. A PwC survey found that the pandemic accelerated AI uptake and that 86% of companies say AI is becoming a mainstream technology in their company.

In the last few years, significant advances in open-source AI, such as the groundbreaking TensorFlow framework, have opened AI up to a broad audience and made the technology more accessible. Relatively frictionless use of the new technology has led to greatly accelerated adoption and an explosion of new applications. Tesla Autopilot, Amazon Alexa and other familiar use cases have both captured our imaginations and stirred controversy, but AI is finding applications in almost every aspect of our world.

Historically, machine learning (ML), the pathway to AI, was reserved for academics and specialists with the necessary mathematical skills to develop complex algorithms and models. Today, the data scientists working on these projects need both the necessary knowledge and the right tools to effectively productize their machine learning models for consumption at scale, which can often be a hugely complicated task involving sophisticated infrastructure and multiple steps in ML workflows.

Another key piece is model lifecycle management (MLM), which manages the complex AI pipeline and helps ensure results. The proprietary enterprise MLM systems of the past were expensive, however, and often lagged far behind the latest technological advances in AI.

Effectively filling that operational capability gap is critical to the long-term success of AI programs, because training models that give good predictions is just a small part of the overall challenge. Building ML systems that bring value to an organization requires much more. Rather than the ship-and-forget pattern typical of traditional software, an effective strategy requires regular iteration cycles with continuous monitoring, care and improvement.

Enter MLops (machine learning operations), which enables data scientists, engineering and IT operations teams to work together collaboratively to deploy ML models into production, manage them at scale and continuously monitor their performance.

MLops typically aims to address six key challenges around taking AI applications into production. These are: repeatability, availability, maintainability, quality, scalability and consistency.

Further, MLops can help simplify AI consumption so that applications can make use of machine learning models for inference (i.e., to make predictions based on data) in a scalable, maintainable manner. This capability is, after all, the primary value that AI initiatives are supposed to deliver. To dive deeper:

Repeatability is the process that ensures the ML model will run successfully in a repeatable manner.

Availability means the ML model is deployed in a way that makes it sufficiently available to provide inference services to consuming applications and offer an appropriate level of service.

Maintainability refers to the processes that enable the ML model to remain maintainable on a long-term basis; for example, when retraining the model becomes necessary.

Quality: the ML model is continuously monitored to ensure it delivers predictions of tolerable quality.

Scalability means both the scalability of inference services and of the people and processes that are required to retrain the ML model when required.

Consistency: A consistent approach to ML is essential to ensuring success on the other noted measures above.

We can think of MLops as a natural extension of agile devops applied to AI and ML. Typically, MLops covers the major aspects of the machine learning lifecycle: data preprocessing (ingesting, analyzing and preparing data, and making sure that the data is suitably aligned for the model to be trained on), model development, model training and validation, and finally, deployment.

The following six proven MLops techniques can measurably improve the efficacy of AI initiatives, in terms of time to market, outcomes and long-term sustainability.

ML pipelines typically consist of multiple steps, often orchestrated in a directed acyclic graph (DAG) that coordinates the flow of training data as well as the generation and delivery of trained ML models.

The steps within an ML pipeline can be complex. For instance, a step for fetching data may itself require multiple subtasks to gather datasets, perform checks and execute transformations. For example, data may need to be extracted from a variety of source systems: perhaps data marts in a corporate data warehouse, web scraping, geospatial stores and APIs. The extracted data may then need to undergo quality and integrity checks using sampling techniques, and might need to be adapted in various ways, like dropping data points that are not required, performing aggregations such as summarizing or windowing of other data points, and so on.
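A "fetch data" step like the one described above can be decomposed into extract, validate and transform subtasks. The following is a minimal sketch of that pattern; the records, checks and aggregation are purely illustrative, not from any particular pipeline:

```python
# Sketch of a single "fetch data" pipeline step decomposed into subtasks.
# The data and checks here are illustrative stand-ins.

def extract():
    # In practice this might pull from a data mart, an API, or web scraping;
    # here we return inline records for illustration.
    return [
        {"region": "EU", "sales": 120},
        {"region": "EU", "sales": None},   # a bad record to be filtered out
        {"region": "US", "sales": 95},
    ]

def validate(records):
    # Integrity check: drop data points with missing values.
    return [r for r in records if r["sales"] is not None]

def transform(records):
    # Aggregation: summarize sales per region.
    totals = {}
    for r in records:
        totals[r["region"]] = totals.get(r["region"], 0) + r["sales"]
    return totals

def fetch_data_step():
    return transform(validate(extract()))

print(fetch_data_step())  # {'EU': 120, 'US': 95}
```

In a real DAG, each of these subtasks might itself be a separate node so that failures can be retried independently.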

Transforming the data into a format that can be used to train the machine learning (ML) model, a process called feature engineering, may benefit from additional alignment steps.

Training and testing models often require a grid search to find optimal hyperparameters, where multiple experiments are conducted in parallel until the best set of hyperparameters is identified.
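The grid search described above exhaustively evaluates every combination of hyperparameters and keeps the best-performing one. A minimal sketch in plain Python follows; the `validation_loss` function is a hypothetical stand-in for "train the model and measure its validation error", and the grid values are illustrative:

```python
import itertools

# Minimal grid search sketch: evaluate every hyperparameter combination
# and keep the best. Real setups would train a model per combination,
# often in parallel; a toy loss function stands in for training here.

def validation_loss(lr, depth):
    # Hypothetical stand-in for "train with (lr, depth), measure loss".
    return (lr - 0.1) ** 2 + (depth - 4) ** 2 * 0.01

grid = {"lr": [0.01, 0.1, 1.0], "depth": [2, 4, 8]}

best = min(
    (dict(zip(grid, combo)) for combo in itertools.product(*grid.values())),
    key=lambda params: validation_loss(**params),
)
print(best)  # {'lr': 0.1, 'depth': 4}
```

MLops platforms parallelize exactly this loop, running each combination as an independent experiment and recording its metrics.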

Storing models requires an effective approach to versioning and a way to capture associated metadata and metrics about the model.

MLops platforms like Kubeflow, an open-source machine learning toolkit that runs on Kubernetes, translate the complex steps that compose a data science workflow into jobs that run inside Docker containers on Kubernetes, providing a cloud-native, yet platform-agnostic, interface for the component steps of ML pipelines.

Once the appropriate trained and validated model has been selected, the model needs to be deployed to a production environment where live data is available in order to produce predictions.

And there's good news here: the model-as-a-service architecture has made this aspect of ML significantly easier. This approach separates the application from the model through an API, further simplifying processes such as model versioning, redeployment and reuse.
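The essence of that separation can be sketched without any web framework: the application calls a small service interface rather than importing the model directly, so versions can be swapped behind the API. The class and model names below are illustrative, not from KServe or Seldon Core:

```python
# Sketch of the model-as-a-service idea: the consuming application only
# sees the predict() interface, while models are versioned and replaced
# behind it. Names and stand-in models are illustrative.

class ModelService:
    def __init__(self):
        self._models = {}      # version -> callable model
        self._live = None      # currently served version

    def deploy(self, version, model):
        self._models[version] = model
        self._live = version   # redeployment is just a pointer swap

    def predict(self, features):
        # The application never touches the model object directly.
        return {"version": self._live,
                "prediction": self._models[self._live](features)}

service = ModelService()
service.deploy("v1", lambda x: sum(x))           # stand-in model
service.deploy("v2", lambda x: sum(x) / len(x))  # retrained replacement

print(service.predict([2, 4, 6]))  # {'version': 'v2', 'prediction': 4.0}
```

In production the `predict` call would sit behind an HTTP or gRPC inference API, which is what platforms like KServe and Seldon Core provide.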

A number of open-source technologies are available that can wrap an ML model and expose inference APIs; for example, KServe and Seldon Core, which are open-source platforms for deploying ML models on Kubernetes.

It's crucial to be able to retrain and redeploy ML models in an automated fashion when significant model drift is detected.

Within the cloud-native world, KNative offers a powerful open-source platform for building serverless applications and can be used to trigger MLops pipelines running on Kubeflow or another open-source job scheduler, such as Apache Airflow.

With solutions like Seldon Core, it can be useful to create an ML deployment with two predictors e.g., allocating 90% of the traffic to the existing (champion) predictor and 10% to the new (challenger) predictor. The MLops team can then (ideally automatically) observe the quality of the predictions. Once proven, the deployment can be updated to move all traffic over to the new predictor. If, on the other hand, the new predictor is seen to perform worse than the existing predictor, 100% of the traffic can be moved back to the old predictor instead.
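The champion/challenger split described above can be sketched in plain Python. In Seldon Core the traffic weights would be declared in the deployment spec; here the routing logic is shown directly, with illustrative names and a seeded random source for reproducibility:

```python
import random

# Sketch of champion/challenger routing: 90% of requests go to the
# existing predictor, 10% to the new one being evaluated.

def route(request, champion, challenger, challenger_share=0.10, rng=random):
    predictor = challenger if rng.random() < challenger_share else champion
    return predictor(request)

rng = random.Random(42)  # seeded so the split is reproducible
champion = lambda x: "champion"      # stand-in predictors
challenger = lambda x: "challenger"

counts = {"champion": 0, "challenger": 0}
for _ in range(10_000):
    counts[route(None, champion, challenger, rng=rng)] += 1

print(counts)  # roughly 9,000 champion / 1,000 challenger
```

Promoting the challenger then amounts to setting `challenger_share` to 1.0 (or swapping the roles), and rolling back to setting it to 0.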

When production data changes over time, model performance can veer off from the baseline because of substantial variations in the new data versus the data used in training and validating the model. This can significantly harm prediction quality.

Drift detectors like Seldon Alibi Detect can be used to automatically assess model performance over time and trigger a model retrain process and automatic redeployment.
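Libraries such as Alibi Detect package drift tests ready-made; purely to illustrate the underlying idea, here is a minimal two-sample Kolmogorov-Smirnov check in plain Python, comparing the empirical distribution of live data against the reference (training) data. The datasets and threshold are illustrative:

```python
# Minimal drift-detection sketch: the two-sample Kolmogorov-Smirnov
# statistic is the maximum gap between the empirical CDFs of the
# reference (training) data and the live production data.

def ks_statistic(reference, live):
    ref, lv = sorted(reference), sorted(live)
    points = sorted(set(ref) | set(lv))

    def ecdf(sample, x):
        # Fraction of the sample at or below x (empirical CDF).
        return sum(v <= x for v in sample) / len(sample)

    return max(abs(ecdf(ref, x) - ecdf(lv, x)) for x in points)

reference = [0.1 * i for i in range(100)]        # data the model trained on
drifted   = [5.0 + 0.1 * i for i in range(100)]  # production data has shifted

THRESHOLD = 0.3  # illustrative; real systems calibrate this via p-values
if ks_statistic(reference, drifted) > THRESHOLD:
    print("drift detected: trigger retrain and redeploy")
```

In a production setup, crossing the threshold would fire the retraining pipeline (e.g., via KNative eventing) rather than printing a message.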

These are databases optimized for ML. Feature stores allow data scientists and data engineers to reuse and collaborate on datasets that have been prepared for machine learning, so-called features. Preparing features can be a lot of work, and by sharing access to prepared feature datasets within data science teams, time to market can be greatly accelerated, whilst improving overall machine learning model quality and consistency. Feast is one such open-source feature store; it describes itself as "the fastest path to operationalizing analytic data for model training and online inference."
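The core idea is small enough to sketch: features are ingested once, keyed by entity, and the same lookup then serves both offline training and online inference. This toy in-memory version is illustrative only; stores like Feast add versioning, point-in-time correctness and scalable storage on top:

```python
# Toy sketch of the feature-store idea: prepared features are registered
# once and reused by any consumer, instead of being rebuilt per project.
# Feature names and values are illustrative.

class FeatureStore:
    def __init__(self):
        self._features = {}  # (entity_id, feature_name) -> value

    def ingest(self, feature_name, values_by_entity):
        for entity_id, value in values_by_entity.items():
            self._features[(entity_id, feature_name)] = value

    def get_features(self, entity_id, feature_names):
        # The same lookup serves offline training and online inference.
        return [self._features[(entity_id, f)] for f in feature_names]

store = FeatureStore()
store.ingest("avg_order_value", {"user_1": 42.0, "user_2": 17.5})
store.ingest("orders_last_30d", {"user_1": 3, "user_2": 1})

print(store.get_features("user_1", ["avg_order_value", "orders_last_30d"]))
# [42.0, 3]
```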

By embracing the MLops paradigm for their data lab and approaching AI with the six sustainability measures in mind (repeatability, availability, maintainability, quality, scalability and consistency), organizations and departments can measurably improve data team productivity and the long-term success of AI projects, and effectively retain their competitive edge.

Rob Gibbon is product manager for data platform and MLops at Canonical, the publisher of Ubuntu.


More here:
6 sustainability measures of MLops and how to address them - VentureBeat

Machine Learning Infrastructure as a Service to Witness Huge Growth by 2031 – Designer Women

marketreports.info delivers well-researched industry-wide information on the Machine Learning Infrastructure as a Service market. It provides information on the market's essential aspects such as top participants, factors driving market growth, precise estimation of market size, upcoming trends, changes in consumer behavioral patterns, the market's competitive landscape, and key market vendors, giving an in-depth analysis of the market. Additionally, the report is a compilation of both qualitative and quantitative assessment by industry experts, as well as industry participants across the value chain. The report also focuses on the latest developments that can enhance the performance of various market segments.

This Machine Learning Infrastructure as a Service report strategically examines the micro-markets and sheds light on the impact of technology upgrades on the market's performance. It presents a broad assessment of the market and contains valuable insights, historical data, and statistically supported and industry-validated market data. It offers market projections with the help of appropriate assumptions and methodologies, and provides information per market segment, such as geographies, products, technologies, applications, and industries.

To get a sample copy of the Machine Learning Infrastructure as a Service report, along with the TOC, statistics, and tables, please visit @ marketreports.info/sample/64682/Machine-Learning-Infrastructure-as-a-Service

Key vendors engaged in the Machine Learning Infrastructure as a Service market and covered in this report: Amazon Web Services (AWS), Google, Valohai, Microsoft, VMware, Inc, PyTorch

Segment by Type:
- Disaster Recovery as a Service (DRaaS)
- Compute as a Service (CaaS)
- Data Center as a Service (DCaaS)
- Desktop as a Service (DaaS)
- Storage as a Service (STaaS)

Segment by Application:
- Retail
- Logistics
- Telecommunications
- Others

The study conducts a SWOT analysis to evaluate the strengths and weaknesses of the key players in the Machine Learning Infrastructure as a Service market, together with an intricate examination of the drivers and restraints operating in the market. The report also evaluates the trends observed in the parent market, along with macro-economic indicators, prevailing factors, and market appeal according to different segments, and predicts the influence of different industry aspects on the market's segments and regions.

Researchers also carry out a comprehensive analysis of recent regulatory changes and their impact on the competitive landscape of the Machine Learning Infrastructure as a Service industry. The research assesses recent progress in the competitive landscape, including collaborations, joint ventures, product launches, acquisitions, and mergers, as well as investments in the sector for research and development.

Machine Learning Infrastructure as a Service key points from the Table of Contents:

Scope of the study:

The research on the Machine Learning Infrastructure as a Service market focuses on mining out valuable data on investment pockets, growth opportunities, and major market vendors to help clients understand their competitors' methodologies. The research also segments the market on the basis of end user, product type, application, and demography for the forecast period 2022-2030. Comprehensive analysis of critical aspects, such as impacting factors and the competitive landscape, is showcased with the help of vital resources, such as charts, tables, and infographics.


Machine Learning Infrastructure as a Service Market Segmented by Region/Country: North America, Europe, Asia Pacific, Middle East & Africa, and Central & South America

Major highlights of the Machine Learning Infrastructure as a Service report:

Interested in purchasing Machine Learning Infrastructure as a Service full Report? Get instant copy @ marketreports.info/checkout?buynow=64682/Machine-Learning-Infrastructure-as-a-Service

Thanks for reading this article; you can also get a customized version of this report with select chapters or region-wise coverage for regions such as Asia, North America, and Europe.

About Us

Marketreports.info is a global market research and consulting service provider specializing in offering a wide range of business solutions to its clients, including market research reports, primary and secondary research, demand forecasting services, focus group analysis and other services. We understand how important data is in today's competitive environment, and thus we have collaborated with the industry's leading research providers, who work continuously to meet the ever-growing demand for market research reports throughout the year.

Contact Us:

Carl Allison (Head of Business Development)

Tiensestraat 32/0302, 3000 Leuven, Belgium.

Market Reports

Phone: +44 141 628 5998

Email: sales@marketreports.info

Website: http://www.marketreports.info

Visit link:
Machine Learning Infrastructure as a Service to Witness Huge Growth by 2031 - Designer Women

Are babies the key to the next generation of artificial intelligence? – EurekAlert

Babies can help unlock the next generation of artificial intelligence (AI), according to Trinity College neuroscientists and colleagues who have just published new guiding principles for improving AI.

The research, published today [Wednesday 22 June 2022] in the journal Nature Machine Intelligence, examines the neuroscience and psychology of infant learning and distils three principles to guide the next generation of AI, which will help overcome the most pressing limitations of machine learning.

Dr Lorijn Zaadnoordijk, Marie Skłodowska-Curie Research Fellow at Trinity College, explained:

"Artificial intelligence (AI) has made tremendous progress in the last decade, giving us smart speakers, autopilots in cars, ever-smarter apps, and enhanced medical diagnosis. These exciting developments in AI have been achieved thanks to machine learning, which uses enormous datasets to train artificial neural network models. However, progress is stalling in many areas because the datasets that machines learn from must be painstakingly curated by humans. But we know that learning can be done much more efficiently, because infants don't learn this way! They learn by experiencing the world around them, sometimes by even seeing something just once."

In their article "Lessons from infant learning for unsupervised machine learning", Dr Lorijn Zaadnoordijk and Professor Rhodri Cusack, from the Trinity College Institute of Neuroscience, and Dr Tarek R. Besold, from TU Eindhoven, the Netherlands, argue that better ways to learn from unstructured data are needed. For the first time, they make concrete proposals about which particular insights from infant learning can be fruitfully applied in machine learning, and how exactly to apply them.

Machines, they say, will need in-built preferences to shape their learning from the beginning. They will need to learn from richer datasets that capture how the world is looking, sounding, smelling, tasting and feeling. And, like infants, they will need to have a developmental trajectory, where experiences and networks change as they grow up.

Dr. Tarek R. Besold, Researcher, Philosophy & Ethics group at TU Eindhoven, said:

"As AI researchers we often draw metaphorical parallels between our systems and the mental development of human babies and children. It is high time to take these analogies more seriously and look at the rich knowledge of infant development from psychology and neuroscience, which may help us overcome the most pressing limitations of machine learning."

Professor Rhodri Cusack, The Thomas Mitchell Professor of Cognitive Neuroscience, Director of Trinity College Institute of Neuroscience, added:

"Artificial neural networks were in part inspired by the brain. Similar to infants, they rely on learning, but current implementations are very different from human (and animal) learning. Through interdisciplinary research, babies can help unlock the next generation of AI."

For more information:

http://www.tcd.ie/neuroscience

http://www.cusacklab.org

http://www.tarekbesold.com


Nature Machine Intelligence

Experimental study

People

Lessons from infant learning for unsupervised machine learning

22-Jun-2022


More:
Are babies the key to the next generation of artificial intelligence? - EurekAlert