Archive for the ‘Machine Learning’ Category

Machine Learning Improves Weather and Climate Models – Eos

Both weather and climate models have improved drastically in recent years, as advances in one field have tended to benefit the other. But there is still significant uncertainty in model outputs that is not accurately quantified. That's because the processes that drive climate and weather are chaotic, complex, and interconnected in ways that researchers have yet to describe in the complex equations that power numerical models.

Historically, researchers have used approximations called parameterizations to model the relationships underlying small-scale atmospheric processes and their interactions with large-scale atmospheric processes. Stochastic parameterizations have become increasingly common for representing the uncertainty in subgrid-scale processes, and they are capable of producing fairly accurate weather forecasts and climate projections. But they remain mathematically challenging. Now researchers are turning to machine learning to make these parameterizations more efficient.

Here Gagne et al. evaluate the use of a class of machine learning networks known as generative adversarial networks (GANs) with a toy model of the extratropical atmosphere, a model first presented by Edward Lorenz in 1996 and thus known as the L96 system, which has frequently been used as a test bed for stochastic parameterization schemes. The researchers trained 20 GANs with varied noise magnitudes and identified a set that outperformed a hand-tuned parameterization in L96. The authors found that the success of the GANs in providing accurate weather forecasts was predictive of their performance in climate simulations: the GANs that provided the most accurate weather forecasts also performed best for climate simulations, but they did not perform as well in offline evaluations.
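
For readers unfamiliar with the test bed, the single-level Lorenz '96 equations are simple enough to sketch in a few lines of Python. The sketch below is illustrative only: the grid size, forcing, time step, and the additive noise term (standing in for the kind of stochastic subgrid parameterization that the GANs learn in the study) are placeholder values, not taken from Gagne et al.

```python
import numpy as np

def l96_tendency(x, forcing=8.0):
    """Single-level Lorenz '96: dX_i/dt = (X_{i+1} - X_{i-2}) * X_{i-1} - X_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def step_rk4(x, dt=0.005, forcing=8.0):
    """Advance the state one step with classical fourth-order Runge-Kutta."""
    k1 = l96_tendency(x, forcing)
    k2 = l96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = l96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = l96_tendency(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # 8 grid points, purely illustrative
for _ in range(1000):
    x = step_rk4(x)
    # Additive noise standing in for a stochastic subgrid parameterization;
    # in the study, a trained GAN would supply this term instead.
    x += 0.01 * rng.standard_normal(x.shape)
print(x)
```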

The study provides one of the first practically relevant evaluations of machine learning for uncertain parameterizations. The authors conclude that GANs are a promising approach for the parameterization of small-scale but uncertain processes in weather and climate models. (Journal of Advances in Modeling Earth Systems (JAMES), https://doi.org/10.1029/2019MS001896, 2020)

Kate Wheeling, Science Writer

Visit link:
Machine Learning Improves Weather and Climate Models - Eos

Self-supervised learning is the future of AI – The Next Web

Despite the huge contributions of deep learning to the field of artificial intelligence, there's something very wrong with it: it requires huge amounts of data. This is one thing that both the pioneers and the critics of deep learning agree on. In fact, deep learning didn't emerge as the leading AI technique until a few years ago because of the limited availability of useful data and the shortage of computing power to process that data.

Reducing the data-dependency of deep learning is currently among the top priorities of AI researchers.

In his keynote speech at the AAAI conference, computer scientist Yann LeCun discussed the limits of current deep learning techniques and presented the blueprint for self-supervised learning, his roadmap for solving deep learning's data problem. LeCun is one of the godfathers of deep learning and the inventor of convolutional neural networks (CNNs), one of the key elements that have spurred a revolution in artificial intelligence in the past decade.

Self-supervised learning is one of several plans to create data-efficient artificial intelligence systems. At this point, it's really hard to predict which technique will succeed in creating the next AI revolution (or whether we'll end up adopting a totally different strategy). But here's what we know about LeCun's master plan.

First, LeCun clarified that what is often referred to as the limitations of deep learning is, in fact, a limit of supervised learning. Supervised learning is the category of machine learning algorithms that require annotated training data. For instance, if you want to create an image classification model, you must train it on a vast number of images that have been labeled with their proper class.
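
To make the distinction concrete, here is a minimal sketch (not from LeCun's talk) of what supervised learning looks like in code: every training example must come paired with a label. The data here is synthetic and the labels are derived programmatically as a stand-in for human annotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Supervised learning needs (input, label) pairs: 1,000 fake "images" flattened
# to 64 features each, with a class label standing in for human annotation.
X = rng.standard_normal((1000, 64))
y = X[:, :10].argmax(axis=1)              # 10 synthetic classes

# A linear softmax classifier trained with gradient descent on cross-entropy.
W = np.zeros((64, 10))
for _ in range(300):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0     # d(cross-entropy)/d(logits)
    W -= 0.5 * X.T @ grad / len(y)

accuracy = ((X @ W).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```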

"[Deep learning] is not supervised learning. It's not just neural networks. It's basically the idea of building a system by assembling parameterized modules into a computation graph," LeCun said in his AAAI speech. "You don't directly program the system. You define the architecture and you adjust those parameters. There can be billions."

Deep learning can be applied to different learning paradigms, LeCun added, including supervised learning, reinforcement learning, and unsupervised or self-supervised learning.

But the confusion surrounding deep learning and supervised learning is not without reason. For the moment, the majority of deep learning algorithms that have found their way into practical applications are based on supervised learning models, which says a lot about the current shortcomings of AI systems. Image classifiers, facial recognition systems, speech recognition systems, and many of the other AI applications we use every day have been trained on millions of labeled examples.

Reinforcement learning and unsupervised learning, the other categories of learning algorithms, have so far found very limited applications.

Supervised deep learning has given us plenty of very useful applications, especially in fields such as computer vision and some areas of natural language processing. Deep learning is playing an increasingly important role in sensitive applications, such as cancer detection. It is also proving to be extremely useful in areas where the scale of the problem is beyond being addressed with human effort, such as (with some caveats) reviewing the huge amount of content posted on social media every day.

"If you take deep learning from Facebook, Instagram, YouTube, etc., those companies crumble," LeCun says. "They are completely built around it."

But as mentioned, supervised learning is only applicable where there's enough quality data and the data can capture the entirety of possible scenarios. As soon as trained deep learning models face novel examples that differ from their training examples, they start to behave in unpredictable ways. In some cases, showing an object from a slightly different angle might be enough to confound a neural network into mistaking it for something else.

ImageNet vs. reality: in ImageNet (left column), objects are neatly positioned under ideal background and lighting conditions. In the real world, things are messier. (Source: objectnet.dev)

Deep reinforcement learning has shown remarkable results in games and simulation. In the past few years, reinforcement learning has conquered many games that were previously thought to be off-limits for artificial intelligence. AI programs have already decimated human world champions at StarCraft 2, Dota 2, and the ancient Chinese board game Go.

But the way these AI programs learn to solve problems is drastically different from that of humans. Basically, a reinforcement learning agent starts with a blank slate and is only provided with a basic set of actions it can perform in its environment. The AI is then left on its own to learn, through trial and error, how to generate the most rewards (e.g., win more games).
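
The trial-and-error loop can be illustrated with a toy example far simpler than StarCraft or Dota: a tabular Q-learning agent in a hypothetical five-state corridor. The environment, hyperparameters, and reward scheme below are invented for illustration; the point is that the only feedback the agent ever receives is a scalar reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 5-state corridor: start at state 0, reward only for reaching state 4.
n_states, n_actions = 5, 2              # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.2

for episode in range(500):              # many trial-and-error episodes
    s = 0
    for _ in range(200):                # cap episode length
        # Epsilon-greedy: mostly exploit what we know, sometimes explore.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update from the observed (state, action, reward, next state).
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print(Q)        # after training, "move right" dominates in every state
```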

This model works when the problem space is simple and you have enough compute power to run as many trial-and-error sessions as needed. In most cases, reinforcement learning agents take an enormous number of sessions to master a game. The huge costs have limited reinforcement learning research to labs owned or funded by wealthy tech companies.

Reinforcement learning agents must be trained on hundreds of years' worth of sessions to master games, far more than a human could play in a lifetime (source: Yann LeCun).

Reinforcement learning systems are very bad at transfer learning. A bot that plays StarCraft 2 at grandmaster level needs to be trained from scratch if it is to play Warcraft 3. In fact, even small changes to the StarCraft game environment can immensely degrade the performance of the AI. In contrast, humans are very good at extracting abstract concepts from one game and transferring them to another.

Reinforcement learning really shows its limits when it comes to solving real-world problems that can't be simulated accurately. "What if you want to train a car to drive itself? It's very hard to simulate this accurately," LeCun said, adding that if we wanted to do it in real life, we would have to destroy many cars. And unlike simulated environments, real life doesn't allow you to run experiments in fast forward, and parallel experiments, when possible, would result in even greater costs.

LeCun breaks down the challenges of deep learning into three areas.

First, we need to develop AI systems that learn with fewer samples or fewer trials. "My suggestion is to use unsupervised learning, or I prefer to call it self-supervised learning because the algorithms we use are really akin to supervised learning, which is basically learning to fill in the blanks," LeCun says. "Basically, it's the idea of learning to represent the world before learning a task. This is what babies and animals do. We run about the world, we learn how it works before we learn any task. Once we have good representations of the world, learning a task requires few trials and few samples."

Babies develop concepts of gravity, dimensions, and object persistence in the first few months after birth. While there's debate over how much of this capability is hardwired into the brain and how much is learned, what is certain is that we develop many of our abilities simply by observing the world around us.

The second challenge is creating deep learning systems that can reason. Current deep learning systems are notoriously bad at reasoning and abstraction, which is why they need huge amounts of data to learn simple tasks.

"The question is, how do we go beyond feed-forward computation and System 1? How do we make reasoning compatible with gradient-based learning? How do we make reasoning differentiable? That's the bottom line," LeCun said.

System 1 covers the kinds of tasks that don't require active thinking, such as navigating a familiar area or making small calculations. System 2 is the more deliberate kind of thinking, which requires reasoning. Symbolic artificial intelligence, the classic approach to AI, has proven to be much better at reasoning and abstraction.

But LeCun doesn't suggest returning to symbolic AI or to hybrid artificial intelligence systems, as other scientists have suggested. His vision for the future of AI is much more in line with that of Yoshua Bengio, another deep learning pioneer, who introduced the concept of system 2 deep learning at NeurIPS 2019 and discussed it further at AAAI 2020. LeCun did admit, however, that nobody has a completely good answer as to which approach will enable deep learning systems to reason.

The third challenge is to create deep learning systems that can learn and plan complex action sequences and decompose tasks into subtasks. Deep learning systems are good at providing end-to-end solutions to problems but very bad at breaking them down into specific, interpretable, and modifiable steps. There have been advances in creating learning-based AI systems that can decompose images, speech, and text. Capsule networks, invented by Geoffrey Hinton, address some of these challenges.

But learning to reason about complex tasks is beyond today's AI. "We have no idea how to do this," LeCun admits.

The idea behind self-supervised learning is to develop a deep learning system that can learn to fill in the blanks.

"You show a system a piece of input, a text, a video, even an image, you suppress a piece of it, mask it, and you train a neural net or your favorite class of model to predict the piece that's missing. It could be the future of a video or the words missing in a text," LeCun says.
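
A minimal sketch of that "fill in the blanks" recipe is shown below: training pairs are manufactured from raw, unlabeled text by hiding a word and treating it as the target. The whitespace tokenization and single-mask scheme are deliberately simplistic compared with what production models such as BERT actually do; the point is that no human annotation is required.

```python
import random

random.seed(0)

corpus = [
    "the cat sat on the mat",
    "machine learning models need data",
    "self supervised learning fills in the blanks",
]

def make_training_pair(sentence, mask_token="[MASK]"):
    """Turn raw, unlabeled text into a (masked input, target) training pair.

    The label is simply the word we hid, which is the core of
    "learning to fill in the blanks": the data labels itself.
    """
    tokens = sentence.split()
    i = random.randrange(len(tokens))
    target = tokens[i]
    masked = tokens[:i] + [mask_token] + tokens[i + 1:]
    return " ".join(masked), target

for sentence in corpus:
    print(make_training_pair(sentence))
```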

The closest thing we have to self-supervised learning systems are Transformers, an architecture that has proven very successful in natural language processing. Transformers don't require labeled data. They are trained on large corpora of unstructured text such as Wikipedia articles. And they've proven to be much better than their predecessors at generating text, engaging in conversation, and answering questions. (But they are still very far from really understanding human language.)

Transformers have become very popular and are the underlying technology for nearly all state-of-the-art language models, including Google's BERT, Facebook's RoBERTa, OpenAI's GPT-2, and Google's Meena chatbot.

More recently, AI researchers have shown that transformers can perform integration and solve differential equations, problems that require symbol manipulation. This might be a hint that the evolution of transformers will enable neural networks to move beyond pattern recognition and statistical approximation tasks.

So far, transformers have proven their worth in dealing with discrete data such as words and mathematical symbols. "It's easy to train a system like this because there is some uncertainty about which word could be missing, but we can represent this uncertainty with a giant vector of probabilities over the entire dictionary, and so it's not a problem," LeCun says.
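
That "giant vector of probabilities over the entire dictionary" is easy to picture in code. The tiny vocabulary and scores below are made up; the point is that a softmax over a finite dictionary gives a complete, tractable representation of the uncertainty, something with no obvious analogue for the space of all possible video frames.

```python
import numpy as np

vocab = ["cat", "dog", "mat", "sat"]                # a toy dictionary
logits = np.array([2.0, 0.5, 1.2, -0.3])            # model scores for the masked word

# Softmax turns the scores into a full probability distribution over the vocabulary.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(dict(zip(vocab, probs.round(3))))
# There is no comparably small "vocabulary" enumerating all possible video frames.
```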

But the success of Transformers has not transferred to the domain of visual data. "It turns out to be much more difficult to represent uncertainty and prediction in images and video than it is in text because it's not discrete. We can produce distributions over all the words in the dictionary. We don't know how to represent distributions over all possible video frames," LeCun says.

For each video segment, there are countless possible futures. This makes it very hard for an AI system to predict a single outcome, say the next few frames in a video. The neural network ends up calculating the average of possible outcomes, which results in blurry output.

"This is the main technical problem we have to solve if we want to apply self-supervised learning to a wide variety of modalities like video," LeCun says.

LeCun's favored method for approaching self-supervised learning is what he calls latent variable energy-based models. The key idea is to introduce a latent variable Z that computes the compatibility between a variable X (the current frames of a video) and a prediction Y (the future of the video) and selects the outcome with the best compatibility score. In his speech, LeCun further elaborated on energy-based models and other approaches to self-supervised learning.

Energy-based models use a latent variable Z to compute the compatibility between a variable X and a prediction Y and select the outcome with the best compatibility score (image credit: Yann LeCun).
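
As a rough illustration of that recipe, and emphatically not LeCun's actual model, the sketch below scores candidate predictions Y against an input X with a toy quadratic energy, minimizes over a latent variable Z by random search, and keeps the prediction with the lowest energy (best compatibility).

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x, y, z, W):
    """Toy quadratic energy: low values mean x, y, and the latent z are compatible."""
    return float(np.sum((y - W @ x - z) ** 2) + 0.1 * np.sum(z ** 2))

def best_energy(x, y, W, n_latent_samples=256, dim_z=4):
    """Approximately minimize the energy over the latent variable z by random search."""
    zs = rng.standard_normal((n_latent_samples, dim_z))
    return min(energy(x, y, z, W) for z in zs)

dim_x, dim_y = 8, 4
W = rng.standard_normal((dim_y, dim_x))              # parameters of the energy function
x = rng.standard_normal(dim_x)                       # the "current frame"
candidates = [rng.standard_normal(dim_y) for _ in range(5)]   # possible "futures"

scores = [best_energy(x, y, W) for y in candidates]
prediction = candidates[int(np.argmin(scores))]      # keep the most compatible future
print(scores)
```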

"I think self-supervised learning is the future. This is what's going to allow our AI systems, deep learning systems, to go to the next level, perhaps learn enough background knowledge about the world by observation so that some sort of common sense may emerge," LeCun said in his speech at the AAAI Conference.

One of the key benefits of self-supervised learning is the immense gain in the amount of information the model outputs for each training example. In reinforcement learning, the AI system is trained at the level of a single scalar: the model receives one numerical value as reward or punishment for its actions. In supervised learning, the AI system predicts a category or a numerical value for each input.

In self-supervised learning, the output grows to a whole image or set of images. "It's a lot more information. To learn the same amount of knowledge about the world, you will require fewer samples," LeCun says.

We must still figure out how to handle the uncertainty problem, but when a solution emerges, we will have unlocked a key component of the future of AI.

"If artificial intelligence is a cake, self-supervised learning is the bulk of the cake," LeCun says. "The next revolution in AI will not be supervised, nor purely reinforced."

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published April 5, 2020 05:00 UTC

Read more from the original source:
Self-supervised learning is the future of AI - The Next Web

Google is using machine learning to improve the quality of Duo calls – The Verge

Google has rolled out a new technology called WaveNetEQ to improve audio quality in Duo calls when the service can't maintain a steady connection. It's based on technology from Google's DeepMind division and aims to replace audio jitter with artificial noise that sounds just like human speech, generated using machine learning.

If you've ever made a call over the internet, chances are you've experienced audio jitter. It happens when packets of audio data sent as part of the call get lost along the way or otherwise arrive late or in the wrong order. Google says that 99 percent of Duo calls experience packet loss: 20 percent of these calls lose over 3 percent of their audio, and 10 percent lose over 8 percent. That's a lot of audio to replace.

Every calling app has to deal with this packet loss somehow, but Google says that these packet loss concealment (PLC) processes can struggle to fill gaps of 60 ms or more without sounding robotic or repetitive. WaveNetEQ's solution is based on DeepMind's neural network technology, and it has been trained on data from over 100 speakers in 48 different languages.
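
For context, the sketch below shows what a naive PLC baseline looks like: it simply repeats the last good packet and fades it toward silence, which is roughly the kind of approach that starts to sound robotic beyond about 60 ms. It is in no way WaveNetEQ, which generates the missing audio with a neural network; the packet size and fade schedule here are arbitrary.

```python
import numpy as np

def conceal_packet_loss(packets, packet_len=480):
    """Naive packet-loss concealment: repeat the last good packet and fade it out.

    `packets` is a list of 1-D float arrays, with None marking a lost packet.
    This is the simple baseline that learned systems like WaveNetEQ try to beat.
    """
    out, last_good, consecutive_losses = [], np.zeros(packet_len), 0
    for pkt in packets:
        if pkt is None:
            consecutive_losses += 1
            fade = max(0.0, 1.0 - 0.5 * consecutive_losses)   # fade toward silence
            out.append(last_good * fade)
        else:
            consecutive_losses = 0
            last_good = pkt
            out.append(pkt)
    return np.concatenate(out)

# 10 ms packets at 48 kHz -> 480 samples each; packet 2 is lost in transit.
rng = np.random.default_rng(0)
stream = [rng.standard_normal(480) for _ in range(4)]
stream[2] = None
audio = conceal_packet_loss(stream)
print(audio.shape)
```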

Here are a few audio samples from Google comparing WaveNetEQ against NetEQ, a commonly used PLC technology. Here's how it sounds when it's trying to replace 60 ms of packet loss:

Here's a comparison when a call is experiencing packet loss of 120 ms:

There's a limit to how much audio the system can replace, though. Google's tech is designed to replace short sounds rather than whole words, so after 120 ms it fades out and produces silence. Google says it evaluated the system to make sure it wasn't introducing any significant new sounds. Plus, all of the processing needs to happen on-device, since Google Duo calls are end-to-end encrypted by default. Once the call's real audio resumes, WaveNetEQ seamlessly fades back to reality.

It's a neat little bit of technology that should make calls that much easier to understand when the internet fails them. The technology is already available for Duo calls made on Pixel 4 phones, thanks to the handset's December feature drop, and Google says it's in the process of rolling it out to other, unnamed handsets.

See the rest here:
Google is using machine learning to improve the quality of Duo calls - The Verge

Parasoft wins 2020 VDC Research Embeddy Award for Its Artificial Intelligence (AI) and Machine Learning (ML) Innovation – Yahoo Finance

Parasoft C/C++test is honored for its leading technology to increase software engineer productivity and achieve safety compliance

MONROVIA, Calif., April 7, 2020 /PRNewswire/ -- Parasoft, a global software testing automation leader for over 30 years, has received the VDC Research Embeddy Award for 2020. The technology research and consulting firm annually recognizes cutting-edge software and hardware technologies in the embedded industry. This year, Parasoft C/C++test, a unified development testing solution for the safety and security of embedded C and C++ applications, was recognized for its new, innovative approach that expedites the adoption of software code analysis, increasing developer productivity and simplifying compliance with industry standards such as CERT C/C++, MISRA C 2012, and AUTOSAR C++14. To learn more about Parasoft C/C++test, please visit: https://www.parasoft.com/products/ctest.


"Parasoft has continued its investment in the embedded market, adding new products and personnel to boost its market presence. In addition to highlighting expanded partnerships and coding-standard support, the company announced the integration of AI capabilities into its static analysis engine. While defect prioritization systems have been part of static analysis solutions for well over ten years, Parasoft's solution takes the idea a step further. Their solution now effectively learns from past interactions with identified defects and the codebase to better help users triage new findings," states Chris Rommel, EVP, VDC Research Group.

Parasoft's latest innovation applies AI and machine learning to the process of reviewing static analysis findings. Static analysis is a foundational part of the quality process, especially in safety-critical development (e.g., ISO 26262, IEC 61508), and is an effective first step in establishing secure development practices. A common challenge when deploying static analysis tools is dealing with the multitude of reported findings. Scans can produce tens of thousands of findings, and teams of highly qualified engineers must then go through a time-consuming process of reviewing and identifying high-priority findings. This process leads to critical issues being found and reviewed late in the cycle, delaying delivery and, worse, allowing insecure or unsafe code to become embedded in the codebase.

Parasoft leaps ahead of the rest of the competitive market by having AI/ML take into account the context of both historical interactions with the codebase and prior static analysis findings to predict the relevance of new findings and prioritize them. This innovation helps organizations achieve compliance with industry standards and offers a unique application of AI/ML to the adoption of static analysis. It builds on Parasoft's previous AI/ML innovations in the areas of Web UI, API, and unit testing: https://blog.parasoft.com/what-is-artificial-intelligence-in-software-testing.
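
Parasoft has not published the details of its model, but the general shape of the idea, learning from historical triage decisions to rank new static analysis findings, can be sketched with a generic classifier. The features, data, and choice of logistic regression below are hypothetical illustrations, not Parasoft's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for past static-analysis findings:
# [rule severity, file age in days, recent edits to the function, checker category id]
X_history = np.array([
    [3, 120, 14, 2],
    [1,  10,  1, 5],
    [4, 300, 30, 2],
    [2,  45,  3, 7],
    [5, 200, 22, 1],
    [1,   5,  0, 6],
])
# Labels from historical triage: 1 if the team fixed the finding, 0 if it was suppressed.
y_history = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_history, y_history)

# Rank new findings by the predicted probability that they are worth acting on.
X_new = np.array([[4, 250, 18, 2], [1, 12, 2, 6]])
priority = model.predict_proba(X_new)[:, 1]
print(priority.argsort()[::-1])     # indices of new findings, highest priority first
```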

"We are extremely honored to have received this award, particularly in light of the competition, VDC's expertise and knowledge of the embedded market," said Mark Lambert, VP of Products at Parasoft. "We have always been committed to innovation led by listening to our customers and leveraging capabilities that will help drive them forward. This creativity has always driven Parasoft's development and is something that has been in the company's DNA from its founding."


About Parasoft (www.parasoft.com):Parasoft, the global leader in software testing automation, has been reducing the time, effort, and cost of delivering high-quality software to the market for the last 30+ years. Parasoft's tools support the entire software development process, from when the developer writes the first line of code all the way through unit and functional testing, to performance and security testing, leveraging simulated test environments along the way. Parasoft's unique analytics platform aggregates data from across all testing practices, providing insights up and down the testing pyramid to enable organizations to succeed in today's most strategic development initiatives, including Agile/DevOps, Continuous Testing, and the complexities of IoT.

View original content to download multimedia: http://www.prnewswire.com/news-releases/parasoft-wins-2020-vdc-research-embeddy-award-for-its-artificial-intelligence-ai--and-machine-learning-ml-innovation-301036797.html

SOURCE Parasoft

Excerpt from:
Parasoft wins 2020 VDC Research Embeddy Award for Its Artificial Intelligence (AI) and Machine Learning (ML) Innovation - Yahoo Finance

Machine Learning in Healthcare Market to Witness Tremendous Growth in Forecasted Period 2020-2027 – Bandera County Courier

Market Research Inc has added analytical data on the Machine Learning in Healthcare market to its massive database. The report covers various verticals of the business and is aggregated on the basis of different dynamic aspects of the market. The statistical report is compiled by means of primary and secondary research methodologies. A comprehensive Porter's five forces analysis and a SWOT analysis are used to examine the strengths, weaknesses, opportunities, and threats of the market.

Request a Sample Copy of this report @

https://www.marketresearchinc.com/request-sample.php?id=16640

Top Key Players in the Global Machine Learning in Healthcare Market Research Report:

The study also presents details on financial attributes such as pricing structures, shares, and profit margins. As a distinctive feature, the report includes a summary of top-notch companies in Machine Learning in Healthcare. The competitive landscape of the Machine Learning in Healthcare market is presented by analyzing various established and startup companies. The economic aspects of the businesses are described using facts and figures.

Ask for Discount @

https://www.marketresearchinc.com/ask-for-discount.php?id=16640

The market study covers the lucrative market scope of North America, Latin America, Asia-Pacific, Europe, and Africa on the basis of productivity, focusing on the leading countries in these regions. The report also highlights the pricing structure, including the cost of raw materials and the cost of manpower.

The report also offers a clear picture of the various factors that act as significant business stimulants of the Machine Learning in Healthcare market. The study analyzes and presents accurate data that helps gauge the overall framework of the businesses. Technological advancements in the global Machine Learning in Healthcare sector are also examined by experts.

Key Objectives of Machine Learning in Healthcare Market Report:

- Study of the annual revenues and market developments of the major players that supply Machine Learning in Healthcare
- Analysis of the demand for Machine Learning in Healthcare by component
- Assessment of future trends and growth of architecture in the Machine Learning in Healthcare market
- Assessment of the Machine Learning in Healthcare market with respect to the type of application
- Study of the market trends in various regions and countries, by component, of the Machine Learning in Healthcare market
- Study of contracts and developments related to the Machine Learning in Healthcare market by key players across different regions
- Finalization of overall market sizes by triangulating the supply-side data, which includes product developments, supply chain, and annual revenues of companies supplying Machine Learning in Healthcare across the globe

Ask for Enquiry @

https://www.marketresearchinc.com/enquiry-before-buying.php?id=16640

In this study, the years considered to estimate the size of Machine Learning in Healthcare are as follows:

History Year: 2013-2019

Base Year: 2019

Estimated Year: 2020

Forecast Year 2020 to 2026.

About Us

Market Research Inc is farsighted in its view and covers massive ground in global research. Local or global, we keep a close check on both markets. Trends and concurrent assessments sometimes overlap and influence each other. When we say market intelligence, we mean a deep and well-informed insight into your products, market, marketing, competitors, and customers. Market research companies are leading the way in nurturing global thought leadership. We help your product or service become the best it can be with our informed approach.

Contact Us

Market Research Inc

Kevin

51 Yerba Buena Lane, Ground Suite,

Inner Sunset San Francisco, CA 94103, USA

Call Us: +1 (628) 225-1818

Write Us: sales@marketresearchinc.com

https://www.marketresearchinc.com

Read this article:
Machine Learning in Healthcare Market to Witness Tremendous Growth in Forecasted Period 2020-2027 - Bandera County Courier