Archive for the ‘Machine Learning’ Category

Top 5 Deep Learning Frameworks that Techies Should Learn in 2022 – Analytics Insight

Deep learning frameworks are trending among machine learning developers

Deep learning frameworks help data scientists and ML developers with a range of critical tasks. Today, both predictive analytics and machine learning are deeply integrated into business operations and have proven to be crucial. Deep learning, an advanced branch of ML, can improve efficiency and accuracy for the task at hand when models are trained on vast amounts of big data. In this article, we explore the top deep learning frameworks that techies should learn this year.

TensorFlow: Google's open-source machine learning platform offers a wide range of tools to enable model deployment on different types of devices. TensorFlow.js facilitates model deployment in browsers, while the lite version, TensorFlow Lite, is well-suited for mobile and embedded devices.

PyTorch: Developed by Facebook, this versatile framework is designed to cover the entire process from research prototyping to production deployment. It offers a C++ frontend in addition to its Python interface.

Keras: An open-source framework that can run on top of TensorFlow, Theano, Microsoft Cognitive Toolkit, and PlaidML. The Keras framework is known for its speed, thanks to built-in support for parallel data processing during training.

Sonnet: A high-level library, built by DeepMind, for constructing complex neural network structures in TensorFlow. It simplifies high-level architectural design by creating self-contained Python objects that are then independently connected to the computation graph.

MXNet: A highly scalable open-source deep learning framework designed to train and deploy deep neural networks. It is capable of fast model training and supports multiple programming languages, including C++, Python, Julia, and Matlab.
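
To give a concrete feel for the API style these frameworks share, here is a minimal sketch of defining and training a small network with Keras; the layer sizes and toy dataset are illustrative placeholders, not recommendations from the article.

```python
# A minimal, illustrative Keras model; layer sizes and the toy dataset
# are placeholders, not recommendations from the article.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data stands in for a real training set.
X = np.random.rand(256, 10).astype("float32")
y = (X.sum(axis=1) > 5).astype("float32")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```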


Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.

Read more from the original source:
Top 5 Deep Learning Frameworks that Techies Should Learn in 2022 - Analytics Insight

Fresh4Cast leader argues for the crucial role of machine learning in moving the industry forward – Produce Business UK

Automation is touching every industry; you can't survive in the 21st-century economy without the data and the insights that come from technologies like artificial intelligence and machine learning. The food automation market, for example, is expected to reach $29.4 billion by 2027.

Within the food space are produce and agriculture, sub-spaces that haven't seen quite as much advancement and adoption. That's changing now thanks to companies like Fresh4cast, which uses AI forecasting to help growers and distributors improve productivity, increase margins, and reduce waste. Its solution includes data sets built from historical records, trade statistics, and weather data, plus a virtual assistant designed to automate tasks.

At the London Produce Show and Conference, we will be welcoming Fresh4cast's COO, Michele Dall'Olio.

Michele has based his career on the synergy between innovation and fresh produce. Starting with a degree in Agribusiness and a master's in Management and Marketing, he explored the complexity of fresh produce data while working as Head of Research for a leading Italian consultancy. He then moved to London and started a new journey with Fresh4cast, where he is now the COO.

Michele spoke to us about how greater insights can help growers and distributors, how that can lead to less food waste, and what he'll be talking about at the London Produce Show.

Michele Dall'Olio, COO, Fresh4cast

Q: Let's kick this off with a little bit of an overview of yourself, of Fresh4cast, and of what you do.

A: I'm from Italy; I moved to London five years ago. I have always been working and studying in the fresh produce sector, from high school until now. In my career back in Italy, I was working with a lot of data, as head of analysis in a leading consultancy there, and I basically developed into a more data-oriented person with Fresh4cast. When I moved to London five years ago, I joined as Head of Customer Development, and now I'm COO, so I'm specifically looking at all the operations and internal planning, and I'm basically the interface between the customer and our production team.

Q: You said you've been in the produce space for a number of years, and I'm really fascinated by the idea of applying technologies like artificial intelligence and machine learning to sectors where that kind of technology really hasn't been applied before. I used to work for a motor company, for example, and that was a legacy space where the technology was very slow to develop because of older people who were set in their ways. Do you feel like it was the same in the produce space? Was there a lack of innovation for a long time? And is that changing now?

A: We are definitely at a tipping point because, if you think about agriculture in general, and fresh produce is one of its sub-sectors, it is always lagging a bit behind other sectors, for a variety of reasons. Service-based sectors are always more advanced when we look at software, for instance. So we definitely are at a tipping point because, yes, as a sector it's a bit behind, but the benefit is that someone else has already explored those paths. If you're lagging a bit behind, you know what works and what doesn't; it's an important factor, especially in AI, because there's a lot of trial and error, and a lot of errors. There are a lot of very good examples fresh produce can take inspiration from. So the data is there, it's building up, and it's just waiting for a machine learning application or an algorithmic forecaster to unlock its potential.

Q: What do you think are some of the reasons why the space was lagging behind before?

A: Well, there are a lot of reasons; it's a very difficult topic. If you think about innovation in general, not just technological innovation, it's driven by key factors such as the availability of talent, and being able to attract that talent into the sector. Compared to other sectors, of course, agriculture is a lower-margin sector, so innovation is there but it's not always the first priority. And so people and resources are the main thing that I see actually changing at the moment. Until 10 years ago, you didn't see any fresh produce business with a data scientist in house, or a team of people analyzing data, or actually hiring companies such as Fresh4cast to build a data set, build machine learning forecasters, and so on. Nowadays there are a lot of requests for this, so the mentality of top management is changing. That should drive this tipping point of catching up with other sectors.

Q: It's funny what you said about being a little bit behind meaning that you get to actually see what works and what doesn't. I never thought of it that way before. Everybody else does the trial and error, and then you come along and go, "Okay, well, now we know what works, and we can just apply it."

A: When we think about the future and the present, we think now is the present for everyone, but that's not actually true because some people are already in the future. So we can basically copy, or take a lot of inspiration, from them.

Q: Talk about the ways that you apply AI and machine learning to the produce sector, and the ways that you use that data.

A: Fresh4cast has a three-step approach. First of all, there is the customer's data asset. As you know, machine learning feeds on data and learns from data, so that's the very first milestone. Building a data set is easier said than done, because it's very laborious and requires different kinds of skills within a company, but we have different tools for that. Once we have a data set we can work with, the second step is that we display it back to the customer using business intelligence tools that we've built. So there is very specific analytics, for instance, that helps to understand seasonality in the fresh produce business, and so on. It's about understanding what happened in the past in order to understand what is going to happen in the future. And the third step is using algorithmic forecasting and machine learning forecasting, very different tools, to extract even more value from that data asset, letting the machine find correlations and build models that will predict what's going to happen in the future, given specific inputs.

Q: So, you get the data and you have to make these forecasts based on that data. And then what do the growers and distributors do with that? How do they put it to use? What are some use cases for them?

A: Well, it depends on the supply chain. In order to answer your question, I need to talk about Fresh4cast's supply chain approach. We work with the whole supply chain; we don't work with only one aspect. So we work with growers, with distributors, with data from retailers, and so on. And the important bit is that, for each point of the supply chain, the application changes. I'll give you two key examples. One is at production: if a grower is going to plant a certain amount of strawberries, for instance, we give them the weather forecast and other inputs, so they know when to plant and how much they are going to harvest; in a nutshell, how many strawberries will be ready next week or in four weeks' time, and at what quality. On the other side, the sales side, say there is a distributor supplying a big retailer, for instance; the distributor needs to foresee and start planning for how much the retailer is going to ask for in the next few weeks. So we are talking about a forecast that tries to predict how much volume will be needed. If there is a big promo in Tesco, for instance, what is going to be the seasonality in the future? The cannibalization within the category, and so on.

This is usually something that a human could do, but not at scale. There are a lot of very small tasks that a human could do, but it would take so long that the data would already be old, so it wouldn't be effective to use that forecast because we would already have the actuals. A machine learning application, especially in fresh produce, is something that automates a lot of very small tasks in a clever way. It's like a proficient assistant: it gives you an output, and the human, at the end of the day, decides what to do with it and makes decisions using this information.

Q: You're telling growers when and how much to grow, and you're telling distributors and retailers how much they're going to sell, is that right? So everybody in the supply chain is getting this data to know how much to expect and how much they should expect to sell?

A: Exactly. If you want to be demand driven, you need to have a forecast in all of the key steps of your supply chain, each feeding into the other. So, for instance, how much product will you have next week, and how much will you sell next week? These two pieces of information together create synergy and allow you to plan better, for instance, your warehouse activities, like how many man-hours you need to pack the product.

Q: Where do you pull your data from? Like you said, you're using an existing database. Is any of your data proprietary?

A: We are a software-as-a-service company, first of all, so the data is confined inside the customer's walls. It doesn't go anywhere, and we only use the data for that customer. So we don't do data aggregation with other customers or build models across customers. We do every application in isolation, because we also work with fierce competitors; that's the way to go. We provide some data, such as weather and international trade, but it's all publicly available data. We don't have any proprietary data; we just have proprietary models that interpret the data.

Q: It's interesting that you don't aggregate that data. Wouldn't that be a more helpful way to get a broader view of the market?

A: We have a few cases where a few companies put their data together, but we need to have written consent. By default, we always work only with the data from the specific customer. And the reason why is that aggregation is useful for generic market trends. Companies like Nielsen aggregate data across a lot of companies, so they have market trends. On our end, we tend to do the opposite: we specialize and fine-tune the forecasting model specifically on that customer's operations and that customer's data. Because even if one company sells the same thing as another one, it doesn't mean that their business structure and supply chain are similar. They could have a very different structure and, since the data reflects the operation, whenever you change something in the structure it becomes a different kind of data.

Q: I would think that what one retailer sells would sell the same at another retailer, but it sounds like maybe that's not necessarily the case.

A: Just to clarify, we don't work directly with retailers; our customers always specialize only in fresh produce. Some of our customers' data comes from the retailer, so we can forecast that, but our customers are the growers and distributors. We can have data about the retailers, but they usually have their own internal forecasting systems.

Q: I know that you also offer a virtual analyst for your customers, and I'm very interested in learning more about that. I saw that it can send email reports and alerts, and prepare Excel reports and PowerPoint presentations. What's the technology behind that?

A: Saga is our virtual assistant, and you already mentioned a lot of the use cases we apply it to. It's basically a very proficient assistant that automates boring tasks. That means it's very quick at doing them, and it takes out the overhead of admin-based work that all employees have in their routine jobs. From sales to production, they always have to work with an Excel file, for instance. With Saga, if a grower sends their estimate to the central planning team, they CC Saga in the email; Saga is then able to see the attachment, incorporate it into our database, display analytics, and come back with an email report, which is very bespoke depending on the customer. Basically, it's good at interfacing, especially with email attachments, and preparing reports on the fly. So, again, it's all about automation at the end of the day.
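
As a rough illustration of the kind of pipeline Michele describes, here is a hypothetical sketch of the attachment-ingestion step, assuming the estimate arrives as an Excel file in a monitored inbox. The host, credentials, and table name are invented for the example; the article does not describe Saga's actual implementation.

```python
# Hypothetical sketch: pull unread emails, read any Excel attachment,
# and append the rows to a local database. HOST, USER, PASSWORD, and the
# table name are invented; the article does not describe Saga's internals.
import email
import imaplib
import io
import sqlite3

import pandas as pd

HOST, USER, PASSWORD = "imap.example.com", "saga@example.com", "secret"

imap = imaplib.IMAP4_SSL(HOST)
imap.login(USER, PASSWORD)
imap.select("INBOX")
_, ids = imap.search(None, "UNSEEN")              # new grower estimates
for msg_id in ids[0].split():
    _, msg_data = imap.fetch(msg_id, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])
    for part in msg.walk():
        name = part.get_filename()
        if name and name.endswith(".xlsx"):
            # Load the spreadsheet and append it to the estimates table.
            df = pd.read_excel(io.BytesIO(part.get_payload(decode=True)))
            with sqlite3.connect("estimates.db") as db:
                df.to_sql("grower_estimates", db, if_exists="append")
imap.logout()
```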

Q: I'm assuming that the whole point of that is to free employees up to do more complicated tasks rather than, like you said, the repetitive boring stuff that takes up a lot of time but doesn't require much skill.

A: Exactly. The second point I mentioned before is the business intelligence bit. If you think about how much time you spend getting the file out of an ERP, for instance, elaborating in Excel, remapping, and so on, you will probably spend 80% of your time transforming and manipulating the data and only the remaining 20% actually analyzing the data and making a decision from what you just discovered. With automation you get rid of all that preparation, all of that 80%, and you have ready-made analytics, so you can focus your attention on making better decisions for the business. And maybe you have some extra time for a coffee. That's a very Italian thing to say, I realize.

Q: Have you been able to actually measure improved productivity for your customers? And do you have any numbers you could share with me?

A: Productivity is quite difficult. I could share a couple of examples of what happens, but they would be customer-specific, so I would avoid that. What I can share, though, is the improvement from our specialized business intelligence tools, which allow growers or planners to improve their own accuracy. The key to improving is measuring at the very beginning; you need to measure and understand, and after that you can improve. We have a case study where growers were producing forecasts for their crops and, using our business intelligence tool, were measuring the accuracy of their own forecasts on a daily and weekly basis. They managed to shave 20% off their total errors. So, just by looking at their data and having tools that give you key performance indicators (KPIs) on how good your forecast is, where your errors are, and so on, they could cut 20% of the errors out of their forecast activity without any other inputs.
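
As a small illustration of the measurement step Michele describes, the sketch below computes one standard forecast-accuracy KPI. The article does not specify which metrics Fresh4cast's tool tracks, so mean absolute percentage error (MAPE) stands in as an example, with made-up numbers.

```python
# Illustrative only: MAPE is one common forecast-accuracy KPI; the
# numbers below are invented to show a before/after comparison.
import numpy as np

def mape(actual: np.ndarray, forecast: np.ndarray) -> float:
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

actual   = np.array([120, 95, 130, 110])   # crates harvested per week
baseline = np.array([100, 90, 150, 100])   # grower's original forecast
improved = np.array([115, 96, 138, 105])   # forecast after measuring errors

print(f"baseline MAPE: {mape(actual, baseline):.1f}%")
print(f"improved MAPE: {mape(actual, improved):.1f}%")
```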

Q: How do you measure the reduction in food waste?

A: The reduction in food waste depends, again, on the level of the supply chain we are talking about. I've been focusing a lot on the production side but, if you think about the sales side, if you have too much product that you didn't know about in advance and you're not able to sell what's in your warehouse, you will have what's called overstock. Usually that is not a big problem in other categories, but we are in fresh produce, so the shelf life, how long you can keep the product in the fridge, is very, very short. That's one of the reasons why the founder, Mihai Ciobanu, focused on fresh produce forecasting at the very beginning: because it's very, very difficult to forecast. And, on top of that, if you get the forecast wrong, you can lose a lot of money, basically throwing away product that should have been sold.

Q: Give me a preview of what you'll be talking about at the London Produce Show and Conference.

A: The presentation will focus on how to leverage your own data assets and extract value from them. Specifically, we will look at how forecasting activity, and specifically machine learning tools, is helping both growers and distributors to improve efficiency and reduce waste in their own supply chains. We will have a couple of practical examples of how better forecasting is helping with these two topics.

Continued here:
Fresh4Cast leader argues for the crucial role of machine learning in moving the industry forward - Produce Business UK

Graph + AI Summit 2022: Industry's Only Open Conference For Accelerating Analytics and AI With Graph to Feature Speakers, Use Cases from World's Most…

TigerGraph, Inc.

Virtual Global Event to Take Place May 24-25, 2022; Call for Papers Open Through April 11

REDWOOD CITY, Calif., March 22, 2022 (GLOBE NEWSWIRE) -- TigerGraph, provider of a leading graph analytics platform, today announced the return of Graph + AI Summit, the only open industry conference devoted to democratizing and accelerating analytics, AI, and machine learning with graph algorithms. The virtual global event will take place May 24-25, 2022 and the call for speakers is open through April 11, 2022.

"Graph + AI Summit is a global celebration of the power of graph and AI, bringing together business leaders, domain experts, and developers to explore creative ways to solve problems with graph technology," said Yu Xu, CEO and Founder, TigerGraph. "We will be showcasing real-world examples of graph with AI and machine learning use cases from world-leading banks, retailers, and fintechs. We'll also be revealing all 15 winners of the Graph for All Million Dollar Challenge, an exciting initiative seeking world-changing graph implementations from around the globe. We're looking forward to connecting with global graph enthusiasts this year and hope you'll join us."

Past Graph + AI Summits have attracted thousands of attendees from 70+ countries. Data scientists, data engineers, architects, and business and IT executives from over 182 of the Fortune 500 companies participated in the last event alone. Past speakers from Amazon, Capgemini, Gartner, Google, Microsoft, UnitedHealth Group, JPMorgan Chase, Mastercard, NewDay, Intuit, Jaguar Land Rover, Pinterest, Stanford University, Forrester Research, Accenture, KPMG, Intel, Dell, and Xilinx along with many innovative startups shared how their organizations reaped the benefits of graph.

Graph + AI Summit 2022 Call for Papers Open Through April 11, 2022

Are you building cutting-edge graph technology solutions to help your organization adapt to an uncertain world? Maybe you're an expert in supercharging machine learning and artificial intelligence using graph algorithms. Or maybe you're a business leader who knows the value of overcoming the data silos created by legacy enterprise solutions. If any of these scenarios describe you, or if you have deep knowledge of graph technology, we want you to be a speaker at this year's Graph + AI Summit.


The conference will include keynote presentations from graph luminaries as well as industry and technology tracks. Each track will include beginner, intermediate, and advanced-level sessions. Our audience will benefit from a mix of formal presentations and interactive panel participation. Case studies are particularly welcome. Your submission may include one or more of the following topics:

Artificial intelligence use cases and case studies

Machine learning use cases and case studies

Graph neural networks

Combining Natural Language Processing (NLP) with graph

First-of-a-kind solutions combining AI, machine learning, and graph algorithms

Predictive analytics

Customer 360 and customer journey

Hyper-personalized recommendation engine

Fraud detection, anti-money laundering

Supply chain optimization

Cybersecurity

Industry-specific applications in the internet, eCommerce, banking, insurance, fintech, media, manufacturing, transportation, and healthcare industries.

Please submit your proposal by April 11, 2022 at 12:00 A.M./midnight PT here.

Registration

To register for the event, please visit https://www.tigergraph.com/graphaisummit/.

Graph for All Million Dollar Challenge Winners to be Featured at Graph + AI Summit 2022

Last month, TigerGraph launched the Graph for All Million Dollar Challenge, a global search for innovative ways to harness the power of graph technology and machine learning to solve real-world problems. The challenge brings together brilliant minds to build innovative solutions to better our future with one question: How will you change the world with graph? Since the launch, the challenge has gained major traction worldwide with over 1,000 registrations from 90+ countries so far. TigerGraph will reveal and feature all 15 winners of the challenge at the Graph + AI Summit 2022 event. For more information or to register for the challenge, please visit https://www.tigergraph.com/graph-for-all/.


About TigerGraph: TigerGraph is a platform for advanced analytics and machine learning on connected data. Based on the industry's first and only distributed native graph database, TigerGraph's proven technology supports advanced analytics and machine learning applications such as fraud detection, anti-money laundering (AML), entity resolution, customer 360, recommendations, knowledge graph, cybersecurity, supply chain, IoT, and network analysis. The company is headquartered in Redwood City, California, USA. Start free with tigergraph.com/cloud.

Media Contacts:

North America: Tanya Carlsson, Offleash PR, tanya@offleashpr.com, +1 (707) 529-6139

EMEA: Anne Harding, The Message Machine, anne@themessagemachine.com, +44 7887 682943

The rest is here:
Graph + AI Summit 2022: Industry's Only Open Conference For Accelerating Analytics and AI With Graph to Feature Speakers, Use Cases from World's Most...

Machine Learning Will be one of the Best Ways to Identify Habitable Exoplanets – Universe Today

The field of extrasolar planet studies is undergoing a seismic shift. To date, 4,940 exoplanets have been confirmed in 3,711 planetary systems, with another 8,709 candidates awaiting confirmation. With so many planets available for study, and with improvements in telescope sensitivity and data analysis, the focus is transitioning from discovery to characterization. Instead of simply looking for more planets, astrobiologists will examine potentially habitable worlds for biosignatures.

This refers to the chemical signatures associated with life and biological processes, one of the most important of which is water. As the only known solvent without which life (as we know it) cannot exist, water is considered the divining rod for finding life. In a recent study, astrophysicists Dang Pham and Lisa Kaltenegger explain how future surveys (when combined with machine learning) could discern the presence of water, snow, and clouds on distant exoplanets.

Dang Pham is a graduate student with the David A. Dunlap Department of Astronomy & Astrophysics at the University of Toronto, where he specializes in planetary dynamics research. Lisa Kaltenegger is an Associate Professor in Astronomy at Cornell University, the Director of the Carl Sagan Institute, and a world-leading expert in modeling potentially habitable worlds and characterizing their atmospheres.

Water is something that all life on Earth depends on, hence its importance for exoplanet and astrobiological surveys. As Lisa Kaltenegger told Universe Today via email, this importance is reflected in NASA's slogan, "just follow the water," which also inspired the title of their paper:

"Liquid water on a planet's surface is one of the smoking guns for potential life. I say potential here because we don't know what else we need to get life started. But liquid water is a great start. So we used NASA's slogan of 'just follow the water' and asked, how can we find water on the surface of rocky exoplanets in the Habitable Zone? Doing spectroscopy is time-intensive, thus we are searching for a faster way to initially identify promising planets, those with liquid water on them."

Currently, astronomers have been limited to looking for Lyman-alpha line absorption, which indicates the presence of hydrogen gas in an exoplanet's atmosphere. This is a byproduct of atmospheric water vapor that has been exposed to solar ultraviolet radiation, causing it to become chemically dissociated into hydrogen and molecular oxygen (O2), the former of which is lost to space while the latter is retained.

This is about to change, thanks to next-generation telescopes like the James Webb (JWST) and Nancy Grace Roman Space Telescopes (RST), as well as next-next-generation observatories like the Origins Space Telescope, the Habitable Exoplanet Observatory (HabEx), and the Large UV/Optical/IR Surveyor (LUVOIR). There are also ground-based telescopes like the Extremely Large Telescope (ELT), the Giant Magellan Telescope (GMT), and the Thirty Meter Telescope (TMT).

Thanks to their large primary mirrors and advanced suites of spectrographs, coronagraphs, and adaptive optics, these instruments will be able to conduct Direct Imaging studies of exoplanets. This consists of studying light reflected directly from an exoplanet's atmosphere or surface to obtain spectra, allowing astronomers to see what chemical elements are present. But as they indicate in their paper, this is a time-intensive process.

Astronomers start by observing thousands of stars for periodic dips in brightness, then analyzing the light curves for signs of chemical signatures. Currently, exoplanet researchers and astrobiologists rely on amateur astronomers and machine algorithms to sort through the volumes of data their telescopes obtain. Looking ahead, Pham and Kaltenegger show how more advanced machine learning will be crucial.

As they indicate, ML techniques will allow astronomers to conduct the initial characterizations of exoplanets more rapidly, letting them prioritize targets for follow-up observations. By "following the water," astronomers will be able to dedicate more of an observatory's valuable survey time to exoplanets that are more likely to provide significant returns.

"Next-generation telescopes will look for water vapor in the atmosphere of planets and water on the surface of planets," said Kaltenegger. "Of course, to find water on the surface of planets, you should look [for water in its] liquid, solid, and gaseous forms, as we did in our paper."

"Machine learning allows us to quickly identify optimal filters, as well as the trade-off in accuracy at various signal-to-noise ratios," added Pham. "In the first task, using [the open-source algorithm] XGBoost, we get a ranking of which filters are most helpful for the algorithm in its tasks of detecting water, snow, or clouds. In the second task, we can observe how much better the algorithm performs with less noise. With that, we can draw a line where getting more signal would not correspond to much better accuracy."

To make sure their algorithm was up to the task, Pham and Kaltenegger did considerable calibration. This consisted of creating 53,130 spectral profiles of a cold Earth with various surface components, including snow, water, and water clouds. They then simulated the spectra for this water in terms of atmosphere and surface reflectivity and assigned color profiles. As Pham explained:

"The atmosphere was modeled using Exo-Prime2; Exo-Prime2 has been validated by comparison to Earth in various missions. The reflectivity of surfaces like snow and water is measured on Earth by the USGS. We then create colors from these spectra. We train XGBoost on these colors to perform three separate goals: detecting the existence of water, the existence of clouds, and the existence of snow."
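
A minimal sketch of this training setup might look as follows. The data here is a synthetic placeholder (real inputs would be colors derived from the 53,130 simulated spectra), and the hyperparameters are illustrative rather than taken from the paper; only the water task is shown, with snow and cloud classifiers trained the same way.

```python
# Synthetic stand-in for the paper's setup: photometric colors as
# features, binary labels for one of the three tasks (water shown here).
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(42)

X = rng.normal(size=(53130, 5))                  # 5 filters -> 5 color features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # placeholder "has water" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Feature importances give the kind of per-filter ranking Pham describes.
print("filter ranking (best first):", np.argsort(clf.feature_importances_)[::-1])
```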

The trained XGBoost showed that clouds and snow are easier to identify than water, which is expected since clouds and snow have a much higher albedo (greater reflectivity of sunlight) than water. They further identified five optimal filters that worked extremely well for the algorithm, all of which are 0.2 micrometers wide and in the visible-light range. The final step was to perform a mock probability assessment to evaluate their planet model regarding liquid water, snow, and clouds using the set of five optimal filters they identified.

"Finally, we [performed] a brief Bayesian analysis using Markov-Chain Monte Carlo (MCMC) to do the same task on the five optimal filters, as a non-machine-learning method to validate our finding," said Pham. "Our findings there are similar: water is harder to detect, but identifying water, snow, and clouds through photometry is feasible."
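
For readers unfamiliar with the technique, the toy sketch below shows the basic Metropolis-Hastings machinery behind MCMC. It uses a simple one-dimensional Gaussian model as a stand-in; the paper's actual likelihood over the five filter colors is not reproduced here.

```python
# Toy Metropolis-Hastings sampler illustrating MCMC in general terms;
# the 1-D Gaussian model below is a stand-in, not the paper's likelihood.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=0.1, size=50)   # toy "observed" values

def log_posterior(mu: float) -> float:
    # Flat prior on mu; Gaussian likelihood with known sigma = 0.1.
    return -0.5 * np.sum((data - mu) ** 2) / 0.1 ** 2

mu, samples = 0.0, []
for _ in range(10_000):
    proposal = mu + rng.normal(scale=0.05)        # random-walk proposal
    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(mu):
        mu = proposal
    samples.append(mu)

chain = np.array(samples[2_000:])                 # discard burn-in
print(f"posterior mean = {chain.mean():.3f} +/- {chain.std():.3f}")
```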

Similarly, they were surprised to see how well the trained XGBoost could identify water on the surface of rocky planets based on color alone. According to Kaltenegger, this is what filters really are: a means of separating light into discrete bins. "Imagine a bin for all red light (the red filter), then a bin for all the green light, from light to dark green (the green filter)," she said.

Their proposed method does not identify water in exoplanet atmospheres but on an exoplanet's surface via photometry. In addition, it will not work with the Transit Method (aka Transit Photometry), which is currently the most widely used and effective means of exoplanet detection. This method consists of observing distant stars for periodic dips in luminosity attributed to exoplanets passing in front of the star (aka transiting) relative to the observer.

On occasion, astronomers can obtain spectra from an exoplanet's atmosphere as it makes a transit, a process known as transit spectroscopy. As the star's light passes through the exoplanet's atmosphere relative to the observer, astronomers can analyze it with spectrometers to determine what chemicals are there. Using its sensitive optics and suite of spectrometers, the JWST will rely on this method to characterize exoplanet atmospheres.

But as Pham and Kaltenegger indicate, their algorithm will only work with reflected light from the direct imaging of exoplanets. This is especially good news considering that spectroscopy obtained through Direct Imaging studies is likely to reveal more about exoplanets, not just the chemical composition of their atmospheres. According to Kaltenegger, this creates all kinds of opportunities for next-generation missions:

"This is opening up the opportunity for smaller space missions like the Nancy Grace Roman telescope to help identify worlds that could host life. And for larger upcoming telescopes, as recommended by the decadal survey, it allows them to scan the rocky planets in the Habitable Zone for the most promising candidates, those with water on their surface, so we spend the time characterizing the most interesting ones and effectively search for life on planets that have great conditions for it to get started."

The paper that describes their findings was recently published in the Monthly Notices of the Royal Astronomical Society (MNRAS).

Further Reading: arXiv


The rest is here:
Machine Learning Will be one of the Best Ways to Identify Habitable Exoplanets - Universe Today

Ames Lab, Texas A&M team develop AI tool for discovery and prediction of new rare-earth compounds – Green Car Congress

Researchers from Ames Laboratory and Texas A&M University have trained a machine-learning (ML) model to assess the stability of new rare-earth compounds. The framework they developed builds on current state-of-the-art methods for experimenting with compounds and understanding chemical instabilities. A paper on their work is published in Acta Materialia.

"Machine learning is really important here because when we are talking about new compositions, ordered materials are all very well known to everyone in the rare-earth community. However, when you add disorder to known materials, it's very different. The number of compositions becomes significantly larger, often thousands or millions, and you cannot investigate all the possible combinations using theory or experiments."

Ames Laboratory Scientist Prashant Singh, corresponding author

The approach is based on machine learning (ML), a form of artificial intelligence (AI), which is driven by computer algorithms that improve through data usage and experience. Researchers used the upgraded Ames Laboratory Rare Earth database (RIC 2.0) and high-throughput density-functional theory (DFT) to build the foundation for their ML model.

High-throughput screening is a computational scheme that allows a researcher to test hundreds of models quickly. DFT is a quantum-mechanical method used to investigate the thermodynamic and electronic properties of many-body systems. Based on this collection of information, the developed ML model uses regression learning to assess the phase stability of compounds.
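
As a rough, hypothetical sketch of what such a regression-learning step can look like: a model is trained on DFT-computed formation energies and then used to score candidate compositions. The descriptors, data, and model choice below are invented stand-ins; the actual Ames Laboratory model is not detailed in this article.

```python
# Synthetic stand-in: descriptors and DFT-style formation energies are
# invented, and gradient-boosted regression is one plausible choice;
# the actual Ames Laboratory model and features are not detailed above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

X = rng.normal(size=(500, 8))                     # composition descriptors
y = X @ rng.normal(size=8) * 0.05 + rng.normal(scale=0.02, size=500)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())

# Candidates with predicted formation energy below a threshold would be
# flagged as potentially stable and passed along for experimental checks.
model.fit(X, y)
candidates = rng.normal(size=(10, 8))
flagged = model.predict(candidates) < 0.0
print("flagged as potentially stable:", int(flagged.sum()), "of", len(candidates))
```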

Singh explained that the material analysis is based on a discrete feedback loop in which the AI/ML model is updated using a new DFT database built from real-time structural and phase information obtained from experiments. This process ensures that information is carried from one step to the next and reduces the chance of making mistakes.


Yaroslav Mudryk, the project supervisor, said that the framework was designed to explore rare-earth compounds because of their technological importance, but its application is not limited to rare-earth research. The same approach can be used to train ML models to predict magnetic properties of compounds, support process control for transformative manufacturing, and optimize mechanical behaviors.

"It's not really meant to discover a particular compound. It was, how do we design a new approach or a new tool for discovery and prediction of rare-earth compounds? And that's what we did."

Yaroslav Mudryk

Mudryk emphasized that this work is just the beginning. The team is exploring the full potential of this method, but they are optimistic that there will be a wide range of applications for the framework in the future.

This work was supported by the Laboratory Directed Research and Development (LDRD) program at Ames Laboratory.

Resources

Prashant Singh, Tyler Del Rose, Guillermo Vazquez, Raymundo Arroyave, Yaroslav Mudryk (2022) "Machine-learning enabled thermodynamic model for the design of new rare-earth compounds," Acta Materialia, Volume 229, 117759. doi: 10.1016/j.actamat.2022.117759

The rest is here:
Ames Lab, Texas A&M team develop AI tool for discovery and prediction of new rare-earth compounds - Green Car Congress