Archive for the ‘Machine Learning’ Category

Iterative and Enko Streamline Machine Learning Model Development to Drive Data Science Best Practices Based on GitOps Workflows – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Iterative, the MLOps company dedicated to streamlining the workflow of data scientists and machine learning (ML) engineers, today announced that Enko, the crop health company, has chosen the Iterative-backed open source project DVC and Iterative Studio to build reproducible and modular pipelines at scale.

Enko designs safe and sustainable solutions to farmers' biggest crop threats today, from pest resistance to new diseases. Inspired by the latest drug discovery and development approaches from pharma, Enko brings an innovative approach to crop health in order to meet farmers' evolving needs.

Enko's Data Science team wanted to incentivize data scientists to use GitHub for their experiments in order to make the workflow more efficient and collaborative. Since Enko heavily leverages Git and GitHub, they decided to choose Iterative-backed tools rather than alternatives. DVC and Studio enable Enko to focus on building and applying innovative models to accelerate experimentation with minimal operational overhead.

"Our team has a policy that requires peer reviewed pull requests for all core infrastructure, but we found it nearly impossible to apply that to Jupyter Notebooks. This became even more challenging when the complexity of our workflows and size of file dependencies grew, said Tim Panosian, director of R&D data sciences at Enko. Now all pipelines run on DVC, which has given us the ability to streamline the process. Everyones code looks the same and expectations are clear. The big piece for us is that we know that we can rely on DVCs reproducibility to pick up where anyone left off.

With DVC and Studio, Enko can now track everything, collaborate efficiently in real time, and easily pick experiments back up, even weeks later, without having to search multiple tools or locations. Additionally, Studio provides transparency and enables communication with teams that may be less technical or less familiar with the model-building side of the work. Teams can share metrics and plots right away. Studio also gives data scientists positive feedback and encourages discipline around running experiments and pipelines in traceable and reproducible ways.

"Enko is doing important work to make new crop protection safer and more sustainable, providing a win-win to the farmer and environment alike," said Jenifer De Figueiredo, Iterative's community manager. "DVC and Studio have enabled their data scientists and ML engineering team to be more productive and move them in the same direction toward their goals."

DVC brings agility, reproducibility, and collaboration into the existing data science workflow. It provides users with a Git-like interface for versioning data and models, bringing version control to machine learning and solving the challenges of reproducibility. DVC is built on top of Git, creating lightweight metafiles that enable the system to handle large files, which can't be stored in Git itself. The tool works with remote storage for large, unstructured data files in the cloud.
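The press release doesn't include configuration details, but a DVC pipeline of the kind described is typically declared in a `dvc.yaml` file; the sketch below is illustrative only (stage names, scripts, and file paths are hypothetical):

```yaml
stages:
  preprocess:
    cmd: python preprocess.py
    deps:
      - data/raw.csv
      - preprocess.py
    outs:
      - data/clean.csv
  train:
    cmd: python train.py
    deps:
      - data/clean.csv
      - train.py
    outs:
      - model.pkl
    metrics:
      - metrics.json:
          cache: false
```

Running `dvc repro` re-executes only the stages whose dependencies changed, and `dvc push` uploads the large outputs to remote storage while Git tracks only the small metafiles.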

Iterative Studio is the collaboration layer for ML engineers and data scientists to track, visualize, and share experiments. Studio enables teams to link code, model, and data changes together in a single place. Studio is built on top of an organization's Git repositories and tightly couples with the software development process so team members can share knowledge and automate their ML workflows.

DVC and Iterative Studio are available today and work with GitHub, GitLab, and Bitbucket. To schedule a demo, visit http://www.Iterative.ai.

About Iterative

Iterative.ai, the company behind Iterative Studio and the popular open-source tools DVC, CML, and MLEM, enables data science teams to build models faster and collaborate better with data-centric machine learning tools. Iterative's developer-first approach to MLOps delivers model reproducibility, governance, and automation across the ML lifecycle, all integrated tightly with software development workflows. Iterative is a remote-first company, backed by True Ventures, Afore Capital, and 468 Capital. For more information, visit Iterative.ai.

About Enko

Enko designs safe and sustainable solutions to farmers' biggest crop threats today, from pest resistance to new diseases. By applying the latest drug discovery and development approaches from pharma to plants, Enko is bringing an innovation model to agriculture and meeting farmers' evolving needs. Founded in 2017 and led by a team of proven scientists, entrepreneurs and agriculture industry veterans, Enko is backed by investors including the Bill & Melinda Gates Foundation, Anterra Capital, Finistere Ventures, Novalis LifeSciences, Germin8 Ventures, TO Ventures Food, and Rabo Food & Agri Innovation Fund. Enko is headquartered in Mystic, Connecticut. For more information, visit enkochem.com.


Predicting healthcare utilization in COPD patients using CT and machine learning – Health Imaging

Follow-up healthcare services were used by 35% of participants. This was found to be independent of age, sex or smoking history, but individuals with lower FEV1% were observed to utilize services more often than their peers. The model that used clinical data, pulmonary function tests and CT measurements was found to be the most accurate in predicting utilization, with an accuracy of 80%.

"We found that adding imaging predictors to conventional measurements resulted in a 15% increase for correct classification," corresponding author Miranda Kirby, PhD, of the Department of Physics at Toronto Metropolitan University, and co-authors wrote. "Although this increase may seem small, identifying high-risk patients could lead to healthcare utilization prevention through earlier treatment initiation or more careful monitoring."

The authors suggested that even small increases in prediction accuracy could translate into preventing a large number of hospitalizations at the population level.




Machine Learning Market Share, Application Analysis, Regional outlook, Growth, Price Trends, Key Players, Competitive Strategies and Forecast 2022 to…

UNITED STATES: The global machine learning market was valued at US$ 11.1 billion in 2021 and is forecast to grow to US$ 121 billion by 2030, registering a compound annual growth rate (CAGR) of 31% during the forecast period from 2022 to 2030.
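As a quick sanity check on the quoted figures, the implied growth rate can be computed directly from them (a back-of-the-envelope calculation, not taken from the report):

```python
# Check the reported CAGR: US$11.1B in 2021 growing to US$121B by 2030
# spans 9 compounding periods (2021 -> 2030).
start, end, years = 11.1, 121.0, 2030 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # -> implied CAGR: 30.4%
```

The result, roughly 30%, is consistent with the report's rounded figure of 31%.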

Machine Learning Market Status, Trends and COVID-19 Impact Report 2021, a COVID-19 outbreak impact research report added by Quadintel, is an in-depth analysis of market characteristics, size and growth, segmentation, regional and country breakdowns, competitive landscape, market shares, trends, and strategies for this market. It traces the market's historic and forecast growth by geography, places the market within the context of the wider machine learning market, and compares it with other markets. The report also covers market definition, regional market opportunity, sales and revenue by region, manufacturing cost analysis, the industrial chain, market effect factors analysis, market size forecasts, and market data with graphs, statistics, tables, and bar and pie charts for business intelligence. The complete report includes a full table of contents, more than 100 tables and figures, and charts, along with an in-depth pre- and post-COVID-19 outbreak impact analysis by region.

Request a Sample Report for the Machine Learning Market: https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Factors Influencing the Market

Artificial intelligence and other emerging technologies are changing the way industries and people work. These technologies have helped to optimize supply chains, launch new digital products and services, and transform the overall customer experience. Several tech companies are investing in this field to develop AI platforms, while several start-ups are focusing on niche domain solutions. All of these factors will significantly contribute to the growth of the global machine learning market.

The technology has paved the way for numerous applications across several industries. It is used in advertising, mainly to predict customer behaviour and aid in the improvement of advertising campaigns. AI-powered marketing employs a variety of models to optimize, automate, and augment data into actions. Thus, it will significantly drive the growth of the global machine learning market. Further, the technology is used in advertising agencies, mainly for security, document management, and publishing, which will contribute to the growth of the global machine learning market during the study period.

Machine learning has recently expanded into new areas. For example, the United States Army intends to use this technology in combat vehicles for predictive maintenance. Such advancements will benefit the market. Apart from that, organizations around the world use machine learning to enable a better client experience, which will be opportunistic for industry players. However, insufficient knowledge of the technology may limit market growth.

COVID-19 Impact Analysis

Machine learning and AI have significantly helped fight the COVID-19 pandemic, which escalated the growth of the overall market. Patients hospitalized with coronavirus disease (COVID-19) are at high risk, and machine learning (ML) algorithms have been used to predict mortality in hospitalized COVID-19 patients. Several studies found that machine learning can efficiently help tackle the COVID-19 pandemic by collecting data related to virus spread. Thus, such benefits of the technology have shaped its growth during the COVID-19 pandemic.

Regional Analysis

North America is forecast to hold the highest share in the machine learning market due to the rising penetration of advanced technology across all industrial verticals. Furthermore, rising investments in this sector will also contribute to the growth of the market. For instance, in 2019 JPMorgan Chase & Co. invested in Limeglass, an AI, ML, and NLP provider, with the aim of analysing institutional research. The Asia-Pacific machine learning market is forecast to record a substantial growth rate due to the expansion of the e-commerce and online-streaming industries. Additionally, the rising adoption of industrial robots, particularly in China, Japan, and South Korea, will also contribute to the growth of the machine learning market.

Request to Download a Sample of This Strategic Report: https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Competitors in the Market: IBM Corporation, SAP SE, Microsoft Corporation, Huawei Technologies, HCL Technologies, Accenture Plc, Schneider Electric, Honeywell International, Rockwell Automation, Schlumberger Limited, and other prominent players.

Market Segmentation: The global machine learning market is segmented by Application, Solution Type, and Region.

By Application: Advertising & Media, BFSI, Government, Healthcare, Retail, Telecom, Utilities, and Manufacturing.

By Solution Type: Software, Hardware, and Services.

Get a Sample PDF Copy of the Report: https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

By Region: North America (The U.S., Canada, Mexico); Europe, comprising Western Europe (The UK, Germany, France, Italy, Spain, Rest of Western Europe) and Eastern Europe (Poland, Russia, Rest of Eastern Europe); Asia Pacific (China, India, Japan, Australia & New Zealand, ASEAN, Rest of Asia Pacific); Middle East & Africa (UAE, Saudi Arabia, South Africa, Rest of MEA); and South America (Brazil, Argentina, Rest of South America).

Access Full Report, here : https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Key Questions Answered in the Market Report

How did the COVID-19 pandemic impact adoption by various pharmaceutical and life sciences companies? What is the outlook for the market during the forecast period 2021-2030? What are the key trends influencing the market, and how will they influence it in the short-, mid-, and long-term? What is the end-user perception toward the market? How is the patent landscape, and which country or cluster witnessed the highest patent filing from January 2014 to June 2021? What are the key factors impacting the market, and what will their impact be in the short-, mid-, and long-term? What are the key opportunity areas in the market, and what is their potential in the short-, mid-, and long-term? What are the key strategies adopted by companies in the market? What are the key application areas, and which application is expected to hold the highest growth potential during the forecast period 2021-2030? What is the preferred deployment model, and what is the growth potential of the various deployment models present in the market? Who are the key end users, and what is their respective share in the market? Which regional market is expected to hold the highest growth potential during the forecast period 2021-2030? Which are the key players in the market?

About Quadintel:

Quadintel describes itself as a leading market research reports provider. Quadintel believes in providing quality reports to clients to meet their top-line and bottom-line goals and boost their market share in today's competitive environment. Quadintel is a one-stop solution for individuals, organizations, and industries that are looking for innovative market research reports.

Get in Touch with Us:

Quadintel
Email: sales@quadintel.com
Address: Office 500 N Michigan Ave, Suite 600, Chicago, Illinois 60611, UNITED STATES
Tel: +1 888 212 3539 (US TOLL FREE)
Website: https://www.quadintel.com/


Machine Learning Shows That More Reptile Species May Be at Risk of Extinction Than Previously Thought – SciTechDaily

Potamites montanicola, classified as Critically Endangered by the automated assessment method and as Data Deficient by the IUCN Red List of Threatened Species. Credit: Germán Chávez, Wikimedia Commons (CC-BY 3.0)

Machine learning tool estimates extinction risk for species previously unprioritized for conservation.

Species at risk of extinction are identified in the iconic Red List of Threatened Species, published by the International Union for Conservation of Nature (IUCN). A new study presents a novel machine learning tool for assessing extinction risk and then uses this tool to show that reptile species which are unlisted due to lack of assessment or data are more likely to be threatened than assessed species. The study, by Gabriel Henrique de Oliveira Caetano at Ben-Gurion University of the Negev, Israel, and colleagues, was published on May 26th in the journal PLOS Biology.

The IUCN's Red List of Threatened Species is the most comprehensive assessment of the extinction risk of species and informs conservation policy and practices around the world. However, the process for categorizing species is time-consuming, laborious, and subject to bias, depending heavily on manual curation by human experts. Therefore, many animal species have not been evaluated, or lack sufficient data, creating gaps in protective measures.

To assess 4,369 reptile species that previously could not be prioritized for conservation, and to develop accurate methods for assessing the extinction risk of obscure species, the scientists created a machine learning model. The model assigned IUCN extinction risk categories to the 40% of the world's reptiles that lacked published assessments or were classified as Data Deficient (DD) at the time of the study. The researchers validated the model's accuracy by comparing its output to the Red List risk categorizations.
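The study's actual predictors and model are detailed in the paper; purely as an illustration of the general approach (train a classifier on assessed species, then predict risk categories for unassessed ones), a simplified sketch with synthetic data might look like this:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical predictors for assessed species (e.g. range size, body size,
# human footprint) -- purely synthetic stand-ins for the paper's features.
X_assessed = rng.random((500, 3))
# Synthetic IUCN-style labels: 0 = Least Concern, 1 = Near Threatened, 2 = Threatened
y_assessed = (X_assessed[:, 0] < 0.3).astype(int) + (X_assessed[:, 1] < 0.2).astype(int)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_assessed, y_assessed)

# Predict categories for species that were Not Evaluated or Data Deficient
X_unassessed = rng.random((100, 3))
predicted = clf.predict(X_unassessed)
print("predicted threatened fraction:", (predicted >= 1).mean())
```

The real model was validated against held-out Red List categorizations rather than a synthetic rule, but the workflow, fit on assessed species and extrapolate to the assessment gap, is the same.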

The authors found that the number of threatened species is much higher than reflected in the IUCN Red List and that both unassessed (Not Evaluated or NE) and Data Deficient reptiles were more likely to be threatened than assessed species. Future studies are needed to better understand the specific factors underlying extinction risk in threatened reptile taxa, to obtain better data on obscure reptile taxa, and to create conservation plans that include newly identified, threatened species.

According to the authors, "Altogether, our models predict that the state of reptile conservation is far worse than currently estimated, and that immediate action is necessary to avoid the disappearance of reptile biodiversity. Regions and taxa we identified as likely to be more threatened should be given increased attention in new assessments and conservation planning. Lastly, the method we present here can be easily implemented to help bridge the assessment gap on other less known taxa."

Coauthor Shai Meiri adds, "Importantly, the additional reptile species identified as threatened by our models are not distributed randomly across the globe or the reptilian evolutionary tree. Our added information highlights that there are more reptile species in peril, especially in Australia, Madagascar, and the Amazon basin, all of which have a high diversity of reptiles and should be targeted for extra conservation efforts. Moreover, species-rich groups, such as geckos and elapids (cobras, mambas, coral snakes, and others), are probably more threatened than the Global Reptile Assessment currently highlights; these groups should also be the focus of more conservation attention."

Coauthor Uri Roll adds, "Our work could be very important in helping the global efforts to prioritize the conservation of species at risk, for example using the IUCN Red List mechanism. Our world is facing a biodiversity crisis, with severe man-made changes to ecosystems and species, yet funds allocated for conservation are very limited. Consequently, it is key that we use these limited funds where they could provide the most benefits. Advanced tools, such as those we have employed here, together with accumulating data, could greatly cut the time and cost needed to assess extinction risk, and thus pave the way for more informed conservation decision-making."

Reference: "Automated assessment reveals that the extinction risk of reptiles is widely underestimated across space and phylogeny" by Gabriel Henrique de Oliveira Caetano, David G. Chapple, Richard Grenyer, Tal Raz, Jonathan Rosenblatt, Reid Tingley, Monika Böhm, Shai Meiri and Uri Roll, 26 May 2022, PLOS Biology. DOI: 10.1371/journal.pbio.3001544


AI and machine learning are improving weather forecasts, but they won’t replace human experts – The Conversation

A century ago, English mathematician Lewis Fry Richardson proposed a startling idea for that time: constructing a systematic process based on math for predicting the weather. In his 1922 book, Weather Prediction By Numerical Process, Richardson tried to write an equation that he could use to solve the dynamics of the atmosphere based on hand calculations.

It didn't work because not enough was known about the science of the atmosphere at that time. "Perhaps some day in the dim future it will be possible to advance the computations faster than the weather advances and at a cost less than the saving to mankind due to the information gained. But that is a dream," Richardson concluded.

A century later, modern weather forecasts are based on the kind of complex computations that Richardson imagined, and they've become more accurate than anything he envisioned. Especially in recent decades, steady progress in research, data and computing has enabled a quiet revolution in numerical weather prediction.

For example, a forecast of heavy rainfall two days in advance is now as good as a same-day forecast was in the mid-1990s. Errors in the predicted tracks of hurricanes have been cut in half in the last 30 years.

There still are major challenges. Thunderstorms that produce tornadoes, large hail or heavy rain remain difficult to predict. And then there's chaos, often described as the "butterfly effect": the fact that small changes in complex processes make weather less predictable. Chaos limits our ability to make precise forecasts beyond about 10 days.

As in many other scientific fields, the proliferation of tools like artificial intelligence and machine learning holds great promise for weather prediction. We have seen some of what's possible in our research on applying machine learning to forecasts of high-impact weather. But we also believe that while these tools open up new possibilities for better forecasts, many parts of the job are handled more skillfully by experienced people.

Today, weather forecasters' primary tools are numerical weather prediction models. These models use observations of the current state of the atmosphere from sources such as weather stations, weather balloons and satellites, and solve equations that govern the motion of air.

These models are outstanding at predicting most weather systems, but the smaller a weather event is, the more difficult it is to predict. As an example, think of a thunderstorm that dumps heavy rain on one side of town and nothing on the other side. Furthermore, experienced forecasters are remarkably good at synthesizing the huge amounts of weather information they have to consider each day, but their memories and bandwidth are not infinite.

Artificial intelligence and machine learning can help with some of these challenges. Forecasters are using these tools in several ways now, including making predictions of high-impact weather that the models can't provide.

In a project that started in 2017 and was reported in a 2021 paper, we focused on heavy rainfall. Of course, part of the problem is defining heavy: Two inches of rain in New Orleans may mean something very different than in Phoenix. We accounted for this by using observations of unusually large rain accumulations for each location across the country, along with a history of forecasts from a numerical weather prediction model.

We plugged that information into a machine learning method known as random forests, which uses many decision trees to split a mass of data and predict the likelihood of different outcomes. The result is a tool that forecasts the probability that rains heavy enough to generate flash flooding will occur.
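As an illustration of the random-forest approach described above, the sketch below uses synthetic stand-ins for the real inputs (the authors' actual predictors and data are described in their 2021 paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic predictors standing in for NWP model output: forecast
# precipitation, humidity, and an instability index.
X = rng.random((2000, 3))
# Synthetic label: did locally unusual rainfall occur? (arbitrary noisy rule)
y = ((0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(2000)) > 0.55).astype(int)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X[:1500], y[:1500])

# predict_proba yields a probability of heavy rain rather than a hard
# yes/no class -- the key output forecasters need for flash-flood guidance.
probs = clf.predict_proba(X[1500:])[:, 1]
print("mean forecast probability:", probs.mean().round(2))
```

The design choice worth noting is using `predict_proba` instead of `predict`: a probability lets forecasters weigh the flash-flood risk against local thresholds rather than receiving a binary answer.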

We have since applied similar methods to forecasting of tornadoes, large hail and severe thunderstorm winds. Other research groups are developing similar tools. National Weather Service forecasters are using some of these tools to better assess the likelihood of hazardous weather on a given day.

Researchers also are embedding machine learning within numerical weather prediction models to speed up tasks that can be intensive to compute, such as predicting how water vapor gets converted to rain, snow or hail.

It's possible that machine learning models could eventually replace traditional numerical weather prediction models altogether. Instead of solving a set of complex physical equations as the models do, these systems instead would process thousands of past weather maps to learn how weather systems tend to behave. Then, using current weather data, they would make weather predictions based on what they've learned from the past.
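A toy version of this idea, learning a state-transition map purely from historical data with no physics, can be sketched as follows, assuming a simple synthetic "atmosphere" whose dynamics are smooth enough for a linear model to capture:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy "past weather maps": a 4-component field evolving with smooth,
# periodic dynamics (two sine/cosine pairs at different frequencies).
t = np.arange(2000)
field = np.stack([np.sin(0.1 * t), np.cos(0.1 * t),
                  np.sin(0.05 * t), np.cos(0.05 * t)], axis=1)

# Learn the t -> t+1 transition purely from historical pairs.
X, y = field[:-1], field[1:]
model = Ridge(alpha=1e-6).fit(X[:1500], y[:1500])

# Roll the learned model forward from a known state to make a "forecast".
state = X[1500]
for _ in range(10):
    state = model.predict(state[None, :])[0]
err = np.abs(state - field[1510]).max()
print("10-step forecast error:", err)
```

Real data-driven forecast systems train far larger models on decades of reanalysis maps, but the principle is the same: the "physics" is implicit in the learned transition, which is also why such systems can behave unpredictably on weather unlike anything in their training data.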

Some studies have shown that machine learning-based forecast systems can predict general weather patterns as well as numerical weather prediction models while using only a fraction of the computing power the models require. These new tools don't yet forecast the details of local weather that people care about, but with many researchers carefully testing them and inventing new methods, there is promise for the future.

There are also reasons for caution. Unlike numerical weather prediction models, forecast systems that use machine learning are not constrained by the physical laws that govern the atmosphere. So it's possible that they could produce unrealistic results, for example forecasting temperature extremes beyond the bounds of nature. And it is unclear how they will perform during highly unusual or unprecedented weather phenomena.

And relying on AI tools can raise ethical concerns. For instance, locations with relatively few weather observations with which to train a machine learning system may not benefit from forecast improvements that are seen in other areas.

Another central question is how best to incorporate these new advances into forecasting. Finding the right balance between automated tools and the knowledge of expert human forecasters has long been a challenge in meteorology. Rapid technological advances will only make it more complicated.

Ideally, AI and machine learning will allow human forecasters to do their jobs more efficiently, spending less time on generating routine forecasts and more on communicating forecasts' implications and impacts to the public, or, for private forecasters, to their clients. We believe that careful collaboration between scientists, forecasters and forecast users is the best way to achieve these goals and build trust in machine-generated weather forecasts.
