Archive for the ‘Machine Learning’ Category

The data science and AI market may be out for a recalibration – ZDNet


Being a data scientist was supposed to be "the sexiest job of the 21st century". Whether the famous Harvard Business Review aphorism of 2012 holds water is somewhat subjective, depending on how you interpret "sexy". However, the data on data scientists, as well as on related data engineering and data analyst roles, is starting to ring alarm bells.

The subjective part about HBR's aphorism is whether you actually enjoy finding and cleaning up data, building and debugging data pipelines and integration code, as well as building and improving machine learning models. That list of tasks, in that order, is what data scientists spend most of their time on.

Some people are genuinely attracted to data-centered careers by the job description itself; others are drawn more by the growth in demand and salaries. While the dark sides of the job description are not unknown, the growth and salaries part was not much disputed. That, however, may be changing: data scientist roles are still in demand but are not immune to market turmoil.

At the beginning of 2022, the first sign that something may be changing became apparent. As an IEEE Spectrum analysis of data released by online recruitment firm Dice showed, in 2021, AI and machine learning salaries dropped, even though, on average, U.S. tech salaries climbed nearly 7%.

Overall, 2021 was a good year for tech professionals in the United States, with the average salary up 6.9% to $104,566. However, as the IEEE Spectrum notes, competition for machine learning, natural language processing, and AI experts softened, with average salaries dropping 2.1%, 7.8%, and 8.9%, respectively.

It's the first time this has occurred in recent years, as average U.S. salaries for software engineers with expertise in machine learning, for example, jumped 22% in 2019 over 2018, then went up another 3.1% in 2020. At the same time, demand for data scientist roles does not show any signs of subsiding -- on the contrary.

Developer recruitment platforms report seeing a sharp rise in the demand for data science-related IT skills. The latest IT Skills Report by developer screening and interview platform DevSkiller recorded a 295% increase in the number of data science-related tasks recruiters were setting for candidates in the interview process during 2021.

CodinGame and CoderPad's 2022 Tech Hiring Survey also identified data science as a profession for which demand greatly outstrips supply, along with DevOps and machine-learning specialists. As a result, ZDNet's Owen Hughes notes, employers will have to reassess both the salaries and benefits packages they offer employees if they hope to remain competitive.

The data science and AI market is sending mixed signals

Plus, 2021 saw what came to be known as the "Great Resignation" or "Great Reshuffle" -- a period in which many people were rethinking everything, including their careers. In theory, having part of the workforce redefine their trajectory and goals and/or resign should increase demand and salaries -- and analyses on why data scientists quit and what employers can do to retain them started making the rounds.

Then along came the layoffs, including layoffs of data scientist, data engineer, and data analyst roles. As LinkedIn's analysis of the latest round of layoffs notes, the tech sector's tumultuous year has been marked by daily announcements of layoffs, hiring freezes, and rescinded job offers.

About 17,000 workers from more than 70 tech startups globally were laid off in May, a 350% jump from April. This is the most significant number of lost jobs in the sector since May 2020, at the height of the pandemic. In addition, tech giants such as Netflix and PayPal are also shedding jobs, while Uber, Lyft, Snap and Meta have slowed hiring.

According to data shared by the tech layoff tracking site Layoffs.fyi, layoffs range from 7% to 33% of the workforce in the companies tracked. Drilling down into company-specific data shows that those include data-oriented roles, too.

Looking at layoff data from fintech Klarna and insurance startup PolicyGenius, for example, shows that data scientist, data engineer, and data analyst roles are affected at both junior and senior levels. In both companies, those roles amount to about 4% of the layoffs.

What are we to make of those mixed signals, then? Demand for data science-related skills seems to be going strong, but salaries are dropping, and those roles are not immune to layoffs either. Each of those signals comes with its own background and implications. Let's try to unpack them and see what their confluence means for job seekers and employers.

As Dice chief marketing officer Michelle Marian told IEEE Spectrum, there are a variety of factors likely contributing to the decreases in machine learning and AI salaries, with one important consideration being that more technologists are learning and mastering these skill sets:

"The increases in the talent pool over time can result in employers needing to pay at least slightly less, given that the skill sets are easier to find. We have seen this occur with a range of certifications and other highly specialized technology skills", said Marian.

That seems like a reasonable conclusion. However, for data science and machine learning, there may be something else at play, too. Data scientists and machine learning experts are not only competing against each other but also increasingly against automation. As Hong Kong-based quantitative portfolio manager Peter Yuen notes, quants have seen this all before.

Prompted by news of top AI researchers landing salaries in the $1 million range, Yuen writes that this "should be more accurately interpreted as a continuation of a long trend of high-tech coolies coding themselves out of their jobs upon a backdrop of global oversupply of skilled labour".

If three generations of quants' experience in automating financial markets are anything to go by, Yuen writes, the automation of rank-and-file AI practitioners across many industries is perhaps only a decade or so away. After that, he adds, a small group of elite AI practitioners will have made it to managerial or ownership status while the remaining are stuck in average-paid jobs tasked with monitoring and maintaining their creations.

We may already be at the initial stages in this cycle, as evidenced by developments such as AutoML and libraries of off-the-shelf machine learning models. If history is anything to go by, then what Yuen describes will probably come to pass, too, inevitably leading to questions about how displaced workers can "move up the stack".
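For illustration, here is a minimal, hypothetical sketch of that "off-the-shelf" workflow, using scikit-learn's ready-made estimators and an automated hyperparameter search as a stand-in for the broader AutoML idea; the dataset and parameter grid are illustrative only, not drawn from any system mentioned above.

```python
# A hypothetical sketch of the "off-the-shelf" workflow: instead of
# hand-crafting a model, search over ready-made scikit-learn estimators
# and hyperparameters. Dataset and parameter grid are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Automated hyperparameter search stands in for the broader AutoML idea.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=5,
)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
print("held-out test accuracy:", round(search.score(X_test, y_test), 3))
```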

However, it's probably safe to assume that data science roles won't have to worry about that too much in the immediate future. After all, another oft-cited statistic about data science projects is that ~80% of them still fail for a number of reasons. One of the most public cases of data science failure was Zillow.

Zillow's business came to rely heavily on the data science team to build accurate predictive models for its home buying service. As it turned out, the models were not so accurate. As a result, the company's stock went down over 30% in 5 days, the CEO put a lot of blame on the data science team, and 25% of the staff got laid off.

Whether or not the data science team was at fault at Zillow is up for debate. As for recent layoffs, they should probably be seen as part of a greater turn in the economy rather than a failure of data science teams per se. As Data Science Central Community Editor Kurt Cagle writes, there is talk of a looming AI winter, harkening back to the period in the 1970s when funding for AI ventures dried up altogether.

Cagle believes that while an AI Winter is unlikely, an AI Autumn -- with a cooling off of an over-the-top venture capital field in the space -- can be expected. The AI Winter of the 1970s was largely due to the fact that the technology was not up to the task, and there was not enough digitized data to go around.

The dot-com bubble era may have some lessons in store for today's data science roles

Today much greater compute power is available, and the amount of data is skyrocketing too. Cagle argues that the problem could be that we are approaching the limits of the currently employed neural network architectures. Cagle adds that a period in which brilliant minds can actually rest and innovate rather than simply apply established thinking would likely do the industry some good.

Like many others, Cagle is pointing out deficiencies in the "deep learning will be able to do everything" school of thought. This critique seems valid, and incorporating approaches that are overlooked today could drive progress in the field. However, let's not forget that the technology side of things is not all that matters here.

Perhaps recent history can offer some insights: what can the history of software development and the internet teach us? In some ways, the point we are at now is reminiscent of the dot-com bubble era: increased availability of capital, excessive speculation, unrealistic expectations, and through-the-ceiling valuations. Today, we may be headed towards the bursting of the AI bubble.

That does not mean that data science roles will lose their appeal overnight or that what they do is without value. After all, software engineers are still in demand for all the progress and automation that software engineering has seen in the last few decades. But it probably means that a recalibration is due, and expectations should be managed accordingly.

See the rest here:
The data science and AI market may be out for a recalibration - ZDNet

Iterative and Enko Streamline Machine Learning Model Development to Drive Data Science Best Practices Based on GitOps Workflows – Business Wire

SAN FRANCISCO--(BUSINESS WIRE)--Iterative, the MLOps company dedicated to streamlining the workflow of data scientists and machine learning (ML) engineers, today announced that Enko, the crop health company, has chosen the Iterative-backed open source project DVC and Iterative Studio to build reproducible and modular pipelines at scale.

Enko designs safe and sustainable solutions to farmers' biggest crop threats today, from pest resistance to new diseases. Inspired by the latest drug discovery and development approaches from pharma, Enko brings an innovative approach to crop health in order to meet farmers' evolving needs.

Enko's Data Science team wanted to incentivize data scientists to use GitHub for their experiments in order to create a more efficient and collaborative workflow. Since Enko heavily leverages Git and GitHub, the team chose Iterative-backed tools over alternatives. DVC and Studio enable Enko to focus on building and applying innovative models, accelerating experimentation with minimal operational overhead.

"Our team has a policy that requires peer reviewed pull requests for all core infrastructure, but we found it nearly impossible to apply that to Jupyter Notebooks. This became even more challenging when the complexity of our workflows and size of file dependencies grew, said Tim Panosian, director of R&D data sciences at Enko. Now all pipelines run on DVC, which has given us the ability to streamline the process. Everyones code looks the same and expectations are clear. The big piece for us is that we know that we can rely on DVCs reproducibility to pick up where anyone left off.

With DVC and Studio, Enko is now able to track everything, collaborate efficiently and effectively in real time, and easily pick experiments back up, even weeks later, without having to search multiple tools or locations. Additionally, Studio provides transparency and allows for communication with teams that may not be as technical or knowledgeable about the model-building aspects. Teams can share metrics and plots right away. Studio also gives data scientists positive feedback and encourages good discipline around running experiments and pipelines in traceable and reproducible ways.

"Enko is doing important work to make new crop protection safer and more sustainable, providing a win-win to the farmer and environment alike," said Jenifer De Figueiredo, Iterative's community manager. "DVC and Studio have enabled their data scientists and ML engineering team to be more productive and move them in the same direction to their goals."

DVC brings agility, reproducibility, and collaboration into the existing data science workflow. It provides users with a Git-like interface for versioning data and models, bringing version control to machine learning and solving the challenges of reproducibility. DVC is built on top of Git, creating lightweight metafiles and enabling the system to handle large files that can't be stored in Git. DVC works with remote storage for large, unstructured data files in the cloud.
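For context, DVC also exposes a Python API for consuming artifacts it versions. The snippet below is a minimal sketch assuming a hypothetical repository URL, file path, and revision; it is not drawn from Enko's actual setup.

```python
# A minimal sketch of reading a DVC-tracked data artifact from Python.
# The repository URL, file path, and revision below are hypothetical.
import dvc.api

# Read a data file exactly as it existed at a given Git revision,
# with the bytes fetched from remote storage rather than from Git itself.
data = dvc.api.read(
    "data/training_set.csv",                         # hypothetical DVC-tracked file
    repo="https://github.com/example/crop-models",   # hypothetical repository
    rev="v1.2.0",                                    # any Git tag, branch, or commit
)
print(data[:200])
```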

Iterative Studio is the collaboration layer for ML engineers and data scientists to track, visualize, and share experiments. Studio enables teams to link code, model, and data changes together in a single place. Studio is built on top of an organization's Git and tightly couples with the software development process so team members can share knowledge and automate their ML workflows.

DVC and Iterative Studio are available today to work with GitHub, GitLab, and BitBucket. To schedule a demo, visit http://www.Iterative.ai.

About Iterative

Iterative.ai, the company behind Iterative Studio and the popular open-source tools DVC, CML, and MLEM, enables data science teams to build models faster and collaborate better with data-centric machine learning tools. Iterative's developer-first approach to MLOps delivers model reproducibility, governance, and automation across the ML lifecycle, all integrated tightly with software development workflows. Iterative is a remote-first company, backed by True Ventures, Afore Capital, and 468 Capital. For more information, visit Iterative.ai.

About Enko

Enko designs safe and sustainable solutions to farmers' biggest crop threats today, from pest resistance to new diseases. By applying the latest drug discovery and development approaches from pharma to plants, Enko is bringing an innovation model to agriculture and meeting farmers' evolving needs. Founded in 2017 and led by a team of proven scientists, entrepreneurs and agriculture industry veterans, Enko is backed by investors including the Bill & Melinda Gates Foundation, Anterra Capital, Finistere Ventures, Novalis LifeSciences, Germin8 Ventures, TO Ventures Food, and Rabo Food & Agri Innovation Fund. Enko is headquartered in Mystic, Connecticut. For more information, visit enkochem.com.

Read more from the original source:
Iterative and Enko Streamline Machine Learning Model Development to Drive Data Science Best Practices Based on GitOps Workflows - Business Wire

Predicting healthcare utilization in COPD patients using CT and machine learning – Health Imaging

Follow-up healthcare services were used by 35% of participants. This was found to be independent of age, sex or smoking history, but individuals with lower FEV1% were observed to utilize services more often than their peers. The model that used clinical data, pulmonary function tests and CT measurements was found to be the most accurate in predicting utilization, with an accuracy of 80%.

"We found that adding imaging predictors to conventional measurements resulted in a 15% increase for correct classification," corresponding author Miranda Kirby, PhD, of the Department of Physics at Toronto Metropolitan University, and co-authors wrote. "Although this increase may seem small, identifying high risk patients could lead to healthcare utilization prevention through earlier treatment initiation or more careful monitoring."

The authors suggested that even small increases in prediction accuracy could translate into preventing a large number of hospitalizations at the population level.
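To make the comparison concrete, here is a hypothetical sketch of evaluating the same classifier with and without imaging-derived features. The feature names, synthetic data, and logistic regression model are assumptions for illustration only; they do not reproduce the study's model or dataset.

```python
# Hypothetical sketch: compare classification accuracy using clinical-only
# features versus clinical + imaging features. With random synthetic data
# the scores are meaningless; only the structure of the comparison matters.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
clinical = rng.normal(size=(n, 5))   # e.g. age, sex, smoking history, FEV1%
imaging = rng.normal(size=(n, 8))    # e.g. CT airway and emphysema measurements
y = rng.integers(0, 2, size=n)       # 1 = used follow-up healthcare services

baseline = cross_val_score(LogisticRegression(max_iter=1000), clinical, y, cv=5)
combined = cross_val_score(
    LogisticRegression(max_iter=1000), np.hstack([clinical, imaging]), y, cv=5
)

print(f"clinical-only accuracy:      {baseline.mean():.2f}")
print(f"clinical + imaging accuracy: {combined.mean():.2f}")
```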

The full study can be viewed here.

Is coronary heart disease on CT associated with early development of COPD?

CT-based radiomics features can help diagnose COPD earlier than ever before

Deep learning models predict COPD survival based on chest radiographs

CT reveals undersized lung airways as major COPD risk factor, on par with cigarette smoking

Read this article:
Predicting healthcare utilization in COPD patients using CT and machine learning - Health Imaging

Machine Learning Market Share, Application Analysis, Regional outlook, Growth, Price Trends, Key Players, Competitive Strategies and Forecast 2022 to…

UNITED STATES -- The global machine learning market size was US$ 11.1 billion in 2021. The global machine learning market is forecast to grow to US$ 121 billion by 2030, registering a compound annual growth rate (CAGR) of 31% during the forecast period from 2022 to 2030.
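As a rough sanity check of that forecast, assuming the growth compounds over the nine years from the 2021 base to 2030, the implied rate comes out close to the quoted figure:

```python
# Quick check of the implied CAGR from the quoted market sizes.
base, target, years = 11.1, 121.0, 9    # US$ billions, 2021 -> 2030
cagr = (target / base) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")      # ~30.4%, consistent with the quoted ~31%
```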

Machine Learning Market Status, Trends and COVID-19 Impact Report 2021, a research report added by Quadintel, is an in-depth analysis of market characteristics, size and growth, segmentation, regional and country breakdowns, competitive landscape, market shares, trends, and strategies for this market. It traces the market's historic and forecast growth by geography, places the market within the context of the wider machine learning landscape, and compares it with other markets. The report also covers market definition, regional market opportunity, sales and revenue by region, manufacturing cost analysis, the industrial chain, market effect factors analysis, market size forecasts, and supporting market data, graphs, statistics, tables, and bar and pie charts for business intelligence. The complete report includes a full table of contents, more than 100 tables and figures, and an in-depth pre- and post-COVID-19 outbreak impact analysis and situation by region.

Request Sample Report for Machine Learning Market: https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Factors Influencing the Market

Artificial intelligence and other emerging technologies are changing the way industries and people work. These technologies have helped to optimize supply chains, launch new digital products and services, and transform the overall customer experience. Several tech companies are investing in this field to develop AI platforms, while several start-ups are focusing on niche domain solutions. All of these factors will significantly contribute to the growth of the global machine learning market.

Technology has paved the way for numerous applications across several industries. This technology is used in advertising, mainly to predict customer behaviour and aid in the improvement of advertising campaigns. AI-powered marketing employs a variety of models to optimize, automate, and augment data into actions. Thus, it will significantly drive the growth of the global machine learning market. Further, the technology is used in an advertising agency, mainly for security, document management, and publishing, which will contribute to the growth of the global machine learning market during the study period.

Machine learning has recently expanded into new areas. For example, the United States Army intends to use this technology in combat vehicles for predictive maintenance. Such advancements will benefit the market. Apart from that, organizations around the world use machine learning to enable better client experiences, which will open up opportunities for industry players. However, insufficient knowledge of the technology may limit the growth of the market.

COVID-19 Impact Analysis

Machine learning and AI have significantly helped fight the COVID-19 pandemic, which escalated the growth of the overall market. Patients hospitalized with coronavirus disease (COVID-19) are at high risk, and machine learning (ML) algorithms were used to predict mortality in hospitalized COVID-19 patients. Several studies found that machine learning can efficiently help tackle the COVID-19 pandemic by collecting data related to virus spread. Such benefits of the technology have shaped its growth during the COVID-19 pandemic.

Regional Analysis

North America is forecast to hold the highest share in the machine learning market due to the rising penetration of advanced technology across all industrial verticals. Furthermore, rising investments in this sector will also contribute to the growth of the market. For instance, JPMorgan Chase & Co. invested in Limeglass, an AI, ML, and NLP provider, in 2019 with the aim of analysing institutional research. The Asia-Pacific machine learning market is forecast to record a substantial growth rate due to the growing expansion of the e-commerce and online streaming industries. Additionally, the rising adoption of industrial robots, particularly in China, Japan, and South Korea, will also contribute to the growth of the machine learning market.

Request To Download Sample of This Strategic Report : https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Competitors in the Market: IBM Corporation, SAP SE, Microsoft Corporation, Huawei Technologies, HCL Technologies, Accenture Plc, Schneider Electric, Honeywell International, Rockwell Automation, Schlumberger Limited, and other prominent players.

Market Segmentation: The global machine learning market segmentation focuses on Application, Solution Type, and Region.

By Application: Advertising & Media, BFSI, Government, Healthcare, Retail, Telecom, Utilities, Manufacturing.

By Solution Type: Software, Hardware, Services.

Get a Sample PDF copy of the report : https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

By Region: North America (The U.S., Canada, Mexico); Europe, comprising Western Europe (The UK, Germany, France, Italy, Spain, Rest of Western Europe) and Eastern Europe (Poland, Russia, Rest of Eastern Europe); Asia Pacific (China, India, Japan, Australia & New Zealand, ASEAN, Rest of Asia Pacific); Middle East & Africa (UAE, Saudi Arabia, South Africa, Rest of MEA); South America (Brazil, Argentina, Rest of South America).

Access Full Report, here : https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Key Questions Answered in the Market Report

How did the COVID-19 pandemic impact the adoption of machine learning by various pharmaceutical and life sciences companies? What is the outlook for the machine learning market during the forecast period 2021-2030? What are the key trends influencing the market, and how will they influence it in the short, mid, and long term? What is the end-user perception of the technology? How does the patent landscape look, and which country or cluster witnessed the highest patent filing from January 2014 to June 2021? What are the key factors impacting the market, and what will their impact be in the short, mid, and long term? What are the key opportunity areas in the market, and what is their potential in the short, mid, and long term? What are the key strategies adopted by companies in the market? What are the key application areas of the market, and which application is expected to hold the highest growth potential during the forecast period 2021-2030? What is the preferred deployment model, and what is the growth potential of the various deployment models present in the market? Who are the key end users, and what is their respective share in the market? Which regional market is expected to hold the highest growth potential during the forecast period 2021-2030? Which are the key players in the market?

About Quadintel:

We are one of the best market research report providers in the industry. Quadintel believes in providing quality reports to clients to meet their top-line and bottom-line goals, which will boost your market share in today's competitive environment. Quadintel is a one-stop solution for individuals, organizations, and industries that are looking for innovative market research reports.

Get in Touch with Us:

Quadintel
Email: sales@quadintel.com
Address: Office 500 N Michigan Ave, Suite 600, Chicago, Illinois 60611, UNITED STATES
Tel: +1 888 212 3539 (US TOLL FREE)
Website: https://www.quadintel.com/

Read the original:
Machine Learning Market Share, Application Analysis, Regional outlook, Growth, Price Trends, Key Players, Competitive Strategies and Forecast 2022 to...

Machine Learning Shows That More Reptile Species May Be at Risk of Extinction Than Previously Thought – SciTechDaily

Potamites montanicola, classified as Critically Endangered by the automated assessment method and as Data Deficient by the IUCN Red List of Threatened Species. Credit: Germán Chávez, Wikimedia Commons (CC-BY 3.0)

Machine learning tool estimates extinction risk for species previously unprioritized for conservation.

Species at risk of extinction are identified in the iconic Red List of Threatened Species, published by the International Union for Conservation of Nature (IUCN). A new study presents a novel machine learning tool for assessing extinction risk and then uses this tool to show that reptile species which are unlisted due to lack of assessment or data are more likely to be threatened than assessed species. The study, by Gabriel Henrique de Oliveira Caetano at Ben-Gurion University of the Negev, Israel, and colleagues, was published on May 26th in the journal PLOS Biology.

The IUCN's Red List of Threatened Species is the most comprehensive assessment of the extinction risk of species and informs conservation policy and practices around the world. However, the process for categorizing species is time-consuming, laborious, and subject to bias, depending heavily on manual curation by human experts. Therefore, many animal species have not been evaluated, or lack sufficient data, creating gaps in protective measures.

To assess 4,369 reptile species that could not previously be prioritized for conservation, and to develop accurate methods for assessing the extinction risk of obscure species, the scientists created a machine learning computer model. The model assigned IUCN extinction risk categories to the 40% of the world's reptiles that lacked published assessments or were classified as DD (Data Deficient) at the time of the study. The researchers validated the model's accuracy by comparing it to the Red List risk categorizations.
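In outline, the approach described above amounts to training a supervised classifier on species that already have Red List categories and applying it to unassessed or Data Deficient species. The sketch below is a hypothetical illustration of that pattern with made-up trait features and labels; it does not reproduce the study's data, predictors, or model.

```python
# Hypothetical sketch: train on assessed species, predict categories for
# unassessed / Data Deficient species. Features and labels are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical trait table: rows are species, columns are predictors.
traits = pd.DataFrame(
    np.random.default_rng(0).normal(size=(1000, 6)),
    columns=["range_km2", "body_size", "clutch_size",
             "elevation", "human_footprint", "climate_niche"],
)
# Hypothetical labels: IUCN category for assessed species, None for NE/DD.
labels = pd.Series(
    np.random.default_rng(1).choice(["LC", "NT", "VU", "EN", "CR", None], size=1000)
)

assessed = labels.notna()
model = RandomForestClassifier(random_state=0)
model.fit(traits[assessed], labels[assessed])

# Assign predicted categories to the unassessed / Data Deficient species.
predicted = model.predict(traits[~assessed])
print(pd.Series(predicted).value_counts())
```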

The authors found that the number of threatened species is much higher than reflected in the IUCN Red List and that both unassessed (Not Evaluated or NE) and Data Deficient reptiles were more likely to be threatened than assessed species. Future studies are needed to better understand the specific factors underlying extinction risk in threatened reptile taxa, to obtain better data on obscure reptile taxa, and to create conservation plans that include newly identified, threatened species.

According to the authors, "Altogether, our models predict that the state of reptile conservation is far worse than currently estimated, and that immediate action is necessary to avoid the disappearance of reptile biodiversity. Regions and taxa we identified as likely to be more threatened should be given increased attention in new assessments and conservation planning. Lastly, the method we present here can be easily implemented to help bridge the assessment gap on other less known taxa."

Coauthor Shai Meiri adds, "Importantly, the additional reptile species identified as threatened by our models are not distributed randomly across the globe or the reptilian evolutionary tree. Our added information highlights that there are more reptile species in peril, especially in Australia, Madagascar, and the Amazon basin, all of which have a high diversity of reptiles and should be targeted for extra conservation efforts. Moreover, species-rich groups, such as geckos and elapids (cobras, mambas, coral snakes, and others), are probably more threatened than the Global Reptile Assessment currently highlights; these groups should also be the focus of more conservation attention."

Coauthor Uri Roll adds, "Our work could be very important in helping the global efforts to prioritize the conservation of species at risk, for example using the IUCN red-list mechanism. Our world is facing a biodiversity crisis and severe man-made changes to ecosystems and species, yet the funds allocated for conservation are very limited. Consequently, it is key that we use these limited funds where they could provide the most benefits. Advanced tools, such as those we have employed here, together with accumulating data, could greatly cut the time and cost needed to assess extinction risk, and thus pave the way for more informed conservation decision making."

Reference: "Automated assessment reveals that the extinction risk of reptiles is widely underestimated across space and phylogeny" by Gabriel Henrique de Oliveira Caetano, David G. Chapple, Richard Grenyer, Tal Raz, Jonathan Rosenblatt, Reid Tingley, Monika Böhm, Shai Meiri and Uri Roll, 26 May 2022, PLOS Biology. DOI: 10.1371/journal.pbio.3001544

Link:
Machine Learning Shows That More Reptile Species May Be at Risk of Extinction Than Previously Thought - SciTechDaily