Archive for the ‘Machine Learning’ Category

Predicting healthcare utilization in COPD patients using CT and machine learning – Health Imaging

Follow-up healthcare services were used by 35% of participants. Utilization was independent of age, sex, and smoking history, but individuals with lower FEV1% used services more often than their peers. The model that combined clinical data, pulmonary function tests, and CT measurements was the most accurate at predicting utilization, reaching 80% accuracy.

"We found that adding imaging predictors to conventional measurements resulted in a 15% increase in correct classification," corresponding author Miranda Kirby, PhD, of the Department of Physics at Toronto Metropolitan University, and co-authors wrote. "Although this increase may seem small, identifying high-risk patients could lead to healthcare utilization prevention through earlier treatment initiation or more careful monitoring."

The authors suggested that even small increases in prediction accuracy could translate into preventing a large number of hospitalizations at the population level.
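
For readers curious how such a combined model might look in code, here is a minimal sketch using scikit-learn with synthetic data. The feature names, thresholds, and data are hypothetical stand-ins, not the study's actual variables or pipeline.

```python
# A minimal sketch (not the study's actual pipeline) of combining clinical,
# pulmonary function, and CT-derived features to predict healthcare
# utilization. All features and the synthetic labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: clinical (age, pack-years), pulmonary function
# (FEV1 % predicted), and CT measurements (airway count, emphysema score).
X = np.column_stack([
    rng.normal(65, 8, n),      # age
    rng.normal(40, 15, n),     # smoking pack-years
    rng.normal(60, 18, n),     # FEV1 % predicted
    rng.normal(200, 40, n),    # CT total airway count
    rng.normal(10, 5, n),      # CT low-attenuation area %
])
# Synthetic label: utilization more likely with lower FEV1%, as reported.
y = (X[:, 2] + rng.normal(0, 10, n) < 55).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # classification accuracy
```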

The full study can be viewed here.


Machine Learning Market Share, Application Analysis, Regional outlook, Growth, Price Trends, Key Players, Competitive Strategies and Forecast 2022 to…

UNITED STATES: The global machine learning market was valued at US$ 11.1 billion in 2021 and is forecast to grow to US$ 121 billion by 2030, registering a compound annual growth rate (CAGR) of 31% during the forecast period from 2022 to 2030.

The Machine Learning Market Status, Trends and COVID-19 Impact Report 2021, added by Quadintel, is an in-depth analysis of market characteristics, size and growth, segmentation, regional and country breakdowns, competitive landscape, market shares, trends, and strategies for this market. It traces the market's historic and forecast growth by geography and places the market within the context of the wider machine learning landscape. The report also covers market definition, regional market opportunity, sales and revenue by region, manufacturing cost analysis, the industrial chain, market effect factors analysis, machine learning market size forecasts, and supporting data, graphs, statistics, tables, and bar and pie charts for business intelligence. The complete report includes a full table of contents, more than 100 tables and figures, and charts, along with in-depth pre- and post-COVID-19 outbreak impact analysis by region.

Request a sample report for the Machine Learning Market: https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Factors Influencing the Market

Artificial intelligence and other emerging technologies are changing the way industries and people work. These technologies have helped to optimize supply chains, launch new digital products and services, and transform the overall customer experience. Several tech companies are investing in this field to develop AI platforms, while several start-ups are focusing on niche domain solutions. All of these factors will significantly contribute to the growth of the global machine learning market.

The technology has paved the way for numerous applications across several industries. It is used in advertising, mainly to predict customer behaviour and aid in the improvement of advertising campaigns. AI-powered marketing employs a variety of models to optimize, automate, and turn data into actions. Thus, it will significantly drive the growth of the global machine learning market. Further, the technology is used in advertising agencies, mainly for security, document management, and publishing, which will contribute to the growth of the global machine learning market during the study period.

Machine learning has recently expanded into new areas. For example, the United States Army intends to use the technology in combat vehicles for predictive maintenance. Such advancements will benefit the market. Apart from that, organizations around the world use machine learning to enable a better client experience, which will create opportunities for industry players. However, insufficient knowledge of the technology may limit the growth of the market.

COVID-19 Impact Analysis

Machine learning and AI have significantly helped in the fight against the COVID-19 pandemic, which escalated the growth of the overall market. Patients hospitalized with coronavirus disease (COVID-19) are at high risk, and machine learning (ML) algorithms have been used to predict mortality in these patients. Several studies found that machine learning can efficiently help tackle the COVID-19 pandemic by collecting data related to virus spread. Thus, such benefits of the technology have shaped its growth during the COVID-19 pandemic.

Regional Analysis

North America is forecast to hold the highest share in the machine learning market due to the rising penetration of advanced technology across all industrial verticals. Furthermore, rising investments in this sector will also contribute to the growth of the market. For instance, JPMorgan Chase & Co. invested in Limeglass, an AI, ML, and NLP provider, in 2019 with the aim of analysing institutional research.

The Asia-Pacific machine learning market is forecast to record a substantial growth rate due to the growing expansion of the e-commerce and online streaming industries. Additionally, the rising adoption of industrial robots, particularly in China, Japan, and South Korea, will also contribute to the growth of the machine learning market.

Request to download a sample of this strategic report: https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Competitors in the Market: IBM Corporation, SAP SE, Microsoft Corporation, Huawei Technologies, HCL Technologies, Accenture Plc, Schneider Electric, Honeywell International, Rockwell Automation, Schlumberger Limited, and other prominent players.

Market Segmentation

The global machine learning market segmentation focuses on Application, Solution Type, and Region.

By Application: Advertising & Media, BFSI, Government, Healthcare, Retail, Telecom, Utilities, Manufacturing.

By Solution Type: Software, Hardware, Services.

Get a sample PDF copy of the report: https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

By Region: North America (the U.S., Canada, Mexico); Western Europe (the UK, Germany, France, Italy, Spain, Rest of Western Europe); Eastern Europe (Poland, Russia, Rest of Eastern Europe); Asia Pacific (China, India, Japan, Australia & New Zealand, ASEAN, Rest of Asia Pacific); Middle East & Africa (UAE, Saudi Arabia, South Africa, Rest of MEA); South America (Brazil, Argentina, Rest of South America).

Access the full report here: https://www.quadintel.com/request-sample/machine-learning-market-1/QI039

Key Questions Answered in the Market Report

How did the COVID-19 pandemic impact the adoption of machine learning by various companies?
What is the outlook for the machine learning market during the forecast period 2021-2030?
What are the key trends influencing the market, and how will they influence it in the short, mid, and long term?
What is end-user perception of the technology?
How does the patent landscape look, and which country or cluster witnessed the highest patent filing from January 2014 to June 2021?
What are the key factors impacting the market, and what will be their impact in the short, mid, and long term?
What are the key opportunity areas in the market, and what is their potential in the short, mid, and long term?
What are the key strategies adopted by companies in the market?
What are the key application areas, and which application is expected to hold the highest growth potential during the forecast period 2021-2030?
What is the preferred deployment model, and what is the growth potential of the various deployment models present in the market?
Who are the key end users, and what is their respective share of the market?
Which regional market is expected to hold the highest growth potential during the forecast period 2021-2030?
Which are the key players in the market?

About Quadintel:

We are the best market research reports provider in the industry. Quadintel believes in providing quality reports that help clients meet their top-line and bottom-line goals and boost their market share in today's competitive environment. Quadintel is a one-stop solution for individuals, organizations, and industries that are looking for innovative market research reports.

Get in Touch with Us:

Quadintel
Email: sales@quadintel.com
Address: Office 500 N Michigan Ave, Suite 600, Chicago, Illinois 60611, UNITED STATES
Tel: +1 888 212 3539 (US TOLL FREE)
Website: https://www.quadintel.com/


Machine Learning Shows That More Reptile Species May Be at Risk of Extinction Than Previously Thought – SciTechDaily

Potamites montanicola, classified as Critically Endangered by the automated assessment method and as Data Deficient by the IUCN Red List of Threatened Species. Credit: Germán Chávez, Wikimedia Commons (CC-BY 3.0)

Machine learning tool estimates extinction risk for species previously unprioritized for conservation.

Species at risk of extinction are identified in the iconic Red List of Threatened Species, published by the International Union for Conservation of Nature (IUCN). A new study presents a novel machine learning tool for assessing extinction risk and then uses this tool to show that reptile species which are unlisted due to lack of assessment or data are more likely to be threatened than assessed species. The study, by Gabriel Henrique de Oliveira Caetano at Ben-Gurion University of the Negev, Israel, and colleagues, was published on May 26th in the journal PLOS Biology.

The IUCN's Red List of Threatened Species is the most comprehensive assessment of the extinction risk of species and informs conservation policy and practices around the world. However, the process for categorizing species is time-consuming, laborious, and subject to bias, depending heavily on manual curation by human experts. Therefore, many animal species have not been evaluated, or lack sufficient data, creating gaps in protective measures.

To assess 4,369 reptile species that could not previously be prioritized for conservation, and to develop accurate methods for assessing the extinction risk of obscure species, the scientists created a machine learning model. The model assigned IUCN extinction risk categories to the 40% of the world's reptiles that lacked published assessments or were classified as DD (Data Deficient) at the time of the study. The researchers validated the model's accuracy by comparing its output to the Red List risk categorizations.
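
The general recipe, train on species that already have Red List categories and then predict categories for unassessed ones, can be sketched as follows. This is a hedged illustration with random stand-in data; the authors' actual model, covariates, and validation procedure differ.

```python
# A hedged sketch of the study's general approach: fit a classifier on
# species with published IUCN categories, then predict categories for
# Not Evaluated / Data Deficient species. Features here are random
# stand-ins for real traits (range size, body size, climate, etc.).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
categories = ["LC", "NT", "VU", "EN", "CR"]

# Hypothetical per-species trait vectors for assessed species.
X_assessed = rng.normal(size=(1000, 6))
y_assessed = rng.choice(categories, size=1000)

model = RandomForestClassifier(n_estimators=300, random_state=1)
model.fit(X_assessed, y_assessed)

# Assign predicted risk categories to the 4,369 unassessed species.
X_unassessed = rng.normal(size=(4369, 6))
predicted = model.predict(X_unassessed)
print(dict(zip(*np.unique(predicted, return_counts=True))))
```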

The authors found that the number of threatened species is much higher than reflected in the IUCN Red List and that both unassessed (Not Evaluated or NE) and Data Deficient reptiles were more likely to be threatened than assessed species. Future studies are needed to better understand the specific factors underlying extinction risk in threatened reptile taxa, to obtain better data on obscure reptile taxa, and to create conservation plans that include newly identified, threatened species.

According to the authors, "Altogether, our models predict that the state of reptile conservation is far worse than currently estimated, and that immediate action is necessary to avoid the disappearance of reptile biodiversity. Regions and taxa we identified as likely to be more threatened should be given increased attention in new assessments and conservation planning. Lastly, the method we present here can be easily implemented to help bridge the assessment gap on other less known taxa."

Coauthor Shai Meiri adds, "Importantly, the additional reptile species identified as threatened by our models are not distributed randomly across the globe or the reptilian evolutionary tree. Our added information highlights that there are more reptile species in peril, especially in Australia, Madagascar, and the Amazon basin, all of which have a high diversity of reptiles and should be targeted for extra conservation efforts. Moreover, species-rich groups, such as geckos and elapids (cobras, mambas, coral snakes, and others), are probably more threatened than the Global Reptile Assessment currently highlights; these groups should also be the focus of more conservation attention."

Coauthor Uri Roll adds, "Our work could be very important in helping the global efforts to prioritize the conservation of species at risk, for example, using the IUCN Red List mechanism. Our world is facing a biodiversity crisis, and severe man-made changes to ecosystems and species, yet funds allocated for conservation are very limited. Consequently, it is key that we use these limited funds where they could provide the most benefits. Advanced tools, such as those we have employed here, together with accumulating data, could greatly cut the time and cost needed to assess extinction risk, and thus pave the way for more informed conservation decision-making."

Reference: "Automated assessment reveals that the extinction risk of reptiles is widely underestimated across space and phylogeny" by Gabriel Henrique de Oliveira Caetano, David G. Chapple, Richard Grenyer, Tal Raz, Jonathan Rosenblatt, Reid Tingley, Monika Böhm, Shai Meiri and Uri Roll, 26 May 2022, PLOS Biology. DOI: 10.1371/journal.pbio.3001544


AI and machine learning are improving weather forecasts, but they won’t replace human experts – The Conversation

A century ago, English mathematician Lewis Fry Richardson proposed an idea that was startling for its time: constructing a systematic process based on math for predicting the weather. In his 1922 book, Weather Prediction By Numerical Process, Richardson tried to write an equation that he could use to solve the dynamics of the atmosphere based on hand calculations.

It didn't work because not enough was known about the science of the atmosphere at that time. "Perhaps some day in the dim future it will be possible to advance the computations faster than the weather advances and at a cost less than the saving to mankind due to the information gained. But that is a dream," Richardson concluded.

A century later, modern weather forecasts are based on the kind of complex computations that Richardson imagined, and they've become more accurate than anything he envisioned. Especially in recent decades, steady progress in research, data and computing has enabled a quiet revolution of numerical weather prediction.

For example, a forecast of heavy rainfall two days in advance is now as good as a same-day forecast was in the mid-1990s. Errors in the predicted tracks of hurricanes have been cut in half in the last 30 years.

There still are major challenges. Thunderstorms that produce tornadoes, large hail or heavy rain remain difficult to predict. And then there's chaos, often described as the "butterfly effect": the fact that small changes in complex processes make weather less predictable. Chaos limits our ability to make precise forecasts beyond about 10 days.

As in many other scientific fields, the proliferation of tools like artificial intelligence and machine learning holds great promise for weather prediction. We have seen some of what's possible in our research on applying machine learning to forecasts of high-impact weather. But we also believe that while these tools open up new possibilities for better forecasts, many parts of the job are handled more skillfully by experienced people.

Today, weather forecasters' primary tools are numerical weather prediction models. These models use observations of the current state of the atmosphere from sources such as weather stations, weather balloons and satellites, and solve equations that govern the motion of air.

These models are outstanding at predicting most weather systems, but the smaller a weather event is, the more difficult it is to predict. As an example, think of a thunderstorm that dumps heavy rain on one side of town and nothing on the other side. Furthermore, experienced forecasters are remarkably good at synthesizing the huge amounts of weather information they have to consider each day, but their memories and bandwidth are not infinite.

Artificial intelligence and machine learning can help with some of these challenges. Forecasters are using these tools in several ways now, including making predictions of high-impact weather that the models can't provide.

In a project that started in 2017 and was reported in a 2021 paper, we focused on heavy rainfall. Of course, part of the problem is defining "heavy": two inches of rain in New Orleans may mean something very different than in Phoenix. We accounted for this by using observations of unusually large rain accumulations for each location across the country, along with a history of forecasts from a numerical weather prediction model.

We plugged that information into a machine learning method known as random forests, which uses many decision trees to split a mass of data and predict the likelihood of different outcomes. The result is a tool that forecasts the probability that rains heavy enough to generate flash flooding will occur.
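
A simplified sketch of this kind of random-forest tool is shown below, assuming scikit-learn and hypothetical forecast-derived predictors; it is not the authors' operational system.

```python
# A simplified sketch of the random-forest approach described above.
# Hypothetical inputs: predictors derived from numerical model forecasts;
# label: whether observed rainfall exceeded the location's "unusually
# large" threshold. The forest outputs a probability of heavy rain.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 2000

# Hypothetical NWP-derived predictors at a given location.
X = np.column_stack([
    rng.gamma(2.0, 10.0, n),   # forecast precipitation (mm)
    rng.normal(30, 10, n),     # precipitable water (mm)
    rng.normal(500, 300, n),   # CAPE (J/kg)
])
# Synthetic label: observed accumulation exceeded the local threshold.
y = (X[:, 0] + rng.normal(0, 8, n) > 35).astype(int)

forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Probability of locally heavy, flash-flood-capable rain for a new forecast.
print(forest.predict_proba([[35.0, 45.0, 900.0]])[0, 1])
```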

We have since applied similar methods to forecasting of tornadoes, large hail and severe thunderstorm winds. Other research groups are developing similar tools. National Weather Service forecasters are using some of these tools to better assess the likelihood of hazardous weather on a given day.

Researchers also are embedding machine learning within numerical weather prediction models to speed up tasks that can be intensive to compute, such as predicting how water vapor gets converted to rain, snow or hail.
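
As a toy illustration of this emulation idea, one might train a small regressor to approximate an expensive physics routine and then call the fast approximation in its place. Everything below, including the stand-in "microphysics" function, is hypothetical.

```python
# A toy sketch of embedding machine learning inside a weather model:
# train a small regressor to emulate a costly physics routine (here, a
# made-up stand-in for cloud microphysics), then use the fast emulator
# in place of the slow routine. Purely illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_microphysics(temp_c, vapor):
    # Stand-in for an expensive calculation of rain production.
    return np.maximum(0.0, vapor - 5.0 * np.exp(-temp_c / 15.0))

rng = np.random.default_rng(7)
temp = rng.uniform(-20, 30, 5000)
vapor = rng.uniform(0, 20, 5000)
X = np.column_stack([temp, vapor])
y = slow_microphysics(temp, vapor)

emulator = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                        random_state=0).fit(X, y)
print(emulator.predict([[10.0, 12.0]]))  # fast approximation of the physics
```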

It's possible that machine learning models could eventually replace traditional numerical weather prediction models altogether. Instead of solving a set of complex physical equations as the models do, these systems instead would process thousands of past weather maps to learn how weather systems tend to behave. Then, using current weather data, they would make weather predictions based on what they've learned from the past.

Some studies have shown that machine learning-based forecast systems can predict general weather patterns as well as numerical weather prediction models while using only a fraction of the computing power the models require. These new tools don't yet forecast the details of local weather that people care about, but with many researchers carefully testing them and inventing new methods, there is promise for the future.

There are also reasons for caution. Unlike numerical weather prediction models, forecast systems that use machine learning are not constrained by the physical laws that govern the atmosphere. So it's possible that they could produce unrealistic results, for example, forecasting temperature extremes beyond the bounds of nature. And it is unclear how they will perform during highly unusual or unprecedented weather phenomena.

And relying on AI tools can raise ethical concerns. For instance, locations with relatively few weather observations with which to train a machine learning system may not benefit from forecast improvements that are seen in other areas.

Another central question is how best to incorporate these new advances into forecasting. Finding the right balance between automated tools and the knowledge of expert human forecasters has long been a challenge in meteorology. Rapid technological advances will only make it more complicated.

Ideally, AI and machine learning will allow human forecasters to do their jobs more efficiently, spending less time on generating routine forecasts and more on communicating forecasts' implications and impacts to the public or, for private forecasters, to their clients. We believe that careful collaboration between scientists, forecasters and forecast users is the best way to achieve these goals and build trust in machine-generated weather forecasts.


AI: The pattern is not in the data, it’s in the machine – ZDNet

A neural network transforms input, the circles on the left, to output, on the right. How that happens is a transformation of weights, center, which we often confuse for patterns in the data itself.

It's a commonplace of artificial intelligence to say that machine learning, which depends on vast amounts of data, functions by finding patterns in data.

The phrase, "finding patterns in data," in fact, has been a staple phrase of things such as data mining and knowledge discovery for years now, and it has been assumed that machine learning, and its deep learning variant especially, are just continuing the tradition of finding such patterns.

AI programs do, indeed, result in patterns, but, just as "The fault, dear Brutus, lies not in our stars but in ourselves," the fact of those patterns is not something in the data, it is what the AI program makes of the data.

Almost all machine learning models function via a learning rule that changes the so-called weights, also known as parameters, of the program as the program is fed examples of data, and, possibly, labels attached to that data. It is the value of the weights that counts as "knowing" or "understanding."

The pattern that is being found is really a pattern of how weights change. The weights simulate how real neurons are believed to "fire," following the principle formulated by psychologist Donald O. Hebb, which became known as Hebbian learning: the idea that "neurons that fire together, wire together."
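
The Hebbian rule itself is compact enough to state in a few lines of code. The sketch below is the textbook rule, with a teacher signal added purely for illustration; modern deep networks are trained by gradient descent instead, but the point stands that learning shows up only as changes in the weights.

```python
# A minimal numpy sketch of the Hebbian idea: a weight grows when its
# input and output units are active together. The teacher signal (the
# output is made to fire with input unit 0) is purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(4)   # weights from 4 input units to 1 output unit
eta = 0.1         # learning rate

for _ in range(200):
    x = (rng.random(4) > 0.5).astype(float)  # which input units fire
    y = x[0]      # for illustration, the output unit fires with input 0
    w += eta * x * y                          # fire together -> wire together

print(w)  # w[0] has grown most: the association is stored in the weights
```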


It is the pattern of weight changes that is the model for learning and understanding in machine learning, something the founders of deep learning emphasized. As expressed almost forty years ago, in one of the foundational texts of deep learning, Parallel Distributed Processing, Volume I, James McClelland, David Rumelhart, and Geoffrey Hinton wrote,

"What is stored is the connection strengths between units that allow these patterns to be created […] If the knowledge is the strengths of the connections, learning must be a matter of finding the right connection strengths so that the right patterns of activation will be produced under the right circumstances."

McClelland, Rumelhart, and Hinton were writing for a select audience, cognitive psychologists and computer scientists, and they were writing in a very different age, an age when people didn't make easy assumptions that anything a computer did represented "knowledge." They were laboring at a time when AI programs couldn't do much at all, and they were mainly concerned with how to produce a computation, any computation, from a fairly limited arrangement of transistors.

Then, starting with the rise of powerful GPU chips some sixteen years ago, computers really did begin to produce interesting behavior, capped off by the landmark ImageNet performance of Hinton's work with his graduate students in 2012 that marked deep learning's coming of age.

As a consequence of the new computer achievements, the popular mind started to build all kinds of mythology around AI and deep learning. There was a rush of really bad headlines likening the technology to super-human performance.


Today's conception of AI has obscured what McClelland, Rumelhart, and Hinton focused on, namely, the machine, and how it "creates" patterns, as they put it. They were very intimately familiar with the mechanics of weights constructing a pattern as a response to what was, in the input, merely data.

Why does all that matter? If the machine is the creator of patterns, then the conclusions people draw about AI are probably mostly wrong. Most people assume a computer program is perceiving a pattern in the world, which can lead to people deferring judgment to the machine. If it produces results, the thinking goes, the computer must be seeing something humans don't.

Except that a machine that constructs patterns isn't explicitly seeing anything. It's constructing a pattern. That means what is "seen" or "known" is not the same as the colloquial, everyday sense in which humans speak of themselves as knowing things.

Instead of starting from the anthropocentric question, "What does the machine know?" it's best to start from a more precise question: "What is this program representing in the connections of its weights?"

Depending on the task, the answer to that question takes many forms.

Consider computer vision. The convolutional neural network that underlies machine learning programs for image recognition and other visual perception is composed of a collection of weights that measure pixel values in a digital image.

The pixel grid is already an imposition of a 2-D coordinate system on the real world. Provided with the machine-friendly abstraction of the coordinate grid, a neural net's task of representation boils down to matching the strength of collections of pixels to a label that has been imposed, such as "bird" or "blue jay."

In a scene containing a bird, or specifically a blue jay, many things may be happening, including clouds, sunshine, and passersby. But the scene in its entirety is not the thing. What matters to the program is the collection of pixels most likely to produce an appropriate label. The pattern, in other words, is a reductive act of focus and selection inherent in the activation of neural net connections.

You might say, a program of this kind doesn't "see" or "perceive" so much as it filters.
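
A minimal sketch makes the filtering view concrete: a small convolutional network, shown below in PyTorch, reduces a pixel grid to a handful of numbers and matches them to imposed labels. The architecture and label count are illustrative, not any production model.

```python
# A minimal sketch (assuming PyTorch) of the "filtering" view of image
# recognition: learned convolutional filters map a grid of pixel values
# to scores for imposed labels such as "bird" or "blue jay".
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 16 learned filters
            nn.ReLU(),
            nn.MaxPool2d(2),          # discard spatial detail: reductive focus
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse the whole scene to 32 numbers
        )
        self.head = nn.Linear(32, num_labels)  # match filters to labels

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# A 64x64 RGB "scene" in the 2-D pixel grid the text describes.
scene = torch.rand(1, 3, 64, 64)
scores = TinyClassifier()(scene)   # one score per imposed label
print(scores)
```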


The same is true in games, where AI has mastered chess and poker. In the full-information game of chess, for DeepMind's AlphaZero program, the machine learning task boils down to estimating, at each moment, the probability that a potential next move will ultimately lead to a win, loss or draw.

Because the number of potential future game board configurations cannot be calculated even by the fastest computers, the computer's weights cut short the search for moves by doing what you might call summarizing. The program summarizes the likelihood of a success if one were to pursue several moves in a given direction, and then compares that summary to the summary of potential moves to be taken in another direction.
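
The summarizing idea can be caricatured in a few lines: score each candidate move by a learned value estimate of where it leads, and pick the best summary. The stub below is only a toy; AlphaZero's actual search combines Monte Carlo tree search with a deep value-and-policy network.

```python
# A toy sketch of the "summarizing" idea: rather than searching every
# future board, compare candidate moves by a learned value estimate of
# the positions they lead to. The value function here is a stub.
import random

def value_estimate(position: str) -> float:
    """Stand-in for a learned network: estimated probability of winning."""
    random.seed(sum(map(ord, position)))  # deterministic toy scoring
    return random.random()

def pick_move(position, legal_moves, apply_move):
    # Summarize each direction of play by the value of the resulting
    # position, then choose the direction with the best summary.
    summaries = {m: value_estimate(apply_move(position, m)) for m in legal_moves}
    return max(summaries, key=summaries.get)

# Hypothetical usage, with strings standing in for board states:
best = pick_move("start", ["e4", "d4", "Nf3"], lambda pos, m: pos + " " + m)
print(best)
```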

Whereas the state of the board at any moment (the position of pieces, and which pieces remain) might "mean" something to a human chess grandmaster, it's not clear the term "mean" has any meaning for DeepMind's AlphaZero for such a summarizing task.

A similar summarizing task is achieved by the Pluribus program that in 2019 conquered the hardest form of poker, No-limit Texas hold'em. That game is even more complex in that it has hidden information, the players' face-down cards, and additional "stochastic" elements of bluffing. But the representation is, again, a summary of likelihoods at each turn.

Even in human language, what's in the weights is different from what the casual observer might suppose. GPT-3, the top language program from OpenAI, can produce strikingly human-like output in sentences and paragraphs.

Does the program "know" language? Its weights hold a representation of the likelihood of how individual words and even whole strings of text are found in sequence with other words and strings.

You could call that function of a neural net a summary, similar to AlphaZero or Pluribus, given that the problem is rather like chess or poker. But the possible states to be represented as connections in the neural net are not just vast, they are infinite given the infinite composability of language.

On the other hand, given that the output of a language program such as GPT-3, a sentence, is a fuzzy answer rather than a discrete score, the "right answer" is somewhat less demanding than the win, lose or draw of chess or poker. You could also call this function of GPT-3 and similar programs an "indexing" or an "inventory" of things in their weights.


Do humans have a similar kind of inventory or index of language? There doesn't seem to be any indication of it so far in neuroscience. Likewise, in the expression "to tell the dancer from the dance," does GPT-3 spot the multiple levels of significance in the phrase, or the associations? It's not clear such a question even has a meaning in the context of a computer program.

In each of these cases (chess board, cards, word strings) the data are what they are: a fashioned substrate divided in various ways, a set of plastic rectangular paper products, a clustering of sounds or shapes. Whether such inventions "mean" anything, collectively, to the computer, is only a way of saying that a computer becomes tuned in response, for a purpose.

The things such data prompt in the machine (filters, summarizations, indices, inventories, or however you want to characterize those representations) are never the thing in itself. They are inventions.


But, you may say, people see snowflakes and see their differences, and also catalog those differences, if they have a mind to. True, human activity has always sought to find patterns, via various means. Direct observation is one of the simplest means, and in a sense, what is being done in a neural network is a kind of extension of that.

You could say the neural network reveals what was always true in human activity for millennia: that to speak of patterns is to speak of something imposed on the world rather than a thing in the world. In the world, snowflakes have form, but that form is only a pattern to a person who collects and indexes them and categorizes them. It is a construction, in other words.

The activity of creating patterns will increase dramatically as more and more programs are unleashed on the data of the world, and their weights are tuned to form connections that we hope create useful representations. Such representations may be incredibly useful. They may someday cure cancer. It is useful to remember, however, that the patterns they reveal are not out there in the world, they are in the eye of the perceiver.
