Archive for the ‘Machine Learning’ Category

AI and machine learning are improving weather forecasts, but they won’t replace human experts – The Conversation

A century ago, English mathematician Lewis Fry Richardson proposed an idea that was startling for its time: constructing a systematic, math-based process for predicting the weather. In his 1922 book, Weather Prediction by Numerical Process, Richardson tried to write equations that he could use, with hand calculations, to solve the dynamics of the atmosphere.

It didn't work because not enough was known about the science of the atmosphere at that time. "Perhaps some day in the dim future it will be possible to advance the computations faster than the weather advances and at a cost less than the saving to mankind due to the information gained. But that is a dream," Richardson concluded.

A century later, modern weather forecasts are based on the kind of complex computations that Richardson imagined, and they've become more accurate than anything he envisioned. Especially in recent decades, steady progress in research, data and computing has enabled a quiet revolution of numerical weather prediction.

For example, a forecast of heavy rainfall two days in advance is now as good as a same-day forecast was in the mid-1990s. Errors in the predicted tracks of hurricanes have been cut in half in the last 30 years.

There still are major challenges. Thunderstorms that produce tornadoes, large hail or heavy rain remain difficult to predict. And then there's chaos, often described as the "butterfly effect": the fact that small changes in complex processes make weather less predictable. Chaos limits our ability to make precise forecasts beyond about 10 days.

As in many other scientific fields, the proliferation of tools like artificial intelligence and machine learning holds great promise for weather prediction. We have seen some of what's possible in our research on applying machine learning to forecasts of high-impact weather. But we also believe that while these tools open up new possibilities for better forecasts, many parts of the job are handled more skillfully by experienced people.

Today, weather forecasters' primary tools are numerical weather prediction models. These models use observations of the current state of the atmosphere from sources such as weather stations, weather balloons and satellites, and solve equations that govern the motion of air.

These models are outstanding at predicting most weather systems, but the smaller a weather event is, the more difficult it is to predict. As an example, think of a thunderstorm that dumps heavy rain on one side of town and nothing on the other side. Furthermore, experienced forecasters are remarkably good at synthesizing the huge amounts of weather information they have to consider each day, but their memories and bandwidth are not infinite.

Artificial intelligence and machine learning can help with some of these challenges. Forecasters are using these tools in several ways now, including making predictions of high-impact weather that the models can't provide.

In a project that started in 2017 and was reported in a 2021 paper, we focused on heavy rainfall. Of course, part of the problem is defining "heavy": Two inches of rain in New Orleans may mean something very different than it does in Phoenix. We accounted for this by using observations of unusually large rain accumulations for each location across the country, along with a history of forecasts from a numerical weather prediction model.

We plugged that information into a machine learning method known as random forests, which uses many decision trees to split a mass of data and predict the likelihood of different outcomes. The result is a tool that forecasts the probability that rains heavy enough to generate flash flooding will occur.
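As a rough illustration of the approach (not the authors' actual pipeline), here is a minimal sketch using scikit-learn's random forest; the predictor fields, threshold and training data are hypothetical stand-ins:

```python
# Hedged sketch: train a random forest to estimate the probability of
# locally extreme rainfall. Features and labels here are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical predictors drawn from an NWP model's forecast fields, e.g.
# precipitable water, 6-hour forecast rainfall, low-level moisture flux.
X_train = rng.random((5000, 3))
# Label: 1 if observed rainfall exceeded the local climatological
# threshold (e.g., that station's 99th percentile), else 0.
y_train = (X_train.sum(axis=1) + rng.normal(0, 0.3, 5000) > 2.0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Probability that tomorrow's forecast conditions produce flash-flood rain.
X_tomorrow = np.array([[0.9, 0.8, 0.7]])
print(model.predict_proba(X_tomorrow)[:, 1])  # probability of the "heavy" class
```

The output is a calibrated-looking probability rather than a yes/no call, which is what lets forecasters weigh the risk of flash flooding for a given location.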

We have since applied similar methods to forecasting of tornadoes, large hail and severe thunderstorm winds. Other research groups are developing similar tools. National Weather Service forecasters are using some of these tools to better assess the likelihood of hazardous weather on a given day.

Researchers also are embedding machine learning within numerical weather prediction models to speed up computationally intensive tasks, such as predicting how water vapor is converted to rain, snow or hail.
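A common pattern for this kind of speed-up, sketched below under assumptions of our own (the "slow" physics routine and its input fields are placeholders, not a real parameterization), is to train a cheap regressor to mimic the expensive routine and then call the regressor inside the model loop:

```python
# Hedged sketch of a machine learning emulator for an expensive physics
# routine (e.g., cloud microphysics). slow_microphysics is a stand-in.
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_microphysics(state: np.ndarray) -> np.ndarray:
    # Placeholder for a costly calculation of precipitation from vapor.
    return np.tanh(state @ np.array([0.5, -0.2, 0.8]))

rng = np.random.default_rng(1)
states = rng.normal(size=(10000, 3))   # hypothetical temperature, humidity, vertical motion
targets = slow_microphysics(states)    # "truth" generated offline by the slow routine

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=1)
emulator.fit(states, targets)

# Inside the forecast loop, the fast emulator replaces the slow routine.
print(emulator.predict(rng.normal(size=(5, 3))))
```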

It's possible that machine learning models could eventually replace traditional numerical weather prediction models altogether. Instead of solving a set of complex physical equations as the models do, these systems would process thousands of past weather maps to learn how weather systems tend to behave. Then, using current weather data, they would make weather predictions based on what they've learned from the past.

Some studies have shown that machine learning-based forecast systems can predict general weather patterns as well as numerical weather prediction models while using only a fraction of the computing power the models require. These new tools don't yet forecast the details of local weather that people care about, but with many researchers carefully testing them and inventing new methods, there is promise for the future.

There are also reasons for caution. Unlike numerical weather prediction models, forecast systems that use machine learning are not constrained by the physical laws that govern the atmosphere. So it's possible that they could produce unrealistic results, for example forecasting temperature extremes beyond the bounds of nature. And it is unclear how they will perform during highly unusual or unprecedented weather phenomena.

And relying on AI tools can raise ethical concerns. For instance, locations with relatively few weather observations with which to train a machine learning system may not benefit from forecast improvements that are seen in other areas.

Another central question is how best to incorporate these new advances into forecasting. Finding the right balance between automated tools and the knowledge of expert human forecasters has long been a challenge in meteorology. Rapid technological advances will only make it more complicated.

Ideally, AI and machine learning will allow human forecasters to do their jobs more efficiently, spending less time on generating routine forecasts and more on communicating forecasts' implications and impacts to the public or, for private forecasters, to their clients. We believe that careful collaboration between scientists, forecasters and forecast users is the best way to achieve these goals and build trust in machine-generated weather forecasts.

AI: The pattern is not in the data, it’s in the machine – ZDNet

A neural network transforms input (the circles on the left) into output (on the right). What happens in between is a transformation of weights (center), which we often mistake for patterns in the data itself.

It's a commonplace of artificial intelligence to say that machine learning, which depends on vast amounts of data, functions by finding patterns in data.

The phrase "finding patterns in data" has been a staple of data mining and knowledge discovery for years, and it has been assumed that machine learning, and especially its deep learning variant, simply continues the tradition of finding such patterns.

AI programs do, indeed, result in patterns, but, just as "The fault, dear Brutus, lies not in our stars but in ourselves," the fact of those patterns is not something in the data, it is what the AI program makes of the data.

Almost all machine learning models function via a learning rule that changes the so-called weights, also known as parameters, of the program as the program is fed examples of data, and, possibly, labels attached to that data. It is the value of the weights that counts as "knowing" or "understanding."

The pattern that is being found is really a pattern of how weights change. The weights simulate how real neurons are believed to "fire," following the principle formulated by psychologist Donald O. Hebb, which became known as Hebbian learning: the idea that "neurons that fire together, wire together."
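A toy Hebbian update makes the point concrete: the "pattern" lives entirely in how the weight matrix changes, not in the inputs themselves. This is an illustrative sketch, not a model of biological neurons; raw Hebbian updates grow without bound, which is why normalized variants such as Oja's rule exist.

```python
# Toy Hebbian learning: weights grow where pre- and post-synaptic
# activity coincide ("fire together, wire together").
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_outputs = 4, 2
W = rng.normal(0, 0.1, size=(n_outputs, n_inputs))  # the "knowledge" lives here
eta = 0.01                                          # learning rate

for _ in range(100):
    x = (rng.random(n_inputs) > 0.5).astype(float)  # presynaptic activity
    y = W @ x                                       # postsynaptic response
    W += eta * np.outer(y, x)                       # Hebbian weight update

print(W)  # the learned "pattern" is here, not in the random inputs
```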

Also: AI in sixty seconds

It is the pattern of weight changes that is the model for learning and understanding in machine learning, something the founders of deep learning emphasized. As James McClelland, David Rumelhart, and Geoffrey Hinton wrote almost forty years ago in one of the foundational texts of deep learning, Parallel Distributed Processing, Volume I:

What is stored is the connection strengths between units that allow these patterns to be created […] If the knowledge is the strengths of the connections, learning must be a matter of finding the right connection strengths so that the right patterns of activation will be produced under the right circumstances.

McClelland, Rumelhart, and Hinton were writing for a select audience, cognitive psychologists and computer scientists, and they were writing in a very different age, an age when people didn't make easy assumptions that anything a computer did represented "knowledge." They were laboring at a time when AI programs couldn't do much at all, and they were mainly concerned with how to produce a computation, any computation, from a fairly limited arrangement of transistors.

Then, starting with the rise of powerful GPU chips some sixteen years ago, computers really did begin to produce interesting behavior, capped off by the landmark ImageNet performance of Hinton's work with his graduate students in 2012 that marked deep learning's coming of age.

As a consequence of the new computer achievements, the popular mind started to build all kinds of mythology around AI and deep learning. There was a rush of really bad headlines likening the technology to super-human performance.

Also: Why is AI reporting so bad?

Today's conception of AI has obscured what McClelland, Rumelhart, and Hinton focused on, namely, the machine, and how it "creates" patterns, as they put it. They were very intimately familiar with the mechanics of weights constructing a pattern as a response to what was, in the input, merely data.

Why does all that matter? If the machine is the creator of patterns, then the conclusions people draw about AI are probably mostly wrong. Most people assume a computer program is perceiving a pattern in the world, which can lead to people deferring judgment to the machine. If it produces results, the thinking goes, the computer must be seeing something humans don't.

Except that a machine that constructs patterns isn't explicitly seeing anything. It's constructing a pattern. That means what is "seen" or "known" is not the same as the colloquial, everyday sense in which humans speak of themselves as knowing things.

Instead of starting from the anthropocentric question, "What does the machine know?" it's best to start from a more precise question: "What is this program representing in the connections of its weights?"

Depending on the task, the answer to that question takes many forms.

Consider computer vision. The convolutional neural network that underlies machine learning programs for image recognition and other visual perception is composed of a collection of weights that measure pixel values in a digital image.

The pixel grid is already an imposition of a 2-D coordinate system on the real world. Provided with the machine-friendly abstraction of the coordinate grid, a neural net's task of representation boils down to matching the strength of collections of pixels to a label that has been imposed, such as "bird" or "blue jay."

In a scene containing a bird, or specifically a blue jay, many things may be happening, including clouds, sunshine, and passersby. But the scene in its entirety is not the thing. What matters to the program is the collection of pixels most likely to produce an appropriate label. The pattern, in other words, is a reductive act of focus and selection inherent in the activation of neural net connections.

You might say, a program of this kind doesn't "see" or "perceive" so much as it filters.
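To make the filtering concrete, here is a minimal convolutional classifier sketched in PyTorch; the architecture, label set and image size are invented for illustration, not taken from any production vision system:

```python
# Minimal sketch of a convolutional classifier: pixel grids in, label
# scores out. Whatever pixel statistics help predict the imposed label
# are kept; everything else in the scene is discarded.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, n_classes: int = 2):  # e.g., "blue jay" vs "not blue jay"
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)  # reduce the scene to label-relevant features
        return self.classifier(h)

net = TinyConvNet()
image = torch.randn(1, 3, 64, 64)     # stand-in 64x64 RGB image
print(net(image).softmax(dim=1))      # scores over the imposed labels
```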

Also: A new experiment: Does AI really know cats or dogs -- or anything?

The same is true in games, where AI has mastered chess and poker. In the full-information game of chess, for DeepMind's AlphaZero program, the machine learning task boils down to producing a probability score at each moment for how likely a potential next move is to lead ultimately to a win, loss or draw.

Because the number of potential future game board configurations cannot be calculated even by the fastest computers, the computer's weights cut short the search for moves by doing what you might call summarizing. The program summarizes the likelihood of success if one were to pursue several moves in a given direction, and then compares that summary to the summary of potential moves to be taken in another direction.
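In code, that summarizing step might be sketched like this; the value function below is a stub standing in for a learned network, not DeepMind's actual implementation:

```python
# Hedged sketch of "summarizing": score each candidate move with a learned
# value function and keep the best, rather than enumerating every future board.
from typing import Callable, List

def choose_move(state: str,
                legal_moves: List[str],
                value_fn: Callable[[str, str], float]) -> str:
    # value_fn(state, move) returns an estimated probability of winning
    # if play heads in that direction; the comparison happens over
    # summaries, not exhaustive search.
    return max(legal_moves, key=lambda m: value_fn(state, m))

# Toy stand-in value function, deterministic within a run.
toy_value = lambda state, move: 1.0 / (1 + abs(hash((state, move))) % 100)

print(choose_move("start", ["e4", "d4", "c4"], toy_value))
```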

Whereas the state of the board at any moment (the position of the pieces, and which pieces remain) might "mean" something to a human chess grandmaster, it's not clear the term "mean" has any meaning for DeepMind's AlphaZero for such a summarizing task.

A similar summarizing task is achieved by the Pluribus program that in 2019 conquered the hardest form of poker, no-limit Texas hold'em. That game is even more complex in that it has hidden information, the players' face-down cards, and additional "stochastic" elements of bluffing. But the representation is, again, a summary of likelihoods by each turn.

Even in human language, what's in the weights is different from what the casual observer might suppose. GPT-3, the top language program from OpenAI, can produce strikingly human-like output in sentences and paragraphs.

Does the program "know" language? Its weights hold a representation of the likelihood of how individual words and even whole strings of text are found in sequence with other words and strings.
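A drastically simplified sketch of that idea, a bigram model over a toy corpus, shows what an inventory of sequence likelihoods means in code; GPT-3's weights encode vastly richer conditional distributions, but the principle is the same:

```python
# Toy version of "weights as an inventory of sequence likelihoods":
# count which word follows which, then normalize into probabilities.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word: str) -> dict:
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

print(next_word_probs("the"))  # {'cat': 0.667, 'mat': 0.333}
```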

You could call that function of a neural net a summary, similar to AlphaZero or Pluribus, given that the problem is rather like chess or poker. But the possible states to be represented as connections in the neural net are not just vast, they are infinite, given the infinite composability of language.

On the other hand, given that the output of a language program such as GPT-3, a sentence, is a fuzzy answer rather than a discrete score, the "right answer" is somewhat less demanding than the win, lose or draw of chess or poker. You could also call this function of GPT-3 and similar programs an "indexing" or an "inventory" of things in their weights.

Also: What is GPT-3? Everything your business needs to know about OpenAI's breakthrough AI language program

Do humans have a similar kind of inventory or index of language? There doesn't seem to be any indication of it so far in neuroscience. Likewise, in the expression "to tell the dancer from the dance," does GPT-3 spot the multiple levels of significance in the phrase, or the associations? It's not clear such a question even has a meaning in the context of a computer program.

In each of these cases (chess board, cards, word strings) the data are what they are: a fashioned substrate divided in various ways, a set of plastic rectangular paper products, a clustering of sounds or shapes. Whether such inventions "mean" anything, collectively, to the computer is only a way of saying that a computer becomes tuned in response, for a purpose.

The things such data prompt in the machine (filters, summarizations, indices, inventories, or however you want to characterize those representations) are never the thing in itself. They are inventions.

Also: DeepMind: Why is AI so good at language? It's something in language itself

But, you may say, people see snowflakes and see their differences, and also catalog those differences, if they have a mind to. True, human activity has always sought to find patterns, via various means. Direct observation is one of the simplest means, and in a sense, what is being done in a neural network is a kind of extension of that.

You could say the neural network reveals what was always true in human activity for millennia: that to speak of patterns is to speak of something imposed on the world rather than something in the world. In the world, snowflakes have form, but that form is only a pattern to a person who collects and indexes them and categorizes them. It is a construction, in other words.

The activity of creating patterns will increase dramatically as more and more programs are unleashed on the data of the world, and their weights are tuned to form connections that we hope create useful representations. Such representations may be incredibly useful. They may someday cure cancer. It is useful to remember, however, that the patterns they reveal are not out there in the world, they are in the eye of the perceiver.

Also: DeepMind's 'Gato' is mediocre, so why did they build it?

Why CircleUp thinks machine learning may be the hottest item in consumer goods – CNBC

In this weekly series, CNBC takes a look at companies that made the inaugural Disruptor 50 list, 10 years later.

Disruptive companies have shaped the ever-growing consumer packaged goods industry in recent years, from the rise in plant-based products from companies like Beyond Meat and Impossible Foods to an increased focus on personal care products from CNBC Disruptor 50 companies like Beautycounter and Dollar Shave Club.

Consumer behaviors, demands, and expectations have started to flip the industry as well, with shoppers willing to go well beyond a grocery store shelf to find a product they want to buy. The viability of businesses built around direct-to-consumer, e-commerce, and social media has only further accelerated that.

In fact, the top 20 consumer packaged goods companies are estimated to grow five times slower than their smaller category competitors, according to an Accenture report. Add the growth of the category on top of that (overall consumer packaged goods volume sales grew 4.3% in 2021) and the emphasis on finding the next big thing has become even more important for companies and investors in the space, as well as the desire for founders with those ideas to access funding.

CircleUp, whose start as a crowdfunding platform that connected accredited investors with food and beverage start-ups landed it on the inaugural CNBC Disruptor 50 list, has looked to evolve alongside the industry. Having already launched its own early-stage investment fund called CircleUp Growth Partners and a credit business that has helped it support more than 500 different brands, its next step is to open its data platform up to the industry to further facilitate more investment.

Danny Mitchell, recently named CircleUp CEO after previously serving as CFO, said that with how quickly the industry is evolving, and with companies like Amazon and Instacart changing how consumers purchase products on top of social media platforms, the importance of data in this space is only growing.

"You may have point-of-sale data, or something focused on social media, but you need that holistic view to get a true picture of the category, the trends and the categories, as well as individual companies," Mitchell said. "The Fortune 100 companies in this space are concerned about their existing brands being cannibalized by up-and-coming brands that you may have never even known about or went from 1,000 followers to a million followers on Instagram in six months."

That has also meant staying on top of flavor and ingredient trends, with consumers perhaps more willing to try new products than ever before. Mitchell pointed to Asian-inspired sparkling water brand Sanzo, in which CircleUp Growth Partners led a $10 million Series A round in February, and which features flavors like lychee, calamansi lime, and yuzu ginger.

"You're asking these open-ended questions like is an ingredient as popular today as it was three years ago or even three months?" Mitchell said. "These are the kinds of things that we're trying to constantly analyze and that we can provide clients." Mitchell said Helio, the data platform, should appeal to those Fortune 100 brands trying to stay ahead of the curve with new products while also looking for possible acquisitions, investment firms, and even smaller companies looking for market insights as they grow revenue.

Answering those sorts of questions will likely become even more important as concerns over inflation and a potential recession heighten the focus on consumer spending.

Mitchell said that he believes consumer staples will continue to perform better than peer companies and that many of the early-stage companies that CircleUp is drawing attention to "have product fit but generally have revenue," making some of those bets a bit less risky.

"It's a difficult time but I think that the consumer space will perform better and the opportunities in M&A, and from a bottom-line return from an investment standpoint, are better than the other sectors that we face," he said.

While CircleUp is hoping to facilitate more activity in the CPG space, the company itself does not have any plans to enter the capital markets this coming year, Mitchell said, adding that he expects the company to "start looking at potential fundraising" next year.

Sign up for our weekly, original newsletter that goes beyond the annual Disruptor 50 list, offering a closer look at list-making companies and their innovative founders.

How AI and machine learning are reshaping the way transit systems move traffic patterns – REjournals.com

Of the many ways artificial intelligence and machine learning are poised to improve modern life, the promise of impacting mass transit is significant. The world is much different compared with the early days of the pandemic, and people around the world are again leveraging mobility and transit systems for work, leisure and more.

Across the U.S., traditional mass transit systems, including buses and subways, as well as personal vehicles, have returned to struggling through gridlock, rider levels and congestion. However, advanced AI and machine learning solutions built on cloud-based platforms are being deployed to reduce these frustrations.

Transportation presents exciting opportunities with AI

Transportation is one of the most important areas in which modern AI provides a significant advantage over conventional algorithms used in traditional transit system technology.

AI promises to streamline traffic flow and reduce congestion for many of today's busiest roadways and thoroughfares. Smart traffic light systems and the cloud technology platforms they operate on are now designed to manage and predict traffic more efficiently, which can save money and create more efficiencies not only for the cities themselves, but for individuals. AI and machine learning today can process highly complex data and traffic trends and suggest optimal routing for drivers in real time based on specific traffic conditions.

As a result of drastically improved processing power, transit system technologies are now used in various IoT (Internet of Things) devices to achieve, in real time, the image recognition and prediction that took place in legacy data centers during the last half century. This new decentralized architecture helps broaden the implementation of machine learning and AI.

Today's recognition algorithms offer enhanced insight into the mix of density, traffic and overall rate of flow. Furthermore, these optimized algorithms can leverage data points by region, resulting in streamlined patterns that reduce traffic problems while redistributing flow more optimally. Municipal transit systems then gain better decision-making power, and the control system has a much higher degree of failure tolerance than the legacy hub-and-spoke systems that preceded it.

AI is already impacting transit systems

These technologies are already being deployed around the country. As one example, the Santa Clara Valley Transportation Authority, in partnership with the City of San José, California, has been piloting a cloud-based, AI-powered transit signal priority (TSP) system that utilizes pre-existing bus-fleet tracking sensors and city communication networks to dynamically adjust the phase and timing of traffic signals to provide sufficient green clearance time to buses while minimally impacting cross traffic.

Because the new platform leverages pre-existing infrastructure, it required no additional hardware installations inside traffic signal cabinets or buses. And unlike traditional, location-based check-in and check-out TSP solutions, the platform processes live bus location information through machine learning models and makes priority calls based on estimated times of arrival. The platform has so far improved travel times on VTA's Route 77 by 18% to 20% overall, equating to a five- to six-minute reduction in signal delay.

The cloud-based transit signal priority system combines asset management and automation to produce a system capable of providing services to an entire region. Unlike hardware-based systems, this platform uses pre-existing equipment and leverages cloud technology to facilitate operations. This removes the need for vehicle detection hardware at the intersection because vehicle location is known through the CAD/AVL system. This enables both priority calls from greater distances away from signals and priority calls coordinated among a group of signals. Furthermore, the system provides real-time insights on which buses are currently receiving priority along with daily reports of performance metrics.

The advanced transit signal priority systems available today consist of two parts: a unit in the traffic cabinet and another unit placed on the vehicle. The transit priority logic is the same regardless of the detection and communication medium. When a vehicle is within predetermined boundaries, the system places a request to the signal controller for prioritization. Because the original systems used fixed detection points, signal controllers were configured with static estimated travel times. Since travel times depend on several environmental factors, the industry implemented GPS-based, wireless communication systems. With this method, vehicles found within detection zones replace the static detection points, and the vehicle's speed is used to determine arrival time.
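A hedged sketch of that check-in logic, with invented names and thresholds rather than any vendor's actual interface, might look like this:

```python
# Sketch of GPS-based TSP check-in: when a bus enters a detection zone,
# estimate its arrival time from its speed and place a priority request.
# All names, units and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BusState:
    distance_to_signal_m: float   # from the bus's GPS position
    speed_mps: float              # from the bus's GPS speed

DETECTION_ZONE_M = 400.0          # hypothetical check-in boundary

def maybe_request_priority(bus: BusState) -> Optional[float]:
    """Return the estimated arrival time (seconds) if a priority call is placed."""
    if bus.distance_to_signal_m > DETECTION_ZONE_M or bus.speed_mps <= 0:
        return None               # outside the detection zone: no call yet
    eta_s = bus.distance_to_signal_m / bus.speed_mps
    # In a real deployment this ETA would be sent to the signal controller,
    # which extends or advances the green phase accordingly.
    return eta_s

print(maybe_request_priority(BusState(300.0, 10.0)))  # 30.0 seconds out
```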

The platform allows cities to build upon current investments in infrastructure to deploy city-wide TSP. To enable safe and secure connections with traffic signals, each city requires just one additional device: a computer that resides at the edge and serves as the protective link between city traffic signals and the platform. It is designed to securely manage the information exchange between traffic lights and the cloud platform. It is the only additional hardware necessary, and depending on the existing city network configuration, the platform may receive vehicular data directly or via the city's network using secure connections.

Sophisticated process for prioritizing traffic

The system's method of placing priority calls to traffic signals is more sophisticated and is not constrained to fixed-point locations. Unlike the current state of the art, in which detecting a bus at a specific location starts a pre-programmed time of arrival, this platform uses a vectorized approach. In mathematics, a vector is an arrow representing a magnitude and a direction. In this platform's software, the arrow points in the direction of the traffic light and the magnitude is the travel time.

When the system is set up, traffic signals, bus routes and bus stops all get a digital representation on this vector. This produces a digital geospatial map on which software can track bus progression along bus routes. The result is a system that can dynamically place transit calls regardless of a vehicle's location: the system makes precise priority calls based on the expected time of arrival, which is the basis for all TSP check-in calls supported by all signal controller vendors. And because of the nature of the tracking algorithm, any significant change to the ETA can be accommodated. For example, if a bus was predicted to skip a bus stop but didn't, the system will detect the change and adjust the priority call accordingly.
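As a sketch of the idea, with assumed distances, dwell times and helper names (none of them taken from the actual platform), the ETA along the route can simply be recomputed whenever the bus's behavior diverges from the prediction:

```python
# Sketch of the vectorized approach: signals and stops are points along a
# route, bus progression is tracked continuously, and the ETA (the vector's
# "magnitude") is re-estimated as conditions change.
SIGNAL_POSITION_M = 1200.0            # signal's position along the route
STOP_POSITIONS_M = [400.0, 900.0]     # bus stops before the signal
DWELL_S = 20.0                        # assumed dwell time per served stop

def eta_to_signal(bus_position_m: float, speed_mps: float,
                  skipped_stops: set) -> float:
    """Travel time plus dwell at each remaining, non-skipped stop."""
    eta = (SIGNAL_POSITION_M - bus_position_m) / speed_mps
    for stop in STOP_POSITIONS_M:
        if bus_position_m < stop and stop not in skipped_stops:
            eta += DWELL_S
    return eta

# The bus was predicted to skip the stop at 900 m but actually served it,
# so the system recomputes the priority call with the longer ETA.
print(eta_to_signal(500.0, 10.0, skipped_stops={900.0}))  # 70.0 s
print(eta_to_signal(500.0, 10.0, skipped_stops=set()))    # 90.0 s
```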

The combination of AI, machine learning and cloud-based technology has great potential not only to improve the current mass transit system but to reimagine it altogether. This advanced technology is already proving how it can improve coordination between GPS, navigational apps, connected autos, and even taxi and ride-sharing services to efficiently combine into a single transit entity based on real-time data.

In the not-too-distant future, it is expected that connected self-driving cars and trucks will be more prevalent on the roads and highways, offering even greater potential for AI to reduce both the duration and risk of rapid mobility.

Timothy Menard is the founder and chief executive officer of LYT, a provider of cloud-based smart traffic solutions. LYT makes traffic lights smart by enabling them to see and respond to traffic. By doing so, LYT can prioritize first responders and public transportation vehicles so they can get to their destinations faster and more safely. An additional benefit is that this streamlines overall traffic flow, helping to reduce congestion and emissions in high-traffic areas.

Syapse Unveils Two New Studies on Use of Machine Learning on Real-World Data to Identify and Treat Cancer With Precision at ASCO 2022 – GlobeNewswire

SAN FRANCISCO, May 27, 2022 (GLOBE NEWSWIRE) -- Syapse, a leading real-world evidence company dedicated to extinguishing the fear and burden of serious diseases by advancing real-world care, today announced two new studies focused on how the use of machine learning on real-world data can be used to power precision medicine solutions. Syapse will be presenting at the American Society for Clinical Oncology (ASCO) Annual Meeting being held June 3-7, 2022 in Chicago.

"This year's ASCO is centered on a theme of innovation to make cancer care more equitable, convenient and efficient. Two studies that we are presenting align well with this objective, with a focus on how machine learning can be applied to real-world data to better bring identification of patient characteristics, and specific patient cohorts of interest, to scale," said Thomas Brown, MD, chief medical officer of Syapse. "The transformational effort to pursue more personalized, targeted treatments for patients with cancer can be empowered by leveraging real-world data to produce insights in the form of real-world evidence, as a complement to classical clinical trials."

Unveiled at ASCO, the Syapse studies include:

In addition to presenting this research at ASCO, Syapse has created an online ASCO hub with more information about its research, its interactive booth experience and how its work with real-world evidence is transforming data into answers that improve care for patients everywhere. For ASCO attendees, please visit Syapse at booth #18143 during the show.

About Syapse

Syapse is a company dedicated to extinguishing the fear and burden of oncology and other serious diseases by advancing real-world care. By marrying clinical expertise with smart technologies, we transform data into evidence, and then into experience, in collaboration with our network of partners, who are committed to improving patients' lives through community health systems. Together, we connect comprehensive patient insights to our network, to empower our partners in driving real impact and improving access to high-quality care.

Syapse Contact: Christian Edgington, Media & Engagement, cedgington@realchemistry.com
