Archive for the ‘Artificial Intelligence’ Category

Climate Action Study 2022: From Sustainability to Purpose – Explore what Consumers and Industry Experts Think About Artificial Intelligence -…

DUBLIN--(BUSINESS WIRE)--The "From Sustainability to Purpose: Climate Action" report has been added to ResearchAndMarkets.com's offering.

This report explores what consumers and industry experts think about artificial intelligence, including concerns such as data exploitation, and advantages such as increased efficiency and innovation. Case studies illustrate these developments. Topics explored include emerging technologies such as robots, in-car virtual reality and education robots.

Strategy Briefings offer unique insight into emerging trends world-wide. Aimed squarely at strategists and planners, they draw on the analyst's vast information resources to give top line insight across markets and within consumer segments.

Why buy this report?

Key Topics Covered:

1. Introduction

2. Meeting Consumer Needs

3. Conclusion

For more information about this report visit https://www.researchandmarkets.com/r/oyat2i

Go here to read the rest:
Climate Action Study 2022: From Sustainability to Purpose - Explore what Consumers and Industry Experts Think About Artificial Intelligence -...

Artificial intelligence drives the way to net-zero emissions – Sustainability Magazine

Op-ed: Aaron Yeardley, Carbon Reduction Engineer, Tunley Engineering

The fourth industrial revolution (Industry 4.0) is already happening, and it's transforming the way manufacturing operations are carried out. Industry 4.0 is a product of the digital era: as automation and data exchange in manufacturing technologies shift the central industrial control system to a smart setup that bridges the physical and digital world, addressed via the Internet of Things (IoT).

Industry 4.0 is creating cyber-physical systems that can network a production process enabling value creation and real-time optimisation. The main factor driving the revolution is the advances in artificial intelligence (AI) and machine learning. The complex algorithms involved in AI use the data collected from cyber-physical systems, resulting in smart manufacturing.

The impact that Industry 4.0 will have on manufacturing will be enormous, as operations can be automatically optimised to produce increased profit margins. However, the use of AI and smart manufacturing can also benefit the environment. The technologies used to optimise profits can also be used to produce insights into a company's carbon footprint and accelerate its sustainability. Some of these methods are available to help companies reduce their GHG emissions now. Other methods have the potential to reduce global GHG emissions in the future.

Scope 3 emissions are the emissions from a company's supply chain, covering both upstream and downstream activities. This means Scope 3 covers all of a company's GHG emission sources except those that are directly created by the company and those created from using electricity. It comes as no surprise that, on average, Scope 3 emissions are 5.5 times greater than the combined amount from Scope 1 and Scope 2. Therefore, companies should ensure all three scopes are quantified in their GHG emissions baseline.
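The arithmetic behind such a baseline is simple to sketch. The scope totals below are invented for illustration; only the 5.5x Scope 3 ratio comes from the figures quoted above.

```python
# Hypothetical scope totals in tonnes CO2e; the numbers are illustrative only.
scope1 = 1200.0   # direct emissions (e.g. on-site fuel combustion)
scope2 = 800.0    # emissions from purchased electricity
scope3 = 5.5 * (scope1 + scope2)  # supply-chain emissions, using the average ratio above

baseline = scope1 + scope2 + scope3
share3 = scope3 / baseline
print(f"Baseline: {baseline:.0f} tCO2e, Scope 3 share: {share3:.0%}")
```

With these placeholder numbers, Scope 3 alone accounts for roughly 85% of the baseline, which is why supply-chain transparency matters so much.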

However, in comparison to Scope 1 and Scope 2 emissions, Scope 3 emissions are difficult to measure and calculate. The major issues are a lack of transparency in supply chains, a lack of connections with suppliers, and complex industrial standards that provide misleading information.

AI-based tools can help establish baseline Scope 3 emissions for companies as they are used to model an entire supply chain. The tools can quickly and efficiently sort through large volumes of data collected from sensors. If a company deploys enough sensors across the whole area of operations, it can identify sources of emissions and even detect methane plumes.

A digital twin is an AI model that works as a digital representation of a physical piece of equipment or an entire system. A digital twin can help industry optimise energy management by using AI surrogate models to better monitor and distribute energy resources and provide forecasts to allow for better preparation. A digital twin will combine many sources of data and bring them onto a dashboard so that users can visualise them in real time. For example, a case study at Nanyang Technological University used digital twins across 200 campus buildings over five years and managed to save 31% in energy and 9,600 tCO2e. The research used IES's ICL technology to plan, operate, and manage campus facilities to minimise energy consumption.

Digital twins can be used as virtual replicas of building systems, industrial processes, vehicles and much more. The virtual environment enables more testing and iteration so that everything can be optimised to its best performance. This means digital twins can be used to optimise building management, creating smart strategies built around carbon reduction.
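As a toy illustration of that idea, a digital twin can be reduced to an object that mirrors live meter readings and exposes a forecast for energy planning. The class, the readings and the moving-average "surrogate" below are invented stand-ins for a real AI surrogate model, not a sketch of IES's technology.

```python
# A building's digital twin: mirrors sensor readings and offers a naive forecast.
class BuildingTwin:
    def __init__(self, name):
        self.name = name
        self.readings_kwh = []  # hourly energy readings mirrored from physical sensors

    def ingest(self, kwh):
        """Mirror one live meter reading into the virtual model."""
        self.readings_kwh.append(kwh)

    def forecast_next(self, window=3):
        """Naive surrogate model: average of the last `window` readings."""
        recent = self.readings_kwh[-window:]
        return sum(recent) / len(recent)

twin = BuildingTwin("Library")
for kwh in [120.0, 132.0, 128.0, 140.0]:
    twin.ingest(kwh)
print(twin.forecast_next())  # forecast based on the last three readings
```

A real deployment would replace the moving average with a learned model and feed the dashboard from many such twins at once.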

Predictive maintenance of machines and equipment used in industry is now becoming common practice because it saves companies the costs of performing scheduled maintenance, or of fixing broken equipment. The AI-based tool uses machine learning to learn how historical sensor data maps to historical maintenance records. Once a machine learning algorithm is trained using the historical data, it can successfully predict when maintenance is required based on live sensor readings in a plant. Predictive maintenance accurately models the wear and tear of machinery that is currently in use.
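The core idea, learning how historical sensor data maps to maintenance records and then scoring live readings, can be sketched with a deliberately simple model. The vibration figures and the single-threshold rule below are invented for illustration; a production system would train a proper machine-learning model on many sensor channels.

```python
# A minimal sketch of predictive maintenance: learn a decision rule from
# historical (sensor reading, maintenance needed) records, then apply it
# to live readings. All data here is invented.

def train_threshold(history):
    """history: list of (vibration_mm_s, needed_maintenance) pairs.
    Returns the midpoint between the highest healthy reading and the
    lowest pre-failure reading as a simple decision threshold."""
    healthy = [v for v, needed in history if not needed]
    failing = [v for v, needed in history if needed]
    return (max(healthy) + min(failing)) / 2

def needs_maintenance(threshold, live_reading):
    """Flag a live sensor reading that resembles past pre-failure conditions."""
    return live_reading > threshold

history = [(2.1, False), (2.4, False), (2.8, False),
           (4.6, True), (5.1, True), (5.9, True)]
threshold = train_threshold(history)
print(needs_maintenance(threshold, 3.0))  # healthy machine
print(needs_maintenance(threshold, 4.9))  # schedule maintenance
```

The same pattern, with a trained model in place of the threshold, is what lets the plant intervene only when the live telemetry says wear is accumulating.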

The best part of predictive maintenance is that it does not require additional costs for extra monitoring. Algorithms have been created that provide accurate predictions based on operational telemetry data that is already available. Predictive maintenance combined with other AI-based methods, such as maintenance time estimation and maintenance task scheduling, can be used to create an optimal maintenance workflow for industrial processes. Moreover, improving current maintenance regimes, which often contribute to unplanned downtime, quality defects and accidents, is appealing for everybody.

An optimal maintenance schedule produced from predictive maintenance prevents work that often is not required. Carbon savings will be made via the controlled deployment of spare parts, less travel for people to come to the site, and less hot-shot shipping of spare parts. Intervening with maintenance only when required, and not a moment too late, saves electricity, preserves efficiency (by preventing declining performance) and reduces human labour. Additionally, systems can employ predictive maintenance on pipes that are liable to spring leaks, to minimise the direct release of GHGs such as HFCs and natural gas. Thus, it has huge potential for carbon savings.

Research has shown that basing the scheduling of maintenance activities on predictive maintenance and maintenance time estimation can produce an optimal maintenance schedule (Yeardley, Ejeh, Allen, Brown, & Cordiner, 2021). The work optimised the scheduling by minimising costs based on plant layout, downtime, and labour constraints. However, scheduling can also be planned by optimising the schedule with respect to carbon emissions. In this situation, maintenance activities can be performed so that fewer journeys are made and GHG emissions are saved.

The internet of things (IoT) is the digital industrial control system: a network of physical objects that are connected over the internet by sensors, software and other technologies that exchange data with each other. In time, the implementation of the IoT will be worldwide and every single production process and supply chain will be available as a virtual image.

Open access to a worldwide implementation of the IoT has the potential to provide a truly circular economy. Product designers can use the information available from the IoT and create value from other people's waste. Theoretically, we could establish a world where manufacturing processes are all linked, so that there are zero extracted raw materials, zero waste disposed of and net-zero emissions.

Currently, the world has developed manufacturing processes one at a time, not as interconnected value chains across industries. It may be a long time until the IoT creates the worldwide virtual image required, but once it has, the technology is powerful enough to address losses from each process and exchange material between connected companies. Both materials and energy consumption can be shared to lower CO2 emissions drastically. It may take decades, but the IoT provides the technology to create a circular economy.

Conclusion

AI has enormous potential to benefit the environment and drive the world to net-zero. The current portfolio of research being conducted at the Alan Turing Institute (the UK's national centre for data science) includes projects that explore how machine learning can be part of the solution to climate change. For example, an electricity control room algorithm is being developed to provide decision support and ensure energy security for a decarbonised system. The national grid's electricity planning is improved by forecasting electricity demand and optimising the schedule. Further, Industry 4.0 can plan for the impact that global warming and decarbonisation strategies have on our lives.

Read this article:
Artificial intelligence drives the way to net-zero emissions - Sustainability Magazine

Artificial intelligence tapped to fight Western wildfires – Portland Press Herald – Press Herald

DENVER - With wildfires becoming bigger and more destructive as the West dries out and heats up, agencies and officials tasked with preventing and battling the blazes could soon have a new tool to add to their arsenal of prescribed burns, pickaxes, chain saws and aircraft.

The high-tech help could come by way of an area not normally associated with fighting wildfires: artificial intelligence. And space.

Lockheed Martin Space, based in Jefferson County, is tapping decades of experience managing satellites, exploring space and providing information for the U.S. military to offer ground crews more accurate data, more quickly. The company is talking to the U.S. Forest Service, university researchers and a Colorado state agency about how its technology could help.

By generating more timely information about on-the-ground conditions and running computer programs to process massive amounts of data, Lockheed Martin representatives say they can map fire perimeters in minutes rather than the hours it can take now. They say the artificial intelligence, or AI, and machine learning the company has applied to military use can enhance predictions about a fire's direction and speed.

"The scenario that wildland fire operators and commanders work in is very similar to that of the organizations and folks who defend our homeland and allies. It's a dynamic environment across multiple activities and responsibilities," said Dan Lordan, senior manager for AI integration at Lockheed Martin's Artificial Intelligence Center.

Lockheed Martin aims to use its technology, developed over years in other areas, to reduce the time it takes to gather information and make decisions about wildfires, said Rich Carter, business development director for Lockheed Martin Space's Mission Solutions.

"The quicker you can react, hopefully then you can contain the fire faster and protect people's property and lives," Carter said.

The concept of a regular fire season has all but vanished as drought and warmer temperatures make Western lands ripe for ignition. At the end of December, the Marshall fire burned 991 homes and killed two people in Boulder County. The Denver area just experienced its third driest-ever April with only 0.06 of an inch of moisture, according to the National Weather Service.

Colorado had more fire-weather alerts this April than in any other April in the past 15 years. Crews have quickly contained wind-driven fires that forced evacuations along the Front Range and on the Eastern Plains. But six families in Monte Vista lost their homes in April when a fire burned part of the southern Colorado town.

Since 2014, the Colorado Division of Fire Prevention and Control has flown planes equipped with infrared and color sensors to detect wildfires and provide the most up-to-date information possible to crews on the ground. The onboard equipment is integrated with the Colorado Wildfire Information System, a database that provides images and details to local fire managers.

"Last year we found almost 200 new fires that nobody knew anything about," said Bruce Dikken, unit chief for the agency's multi-mission aircraft program. "I don't know if any of those 200 fires would have become big fires. I know they didn't become big fires because we found them."

When the two Pilatus PC-12 airplanes began flying in 2014, Colorado was the only state with such a program conveying the information in near real time, Dikken said. Lockheed Martin representatives have spent time in the air on the planes recently to see if its AI can speed up the process.

"We don't find every single fire that we fly over and it can certainly be faster if we could employ some kind of technology that might, for instance, automatically draw the fire perimeter," Dikken said. "Right now, it's very much a manual process."

Something like the 2020 Cameron Peak fire, which at 208,663 acres is Colorado's largest wildfire, could take hours to map, Dikken said.

And often the people on the planes are tracking several fires at the same time. Dikken said the faster they can collect and process the data on a fire's perimeter, the faster they can move to the next fire. "If it takes a couple of hours to map a fire, what I drew at the beginning may be a little bit different now," he said.

Lordan said Lockheed Martin engineers who have flown with the state crews, using the video and images gathered on the flights, have been able to produce fire maps in as little as 15 minutes.

The company has talked to the state about possibly carrying an additional computer that could help crunch all that information and transmit the map of the fire to crews on the ground while still in flight, Dikken said. The agency is waiting to hear the results of Lockheed Martin's experiences aboard the aircraft and how the AI might help the state, he added.

Actionable intelligence

The company is also talking to researchers at the U.S. Forest Service Missoula Fire Sciences Laboratory in Montana. Mark Finney, a research forester, said it's early in discussions with Lockheed Martin.

"They have a strong interest in applying their skills and capabilities to the wildland fire problem, and I think that would be welcome," Finney said.

The lab in Missoula has been involved in fire research since 1960 and developed most of the fire-management tools used for operations and planning, Finney said. "We're pretty well situated to understand where new things and capabilities might be of use in the future and some of these things certainly might be."

However, Lockheed Martin is focused on technology and "that's not really been where the most effective use of our efforts would be," Finney said.

"Prevention and mitigation and preemptive kinds of management activities are where the great opportunities are to change the trajectory we're on," Finney said. Improving reactive management is unlikely to yield huge benefits because the underlying source of the problem is the fuel structure across large landscapes, as well as climate change.

Logging and prescribed burns, or fires started under controlled conditions, are some of the management practices used to get rid of fuel sources or create a more diverse landscape. But those methods have sometimes met resistance, Finney said.

As bad as the Cameron Peak fire was, Finney said the prescribed burns the Arapaho and Roosevelt National Forests did through the years blunted the blaze's intensity and changed the flames' movement in spots.

"Unfortunately, they hadn't had time to finish their planned work," Finney said.

Lordan said the value of artificial intelligence, whether in preventing fires or responding to a fire, is producing accurate and timely information for fire managers, what he called "actionable intelligence."

One example, Lordan said, is information gathered and managed by federal agencies on the types and conditions of vegetation across the country. He said updates are done every two to three years. Lockheed Martin uses data from satellites managed by the European Space Agency that updates the information about every five days.

Lockheed is working with Nvidia, a California software company, to produce a digital simulation of a wildfire based on an area's topography, condition of the vegetation, wind and weather to help forecast where and how it will burn. After the fact, the companies used the information about the Cameron Peak fire, plugging in the more timely satellite data on fuel conditions, and generated a video simulation that Lordan said was similar to the actual fire's behavior and movement.

While appreciating the help technology provides, both Dikken with the state of Colorado and Finney with the Forest Service said there will always be a need for ground-truthing by people.

Applying AI to fighting wildfires isn't about taking people out of the loop, Lockheed Martin spokesman Chip Eschenfelder said. "Somebody will always be in the loop, but people currently in the loop are besieged by so much data they can't sort through it fast enough. That's where this is coming from."

See the rest here:
Artificial intelligence tapped to fight Western wildfires - Portland Press Herald - Press Herald

Traffic lights using artificial intelligence could soon make gridlock a thing of the past – Study Finds

BIRMINGHAM, United Kingdom - Could artificial intelligence finally make your morning commute smooth and relatively traffic-free? Researchers from Aston University report that their new AI traffic light system effectively keeps the flow of traffic rolling and mitigates congestion by reading live camera footage and adapting traffic lights on the fly.

Simply put, if there are no cars coming from the other direction, say goodbye to those long red lights clogging up the street!

The AI utilizes a type of learning called deep reinforcement learning, which means the program understands when it isn't doing well (traffic is bad) and reacts. As time goes on, the algorithm learns more and more based on better results.

During a round of assessments, this first-of-its-kind AI outperformed all other tested methods. The other methods relied mostly on manually-designed phase transitions.

The research team developed and constructed a cutting-edge, photo-realistic traffic simulator called Traffic 3D to train the AI. Traffic 3D taught the program how to best react to various traffic and weather scenarios.

The AI was then tested on real junction footage. Sure enough, it adapted well to real traffic intersections despite being trained entirely on simulations up until that point. Study authors say this indicates the AI would be effective across many real-world settings.

"We have set this up as a traffic control game. The program gets a 'reward' when it gets a car through a junction. Every time a car has to wait or there's a jam, there's a negative reward. There's actually no input from us; we simply control the reward system," says Dr. Maria Chli, a reader in Computer Science, in a university release.
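The reward scheme Dr. Chli describes can be written down directly: a positive reward for each car that gets through the junction, and a negative reward for waiting cars or a jam. The specific weights below are invented for illustration; only the sign structure comes from the quote.

```python
# A sketch of the traffic-control reward signal: the learning agent is
# rewarded for cars cleared and penalised for cars left waiting or a jam.
# The weights (1.0, 0.5, 10.0) are illustrative assumptions.

def reward(cars_cleared, cars_waiting, jam=False):
    r = 1.0 * cars_cleared - 0.5 * cars_waiting
    if jam:
        r -= 10.0  # large penalty when the junction gridlocks
    return r

print(reward(cars_cleared=4, cars_waiting=2))            # 3.0
print(reward(cars_cleared=0, cars_waiting=6, jam=True))  # -13.0
```

A deep reinforcement learning agent would then adjust light phasing over many simulated episodes so as to maximise the accumulated reward.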

Today, most traffic light automation systems at junctions rely on magnetic induction loops: wires that sit on the road and register when cars pass over them. The program then reacts to that stimulus. This newly devised AI, however, is able to see high traffic volume before cars have even passed the lights, so it is much more responsive and can react more quickly.

"The reason we have based this program on learned behaviors is so that it can understand situations it hasn't explicitly experienced before. We've tested this with a physical obstacle that is causing congestion, rather than traffic light phasing, and the system still did well. As long as there is a causal link, the computer will ultimately figure out what that link is. It's an intensely powerful system," explains Dr. George Vogiatzis, senior lecturer in Computer Science at Aston University.

Capable of being set up to view any traffic junction, both real and simulated, the AI starts learning autonomously right away. Other areas can be tweaked as well. For example, the reward system can be manipulated to encourage fast passage for emergency vehicles. Importantly, though, the AI always teaches itself; it is never programmed with specific orders.

Ideally, study authors plan on testing the system on real roads this year.

The team presented their findings at the Autonomous Agents and Multi-agent Systems Conference 2022.

See more here:
Traffic lights using artificial intelligence could soon make gridlock a thing of the past - Study Finds

Predicting Others' Behavior on the Road With Artificial Intelligence – SciTechDaily

Researchers have created a machine-learning system that efficiently predicts the future trajectories of multiple road users, like drivers, cyclists, and pedestrians, which could enable an autonomous vehicle to more safely navigate city streets. If a robot is going to navigate a vehicle safely through downtown Boston, it must be able to predict what nearby drivers, cyclists, and pedestrians are going to do next. Credit: MIT

A new machine-learning system may someday help driverless cars predict the next moves of nearby drivers, pedestrians, and cyclists in real-time.

Humans may be one of the biggest roadblocks to fully autonomous vehicles operating on city streets.

If a robot is going to navigate a vehicle safely through downtown Boston, it must be able to predict what nearby drivers, pedestrians, and cyclists are going to do next.

Behavior prediction is a tough problem, however, and current artificial intelligence solutions are either too simplistic (they may assume pedestrians always walk in a straight line), too conservative (to avoid pedestrians, the robot just leaves the car in park), or can only forecast the next moves of one agent (roads typically carry many users at once).

MIT researchers have devised a deceptively simple solution to this complicated challenge. They break a multiagent behavior prediction problem into smaller pieces and tackle each one individually, so a computer can solve this complex task in real-time.

These simulations show how the system the researchers developed can predict the future trajectories (shown using red lines) of the blue vehicles in complex traffic situations involving other cars, bicyclists, and pedestrians. Credit: MIT

Their behavior-prediction framework first guesses the relationship between two road users (which car, cyclist, or pedestrian has the right of way, and which agent will yield) and uses those relationships to predict future trajectories for multiple agents.

These estimated trajectories were more accurate than those from other machine-learning models, compared to real traffic flow in an enormous dataset compiled by autonomous driving company Waymo. The MIT technique even outperformed Waymo's recently published model. And because the researchers broke the problem into simpler pieces, their technique used less memory.

"This is a very intuitive idea, but no one has fully explored it before, and it works quite well. The simplicity is definitely a plus. We are comparing our model with other state-of-the-art models in the field, including the one from Waymo, the leading company in this area, and our model achieves top performance on this challenging benchmark. This has a lot of potential for the future," says co-lead author Xin "Cyrus" Huang, a graduate student in the Department of Aeronautics and Astronautics and a research assistant in the lab of Brian Williams, professor of aeronautics and astronautics and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Joining Huang and Williams on the paper are three researchers from Tsinghua University in China: co-lead author Qiao Sun, a research assistant; Junru Gu, a graduate student; and senior author Hang Zhao PhD '19, an assistant professor. The research will be presented at the Conference on Computer Vision and Pattern Recognition.

The researchers' machine-learning method, called M2I, takes two inputs: past trajectories of the cars, cyclists, and pedestrians interacting in a traffic setting such as a four-way intersection, and a map with street locations, lane configurations, etc.

Using this information, a relation predictor infers which of two agents has the right of way first, classifying one as a "passer" and one as a "yielder." Then a prediction model, known as a marginal predictor, guesses the trajectory for the passing agent, since this agent behaves independently.

A second prediction model, known as a conditional predictor, then guesses what the yielding agent will do based on the actions of the passing agent. The system predicts a number of different trajectories for the yielder and passer, computes the probability of each one individually, and then selects the six joint results with the highest likelihood of occurring.
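The final selection step described above, scoring passer/yielder trajectory combinations individually and keeping the six most likely joint results, can be sketched as follows. The trajectory labels and probabilities below are invented placeholders, not M2I's actual outputs.

```python
# Sketch of joint-trajectory selection: score each (passer, yielder)
# trajectory pair by the product of the individual probabilities and keep
# the six most likely combinations. All probabilities are invented.
from itertools import product

passer_probs = {"straight": 0.6, "slow": 0.3, "stop": 0.1}
yielder_probs = {"wait": 0.5, "creep": 0.3, "cross": 0.2}

joint = {(p, y): pp * yp
         for (p, pp), (y, yp) in product(passer_probs.items(),
                                         yielder_probs.items())}
top6 = sorted(joint.items(), key=lambda kv: kv[1], reverse=True)[:6]
for pair, prob in top6:
    print(pair, round(prob, 3))
```

Scoring pairs by a product of individual probabilities is what makes the factored approach cheap: each predictor is evaluated on its own, and only the combination step considers them together.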

M2I outputs a prediction of how these agents will move through traffic for the next eight seconds. In one example, their method caused a vehicle to slow down so a pedestrian could cross the street, then speed up when they cleared the intersection. In another example, the vehicle waited until several cars had passed before turning from a side street onto a busy, main road.

While this initial research focuses on interactions between two agents, M2I could infer relationships among many agents and then guess their trajectories by linking multiple marginal and conditional predictors.

The researchers trained the models using the Waymo Open Motion Dataset, which contains millions of real traffic scenes involving vehicles, pedestrians, and cyclists recorded by lidar (light detection and ranging) sensors and cameras mounted on the company's autonomous vehicles. They focused specifically on cases with multiple agents.

To determine accuracy, they compared each method's six prediction samples, weighted by their confidence levels, to the actual trajectories followed by the cars, cyclists, and pedestrians in a scene. Their method was the most accurate. It also outperformed the baseline models on a metric known as overlap rate; if two trajectories overlap, that indicates a collision. M2I had the lowest overlap rate.
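An overlap-rate style check can be sketched as follows: two predicted trajectories "overlap" if the agents come within a collision radius at the same timestep. The radius and the sample paths below are illustrative assumptions, not the exact metric used in the benchmark.

```python
# Sketch of a trajectory-overlap check: predicted paths are sampled at
# matching timesteps, and an overlap means the agents come too close.
import math

def trajectories_overlap(traj_a, traj_b, radius=2.0):
    """traj_*: lists of (x, y) positions at matching timesteps.
    Returns True if the agents ever come within `radius` metres."""
    return any(math.dist(a, b) < radius for a, b in zip(traj_a, traj_b))

car = [(0, 0), (5, 0), (10, 0)]
cyclist_safe = [(0, 10), (5, 8), (10, 6)]
cyclist_risky = [(0, 4), (5, 1), (10, -1)]

print(trajectories_overlap(car, cyclist_safe))   # False
print(trajectories_overlap(car, cyclist_risky))  # True
```

The overlap rate for a model is then just the fraction of predicted trajectory pairs for which such a check fires; lower is better.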

"Rather than just building a more complex model to solve this problem, we took an approach that is more like how a human thinks when they reason about interactions with others. A human does not reason about all hundreds of combinations of future behaviors. We make decisions quite fast," Huang says.

Another advantage of M2I is that, because it breaks the problem down into smaller pieces, it is easier for a user to understand the model's decision-making. In the long run, that could help users put more trust in autonomous vehicles, says Huang.

But the framework can't account for cases where two agents are mutually influencing each other, like when two vehicles each nudge forward at a four-way stop because the drivers aren't sure who should be yielding.

They plan to address this limitation in future work. They also want to use their method to simulate realistic interactions between road users, which could be used to verify planning algorithms for self-driving cars or create huge amounts of synthetic driving data to improve model performance.

"Predicting future trajectories of multiple, interacting agents is under-explored and extremely challenging for enabling full autonomy in complex scenes. M2I provides a highly promising prediction method with the relation predictor to discriminate agents predicted marginally or conditionally, which significantly simplifies the problem," wrote Masayoshi Tomizuka, the Cheryl and John Neerhout, Jr. Distinguished Professor of Mechanical Engineering at the University of California at Berkeley, and Wei Zhan, an assistant professional researcher, in an email. "The prediction model can capture the inherent relation and interactions of the agents to achieve the state-of-the-art performance." The two colleagues were not involved in the research.

Reference: "M2I: From Factored Marginal Trajectory Prediction to Interactive Prediction" by Qiao Sun, Xin Huang, Junru Gu, Brian C. Williams and Hang Zhao, 28 March 2022, Computer Science > Robotics. arXiv:2202.11884

This research is supported, in part, by the Qualcomm Innovation Fellowship. Toyota Research Institute also provided funds to support this work.

Read the original post:
Predicting Others' Behavior on the Road With Artificial Intelligence - SciTechDaily