Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence: Advancing Applications in the CPI – ChemEngOnline

A convergence of digital technologies and data science means that industrial AI is gaining ground and producing results for CPI users

As data accessibility and analysis capabilities have rapidly advanced in recent years, new digital platforms driven by artificial intelligence (AI) and machine learning (ML) are increasingly finding practical applications in industry.

"Data are so readily available now. Several years ago, we didn't have the manipulation capability, the broad platform or cloud capacity to really work with large volumes of data. We've got that now, so that has been huge in making AI more practical," says Paige Morse, industry marketing director for chemicals at Aspen Technology, Inc. (Bedford, Mass.; http://www.aspentech.com). While AI and ML have been part of the digitalization discussion for many years, these technologies have not seen a great deal of practical application in the chemical process industries (CPI) until relatively recently, says Don Mack, global alliance manager at Siemens Industry, Inc. (Alpharetta, Ga.; http://www.industry.usa.siemens.com). "In order for AI to work correctly, it needs data. Control systems and historians in chemical plants have a lot of data available, but in many cases, those data have just been sitting dormant, not really being put to good use. However, new digitalization tools enable us to address some use cases for AI that until recently just weren't possible."

This convergence of technologies, from smart sensors to high-performance computing and cloud storage, along with advances in data science, deep learning and access to free and open-source software, has enabled the field of industrial AI to move beyond pure research to practical applications with business benefits, says Samvith Rao, chemical and petroleum industry manager at MathWorks (Natick, Mass.; http://www.mathworks.com). Such business benefits are wide-ranging, spanning realms from maintenance to materials science to emerging applications like supply-chain logistics and augmented reality (AR). MathWorks recently collaborated with a Shell petroleum refinery to use AI to automatically incorporate tagged equipment information into operators' AR headsets. "All equipment in the refinery is tagged with a unique code. Shell wished to extract these data from the images acquired from cameras in the field. First, image recognition and computer-vision algorithms were applied, followed by deep-learning models for object detection to perform optical character recognition. Ultimately, equipment metadata were projected onto AR headsets of operators in the field," explains Rao.
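The tag-reading flow Rao outlines (find the tag in a field image, read its characters, look up the equipment record to display) can be pictured with a short sketch. The version below is only an illustration under assumed names: EQUIPMENT_DB, read_equipment_tag and the sample tag code are invented, and generic OpenCV plus Tesseract OCR calls stand in for the trained deep-learning object detector the actual project used.

import cv2
import pytesseract

# Hypothetical metadata store keyed by equipment tag codes (illustrative only)
EQUIPMENT_DB = {"P-101A": {"type": "centrifugal pump", "last_inspection": "2021-03-14"}}

def read_equipment_tag(image_path: str) -> str:
    """Locate and read the tag text from a field camera image."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Simple Otsu thresholding stands in for a trained object detector
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary).strip()

def metadata_for_image(image_path: str) -> dict:
    tag = read_equipment_tag(image_path)
    # The looked-up record is what would be projected onto the AR headset
    return EQUIPMENT_DB.get(tag, {"error": f"unknown tag: {tag}"})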

Another major emerging area for industrial AI is in supply-chain management. "The application of AI in supply chains lets us look at different scenarios and consider alternatives. Feedback from data about what's actually occurred, including any surprising events, can be put into the model to develop better scenario options that appropriately reflect reality," says Morse.

With the wide variety of end-use applications and ever-expanding platform capabilities, determining the most streamlined way to adopt an AI-based platform into an existing process can seem daunting, but Colin Parris, senior vice president and chief technology officer at GE Digital (San Ramon, Calif.; http://www.ge.com/digital), classifies industrial AI into three discrete pillars that build upon each other to deliver value: early warning of problems, continuous prediction of problems and dynamic optimization. Data are, of course, paramount in realizing all three pillars. "For early warning, I have sensors to give the state of the plant, showing the anomalies when that state changes. Continuous prediction looks at condition-based data to avoid unplanned shutdowns. Here, I want to know the condition of the ball bearings, the corrosion in the pipes, understand the creep in the machines in order to then determine the best plan and not default to time-based maintenance, so I need a lot of data. And if I want to do optimization, I need even more data," says Parris. All of these data can be consolidated into a digital twin, which Parris defines as a living, learning model that is continuously updated to give an exact view of an asset (Figure 1). He emphasizes that model complexity is not a given. "I may be able to use a surrogate model, which is a slimmed-down version that doesn't need to know all the process nuances. I may only need to know about certain critical parts. The model will constantly use data and update itself to live."

FIGURE 1. A robust digital twin may look at an entire plant, or might be a slimmed-down model that considers only certain critical parts. The model should use AI to continuously update itself

GE Digital worked with Wacker Chemie AG (Munich, Germany; http://www.wacker.com) to apply a holistic AI hierarchy for asset-performance management (APM) at a polysilicon production plant in Tennessee. There are roughly 1,500 pressure vessels at the site, and maintenance on them takes six weeks, resulting in significant financial burden due to lost production time. "Regulatory compliance meant that these vessels were supposed to be maintained every two years. But, because we were able to actually capture the digital twin and show the current state of the asset, we helped the plant achieve API 580/581 certification, which says if a plant can show a certain level of condition-based capability, they can extend the maintenance interval anywhere from 5 to 10 years based upon the condition," explains Parris. With the early-warning and continuous-prediction pieces in place, the plant was experiencing improved availability and less downtime, and was able to begin looking at dynamic optimization. For Wacker, this included investigating specific product characteristics and intelligently adjusting the processes for higher-margin products. "That's the way it tends to work: you go in a stepwise fashion to ultimately get to optimization, but it's really hard to get to the optimization piece unless you first really understand the asset and have a digital twin that you know is learning as you make changes," adds Parris.

Furthermore, when implementing an advanced AI solution in a new or existing process, users must consider how the platform will be used and who will actually be using it. In the past, black-box AI solutions required users with some expertise in data science or advanced statistics, which often resulted in organizational data silos, says Allison Buenemann, industry principal for chemicals at Seeq Corp. (Seattle, Wash.; http://www.seeq.com). Now, the industry has more self-service offerings in the advanced analytics and ML space, meaning that users in many different roles can access the most relevant data and insights for their own unique job needs. "For instance, front-line process experts can hit the ground running, solving problems using complex algorithms from an unintimidating low- or no-code application experience. Executives and management teams can expect an empowered workforce solving high-value business problems with rapid time to insight," adds Buenemann. This democratization of data analytics and ML across organizations means that all stakeholders can work together to drive business value. "Users must be able to document the thought process behind an analysis and they also must be able to structure analysis results for easy consumption," she explains.

The massive growth in sensor volume and associated data availability has certainly helped to promote the applicability of AI in industrial environments, but computing power and network connectivity are also critical pieces of the puzzle. Yokogawa Electric Corp. (Tokyo, Japan; http://www.yokogawa.com) recently announced a proof-of-concept project to utilize fifth-generation (5G) mobile communications for AI-enabled process controllers. The project will focus on using 5G to remotely control the level in a network of water tanks. One of the major benefits of 5G connectivity in autonomous, realtime plant control, according to Hirotsugu Gotou, manager of Yokogawa's products control center, is its low-latency function, which means that the network can process a large volume of data with minimal delay. Yokogawa's cloud-based AI controller system employs reinforcement-learning technology to determine the optimal operation parameters for a particular control loop.

Understanding reinforcement-learning schemes, which build upon modern predictive control, is crucial for autonomous process control. "Reinforcement learning is a type of machine learning in which a computer learns to perform a task through repeated trial-and-error interactions with a dynamic environment," explains MathWorks' Samvith Rao. Such a platform develops its control policy in real time by interacting with the process, enabling the computer to make a series of decisions that maximize a reward metric for the task without human intervention and without being explicitly programmed to achieve the task. Robust mechanisms for safe operation of a fully trained model, and indeed, for safe operation of a plant, are high priorities for further investigation, he emphasizes.
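As a concrete picture of that trial-and-error loop, the toy sketch below trains a tabular Q-learning agent to hold a made-up tank level at a setpoint. Everything in it is hypothetical (the ToyTankEnv class, the reward of negative distance from setpoint, the learning constants) and it is not Yokogawa's or MathWorks' implementation; it only illustrates the interact, observe reward, update-policy cycle described above.

import random

class ToyTankEnv:
    """Hypothetical one-tank level-control environment (integer levels 0-10, target 5)."""
    def __init__(self):
        self.level = random.randint(0, 10)
    def step(self, action):                  # action: -1 drain, 0 hold, +1 fill
        self.level = max(0, min(10, self.level + action))
        reward = -abs(self.level - 5)        # closer to setpoint = higher reward
        return self.level, reward

q_table = {(s, a): 0.0 for s in range(11) for a in (-1, 0, 1)}
alpha, gamma, epsilon = 0.1, 0.9, 0.2        # learning rate, discount, exploration

env = ToyTankEnv()
state = env.level
for _ in range(5000):                        # repeated trial-and-error interactions
    if random.random() < epsilon:            # explore
        action = random.choice((-1, 0, 1))
    else:                                    # exploit the current policy
        action = max((-1, 0, 1), key=lambda a: q_table[(state, a)])
    next_state, reward = env.step(action)
    best_next = max(q_table[(next_state, a)] for a in (-1, 0, 1))
    # Standard Q-learning update toward the observed reward plus discounted future value
    q_table[(state, action)] += alpha * (reward + gamma * best_next - q_table[(state, action)])
    state = next_state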

In Yokogawa's reinforcement-learning proof-of-concept, the AI controls tank level and continuously receives sensor data on flowrate and level. Based on these data, the AI will learn about the operation and will repeat the process to derive the optimal operation parameters, explains Gotou. Yokogawa previously completed a demonstration project using its proprietary AI to control water level in a three-tank system (Figure 2), which showed that after 30 iterations of learning (taking less than 4 h), the AI agent was able to learn from its past decisions to determine the optimal control methods. Now, the company will work with mobile network provider NTT Docomo to construct a demonstration facility for cloud-based remote control of water-tank level and evaluate the communication performance of the 5G network for realtime, autonomous process control. 5G networks are not yet widely adopted in industrial settings, but other projects are also exploring these technologies for IIoT applications. In April, GE Research announced an initiative to test Verizon's 5G platform in a variety of industrial applications, including realtime control of wind farms. And last year, Accenture and AT&T began a 5G proof-of-concept project to develop 5G use cases for IIoT applications at a petroleum refinery in Louisiana operated by Phillips 66.

FIGURE 2. This demonstration unit includes a three-tank network in which an autonomous, reinforcement-learning-based scheme monitors and controls water level

Another important factor is the collaborative environment that has been fostered through open-source AI platforms, explains Gino Hernandez, head of global digital business for ABB Energy Industries (Zurich, Switzerland; http://www.abb.com). "As things become more open and more distributed, I think it's going to enable users to apply the technologies in a more meaningful way. The more people talk about the different models and their successes using open-source-type AI models, and being able to have platforms where they can import and run those models, is going to be key," he notes. In the past, vendors kept their platforms closed, which limited users to developing models only for a specific digital architecture. Now, says Hernandez, more AI platforms enable users to import models, including their own proprietary algorithms, from various sources to develop a more robust analytics program. "Some users have rich domain expertise and want to build their own platforms. They are looking for environments where they not only have the ability to potentially use vendor-developed algorithms, but also use their own algorithms and have a sandbox in which they can import their own models and begin to integrate them," he explains.

As with any digital technology, cybersecurity and protecting proprietary intellectual property (IP) are paramount, but Hernandez also brings up the idea of sharable IP as a major area of opportunity for industrial AI. "We see a lot of open sharing with users looking at different models related to machinery health in the open-source space. There are definite advantages for companies being open to sharing machinery-health data in multi-tenant cloud environments, because it helps us as an industry to better capture, understand and very quickly identify when there are systemic problems within pumps, sensors, PLCs or other elements," continues Hernandez. He also believes that the industry is becoming more comfortable with the ability to securely lock certain components of proprietary data within a platform, but still be able to share other selections of more generic data within a cloud environment. Facilitating and expediting this collaborative conversation will be key in accelerating the adoption and evolution of predictive machinery-health monitoring, which is among the more mature use cases for industrial AI, notes Hernandez.

One of the most prominent uses for industrial AI continues to be predictive maintenance. "Everybody's looking at how to get more throughput, and the easiest way to do that is to reduce your downtime with predictive maintenance," explains Clayton French, product owner, Digital Enterprise Labs, at Siemens Industry. Siemens has worked with Sinopec Group's (Beijing, China; http://www.sinopecgroup.com) Qingdao Refinery using AI to investigate critical rotating-equipment components and predict potential causes of downtime. "We took six months of data and did a feasibility study, which found that eighteen hours before compressor failure, they would have been notified that the asset was having a problem, potentially saving around $300,000," says French. In another project, French notes that Siemens conducted a feasibility study in which AI was able to detect an equipment failure almost a month in advance. Such models integrate correlation analysis, pattern recognition, health monitoring and risk assessment, among others.

Furthermore, when an anomaly is detected and a countermeasure is initiated in the plant to fix the problem, the AI can record the instance in its database. Then, the next time it senses that a similar failure is about to occur, the AI will recommend a similar countermeasure, which can reduce maintenance time in the long term. "This shows that the AI is learning and taking in all of these inputs. It continues to get better after its initial implementation," adds French. He emphasizes, however, that users should practice prudence in applying AI: "Not everything turns out to be worthwhile. In some cases, the AI can only predict something a few minutes before it happens, so you can't do anything actionable. Our studies point out what is actionable so that users can target the most effective things to monitor."

TrendMiner N.V. (Hasselt, Belgium; http://www.trendminer.com) recently introduced its custom-built Anomaly Detection Model, which uses ML optimized for learning normal operating conditions and detecting deviations in new incoming data. This ultimately helps to avoid sub-optimal process operation or downtime by allowing users to react at the first sign of an anomaly, ahead of productivity losses or equipment malfunctions, explains TrendMiner director of products Nick Van Damme. The ML model interfaces with TrendMiner's self-service time-series analytics platform by collecting sensor-data readings over a user-defined historical timeframe for the process or equipment being analyzed. Process and asset experts further prepare the data by leveraging built-in search and contextualization capabilities to filter out irrelevant data and confirm that the view is an accurate representation of normal, desired operating conditions. This prepared view is then used to train the Anomaly Detection Model to learn the desired process conditions by considering the unique relationships between the sensors. This allows detection of anomalies in other historical data and, more importantly, in new incoming data for a process. "The trained model will return whether a new datapoint is an outlier or not based on a given threshold and return an anomaly score. The higher the anomaly score, the more likely that the datapoint is an outlier," adds Van Damme. In a batch-process use case, the model was trained to recognize a good batch profile and use that as a benchmark to alert users of deviations. A dashboard (Figure 3) provides visualizations of the operating zones learned by the model, with the latest process data points overlaid (shown in orange in Figure 3). Such a visualization enables users to quickly evaluate current process conditions versus normal operating behavior.

FIGURE 3. A visualization engine driven by ML develops a dashboard where current incoming data can be quickly benchmarked against established operating conditions
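A minimal sketch of the learn-normal-then-score idea Van Damme describes, assuming scikit-learn's IsolationForest as a stand-in for TrendMiner's proprietary model; the sensor values and the threshold are invented for illustration. Note the sign handling: IsolationForest's decision_function returns lower values for more anomalous points, so the score is negated here to match the "higher score means more likely outlier" convention quoted above.

import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical "normal operating conditions": two sensors, 500 historical readings
normal_window = np.random.normal(loc=[50.0, 1.2], scale=[2.0, 0.05], size=(500, 2))
model = IsolationForest(random_state=0).fit(normal_window)

def score_point(new_point, threshold=0.1):
    """Return (anomaly_score, is_outlier); higher scores mean more anomalous."""
    score = -model.decision_function([new_point])[0]   # flip sign: higher = worse
    return score, score > threshold

print(score_point([50.5, 1.21]))   # close to the trained conditions: low score
print(score_point([70.0, 0.60]))   # far outside the learned zone: likely flagged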

Another maintenance example from AspenTech involves fouling in ethylene furnaces. "Typically, an operator will do periodic cleanouts of coke buildup on the furnaces, but what would be better is to get a better indication of when you actually need to do a cleanout, versus just scheduling it. So what companies are doing is taking the relevant furnace operating data and being able to predict fouling to prevent unplanned downtime. Users can be sure they are cleaning out the furnace before a real operational issue occurs," notes Morse.

On the optimization side, she highlights a case where AspenTech helped a polyethylene producer to streamline transitions between product grades to maximize production value. As catalysts are changed out to accommodate different production slates, there is a transition period where the resulting product is an off-grade material. The customer was able to apply an AI hybrid-model concept to look at how the reactors were actually performing, and was able to decrease the amount of transition material in terms of volume throughput, so they weren't wasting feedstock making a product they didn't want. By narrowing the transition time, they were also spending more reactor time making the preferred product instead of transition-grade material.

Rockwell Automation, Inc. (Milwaukee, Wis.; http://www.rockwell.com) has also done extensive work using AI to optimize catalyst yield and product selectivity in traditional polymerization processes. "We started using pure neural networks to try to learn polymer reaction coefficients. We lean more and more into the actual reaction kinetics and the material balance around the reactors, trying to control the polymer chain length in the reactor. This is how you can get a specific property, such as melt flow or a melt index, on a polymer," says Pal Roach, oil and gas industry consultant at Rockwell Automation. In a particular example involving Chevron Phillips, an AI-driven advanced control model was applied to cut transition times between polymer grades by four hours. This change also led to a 50% reduction in product variability. In another case involving a distillation unit for long-chain alcohols, an AI-driven scheme applied to a nonlinear controller helped to cut energy consumption by around 35% and significantly reduce product-quality variability, as well as associated waste. "There are going to be more and more of these types of AI applications coming as the industry refocuses and transitions into greener energy and more environmental safety and governance consciousness," predicts Roach.

Beyond predictive maintenance, companies are also starting to use AI to translate business targets (such as financial, quality or environmental goals) into process-improvement actions. "Maintenance is key, because when you're shut down, you're not making any product and you're losing money. So, once you address that problem, the next question asks: how can we run even better? Then you can start looking at process optimization," says Mack. The main problem for optimization, especially for complex production lines, is the correlation of the process variables with which operators are confronted, combined with the high numbers of DCS alarms that couldn't previously be evaluated. This issue is addressed by business-impact-driven anomaly detection. In the past, when operators would adjust setpoints for process variables, the changes would be only loosely tied to business objectives, such as product quality. Now, process data can be aligned with specific business targets using AI. "Anomalies we might detect in the data could be affecting quality or throughput. Then, using AI, users can categorize and rank these anomalies and their impact on business goals. The end result is that the process control system, as it sees these issues occurring, will prioritize them based on the business objectives of the company," he says, adding that such an AI engine could similarly be tied to a company's sustainability goals.

As chemical manufacturers are increasingly looking toward more sustainable feedstock options, bio-based processes, such as fermentation, are reaching larger scales and necessitating more precise and predictive control. "We have used AI on corn-to-bioethanol fermentation optimization and seen yield increases from 2 to 5%, so that means you're getting more alcohol from the same amount of corn. And we've also seen overall production capacity increases as high as 20%," says Michael Tay, advanced analytics product manager at Rockwell Automation. To build the AI model for fermentation, Tay explained that Rockwell began with classic biofermentation modeling, tuning the Michaelis-Menten equations, which predict the enzymatic rate of reaction, as the fundamental architecture. This enabled realtime control of the temperature profile in the fermenter. "You try to keep temperatures high, but then as alcohol concentration increases, you have to cool the reactor more so that the yeast gets more life out of it, because as the alcohol concentration goes up, the yeast performance goes down. The AI is showing dynamic recognition and adaptation of the fermentation profiles, so that's sort of the key to those yield improvements. But you're also getting more alcohol out of every batch," he adds. In addition to temperature-driven optimization, Rockwell has also used AI to improve the enzyme-dosing step in biofermentation processes. "If you have this causally correct model that is based on biological fundamentals, driven by data and AI, then you can optimize your batch yield to ultimately get more out of the yeast, which is your catalyst in the reactor," says Tay. AspenTech is also working on developing accurate AI and simulation models for bio-based processes like fermentation, as well as looking at advanced chemical-recycling models. "We're tuning those processes to be more efficient, and we're approaching predictability, but the feedstock variance will be something that we will be working on constantly," adds Paige Morse.
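For reference, the Michaelis-Menten relation Tay mentions has a compact textbook form; the short sketch below simply evaluates it with made-up Vmax and Km values and is not the tuned hybrid model Rockwell built.

def michaelis_menten_rate(substrate_conc, v_max, k_m):
    """Enzymatic reaction rate v = Vmax * [S] / (Km + [S])."""
    return v_max * substrate_conc / (k_m + substrate_conc)

# Example: when substrate concentration equals Km, the rate is half of Vmax
print(michaelis_menten_rate(substrate_conc=2.0, v_max=10.0, k_m=2.0))  # 5.0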

While AI and other digital tools have historically targeted operational and financial objectives, many chemical companies are increasingly looking at process metrics that specifically consider environmental initiatives, such as reducing emissions and waste. Seeq worked with a CPI company to deploy an automated model of a sulfur oxides (SOx) detector's behavior during the time periods when its range was exceeded. Typically, accurate emissions reporting becomes more challenging when vent-stack analyzers peg out at their limits, necessitating complex, manual calculations and modeling. Seeq's model development required event identification to isolate the data set for the time periods before, during and after a detector range exceedance occurred. "Regression models were fit to the data before and after the exceedance, and then extrapolated forward and backward to generate a continuous modeled signal, which is used to calculate the maximum concentration of pollutant," says Buenemann. The solution also compiled relevant visualizations into a single auto-updating report displaying data for the most recent exceedance event alongside visualizations tracking year-to-date progression toward permit limits, which enabled the company to make proactive process adjustments based on the SOx emissions trajectory.
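The before-and-after regression idea Buenemann describes can be sketched in a few lines. The numbers below are invented (an analyzer assumed to saturate at 100 between t = 10 and t = 20), and simple linear fits stand in for whatever model form Seeq actually used; the elementwise minimum of the extrapolated rise and decay lines gives a continuous tent-shaped signal over the gap, and its maximum estimates the peak concentration.

import numpy as np

# Hypothetical SOx readings on either side of the saturated interval
t_before, y_before = np.arange(0, 10), np.linspace(40, 98, 10)
t_after,  y_after  = np.arange(20, 30), np.linspace(97, 45, 10)

rise = np.poly1d(np.polyfit(t_before, y_before, 1))   # model of the ramp-up
fall = np.poly1d(np.polyfit(t_after,  y_after,  1))   # model of the decay

t_gap = np.arange(10, 20)
reconstructed = np.minimum(rise(t_gap), fall(t_gap))  # continuous modeled signal
print("estimated peak concentration:", reconstructed.max())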

AI plays a major role in reducing waste by helping to ensure product quality, explains MathWorks' Rao, citing the example of Japanese functional-films manufacturer Dexerials, which deployed an AI program for realtime detection of product defects. A deep-learning-based machine-vision system extracts the properties of product defects, such as color, shape and texture, from images, and classifies them according to the type of defect. The system was put in place to improve upon the manual inspection process, which was error-ridden and had low accuracy. The AI system not only improved accuracy, but also greatly reduced product and feedstock waste and frequent production stoppages.

Beyond improving day-to-day industrial operations, AI and ML technologies are also enabling advances in the synthesis of new materials and product formulations. In developing ML-powered digital technologies that encompass the chemical knowledge for synthetic processes and materials formulation, IBM (Armonk, N.Y.; http://www.ibm.com) took inspiration from sources very far removed from chemistry: image processing and language translation. "We learned that some of the technologies that have been developed for image processing were actually applicable in the context of materials formulation, so we took those concepts and brought them into the chemical space, allowing us to reduce the dimensionality of chemical problems," explains Teo Laino, distinguished researcher at IBM Research Europe. IBM is partnering with Evonik Industries AG (Essen, Germany; http://www.evonik.com) to apply such a scheme to aid in optimizing polymer formulations. "Quite often, when companies are working on formulating materials, such as polymers, the amount of data is relatively sparse compared to the dimensionality of the problem. The use of technologies that reduce the size of the problem means that there are fewer degrees of freedom, which are easier to match with available data. This is optimal, because users can make good use of data and can really see sensible benefits," he adds. Typically, optimizing a material to meet specific property requirements could take months, but IBM's platform for this inverse design process can significantly decrease that time, he says.

In designing a cognitive solution for chemical synthesis, IBM trained digital architectures that are normally used for translating between languages to create a digital solution that can optimize synthetic routes for molecules (Figure 4). "By starting with technologies typically used for natural language processing, we recast the problem of predicting the chemical reactivity between molecules as a translation problem between different languages," explains Laino. Notably, the ML scheme has been validated in a large number of test cases since IBM first made the platform (IBM RXN for Chemistry, rxn.res.ibm.com) freely available online in 2018. "This is one of the most complicated tasks in the materials industry today, and it is where ML can help to greatly speed up the design process. You can reduce the number of tests and trials and go more directly to the domain of the material formulation that is going to satisfy your requirements," says Laino.

FIGURE 4. AI can be used to quickly determine synthetic routes for new molecules
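To make the translation analogy concrete: a reaction written in SMILES notation can be split into tokens that play the role of words, with the reactants forming the source "sentence" and the product the target "sentence" a sequence-to-sequence model learns to produce. The toy tokenizer below is purely illustrative; the regex and the esterification example are assumptions for this sketch, not IBM RXN's internal code.

import re

SMILES_TOKEN = re.compile(r"(\[[^\]]+\]|Br|Cl|%\d{2}|[A-Za-z]|\d|\(|\)|=|#|\+|-|\.|/|\\|@|:)")

def tokenize(smiles: str) -> list:
    """Split a SMILES string into the 'words' a seq2seq model would consume."""
    return SMILES_TOKEN.findall(smiles)

# Reactants and product written as SMILES (esterification example)
print(tokenize("CCO.CC(=O)O"))   # source sentence: ethanol + acetic acid
print(tokenize("CC(=O)OCC"))     # target sentence: ethyl acetate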

"We built a community of more than 25,000 users that have been using the models almost 4 million times. You can use our digital models for driving and programming commercial automation hardware, and you can run chemical synthesis from home, wherever you have a web browser. It's a fantastic way of providing a work-from-home environment, even for experimental chemists," says Laino. IBM calls this technology IBM RoboRXN (Figure 5) and is using its ML synthesis capabilities for in-house research related to designing novel materials for atmospheric carbon-capture applications. IBM's ML platform has also been adopted by Diamond Light Source (Oxfordshire, U.K.; http://www.diamond.ac.uk), the U.K.'s national synchrotron science facility, to operate its fully autonomous chemical laboratory. "They are coupling their own automated lab hardware with IBM's predictive platform to drive their chemical-synthesis experiments," adds Laino.

Some of IBM's other notable projects include its ten-year relationship with the Cleveland Clinic for deployment of AI to advance life-sciences research and drug-design chemistry, and a collaboration with Mitsui Chemicals, Inc. (Tokyo; http://www.mitsuichemicals.com) to develop an AI-empowered blockchain platform promoting plastics circularity and materials traceability.

FIGURE 5. Open-source AI platforms enable experiments to be run remotely, bringing a new level of autonomous operations into chemistry laboratories

AI and ML are also proving to be effective technologies for accelerating the product-development cycle. Dow Polyurethanes (Midland, Mich.; http://www.dow.com/polyurethanes) and Microsoft collaborated to create the Predictive Intelligence platform for product formulation and development. The platform harnesses materials-science data captured from decades of formulations and experimental trials and applies AI and ML to rapidly develop optimal product formulations for customers, explains Alan Robinson, North America commercial vice president, Dow Polyurethanes. "Predictive Intelligence allows us to not only discover the chemistry and what a formulation needs to look like, but now we can also look at how we simulate trials. In the past, we'd be running numerous trials that take place over a period as long as 18 months, and now we can do that with a couple clicks of a button," says Robinson.

The demands of end-use polyurethane applications mean that finding the best chemistry for a particular product can be quite complex. "In a typical year we're releasing hundreds of new products, and in a typical formulation, there might be a dozen components that are individually mixed at different levels in different orders. We also have to think of all of the different tooling and equipment that the materials will be subject to, as well as the kinetics that have to be played out. So, the challenge was how to take all the kinetics, rheology and formulation data and create a system that could move us forward," explains Dave Parrillo, vice president of R&D, Dow Industrial Intermediates & Infrastructure.

To build such a complex platform, Dow relied on theory-based neural networks that incorporate critical correlations for kinetics and rheology. "In a typical neural network, you feed it lots of data, which it learns from, and behind the scenes, it's tuning its knobs and weighing different influences. We can now influence those knobs with theoretical correlations so that the system not only learns, but gets smarter over time, and also starts to explore spaces where we might not have as much data. It folds theoretical, empirical, semi-empirical and experimental information into a single tool," says Parrillo. One of the first major applications that Dow is trialing for the platform is polyurethane mattresses, with multiple applications to follow in 2022.

Customers might be looking at a number of parameter constraints, from hardness and density to airflow and viscoelastic recovery. "We've actually asked the AI engine to give us a series of formulations and then benchmark those formulations in the laboratory, and the accuracy is extraordinarily high," emphasizes Parrillo. The Predictive Intelligence platform will be available to customers beginning later this year.

FIGURE 6. AI can be used to rapidly and accurately validate pharmaceutical products for defects, which reduces manual inspection requirements

Once a product formulation is developed and manufacturing has begun, inspection and validation are key. Stevanato Group (Padua, Italy; http://www.stevanatogroup.com) recently launched an AI platform focused on visual inspection of biopharmaceutical products, looking at both particle and cosmetic defects (Figure 6). "AI can improve overall inspection performance in terms of detection rate and false rejection rate. AI can help to reduce false rejects and costly interventions to parameterize the machine during production," explains Raffaele Pace, engineering vice president of operations at Stevanato Group. Recently, trials of the automatic inspection platform have produced promising results, including the ability to reduce falsely rejected products tenfold, with up to 99.9% accuracy, using deep learning (DL) techniques. "Unlike traditional rule-based systems, DL models can generalize their predictions and be more flexible regarding variations," adds Pace. He also mentions that such advanced inspection performance can help to reduce the number of "gray" items, which are flagged on the production line but not rejected outright. Typically, such items require manual re-inspection, which adds time to the process. "This helps the entire process become more lean and have less waste, while maintaining and improving quality," he continues. The company is currently working to enhance detection accuracy for both liquid and lyophilized products, and is also developing an initiative to create pre-trained neural networks that could then adapt to specific defects and drugs. Producing such models will entail training the system with thousands of images, notes Pace.

Mary Page Bailey


How Artificial Intelligence has played a major role in fighting Covid – The National

From the personal to the professional and the micro to the macro-economic, the pandemic has highlighted just how crucial the state of global health and the policies that underpin it are to our collective survival and prosperity. Perhaps lesser appreciated, but certainly no less significant, is just how big a part Artificial Intelligence has to play, says a leading expert in the field.

"We've had an unprecedented amount of sharing of data globally, of live daily updates on data across the board, whether it has to do with death rates or infection rates. In the UK, we had our live tracker, we have track-and-trace that also collected data. All of this is underpinning the work that was being done to fight Covid. It is also what is ultimately the foundation for artificial intelligence," says Aldo Faisal, Professor of AI and Neuroscience at the Departments of Computing and Bioengineering at Imperial College London.

Prof Faisal leads the Brain and Behaviour Lab, which uses and develops statistical AI techniques to analyse data and predict behaviour, as well as producing medical-related robotics. Last year he was awarded a five-year UK Research and Innovation Turing AI Fellowship to develop an "AI Clinician" that will help doctors make complex decisions and relieve pressure on the NHS.

Having spent years harnessing the power of AI to develop better health care, Prof Faisal saw Covid-19 as no exception, and he redirected a large portion of his lab's resources to the national effort at the outset of the pandemic.

Just last month he and a team of researchers revealed their work in using machine learning to predict which Covid-19 patients in intensive care units might get worse and not respond positively to being turned onto their stomachs, a technique that is commonly used to improve oxygenation of the lungs.

"This only happened because we look at the trajectories of patients on a daily basis," says Prof Faisal, who first studied in Germany, where he received a number of awards and distinctions, before continuing his education as a Junior Fellow at the University of Cambridge.

In collaboration with a digital healthcare company, his lab ran a survey of Covid-19 symptoms worldwide with one million respondents which, though not yet peer-reviewed, has shown that standard Covid-19 symptoms, such as loss of taste and smell, are not consistent across countries.

"Suddenly symptoms in Africa or India present themselves very differently from symptoms in Europe. Why is that important? Because we're always talking about asymptomatic transmission, and the challenges [involved]," the German-born professor tells The National.

From lung scan imaging for preliminary detection to the rapid review of research and, of course, the worldwide dissemination of mortality figures, algorithms have been deployed far and wide to help better understand and combat the virus.

"I've seen things advance in weeks that would have taken probably a decade to happen. And the question is, how much of that legacy experience from a citizen's viewpoint is going to transform in the long term? What is acceptable?" asks Prof Faisal, who is also the Founding Director of the £20 million ($28.3m) UKRI Centre for Doctoral Training in AI for Healthcare.

Privacy, data and bias remain the omnipresent issues trailing behind the proliferation of AI across sectors, but a public health emergency like Covid-19 tends, for better or worse, to quieten such resistance.


Nevertheless, ardent proponents of AI welcome the legislative safeguards and frameworks they say would help foster greater trust among the public, as well as increased collaboration among institutions.

Addressing an online forum of AI healthcare experts earlier this year, the Conservative MP and former Minister of State, George Freeman, said governments had a difficult but important role to play in instilling excitement instead of fear into the public. "The big challenge in this space is to create a trust framework where people out on the streets can have confidence that this big system for using massive computer power to find value in the healthcare system is working for them, not on them," said the founder of Reform for Resilience, an international initiative aimed at promoting strategic reform of health care.

Mr Freeman said the steady rise of wellness and wearable technology in the healthcare industry suggests people are increasingly willing to take responsibility for their health but need better architecture to do so.

"We need to set some global international protocols and standards for what is and is not legitimate good practice use of AI," he said at the online forum.

"I think we need to frame AI within a UK system approach in which the public would have real trust that we're going to embed that properly in a system that will make the sacrifices of this last year mean that the next generation don't have to experience it."

Regardless of where the legislation is going, the increased integration of health care with personal digital technologies is unlikely to turn back. Utilising AI does not, however, mean dispensing with doctors and medical professionals, says Amr Nimer, a neurosurgeon at Imperial College NHS Trust and a colleague of Prof Faisal.

"There is a massive shortage of doctors worldwide. What AI can do is address some of the unmet personnel needs. The idea behind the deployment of AI agents is not to replace doctors or healthcare professionals, but to help automate some of the tasks that can be done much more efficiently by machines, so that we as healthcare professionals can concentrate on actual patient care. AI will augment, rather than replace, healthcare professionals," Mr Nimer told The National.

Over the past year the Dubai-born neurosurgeon has been working with Prof Faisal in the Brain Behaviour Lab on a project to train surgeons using AI.

"It's based on the principles of economy of movement and surgical efficacy. We use state-of-the-art motion sensors to collect movement data from expert surgeons, and then utilise AI algorithms to answer the questions: what defines manual surgical expertise, or what makes an expert, an expert? What does behavioural data show us about the manual skills of surgical experts [versus] novices? Once we have an entirely data-driven objective definition of expertise in a particular procedure, we can use AI algorithms to help junior surgeons perform that procedure much more efficiently on models, rather than practising on patients first," Mr Nimer said.

Showing the wide applicability of AI, this research project shares similar research principles with that undertaken by Prof Faisal's team last year with Formula E World Champion Lucas di Grassi. Wearing a wireless electroencephalogram helmet to track his brain activity, the racing driver had his eye and body movements monitored under real-time extreme conditions. The first-time experiment aimed to better understand how an expert driver performs, so that more targeted and useful information can be given to self-driving cars.

After more than a year responding to the severities of Covid-19, the healthcare system is overwhelmingly strained. The long-term direct and indirect health effects of the virus are still revealing themselves, but initial assessments suggest a long road of continued care ahead and waiting times to treat other illnesses are now several years long. Healthcare facilities will need a huge injection of both human and financial capital, as well as the latest technology has to offer in order to cope.

The crisis precipitated a hastening of AI's foray into the medical sphere, with an unprecedented sharing of data and collaboration across institutions. With medics facing ominous healthcare challenges for years to come, former sceptics may now be more willing to embrace tech that can lessen the burden. It remains to be seen, however, whether the government can provide the necessary regulatory framework to protect the interests of both the patient and the professional.


Temple University Health System Selects ElectrifAi’s Practical Artificial Intelligence Solutions to Improve Financial Performance and Reduce Risk -…

JERSEY CITY, N.J., May 5, 2021 /PRNewswire/ -- ElectrifAi, one of the world's leading companies in practical artificial intelligence (AI) and pre-built machine learning (ML) models, announced today its collaboration with Temple Health, which is a leading Philadelphia-based academic health system that is driving medical advances through clinical innovation, pioneering research and world-class education. Temple Health will leverage ElectrifAi's pre-built machine learning models for spend and contract to drive operational efficiency, cost savings, spending control, increased revenue and risk reduction.

ElectrifAi's 17 years of practical machine learning expertise with regard to spend analytics, contract management, customer/patient engagement and machine learning models will help optimize and improve the operations of Temple Health.

Edward Scott, CEO of ElectrifAi said: "For years, our customers in financial services, telecommunications and retail have been leveraging practical machine learning. It was only a matter of time before we integrated pre-built machine learning models into the healthcare environment. The healthcare community can now accelerate their machine learning efforts with our solutions to drive revenue uplift, cost reduction as well as profit and performance improvements in today's fast-changing business climate."

"ElectrifAi's advanced technology will significantly facilitate efficient contracting and financial accounting for Temple Health, with increased data-driven granularity," said Michael A. Young, MHA, FACHE, President and CEO of Temple University Health System and Temple University Hospital. "We look forward to a productive working relationship."

About ElectrifAi

ElectrifAi is a global leader in business-ready machine learning models. ElectrifAi's mission is to help organizations change the way they work through machine learning: driving revenue uplift, cost reduction as well as profit and performance improvement. Founded in 2004, ElectrifAi boasts seasoned industry leadership, a global team of domain experts, and a proven record of transforming structured and unstructured data at scale. A large library of Ai-based products reaches across business functions, data systems, and teams to drive superior results in record time. ElectrifAi has approximately 200 data scientists, software engineers and employees with a proven record of dealing with over 2,000 customer implementations, mostly for Fortune 500 companies. At the heart of ElectrifAi's mission is a commitment to making Ai and machine learning more understandable, practical and profitable for businesses and industries across the globe. ElectrifAi is headquartered in New Jersey, with offices located in Shanghai and New Delhi. To learn more visit www.electrifAi.net and follow us on Twitter @ElectrifAi and on LinkedIn.

About Temple Health

Temple University Health System (TUHS) is a $2.2 billion academic health system dedicated to providing access to quality patient care and supporting excellence in medical education and research. The Health System includes Temple University Hospital (TUH); TUH-Episcopal Campus; TUH-Jeanes Campus; TUH-Northeastern Campus; Temple University Hospital Fox Chase Cancer Center Outpatient Department; TUH-Northeastern Endoscopy Center; The Hospital of Fox Chase Cancer Center, together with The Institute for Cancer Research, an NCI-designated comprehensive cancer center; Fox Chase Cancer Center Medical Group, Inc., The Hospital of Fox Chase Cancer Center's physician practice plan; Temple Transport Team, a ground and air-ambulance company; Temple Physicians, Inc., a network of community-based specialty and primary-care physician practices; and Temple Faculty Practice Plan, Inc., TUHS's physician practice plan. TUHS is affiliated with the Lewis Katz School of Medicine at Temple University.

Temple Health refers to the health, education and research activities carried out by the affiliates of Temple University Health System (TUHS) and by the Katz School of Medicine. TUHS neither provides nor controls the provision of health care. All health care is provided by its member organizations or independent health care providers affiliated with TUHS member organizations. Each TUHS member organization is owned and operated pursuant to its governing documents.

Non-discrimination notice: It is the policy of Temple University Hospital and The Hospital of Fox Chase Cancer Center, that no one shall be excluded from or denied the benefits of or participation in the delivery of quality medical care on the basis of race, ethnicity, religion, sexual orientation, gender, gender identity/expression, disability, age, ancestry, color, national origin, physical ability, level of education, or source of payment.

SOURCE ElectrifAi

https://electrifai.net


How US cities are using artificial intelligence to boost vaccine uptake – Cities Today

US President Joe Biden yesterday announced a goal for 70 percent of the adult US population to have received at least one COVID-19 vaccine shot by July 4.

Cities are playing a key role in this historic vaccination effort, not only in terms of logistics and administration but also with respect to the critical component of resident engagement.

To maximise vaccine uptake, local governments are working to mitigate any resident concerns; to counter misinformation and distrust; and to clear up confusion about practicalities. To do this effectively they need to understand in close to real-time and at scale how citizens are feeling about vaccines.

That's why nineteen US cities and counties, including Los Angeles, Philadelphia, New Orleans and Newark, are using advanced sentiment analysis to help shape and scale their vaccine programmes.

The initiative is a collaboration between Israeli start-up Zencity and the Harvard Kennedy School's Ash Center, with funding from the Robert Wood Johnson Foundation and support from Bennett Midland.

Through the programme, the cities and counties are using Zencity's tools to collect and analyse organic feedback from publicly available sources such as social media posts, online channels and local news sites, alongside proactive resident input from community surveys.

Zencity uses artificial intelligence (AI) to classify and sort the data to identify key topics, trends, anomalies, and sentiment.

Each city will receive a report including insights on how opinions about the vaccine break down across demographic groups; trends and themes in community sentiment toward vaccination; misinformation that might need to be addressed; and recommendations for how to communicate about vaccines. Each city's results are benchmarked against the average results from the cohort.

Assaf Frances, Director of Urban Policy at Zencity, said: "These results will enable cities to make data-informed decisions as they continue to navigate vaccine rollout. This could mean anything from making the appointment scheduling process more accessible, if the results show that logistical hurdles have been a major barrier to mass vaccination, to providing more education around vaccine safety and efficacy to a particular segment of the population where the data is showing more hesitancy."

Deana Gamble, Communications Director, City of Philadelphia, told Cities Today: "We're currently in a pivotal moment where vaccine supply has never been greater yet there is still a significant amount of vaccine hesitancy, especially among communities of colour. We need to provide accurate and up-to-date information to those who are still unsure about the benefits of getting the vaccine and how to do so."

With this in mind, Philadelphia has launched the six-month #VaxUpPhilly marketing campaign.

Gamble said one key insight from Zencity was that Philadelphia residents report similar levels of intention to get the vaccine as the cohort average, but they are more likely to wait longer.

"This speaks to an intention to get vaccinated, yet less urgency, with residents indicating that they require more information or evidence, specifically by seeing more people they know get the vaccine," Gamble commented. "This shows us that the education efforts of our #VaxUpPhilly campaign, including the use of myth busters and trusted, credible messengers, are critical."

Philadelphia faced controversy early in its vaccine rollout. In January, the city cut ties with Philly Fighting COVID, a young start-up which was running the city's largest vaccination site, after it emerged the company had cancelled testing efforts and become a for-profit entity, and concerns were raised about its privacy policy. Philly Fighting COVID said it had the best intentions and had not sold or shared any data, but the incident was still damaging for the city.

Gamble said: "We certainly acknowledge the mistakes the administration made working with the group, which has necessitated rebuilding trust with the public about our vaccination programme. The insights gleaned from Zencity can help us better communicate with residents, which can help us overcome the challenges caused by Philly Fighting COVID."

Liana Elliott, Deputy Chief of Staff for New Orleans Mayor LaToya Cantrell, said that although New Orleans' vaccine rollout is going well, "we also are hitting our plateau a little bit earlier than we thought."

Understanding nuances around vaccine sentiment can help the city push through this.

"Generally, the hesitancy that we thought we were going to find was not nearly as prevalent in the communities that we expected," Elliott commented, noting lower levels of concern than anticipated in communities of colour and more of a tendency for conservative white men to have reservations.

Further, as in Philadelphia, while many people are willing to get vaccinated, some don't want to go first.

Elliott said: "We worked really hard to make sure that we are working with our community partners and getting proactive about talking to people about the vaccine and bringing vaccine events into communities."

This includes encouraging people to share when they have been vaccinated on social media, urging hospitality businesses to incentivise and support staff vaccinations, and making the inoculation process a positive one. For example, a brass band played to mark the opening of the vaccination site at the Ernest N. Morial Convention Center and local bars hosted "shots for shots" events, which Elliott described as "very New Orleans".

"These approaches have really encouraged people to go check it out and just go get [their vaccination] done," she said.

The Zencity analysis has also helped New Orleans to shape vaccine messages and understand who are the trusted ambassadors best placed to deliver them.

Research published in March by global communications company Edelman found that US residents most trust doctors, scientists and public health officials for vaccine information and are more likely to trust someone like themselves or their organisation's CEO than a government official. However, Zencity data showed that New Orleans Mayor LaToya Cantrell is one of the most trusted messengers for residents.

Feedback also highlighted some ways the city needed to simplify appointment booking. It then analysed sentiment to check the improvements were working, and this is a continuous process.

"If we start seeing more chatter about [something being] hard or [people not knowing] when or where to go, then that means something is broken in that chain of communication; we have got to go back and fix it," Elliott said.

She added that a key benefit of the programme with Zencity is: "It really helps us confirm that what we are seeing and experiencing anecdotally and locally as staff is in fact holding up across not only our city but across the country and across all the other cohort cities as well."

"Sometimes it's not necessarily that it informs or changes how we're doing things, but it affirms that we're going the right way and that what we're doing is working," she said.

A national report on getting residents on board with vaccinations will be published by Zencity, Harvard Kennedy School's Ash Center, the Robert Wood Johnson Foundation and Bennett Midland later this month.

Image: City of New Orleans




Top 10 Artificial Intelligence Innovation Trends to Watch Out For in 2021 – Analytics Insight

Although the COVID-19 pandemic affected many areas of industry, it did not lessen the impact of Artificial Intelligence on daily life. Thus, we can assume that AI-powered solutions will undoubtedly become more widely used in 2021 and beyond.

Here are the top 10 Artificial Intelligence (AI) innovation trends to watch out for this year:

Knowledge will become more available in the coming years, putting digital data at higher risk of hacking and phishing attempts. AI and new technologies will help security services combat malicious activities in all areas. With strengthened safety initiatives, AI can help prevent cybercrime in the future.

More unstructured data will be organized in the future using natural language processing and machine learning methods. Organizations can take advantage of these technologies to generate data that can be used by RPA (Robotic Process Automation) technology to automate transactional operations. RPA is one of the tech industry's fastest-growing segments. Its only drawback is that it can only work with structured data. Unstructured data can be easily translated into structured data with the aid of AI, resulting in valuable performance.

Many industries and companies have deployed AI-powered chatbots in the previous years. Better customer service automation is possible with AI chatbots. These conversational AI chatbots will begin to learn and develop their understanding and communication with customers in 2021.

The Covid-19 pandemic is quickly shifting automation priorities away from front-end processes toward back-end processes and business resilience. Intelligent Automation can, in reality, combine robotic and digital process automation with practical AI and low-code devices. While growing their operations, these innovations will help companies become more competitive and robust.

Quantum AI is set to grow in popularity as more businesses seek to implement the technology in supercomputers. Using quantum bits, quantum computers can tackle any possible problem much faster than traditional computers. This can be useful for processing and analyzing large sets of data in real-time, as well as rapidly predicting specific patterns. In the next decade, quantum AI is predicted to make significant advances in fields such as healthcare and banking.

RPA is one of the most revolutionary AI systems for automating repetitive tasks. On the desktop, it can effectively execute a high-volume, repetitive process without making a mess. It's possible that the job entails invoicing a customer. Furthermore, it can repeat the process several times a day, freeing up human time for more productive activities.

AI is now assisting the healthcare industry in a significant way and with high precision. AI can help healthcare facilities in a variety of ways by analyzing data and predicting different outcomes. AI and machine learning tools provide insights into human health and also propose disease prevention measures. AI technologies also enable doctors to monitor their patients' wellbeing from far away, thereby enhancing teleconsultation and remote care.

Artificial intelligence is a wonderful technology that, when combined with the power of the Internet of Things (IoT), can provide a powerful business solution. The convergence of these two technologies in 2021 would lead to significant changes in the automation domain.

Face recognition technology will evolve at a rapid pace in 2021 as a result of the recent Covid-19 problems. It uses biometrics to identify facial characteristics from photographs and videos, and then compares the information to an existing database.

Businesses can use edge computing to convert their daily data into actionable insights. It provides server and data-storage solutions for computers and apps to ensure smooth operation, while allowing for real-time data processing that is much more efficient than cloud computing. Edge computing will also improve the efficiency of cloud servers because it can be carried out on nodes.

