Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence assistant uses face recognition and thermal scanning to screen for COVID-19 – Vision Systems Design

An artificial intelligence system originally designed to greet event attendees has evolved into a COVID-19 screening system that protects Canada's largest and most valuable collection of operational, historic military vehicles.

Master Cpl. Lana, an AI assistant developed by CloudConstable (Richmond Hill, Ontario, Canada; http://www.cloudconstable.com), uses an Intel (Santa Clara, CA, USA; http://www.intel.com) RealSense 415 3D depth camera and a FLIR (Wilsonville, OR, USA; http://www.flir.com) Lepton 2.5 thermal imaging module to greet volunteers at the Ontario Regiment Museum (Oshawa, Ontario, Canada; http://www.ontrmuseum.ca) and screen them for COVID-19 infection.

The museum originally intended to deploy the AI as a greeter at the museum's front entrance, or to provide supplemental information at exhibits (Figure 1). The COVID-19 outbreak forced the museum to temporarily close to visitors, but volunteers still needed to perform maintenance on the museum's collection, so a second deployment of the technology was installed inside a vestibule in the vehicle garage.

The system's hardware attaches to several brackets on a wall mount assembly using pieces of wall track. The Lepton 2.5 module, which connects to the platform via a USB 2.0 cable, sits in a custom-fabricated housing placed above the screen. The RealSense 415 camera, which connects to the platform via a USB 3.0 cable, mounts to the same housing.

The housing attaches to a servomechanism, built from off-the-shelf parts, that controls a pan/tilt mount. If the subject's face is not entirely within the camera's FOV, as determined by the system's face detection inference models, the software issues servo motion control commands in real time from the hardware platform, via serial-over-USB API calls, to adjust the camera position until the subject's face is clearly visible. A speaker and microphone array sits below the screen.
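The closed-loop camera adjustment described above can be sketched in a few lines. The following is a hypothetical illustration of the control logic only, not CloudConstable's actual code; the deadband and gain values are assumptions.

```python
# Hypothetical sketch: compute pan/tilt corrections that re-center a
# detected face in the camera frame. Coordinates are normalized to
# [0, 1]; the servo is assumed to accept relative step commands.

def centering_steps(face_center, deadband=0.1, gain=20.0):
    """Return (pan_step, tilt_step) in degrees for one control tick.

    face_center: (x, y) of the detected face in normalized frame coords.
    Within the deadband around frame center (0.5, 0.5), no motion is sent.
    """
    x, y = face_center
    dx, dy = x - 0.5, y - 0.5
    pan = gain * dx if abs(dx) > deadband else 0.0
    tilt = gain * dy if abs(dy) > deadband else 0.0
    return pan, tilt
```

Repeating this each frame until both steps are zero yields the "adjust until the face is clearly visible" behavior the article describes.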


The company first experimented with webcams for the system, but those cameras lacked depth sensing capability, according to Michael Pickering, President and CEO at CloudConstable. The system only interacts with users standing within approximately two yards of the screen, which protects the privacy of anyone passing through the camera's FOV without interacting with the system.
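The depth-based privacy gate amounts to ignoring any detected face outside the interaction zone. A minimal sketch, assuming a detection list of (face_id, depth) pairs and an illustrative 1.8 m (roughly two-yard) threshold:

```python
# Hypothetical sketch of depth-gated engagement: the assistant only
# processes faces whose measured depth is within the interaction zone,
# so passers-by farther from the screen are ignored entirely.

INTERACTION_RANGE_M = 1.8  # roughly two yards; illustrative value

def subjects_to_engage(detections):
    """Filter (face_id, depth_m) detections to those inside the zone."""
    return [face_id for face_id, depth_m in detections
            if depth_m <= INTERACTION_RANGE_M]
```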

Other camera options evaluated included the Azure Kinect from Microsoft (Redmond, WA, USA; http://www.microsoft.com), which had the advantage of a built-in microphone array, and several models of depth camera from ASUS (Beitou District, Taipei, Taiwan; http://www.asus.com). CloudConstable had difficulty finding any of these cameras readily available in Canada, however, and chose the RealSense camera.

Affordability drove the selection of the FLIR Lepton 2.5 module, which is capable of radiometric calibration at an acceptable resolution for the application, as did the module's readily available API and SDK, says Pickering.

The AI's platform, an Intel NUC 9 Pro Kit, a PC measuring 238 x 216 x 96 mm, mounts behind the screen. The NUC 9 Pro includes an Intel Xeon E-2286M processor, 16 GB of DDR4-2666 memory, and an integrated UHD Graphics P630 GPU. CloudConstable chose the PC for its ability to also run a discrete GPU, in this case an ASUS Dual GeForce RTX 2070 MINI with 8 GB of GDDR6 memory, dedicated to graphics processing to ensure smooth, realistic animations. This allows inference processes to run strictly on the integrated GPU. The NUC 9 Pro also includes remote management software, allowing the company to provide off-site support.

Ambient light proves sufficient at most deployments of the AVA system, says Pickering. A simple LED light can provide extra illumination if required, such as inside the vestibule where museum volunteers go through their automated COVID-19 screening.

Volunteers stand in front of a high-definition Acer (San Jose, CA, USA; http://www.acer.com) display, on which Master Cpl. Lana appears (Figure 2). Pickering notes that the system supports multiple display types, however. The AI asks, and the volunteer answers, a set of COVID-19 screening questions, such as whether the volunteer is experiencing symptoms or has been exposed to anyone with the illness. The system then measures the volunteer's skin temperature using the thermal imaging module.

If the volunteer correctly answers the screening questions and passes the temperature scan, they are checked in by the system and proceed into the museum for their shift. According to Jeremy Blowers, executive director of the Ontario Regiment Museum, the procedure takes less than 60 seconds to complete.

If the screening questions are not answered correctly or the temperature scan fails, the system sends an SMS message to managers' phones informing them that a person in the facility has failed the COVID-19 screening. The volunteer does not learn that the screening failed, to avoid creating alarm. For example, a previous iteration of the system displayed a live infrared image on the monitor; Blowers asked CloudConstable to remove the image in case it showed elevated temperatures and upset the volunteer.
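The pass/fail flow described in the preceding paragraphs can be sketched as follows. The function name, the 38.0 °C threshold, and the on-screen messages are illustrative assumptions, not CloudConstable's actual implementation:

```python
# Hypothetical sketch of the screening flow: every question must be
# answered correctly AND the skin temperature must be below a threshold.
# On failure, managers are notified out of band (e.g. via SMS), while
# the volunteer sees a neutral message that does not reveal the result.

TEMP_THRESHOLD_C = 38.0  # illustrative threshold, not the real value

def screen(answers_ok, temp_c, notify):
    """Return the message shown to the volunteer.

    answers_ok: list of booleans, one per screening question.
    notify: callback invoked (e.g. to send an SMS) on a failed screening.
    """
    passed = all(answers_ok) and temp_c < TEMP_THRESHOLD_C
    if not passed:
        notify("A person in the facility has failed the COVID-19 screening")
    # The volunteer is never told the screening failed, to avoid alarm.
    return ("You are checked in." if passed
            else "Thank you. A staff member will assist you shortly.")
```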


In the case of a failed result, a human employee delivers a second set of screening questions. The employee also gives the volunteer time to cool down, to account for artificially elevated skin temperatures after, for example, working outside on a hot day. A second temperature scan with a handheld device then takes place, and management decides whether or not to allow the volunteer access to the building.

If volunteers want the system to recognize them, they must register with the software and allow the system to learn what their face looks like. A video teaches volunteers how to work with the AI so that she can recognize them, for instance by taking off their hats, eyeglasses, and/or masks during the registration process, says Blowers. Lana surprised museum staff by learning within two weeks how to recognize registered volunteers even with their masks on, Blowers adds.

Once a volunteer registers, the AI greets them, informs them they are checked in, and thanks them for volunteering at the museum, all by name. Museum management receives compiled reports on check-in, check-out, and total volunteer hours on site.

AVA's development began in the fall of 2018 using the Intel Distribution of OpenVINO Toolkit, open-source software designed to optimize deep learning models from pre-built frameworks and ease the deployment of inference engines onto Intel devices. CloudConstable used pre-trained convolutional neural network models for face detection and head pose detection, which the company supplemented with a rules-based algorithm driven by the inference results from the head pose model.

Because the AI only asks yes-or-no questions during the COVID-19 screening, Microsoft Azure's speech-to-text API suits this and other AVA deployments, says Pickering. Head pose detection algorithms can also determine whether the volunteer nods or shakes their head and translate the motion as a yes or no answer, respectively.
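One simple way to implement that nod/shake translation is to compare yaw and pitch excursions over the answer window. This is a hypothetical sketch of the idea, not CloudConstable's deployed algorithm, and the 10-degree threshold is an assumption:

```python
# Hypothetical sketch: classify a sequence of head-pose estimates.
# Sustained pitch oscillation reads as a nod ("yes"); sustained yaw
# oscillation reads as a shake ("no").

def answer_from_pose(poses, threshold_deg=10.0):
    """poses: list of (yaw_deg, pitch_deg) samples over the answer window.

    Returns "yes", "no", or None when neither motion dominates.
    """
    yaw_range = max(p[0] for p in poses) - min(p[0] for p in poses)
    pitch_range = max(p[1] for p in poses) - min(p[1] for p in poses)
    if pitch_range > threshold_deg and pitch_range > yaw_range:
        return "yes"   # head moving up and down: nodding
    if yaw_range > threshold_deg and yaw_range > pitch_range:
        return "no"    # head moving side to side: shaking
    return None        # ambiguous; fall back to the speech answer
```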

All data generated by interactions with the volunteers, including the answers given to the screening questions and the thermal scan results, is stored on the Microsoft Azure cloud service.

No false negatives in the COVID-19 screening results have been identified to date, as verified by a lack of reported cases among staff or their families, according to Blowers. False alarms have occurred, however, including two cases where volunteers had been working outside in 42°C weather while wearing black hats, which elevated their skin temperature.

CloudConstable is currently experimenting with the Intel RealSense 455 model for future AVA deployments. The camera has a wider FOV than the RealSense 415 and therefore presents less of a challenge for tall users. Both cameras use the same SDK, so the 455 can swap in for the 415 without any software updates. The larger 455 does require a larger mount than the 415, however.


Job Ads for AI Could Soon Look Like This. Are You Ready? – ExtremeTech


Wanted: Human Assistant to the Artificial Intelligence

We are seeking junior and mid-level human applicants to serve as data science assistants to our departmental artificial intelligence (AI) in charge of data analytics. Responsibilities include reviewing, interpreting, and providing feedback about analytics results to the AI, and writing summary reports of AI results for human communication. Requires ability to interact with vendors and information technology staff to provide hardware support for the AI. Experience collaborating with computer-based staff a plus. Must have good human-computer interaction skills. Formal training in the ethical treatment of computers and assessment of the fairness and bias of computer-generated results preferred.

The above is a job advertisement from the future, but not that far into it. It points to where we are going, and where we could be in perhaps as few as five years if we devote the resources and resolve to do the necessary research. Our recent past has shown that we can develop the types of machines that would soon open up a whole new field of lucrative and fulfilling work.

See, over the last decade, a new computer science discipline called automated machine learning, or AutoML, has rapidly developed. AutoML grew organically in response to the many challenges of applying machine learning to the analysis of big data for the purpose of making predictions about health outcomes, economic trends, device failures, and any number of other things in fields that are best served when data can be analyzed rapidly and comprehensively.

For run-of-the-mill machine learning to work, an abundance of choices must be made, from the optimal method for the data being analyzed to the parameters that should be chosen for that method. For perspective, there are dozens of popular machine learning methods, each with thousands or millions of possible settings. Wading through these options can be daunting for new users and experts alike.

The promise of AutoML, then, is that the computer can find the optimal approach automatically, significantly lowering the barrier to entry.
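The core idea can be illustrated with a toy search loop: enumerate candidate methods and settings, score each on held-out data, and return the best configuration automatically. Real AutoML tools search vastly larger spaces with much smarter strategies, so this is only a conceptual sketch:

```python
# Conceptual sketch of AutoML's core loop: try each candidate
# (method, settings) pair, evaluate it on validation data, and keep
# the best-scoring configuration without human intervention.

def auto_select(candidates, train, valid):
    """candidates: list of (name, fit_fn, settings) triples, where
    fit_fn(train, **settings) returns a model and model.score(valid)
    returns a validation score (higher is better).

    Returns (best_score, name, settings, model).
    """
    best = None
    for name, fit_fn, settings in candidates:
        model = fit_fn(train, **settings)
        score = model.score(valid)
        if best is None or score > best[0]:
            best = (score, name, settings, model)
    return best
```

A genuine AutoML system would replace the exhaustive loop with guided search (Bayesian optimization, genetic programming, and so on) over dozens of methods and huge hyperparameter grids.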

So how do we get to AutoML and to the job advertisement above? There are several hurdles.

The first is persistence. An artificial intelligence (AI) for AutoML must be able to analyze data continuously and without interruption. This means the AutoML AI needs to live in a robust, redundant, and reliable computing environment. This can likely be accomplished using currently available cloud computing platforms. The key advance is modifying the software to be persistent.

The second hurdle is memory and learning. An AutoML AI must have a memory of all the machine learning analyses it has run and learn from that experience. PennAI, which my colleagues and I developed, is an example of an open-source AutoML tool that has both, but there aren't many others. An important next step would be to give AutoML the ability to learn from failure. Current tools all learn from successes, but humans learn more from failure than success. Building this ability into an AutoML AI could be quite challenging, but it is necessary.

The third hurdle is explainability. A strength of human-based data science is our ability to ask each other why. Why did you choose that algorithm? Why did you favor one result over another? Current AutoML tools do not yet allow the user to ask.

The final hurdle is human-computer interaction (HCI). What is the optimal way for a human to interact with AI doing data analytics? What is the best way for a human to give an AI feedback or provide it with knowledge? While we have made great progress in the general space of HCI, our knowledge of how to interact with AIs remains in its infancy.

It is entirely conceivable that an AI for AutoML could be built within the next few years that is persistent and can learn from experience, explain the decisions it makes as well as the results it generates, interact seamlessly with humans, and efficiently incorporate and use expert knowledge as it tries to solve a data science problem. These are all active areas of investigation and progress will depend mostly on a dedicated effort to bring these pieces together.

All that said, automated and persistent AI systems will find their place in the near future, once we make a concerted effort to thoroughly research them. We should start preparing our human workforce for this reality. We will need vocational programs to train humans to interact with a persistent AI agent, in much the same way that we have programs to train those who work with and interpret specialized equipment, such as emergency room technicians. There will also need to be an educational culture shift on top of that training, as we will need to integrate AI interaction into courses covering communication, ethics, psychology, and sociology.

This technology is very much within reach. When we do reach it, we'll have a new, expansive field for human workers. Soon it will be time to write that job description, but only once we figure out some crucial problems.


Capgemini Press Release // Capgemini Research: Artificial Intelligence set to help organizations cut greenhouse gas emissions by 16% in the next 3-5…


Press contact: Michele Moore Duhen, Tel.: +44 370 905 3408, Email: michele.mooreduhen@capgemini.com

Capgemini Research: Artificial Intelligence set to help organizations cut greenhouse gas emissions by 16% in the next 3-5 years

48% of organizations surveyed are using AI for climate action, resulting in reduced greenhouse gas (GHG) emissions and improved power efficiency

Paris, November 17, 2020: Artificial Intelligence (AI) powered use cases for climate action have the potential to help organizations fulfil up to 45% of their Economic Emission Intensity (EEI) targets under the Paris Agreement. This is according to new research entitled Climate AI: How artificial intelligence can power your climate action strategy, from the Capgemini Research Institute, conducted in partnership with climate change start-up right. based on science. While AI offers many climate action use cases, scaled deployment is proving elusive, and just 13% of organizations are successfully combining climate vision with AI capabilities.

Two-thirds (67%) of organizations have set long-term business goals to tackle climate change. While many technologies address a specific outcome, such as carbon capture or renewable sources of energy, AI can accelerate organizations' climate action across sectors and value chains; and adoption is on the rise, as more than half of organizations (53%) are moving beyond pilots or proofs of concept.1 AI use cases include improving energy efficiency, reducing dependence on fossil fuels, and optimizing processes to aid productivity. Of the 800 sustainability and tech executives surveyed across 400 organizations in the automotive, industrial/process manufacturing, energy and utilities, consumer products, and retail industries, nearly half (48%) are using AI for climate action and, as a result, have reduced greenhouse gas (GHG) emissions by 12.9%, improved power efficiency by 10.9%, and reduced waste by 11.7% since 2017.

The potential positive impact of AI is significant. Organizations can expect to cut GHG emissions by 16% in the next three to five years through AI-driven climate action projects.2 Across the five sectors, the research finds that AI-powered use cases can deliver up to 45% of the Paris Accord requirement leading up to 2030.3 The consumer retail sector demonstrates the most potential for improvement using AI, at 45%, and wholesale retail the least, at 11%. By analyzing more than 70 climate action AI use cases, Capgemini identified the 10 with the biggest impact. Detailed in the report, these include energy consumption and optimization platforms, algorithms that automatically identify defects and predict failures without interrupting operations, and tracing leakages at industrial sites.

Successful deployment requires barriers to be overcome

Despite the considerable potential of AI for climate action, adoption remains low. This could be due to several barriers to progress.

European climate AI champions are leading the pack

Only 13% of organizations have aligned their climate vision and strategy with their AI capabilities; these are what Capgemini defines as climate AI champions.4 Two-fifths of these come from Europe, followed by the Americas and APAC. Climate AI champions are closer to the required Paris Agreement temperature contributions compared with their peers in both scope 1 and scope 2 emissions, and they have made considerable gains in applying AI to reduce direct emissions.

A clear knowledge gap is also emerging, as 84% of executives would rather compensate for (or offset) their carbon footprint than deploy technology solutions to reduce their footprint (16%) in the long run. This suggests a lack of awareness of AI's climate action potential. According to the report, organizations need to invest in AI and data science teams to understand how best to deploy AI and harness it positively for sustainability.

Leverage AI's full climate action potential, but also consider its impact

Despite technology advances, AI systems and solutions can potentially consume a lot of power and generate significant volumes of climate-changing carbon emissions. Before deploying AI use cases, organizations need to carefully assess the environmental impact, build greater awareness, and build AI solutions with sustainability as a core design principle, to ensure that the benefits of their AI deployments outweigh their emissions cost.

"Addressing climate change is everyone's responsibility, and AI has the potential to make a significant impact, yet only a fraction of organizations are actively using this technology to its full potential," says Anne-Laure Thieullent, Vice President, Artificial Intelligence and Analytics Group Offer Leader at Capgemini. "For climate action as well, execution starts from the top of the organization, by aligning the use of data and AI to its strategic corporate agenda, with sustainability at the heart of it. Without this clear direction, there is a missing link between intention and technology prioritization and execution. Organizations have the opportunity to prioritize the deployment of AI solutions to address their sustainability goals. Frameworks now exist to educate, build awareness, establish scalable operating models, and manage data to deliver tangible business outcomes with AI applied to climate action. And of course, this requires AI solutions to be designed, built, deployed, and monitored with sustainable design principles to ensure overall positive environmental impact."

For further information and the recommendations based on the research, access the full report here.

Research Methodology

Capgemini surveyed 800 executives from 400 organizations. Each organization had two respondents: one sustainability executive and one business or technology executive. In addition to the survey of executives, Capgemini surveyed a panel of 300 experts: regulators, academics, and AI subject matter experts. Capgemini complemented the surveys with in-depth interviews of over 40 sustainability experts, business/tech experts, AI practitioners and startups, think tanks, and academics working in the field of AI and/or climate change. Capgemini also partnered with right. based on science, for its expertise in the XDC Model methodology, to estimate and quantify the impact of AI on the GHG emissions of organizations. It is the only methodology of its kind to integrate a full climate model (also used by the UN Intergovernmental Panel on Climate Change). It is science-based, peer-reviewed, forward-looking, Task Force on Climate-related Financial Disclosures (TCFD) compatible, aligned with the EU Green Deal, transparent, and open source (currently for academia; fully open source from 2021).

About Capgemini

Capgemini is a global leader in consulting, digital transformation, technology, and engineering services. The Group is at the forefront of innovation to address the entire breadth of clients' opportunities in the evolving world of cloud, digital, and platforms. Building on its strong 50-year heritage and deep industry-specific expertise, Capgemini enables organizations to realize their business ambitions through an array of services from strategy to operations. A responsible and multicultural company of 265,000 people in nearly 50 countries, Capgemini's purpose is to unleash human energy through technology for an inclusive and sustainable future. With Altran, the Group reported 2019 combined global revenues of €17 billion. Visit us at www.capgemini.com.

About the Capgemini Research Institute

The Capgemini Research Institute is Capgemini's in-house think tank on all things digital. The Institute publishes research on the impact of digital technologies on large traditional businesses. The team draws on the worldwide network of Capgemini experts and works closely with academic and technology partners. The Institute has dedicated research centers in India, Singapore, the United Kingdom, and the United States. It was recently ranked #1 in the world for the quality of its research by independent analysts. Visit us at https://www.capgemini.com/researchinstitute/

1 Source: Capgemini Research Institute, The AI-powered enterprise: unlocking the potential of AI at scale, July 2020.

2 According to the Capgemini survey, AI can potentially reduce GHG emissions by an average of 16% over the next 3-5 years across the Automotive, Manufacturing, Consumer Products, Retail, and Energy and Utilities sectors.

3 Capgemini uses the X-Degree Compatibility (XDC) Model, developed by right., to determine whether GHG emission reductions from AI will help align organizations' climate impact with a global warming level below 2°C. The XDC Model calculates the contribution of a company, portfolio, or any other economic entity to climate change, answering the question: how much global warming could we expect if the entire world operated at the same economic emission intensity as the entity in question? Results are expressed as a tangible degrees Celsius (°C) number: the XDC. This science-based climate metric expresses the temperature alignment of a company. The main input parameter for the XDC Model is a metric called Economic Emission Intensity (EEI). The EEI of an organization or a sector establishes a relationship between the emissions produced and each million euros of Gross Value Added (GVA) generated. Hence, the EEI shows the ability of an organization to decouple its economic growth from its emissions.
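Based on that public description, the EEI arithmetic can be sketched as follows. This is our reading of the stated definition (emissions per million euros of GVA), not right.'s actual implementation:

```python
# Sketch of the Economic Emission Intensity (EEI) as described above:
# emissions produced per one million euros of Gross Value Added.
# A lower EEI means the entity generates more economic value per unit
# of emissions, i.e. growth is more decoupled from emissions.

def economic_emission_intensity(emissions_tco2e, gva_eur):
    """Tonnes of CO2-equivalent emitted per EUR 1 million of GVA."""
    return emissions_tco2e / (gva_eur / 1_000_000)
```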

4 To understand which organizations have achieved alignment, and which are in pole position to turn AI's climate potential into action and value, Capgemini analyzed all surveyed organizations along two dimensions: climate action vision and AI capabilities execution. Climate AI champions have a mature climate change vision and strategy and a strong track record of AI implementation for climate action.


Next-Generation Industrial Robotic Capabilities Advanced by Artificial Intelligence – Robotics Tomorrow

As barriers between human activities and robotic capabilities diminish, moving beyond the fenced activities of last-generation industrial robots, new collaboration and workflow models are bringing humans and robots together in industry.

Case Study from Wind River

THE CHALLENGE

Emerging instances of AI-enabled cobots, autonomous vehicles, and non-piloted drone operations are part of an expanding array of innovative use cases in industrial robotics. Industrial robotics integrated with AI is predicted to spur market growth at a projected CAGR of more than 15% in coming years, reaching USD 66.48 billion by 2027, according to Fortune Business Insights. As barriers between human activities and robotic capabilities diminish, moving beyond the fenced activities of last-generation industrial robots, new collaboration and workflow models are bringing humans and robots together in industry. Despite these advances, however, expanding the range of use cases for robotics in Industrial IoT (IIoT) environments requires negotiating long-standing technical roadblocks, including the challenge of integrating diverse components across heterogeneous networks, employing machine learning to build and operate intelligent systems that adapt to workflows, and implementing responsive, low-latency communication services that interact with robotics systems in real time.

THE APPROACH

Artificial intelligence is critical to new robotics approaches. Rather than augmenting existing machine operations by bolting on AI-driven components, an AI-first approach puts intelligence at the forefront of the design process, performing at the core of a task. The focus is on building solutions that meld hardware and software to effectively use machine learning and AI-guided functions, performing operations with greater speed, reliability, security, and safety. As with digital transformation, the AI-first approach requires a rethinking of traditional design, transforming architectures to satisfy the solution requirements over the full lifecycle rather than just reorganizing and tinkering with existing solutions. The Wind River portfolio, with its multiple solutions and purpose-built embedded components, provides a flexible and agile foundation for meeting this need. Wind River solutions are elements of an extensive roadmap leading to the benefits and enhanced business value promised by today's industrial robotics.

A global leader in delivering software for intelligent connected systems, Wind River offers a comprehensive, end-to-end portfolio of solutions ideally suited to address the emerging needs of IoT, from the secure and managed intelligent devices at the edge, to the gateway, into the critical network infrastructure, and up into the cloud. Wind River technology is found in nearly 2 billion devices and is backed by world-class professional services and award-winning customer support.




Allied Solutions partners with leading Artificial Intelligence provider Interface, to offer an Intelligent Virtual Assistant to Financial Institutions…

SAN MATEO, Calif., Nov. 17, 2020 /PRNewswire/ -- Allied Solutions, one of the largest providers of insurance, lending, and marketing products to financial institutions, has entered into a strategic partnership with Interface effective September 24, 2020.

Interface has several decades of experience building enterprise-grade technology for financial institutions. Interface's Intelligent Virtual Assistant has already enabled financial institutions across the world to achieve greater efficiencies in their top line and bottom line while ensuring the best customer experience. With Interface's solution, financial institutions are automating 60% of call center volume within 60 days, ensuring consumers have 24/7 access to their financial services provider with zero wait times, seeing a 500% increase in online application conversion and a 30% increase in average revenue per customer, and experiencing 0% call abandonment rates.

"We are living in a digital world where 24x7 access and self-service options are a must for all organizations providing financial services. Allied Solutions is excited to partner with Interface at this time to help our clients meet their consumers where they are, retain revenue, and enhance efficiencies," said Pete Hilger, Allied Solutions' CEO.

"We are excited to partner with an industry leader such as Allied Solutions, which has been providing exceptional solutions to financial institutions for many years. Combining Allied Solutions' expertise and Interface's industry-best Intelligent Virtual Assistant, we aim to help the majority of financial institutions in North America leapfrog from digital to intelligent banking," said Srinivas Njay, CEO of Interface.

Both Allied Solutions and Interface look forward to delivering these valuable services to financial institutions and their consumers.

About Allied Solutions, LLC

Allied Solutions is one of the largest providers of insurance, lending, and marketing products to financial institutions in the US. Allied Solutions uses technology-based products and services customized to meet the needs of 4,000 clients along with a portfolio of innovative products and services from a wide variety of providers. Allied Solutions maintains over 15 regional offices and service centers around the country and is a subsidiary of Securian Financial Group, Inc.

About Interface

Interface provides an out-of-the-box Intelligent Virtual Assistant that acts as a "personal bank teller" to help customers 24x7 through every step of the journey, from being a prospect to achieving financial wellness. Interface currently powers several financial institutions across the world and is proven in production, with customers already witnessing over $50M in ROI. Visit http://www.interface.ai to learn more.

Media Contact:

Laura Bryant, [emailprotected], +1-650-381-9283

SOURCE Interface
