Archive for the ‘Artificial Intelligence’ Category

Is Artificial Intelligence Made in Humanity’s Image? Lessons for an AI Military Education – War on the Rocks

Artificial intelligence is not like us. For all of AI's diverse applications, human intelligence is not at risk of losing its most distinctive characteristics to its artificial creations.

Yet, when AI applications are brought to bear on matters of national security, they are often subjected to an anthropomorphizing tendency that inappropriately associates human intellectual abilities with AI-enabled machines. A rigorous AI military education should recognize that this anthropomorphizing is irrational and problematic, reflecting a poor understanding of both human and artificial intelligence. The most effective way to mitigate this anthropomorphic bias is through engagement with the study of human cognition: cognitive science.

This article explores the benefits of using cognitive science as part of an AI education in Western military organizations. Tasked with educating and training personnel on AI, military organizations should convey not only that anthropomorphic bias exists, but also that it can be overcome to allow better understanding and development of AI-enabled systems. This improved understanding would strengthen both human operators' trust in AI systems and the research and development of artificially intelligent military technology.

For military personnel, having a basic understanding of human intelligence allows them to properly frame and interpret the results of AI demonstrations, grasp the current nature of AI systems and their possible trajectories, and interact with AI systems in ways that are grounded in a deep appreciation for human and artificial capabilities.

Artificial Intelligence in Military Affairs

AI's importance for military affairs is the subject of increasing focus by national security experts. Harbingers of a "new revolution in military affairs" are out in force, detailing the myriad ways in which AI systems will change the conduct of wars and how militaries are structured. From microservices such as unmanned vehicles conducting reconnaissance patrols to swarms of lethal autonomous drones and even spying machines, AI is presented as a comprehensive, game-changing technology.

As the importance of AI for national security becomes increasingly apparent, so too does the need for rigorous education and training for the military personnel who will interact with this technology. Recent years have seen an uptick in commentary on this subject, including in War on the Rocks. Mick Ryan's "Intellectual Preparation for War," Joe Chapa's "Trust and Tech," and Connor McLemore and Charles Clark's "The Devil You Know," to name a few, each emphasize the importance of education and trust in AI in military organizations.

Because war and other military activities are fundamentally human endeavors, requiring the execution of any number of tasks on and off the battlefield, AI applications in military affairs will be expected to fill these roles at least as well as humans could. So long as AI applications are designed to fill characteristically human military roles, ranging from arguably simpler tasks like target recognition to more sophisticated tasks like determining the intentions of actors, the dominant standard used to evaluate their successes or failures will be the ways in which humans execute these tasks.

But this sets up a challenge for military education: how exactly should AIs be designed, evaluated, and perceived during operation if they are meant to replace, or even accompany, humans? Addressing this challenge means identifying anthropomorphic bias in AI.

Anthropomorphizing AI

Identifying the tendency to anthropomorphize AI in military affairs is not a novel observation. U.S. Navy Commander Edgar Jatho and Naval Postgraduate School researcher Joshua A. Kroll argue that AI is often "too fragile to fight." Using the example of an automated target recognition system, they write that to describe such a system as engaging in "recognition" effectively "anthropomorphizes algorithmic systems that simply interpret and repeat known patterns."

But the act of human recognition involves distinct cognitive steps occurring in coordination with one another, including visual processing and memory. A person can even choose to reason about the contents of an image in a way that has no direct relationship to the image itself yet makes sense for the purpose of target recognition. The result is a reliable judgment of what is seen even in novel scenarios.

An AI target recognition system, in contrast, depends heavily on its existing data or programming, which may be inadequate for recognizing targets in novel scenarios. Such a system does not process images and recognize targets within them the way humans do. Anthropomorphizing it means oversimplifying the complex act of recognition and overestimating the capabilities of AI target recognition systems.
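To make the brittleness concrete, here is a minimal, illustrative sketch (not drawn from the article) of what "interpreting and repeating known patterns" looks like: a classifier trained on one distribution of synthetic "target" features performs well on familiar data and degrades sharply when the distribution shifts. All data and numbers are invented for demonstration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_targets(n, shift=0.0):
    # Two synthetic "target" classes; `shift` moves the whole distribution.
    class_a = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 5))
    class_b = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 5))
    X = np.vstack([class_a, class_b])
    y = np.array([0] * n + [1] * n)
    return X, y

# Train on "known patterns" only.
X_train, y_train = make_targets(500)
clf = LogisticRegression().fit(X_train, y_train)

# Evaluate on a familiar scenario and on a shifted ("novel") one.
X_familiar, y_familiar = make_targets(500)
X_novel, y_novel = make_targets(500, shift=3.0)

print("familiar-scenario accuracy:", clf.score(X_familiar, y_familiar))  # high
print("novel-scenario accuracy:  ", clf.score(X_novel, y_novel))         # near chance

The model still applies its learned boundary flawlessly; it simply has no mechanism for reasoning about a scenario its training data never covered, which is the gap the human recognizer fills.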

By framing and defining AI as a counterpart to human intelligence, as a technology designed to do what humans have typically done themselves, concrete examples of AI are measured by "[their] ability to replicate human mental skills," as De Spiegeleire, Maas, and Sweijs put it.

Commercial examples abound. AI applications like IBM's Watson, Apple's Siri, and Microsoft's Cortana each excel in natural language processing and voice responsiveness, capabilities which we measure against human language processing and communication.

Even in military modernization discourse, the Go-playing AI AlphaGo caught the attention of high-level People's Liberation Army officials when it defeated professional Go player Lee Sedol in 2016. AlphaGo's victories were viewed by some Chinese officials as "a turning point that demonstrated the potential of AI to engage in complex analyses and strategizing comparable to that required to wage war," as Elsa Kania notes in a report on AI and Chinese military power.

But, like the attributes projected onto the AI target recognition system, some Chinese officials imposed an oversimplified version of wartime strategies and tactics (and the human cognition they arise from) onto AlphaGo's performance. One strategist in fact noted that "Go and warfare are quite similar."

Just as concerningly, the fact that AlphaGo was anthropomorphized by commentators in both China and America means that the tendency to oversimplify human cognition and overestimate AI is cross-cultural.

The ease with which human abilities are projected onto AI systems like AlphaGo is described succinctly by AI researcher Eliezer Yudkowsky: "Anthropomorphic bias can be classed as insidious: it takes place with no deliberate intent, without conscious realization, and in the face of apparent knowledge." Without realizing it, individuals in and out of military affairs ascribe human-like significance to demonstrations of AI systems. Western militaries should take note.

For military personnel who are in training for the operation or development of AI-enabled military technology, recognizing this anthropomorphic bias and overcoming it is critical. This is best done through an engagement with cognitive science.

The Relevance of Cognitive Science

The anthropomorphizing of AI in military affairs does not mean that AI is always given high marks. It is now cliché for some commentators to contrast human creativity with the "fundamental brittleness" of machine learning approaches to AI, with an often frank recognition of the narrowness of machine intelligence. This cautious commentary on AI may lead one to think that the overestimation of AI in military affairs is not a pervasive problem. But so long as the dominant standard by which we measure AI is human abilities, merely acknowledging that humans are creative is not enough to mitigate unhealthy anthropomorphizing of AI.

Even commentary on AI-enabled military technology that acknowledges AI's shortcomings fails to identify the need for an AI education to be grounded in cognitive science.

For example, Emma Salisbury writes in War on the Rocks that existing AI systems rely heavily on brute force processing power, yet fail to interpret data and determine whether they are actually meaningful. Such AI systems are prone to serious errors, particularly when they are moved outside their narrowly defined domain of operation.

Such shortcomings reveal, as Joe Chapa writes on AI education in the military, that "an important element in a person's ability to trust technology is learning to recognize a fault or a failure." So, human operators ought to be able to identify when AIs are working as intended, and when they are not, in the interest of trust.

Some high-profile voices in AI research echo these lines of thought and suggest that the cognitive science of human beings should be consulted to carve out a path for improvement in AI. Gary Marcus is one such voice, pointing out that just as humans can think, learn, and create because of their innate biological components, so too do AIs like AlphaGo excel in narrow domains because of their innate components, richly specific to tasks like playing Go.

Moving from narrow to general AI (the distinction between an AI capable only of target recognition and an AI capable of reasoning about targets within scenarios) requires a deep look into human cognition.

The results of AI demonstrations, like the performance of an AI-enabled target recognition system, are data. Just like the results of human demonstrations, these data must be interpreted. The core problem with anthropomorphizing AI is that even cautious commentary on AI-enabled military technology hides the need for a theory of intelligence. To interpret AI demonstrations, we need theories that borrow heavily from the best example of intelligence available: human intelligence.

The relevance of cognitive science for an AI military education goes well beyond revealing contrasts between AI systems and human cognition. Understanding the fundamental structure of the human mind provides a baseline account from which artificially intelligent military technology may be designed and evaluated. It has implications for the narrow-versus-general distinction in AI, the limited utility of human-machine confrontations, and the developmental trajectories of existing AI systems.

The key for military personnel is being able to frame and interpret AI demonstrations in ways that can be trusted for both operation and research and development. Cognitive science provides the framework for doing just that.

Lessons for an AI Military Education

It is important that an AI military education not be pre-planned in such detail as to stifle innovative thought. Some lessons for such an education, however, are readily apparent using cognitive science.

First, we need to reconsider narrow and general AI. The distinction between narrow and general AI is a distraction: far from dispelling the unhealthy anthropomorphizing of AI within military affairs, it merely tempers expectations without engendering a deeper understanding of the technology.

The anthropomorphizing of AI stems from a poor understanding of the human mind. This poor understanding is often the implicit framework through which a person interprets AI. Part of it is taking a reasonable line of thought (that the human mind should be studied by dividing it up into separate capabilities, like language processing) and transferring it to the study and use of AI.

The problem, however, is that these separate capabilities of the human mind do not represent the fullest understanding of human intelligence. Human cognition is more than these capabilities acting in isolation.

Much of AI development thus proceeds under the banner of engineering, as an endeavor not to re-create the human mind in artificial ways but to perform specialized tasks, like recognizing targets. A military strategist may point out that AI systems do not need to be human-like in the general sense, but rather that Western militaries need specialized systems which can be narrow yet reliable during operation.

This is a serious mistake for the long-term development of AI-enabled military technology. Not only is the narrow and general distinction a poor way of interpreting existing AI systems, but it clouds their trajectories as well. The fragility of existing AIs, especially deep-learning systems, may persist so long as a fuller understanding of human cognition is absent from their development. For this reason (among others), Gary Marcus points out that deep learning is "hitting a wall."

An AI military education would not avoid this distinction but incorporate a cognitive science perspective on it that allows personnel in training to re-think inaccurate assumptions about AI.

Human-Machine Confrontations Are Poor Indicators of Intelligence

Second, pitting AIs against exceptional humans in domains like chess and Go is considered an indicator of AI's progress in commercial domains. The U.S. Defense Advanced Research Projects Agency participated in this trend by pitting Heron Systems' F-16 AI against a skilled Air Force F-16 pilot in simulated dogfighting trials. The goals were to demonstrate AI's ability to learn fighter maneuvers while earning the respect of a human pilot.

These confrontations do reveal something: some AIs really do excel in certain, narrow domains. But anthropomorphizing's insidious influence lurks just beneath the surface: there are sharp limits to the utility of human-machine confrontations if the goals are to gauge the progress of AIs or gain insight into the nature of wartime tactics and strategies.

The idea of training an AI to confront a veteran-level human in a clear-cut scenario is like training humans to communicate like bees by learning the waggle dance. It can be done, and some humans may dance like bees quite well with practice, but what is the actual utility of this training? It tells humans nothing about the mental life of bees, nor does it yield insight into the nature of communication. At best, any lessons learned from the experience will be tangential to the actual dance and better advanced through other means.

The lesson here is not that human-machine confrontations are worthless. However, whereas private firms may benefit from commercializing AI by pitting AlphaGo against Lee Sedol or Deep Blue against Garry Kasparov, the benefits for militaries may be less substantial. Cognitive science keeps the individual grounded in an appreciation for the limited utility of such confrontations without losing sight of their benefits.

Human-Machine Teaming Is an Imperfect Solution

Human-machine teaming may be considered one solution to the problems of anthropomorphizing AI. To be clear, it is worth pursuing as a means of offloading some human responsibility to AIs.

But the problem of trust, perceived and actual, surfaces once again. Machines designed to take on responsibilities previously underpinned by the human intellect will need to overcome the hurdles already discussed to become reliable and trustworthy for human operators; understanding the human element still matters.

Be Ambitious but Stay Humble

Understanding AI is not a straightforward matter. Perhaps it should not come as a surprise that a technology with the name artificial intelligence conjures up comparisons to its natural counterpart. For military affairs, where the stakes in effectively implementing AI are far higher than for commercial applications, ambition grounded in an appreciation for human cognition is critical for AI education and training. Part of a baseline literacy in AI within militaries needs to include some level of engagement with cognitive science.

Even granting that existing AI approaches are not intended to be like human cognition, both anthropomorphizing and the misunderstandings about human intelligence it carries are prevalent enough across diverse audiences to merit explicit attention for an AI military education. Certain lessons from cognitive science are poised to be the tools with which this is done.

Vincent J. Carchidi holds a Master's in Political Science from Villanova University, specializing in the intersection of technology and international affairs, with an interdisciplinary background in cognitive science. Some of his work has been published in AI & Society and the Human Rights Review.

Image: Joint Artificial Intelligence Center blog


Beware of the Use of Artificial Intelligence Recruitment and Hiring Tools – JD Supra

As the use of artificial intelligence recruitment and hiring tools becomes more prevalent, it is important to remember that such processes are subject to anti-discrimination laws. Employers have an obligation to inspect such tools and processes for bias based on any protected class (including disability and age) and should have plans to provide reasonable accommodations during the recruitment and hiring process. On May 12, 2022, the Equal Employment Opportunity Commission and Justice Department issued guidance for the first time regarding the use of algorithms and artificial intelligence in employment-related decision making and the ways such tools may violate disability discrimination laws. The guidance clarifies that employers are responsible for ensuring that their hiring technologies, including any artificial intelligence used, comply fully with disability discrimination laws, even if the technology is administered by a third party, and that employers provide reasonable accommodation as needed. The guidance further provides that regardless of intent, if the artificial intelligence tool has the effect of screening out applicants with disabilities or adversely affecting individuals with disabilities, the employer may be violating disability discrimination laws. The guidance directs employers to be critical of the artificial intelligence hiring tools they use, recommends asking vendors a number of questions, and advises developing and selecting only tools that measure abilities and qualifications that are truly necessary for the job, even for people who are entitled to an on-the-job reasonable accommodation. https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.

In addition, for the first time, the EEOC this month sued an employer related to its use of artificial intelligence hiring tools. Specifically, the EEOC sued three integrated companies providing English-language tutoring services under the iTutorGroup brand name for age discrimination, for allegedly programming their online recruitment software to automatically reject older applicants because of their age. According to the EEOC's press release regarding the lawsuit: "[The companies] hire thousands of tutors based in the United States each year to provide online tutoring from their homes or other remote locations. According to the EEOC's lawsuit, in 2020, [the companies] programmed their tutor application software to automatically reject female applicants age 55 or older and male applicants age 60 or older. [The companies] rejected more than 200 qualified applicants based in the United States because of their age."

Accordingly, employers need to examine their artificial intelligence recruitment and hiring tools now to ensure the algorithms in the tools do not unfairly screen out individuals based on their membership in a protected class.


Artificial intelligence experts helping NATO with new ‘horizontal scanning’ initiative – FedScoop

Written by Brandi Vincent May 16, 2022 | FEDSCOOP

Editor's note: This story was updated with comments from NATO.

More than 80 artificial intelligence experts from the U.S. and other nations are helping NATO explore the military implications and opportunities for leveraging the technology.

NATO's Science and Technology Organization (STO) and the NATO Communications and Information (NCI) Agency jointly hosted a workshop earlier this month in The Hague, Netherlands, marking the launch of a new strategic initiative to bolster the alliance's AI approach.

"AI is one of the key emerging and disruptive technologies identified by NATO as vital for the maintenance of its technological edge," NATO Chief Scientist Dr. Bryan Wells said in a press release Monday. "By working together, the STO and the NCI Agency are able to bring together global experts to ensure the very best scientific expertise is available to advise NATO and its Allies and Partners on the latest scientific trends in this area."

AI scientists, ethicists and military operational experts from across Europe and North America participated in the workshop. The AI experts also met with NCI Agency scientists and engineers at the agency's lab in the Netherlands, where they observed demonstrations of how artificial intelligence systems can be trained using NATO data to confront existing challenges, as well as several existing projects affiliated with the alliance's different communities.

"One example is the Resilience Assessment Project funded by Allied Command Transformation, a tool to assess resilience in seven key areas that NATO has defined, such as transportation, energy, communications, and others," NCI Agency Chief of Data Science and AI Dr. Michael Street told FedScoop in an email on Tuesday.

That project is based on open-source data from a wide variety of sources and is being developed in close collaboration with end-users and subject matter experts, he explained. It enables users to better assess situations and conduct "what if" analyses to understand the impact of crises.

"For example, the tool is being used to support exercises to understand the state of critical infrastructure and points of interest such as road networks or energy supplies," Street said.

The core tasks of the transatlantic military alliance between the U.S., Canada and 28 European nations involve collective defense, crisis management and cooperative security.

The organization's defense ministers officially signed on to an AI strategy in October, formalizing their intent to accelerate NATO's collective adoption of the technology, ensure it is deployed responsibly, and protect against threats it might pose. Building on that effort, the AI experts participating in the recent invite-only multinational workshop kicked off a strategic initiative to drive AI-focused horizon scanning: a methodology for performing comprehensive assessments of possibilities and threats connected to a particular technology or other topics.

The new NATO initiative employs aeronautical scientist Theodore von Kármán's foundational principle to bring armed forces and scientific personnel closer together to enhance collective knowledge and understanding, according to the press release. Such scans examine the state of the art in the field, the outlook for the next decade, its relevance for the armed forces, potential avenues for investment, and more.

Horizon scans have also been undertaken on laser weapons, quantum technologies and optronic 3D imaging systems.

"Horizon scanning allows us to bring together technology experts and military leaders to define the medium-to-long term activities to fully benefit from a technology, such as artificial intelligence in this case," Street said. "This NATO strategic initiative employs the von Kármán horizon scan methodology to understand the impact of technology on defense, and vice versa."

Following the workshop, a group of experts from across the alliance will continue to work on these issues over the remainder of the year.

They will prepare a set of recommendations on how AI-based technologies could be further developed and applied for NATO use, and how defense use can contribute to AI development. "The recommendations will be delivered to the NATO Science and Technology Board, the highest authority within the Science and Technology Organization," Street confirmed.


Iterative Scopes to Present Three Abstracts on Artificial Intelligence Applications for GI Endoscopy at DDW 2022 – Business Wire

CAMBRIDGE, Mass.--(BUSINESS WIRE)--Iterative Scopes, a pioneer in precision medicine technologies for gastroenterology, announced today that its artificial intelligence platforms will be featured in three abstract presentations at the upcoming Digestive Disease Week 2022 (DDW 2022). The meeting will take place virtually and onsite at the San Diego Convention Center in San Diego, CA, from May 21 to May 24.

Experts in inflammatory bowel disease (IBD) and artificial intelligence (AI) will present two abstracts discussing data on the company's endoscopic scoring algorithms in ulcerative colitis (UC), a condition included under the umbrella of IBD, developed in collaboration with Eli Lilly and Company.

The data are drawn from an innovative partnership between Iterative Scopes and Lilly, focusing on studying the effectiveness of machine learning (ML) models to automatically score endoscopic disease severity in UC. Progress in IBD research is hindered by variability in the human interpretation of endoscopic severity. This unique ML approach incorporates novel methods of interpreting and integrating visual data into the assessment of clinical trial endoscopic endpoints. These data have the potential to serve as a substitute for human central readers, which may reduce clinical trial costs and accelerate IBD research.

Aasma Shaukat, MD, MPH, Robert M. and Mary H. Glickman Professor of Medicine and Gastroenterology at NYU Grossman School of Medicine, and a leader of the Iterative Scopes advisory board, will present the first publicly available registration trial data on SKOUT, the company's automated polyp detection algorithm for colorectal cancer screening, in a plenary session on Late-Breaking Clinical Science Abstracts. The plenary sessions at DDW are the forum for highlighting some of the year's best research abstracts as determined by the conference organizers. In her discussion, Dr. Shaukat will highlight results of a multicenter, randomized clinical trial in the US assessing whether SKOUT is superior to standard colonoscopy in increasing adenomas per colonoscopy.

SKOUT has a pending 510(k) and is not available for sale in the United States. SKOUT received its CE Mark certification in 2021.

"We founded Iterative Scopes four years ago to change the trajectory of GI drug development and clinical care, and we are extremely excited to share results of Iterative Scopes' work in applying cutting-edge, computational approaches towards achieving this goal," said Jonathan Ng, MBBS, the founder and CEO of Iterative Scopes. "We are excited to share our work with the clinical community at DDW, through these presentations and the other events surrounding DDW."

Iterative Scopes Presentations at DDW 2022:

Endoscopic Scoring Solutions in Ulcerative Colitis

Title: Can a single central reader provide a reliable ground truth (GT) for training a machine learning (ML) model that predicts endoscopic disease activity in ulcerative colitis (UC)?
Date & Time: May 21, 5:15-5:30 PM PDT
Session Type: Research Forum
Presenter: Klaus Gottlieb, MD, JD (Senior Medical Fellow, Lilly)
Presentation No: 278
Location: Room 23 - San Diego Convention Center

Title: Development of a novel ulcerative colitis (UC) endoscopic activity prediction model using machine learning (ML)
Date & Time: May 23, 12:30-1:30 PM PDT
Session Type: Poster
Presenter: David T. Rubin, MD (Joseph B. Kirsner Professor of Medicine and Chief, Section of Gastroenterology, Hepatology and Nutrition, UChicago Medicine and Chair of Iterative Scopes' Advisory Board)
Presentation No: Mo 1639
Location: Poster Hall - San Diego Convention Center

SKOUT, Polyp Detection in Colonoscopy

Title: Increased Adenoma Detection with the use of a novel computer-aided detection device, SKOUT™: Results of a multicenter randomized clinical trial in the US
Date & Time: May 24, 8:15-8:30 AM PDT
Session Type: Plenary
Presenter: Aasma Shaukat, MD, MPH (Robert M. and Mary H. Glickman Professor of Medicine and Gastroenterology, Department of Medicine, NYU Grossman School of Medicine and a leader of Iterative Scopes' Advisory Board)
Presentation No: 5095
Location: Room 3 - San Diego Convention Center

Iterative Scopes was founded in 2017 as a spin-out of the Massachusetts Institute of Technology (MIT) by Dr. Ng, a physician-entrepreneur who developed the company's foundational concepts while he was in school at MIT and Harvard. In December 2021, the company and its investors closed a $150 million Series B financing, which attracted a roster of A-list venture capitalists, big pharmaceutical companies' venture arms, and individual leaders in healthcare.

About Iterative Scopes

Iterative Scopes is a pioneer in the application of artificial intelligence-based precision medicine for gastroenterology with the aim of helping to optimize clinical trials investigating treatment of inflammatory bowel disease (IBD). The technology is also designed to potentially enhance colorectal cancer screenings. Its powerful, proprietary artificial intelligence and computer vision technologies have the potential to improve the accuracy and consistency of endoscopy readings. Iterative Scopes is initially applying these advances to impact polyp detection for colorectal cancer screenings and working to standardize disease severity characterization for inflammatory bowel disease. Longer term, the company plans to establish more meaningful endpoints for GI diseases, which may be better predictors of therapeutic response and disease outcomes. Spun out of MIT in 2017, the company is based in Cambridge, Massachusetts.

About Digestive Disease Week (DDW)

Digestive Disease Week (DDW) is the largest international gathering of physicians, researchers and academics in the fields of gastroenterology, hepatology, endoscopy and gastrointestinal surgery. Jointly sponsored by the American Association for the Study of Liver Diseases (AASLD), the American Gastroenterological Association (AGA) Institute, the American Society for Gastrointestinal Endoscopy (ASGE) and the Society for Surgery of the Alimentary Tract (SSAT), DDW is an in-person and virtual meeting from May 21-24, 2022. The meeting showcases more than 5,000 abstracts and hundreds of lectures on the latest advances in GI research, medicine and technology. More information can be found at http://www.ddw.org.


AI in insurance – How is artificial intelligence impacting the insurance sector? – Appinventiv

The pandemic has impacted every industry in one way or the other. The insurance industry is no different. However, the silver lining is that it has reinforced the importance of technology more firmly, especially Artificial Intelligence (AI) and Cloud Computing for this specific sector.

The artificial intelligence in insurance market is projected to reach USD 6.92 billion by 2028, growing at a compound annual growth rate of 24.08% over the forecast period.

Based on a survey, 21% of insurance organizations report that they are preparing their workforce for collaborative, interactive, and explainable AI-based systems. Investment in AI for insurance is predicted to rank high on decision-makers' agendas.

The growing need to offer personalized insurance services leads to the requirement of insurance automation for operational processes. AI does exactly that, automating operational tasks previously performed by humans so they are completed without fatigue or error in a shorter span of time.

AI has brought a revolutionary change to the way the insurance industry operated a few years ago. Insurance was normally associated with loads of paperwork, time-consuming meetings, filing complicated claims, and waiting months for a decision.

AI in insurance has brought automation that has started rebuilding trust in insurance providers. Not only this, insurance automation helps stimulate business growth, lower risks and fraud, and automate various business processes to reduce overall costs.

In short, it is helping insurers and policyholders alike. Here's how:

Machine learning algorithms help underwriters gauge risk with more information, which helps them offer better, tailored premium pricing. Additionally, AI in the insurance industry is streamlining the process of connecting applicants directly with carriers, making the process more efficient.
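As a hedged illustration of this idea, the sketch below trains a model on synthetic applicant data and prices a premium from the predicted claim probability. The features, expected claim cost, and loading factor are invented for the example and do not reflect any insurer's actual pricing method.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Synthetic applicant features: [age, prior_claims, vehicle_value].
X = np.column_stack([
    rng.integers(18, 80, 1000),
    rng.poisson(0.3, 1000),
    rng.uniform(5_000, 60_000, 1000),
])
# Synthetic label: whether a claim occurred in the policy year.
p_true = 0.05 + 0.10 * (X[:, 1] > 0) + 0.05 * (X[:, 0] < 25)
y = rng.random(1000) < p_true

model = GradientBoostingClassifier().fit(X, y)

def quote_premium(applicant, expected_claim_cost=8_000, loading=1.25):
    # Premium = predicted claim probability * expected loss * loading.
    p_claim = model.predict_proba([applicant])[0, 1]
    return p_claim * expected_claim_cost * loading

# A young applicant with a prior claim gets a higher risk-based quote.
print(f"quoted premium: ${quote_premium([22, 1, 30_000]):,.2f}")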

Given the need of the hour and to stay competitive, it has become imperative for the insurance industry to adopt the latest technologies, like machine learning, robotic process automation, and more. Let us understand how the adoption of the latest technologies can add value to the existing tedious and exhaustive insurance process.

Processing claims is a complicated process. Agents are required to assess various policies and comprehend every detail to determine how much the customer will receive for the claim. Many of the steps involved are repetitive, standard tasks. Machine learning in insurance can take over such tasks to reduce errors and the time taken to process the claim.

To increase operational efficiency, companies have been adopting emerging technologies like AI, RPA, and the Internet of Things (IoT). The increase in connectivity, smart home assistants, fitness trackers, telematics, healthcare wearable devices, and other types of IoT devices now allows insurers to stay connected and collect comprehensive data automatically. This data can then be fed into the underwriting process and claim management tasks, helping with better decision-making and reduced risk.

The underwriting process has largely depended on data provided manually by the applicant filling out standard forms. There is always a possibility of the applicant being dishonest or making mistakes, which may lead to inaccurate risk assessment.

The rise in connectivity and the increased use of IoT devices help insurers fetch larger datasets with correct information. Natural language processing (NLP) enables insurers to sift through unstructured sources and extract relevant information to better assess risk.
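The sketch below illustrates only the extraction step, in a deliberately lightweight way: regular expressions pull underwriting-relevant fields out of free text. Production systems would use trained NLP pipelines rather than regexes, and the field names and patterns here are assumptions made up for the example.

import re

APPLICATION_TEXT = """
Applicant reports 2 prior claims in the last 5 years.
Annual mileage approximately 12,000 miles. Vehicle year 2019.
"""

# Hypothetical field names and patterns; not any vendor's schema.
PATTERNS = {
    "prior_claims":   r"(\d+)\s+prior claims",
    "annual_mileage": r"([\d,]+)\s+miles",
    "vehicle_year":   r"[Vv]ehicle year\s+(\d{4})",
}

def extract_fields(text):
    # Return a dict of matched underwriting fields (None if absent).
    out = {}
    for field, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        out[field] = match.group(1).replace(",", "") if match else None
    return out

print(extract_fields(APPLICATION_TEXT))
# {'prior_claims': '2', 'annual_mileage': '12000', 'vehicle_year': '2019'}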

Research analysis shows that the benefits of AI in insurance, specifically in underwriting, include the ability to:

The massive insurance industry collects approximately $1 trillion in premiums every year. With that size, the fraud ratio too is high. The total cost of non-health insurance fraud is estimated to be more than $40 billion per year, which in turn increases premium costs by $400 to $700 per year per family. Read on to understand how claim fraud can be prevented with artificial intelligence in insurance.

Adding value to the existing process only makes sense if it brings visible benefits. AI in insurance brings a sigh of relief by revolutionizing the industry in many ways:

AI in insurance claims can handle the first notice of loss with minimal or no human intervention, allowing insurers to report, route, triage, and assign claims. Chatbots can efficiently facilitate the claim reporting process, as customers can report their incidents from any device, any place, and at any time. The AI-enabled chatbots can then pass the information along for further processing.
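As a minimal sketch of the triage step, assuming invented intent names and routing rules, a keyword-based router can assign an incoming first-notice-of-loss message to a queue and escalate to a human when nothing matches. Real deployments use trained intent classifiers; this only shows the shape of the idea.

# Hypothetical intents and routes, invented for the example.
INTENT_KEYWORDS = {
    "auto_claim":     ["collision", "accident", "vehicle", "car"],
    "property_claim": ["flood", "fire", "roof", "burglary"],
    "status_inquiry": ["status", "update"],
}

ROUTING = {
    "auto_claim":     "auto-claims queue",
    "property_claim": "property-claims queue",
    "status_inquiry": "self-service status lookup",
}

def triage(message: str) -> str:
    # Assign the message to the first intent whose keywords appear.
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return f"routed to {ROUTING[intent]}"
    return "escalated to a human agent"

print(triage("My car was in a collision last night"))  # auto-claims queue
print(triage("Any update on my claim?"))               # status lookup
print(triage("I need help with something else"))       # human agent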

By regulating all the processes of data capture, claims creation, authorizations, approvals, payment tracking, and recovery tracking, AI can be paired with other applications to streamline the fraud detection process, thus saving time and costs.

Artificial intelligence in insurance claims can reduce claims regulation costs by 20-30%, processing costs by 50-65%, and processing time by 50-90% while improving the customer service experience.

The power of artificial intelligence in the insurance industry has brought a revolutionary change in the level of customer service. As mentioned above, chatbots are the easiest way to initiate the process and pass the information to the next step without human intervention, making the process smooth, quick, and error-free.

AI-powered chatbots can cross-sell and upsell products based on the customer's profile and history. By automating repetitive processes, operations can be scaled up easily while human resources are deployed in more strategic roles.

With the advent of disruptive AI technologies, such as machine learning, deep learning, and OCR, assessing damage has become easier and quicker: it can be done simply by uploading a picture of the damaged object.

Predicting the potential loss and providing recommendations make the loss estimation process quick and efficient.

With the power of AI, a fraud detection system addresses the shortcomings of hand-filled applications and provides valuable details that support better human judgment. Machine learning and deep learning algorithms are well equipped to identify repeated patterns that might be abnormal or suspicious.
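One common way to operationalize this is anomaly detection. The sketch below (features, figures, and the 5% contamination rate are all invented for illustration) uses scikit-learn's IsolationForest to flag claims whose pattern, such as a large claim filed days after a policy starts, deviates from the bulk of historical claims.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Synthetic historical claims: [claim_amount, days_since_policy_start].
normal_claims = np.column_stack([
    rng.normal(3_000, 800, 950),
    rng.uniform(30, 365, 950),
])
# A few suspicious patterns: large claims filed right after policy start.
suspicious_claims = np.column_stack([
    rng.normal(25_000, 3_000, 50),
    rng.uniform(1, 10, 50),
])
history = np.vstack([normal_claims, suspicious_claims])

detector = IsolationForest(contamination=0.05, random_state=0).fit(history)

new_claim = [[24_000, 3]]               # large amount, 3 days into the policy
label = detector.predict(new_claim)[0]  # -1 = anomalous, 1 = normal
print("flag for review" if label == -1 else "routine processing")

A flagged claim is not proof of fraud; it is a prompt for the human investigator, which is exactly the better human judgment the paragraph above describes.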

From the above-mentioned benefits and value adds, an inference can be drawn that there are fundamentally three areas in which artificial intelligence technology in insurance can bring revolution: the claims process, risk assessment, and forecasting. It becomes easier to understand with examples. Some are mentioned below:

Lemonade is an InsureTech startup that uses AI technology to run end-to-end insurance tasks. This has helped it save on operational costs, allowing it to offer reduced prices, increase customer acquisition, and elevate customer experience and engagement.

With AI, lenders can assess traditional and non-traditional data to better gauge risk. A better, automated underwriting process helps the company boost profitability while reducing risk.

Nauto is a driver safety technology company. Its aim is to avoid collisions in commercial fleets by reducing distracted driving. The AI-driven driver safety system uses a dual-facing camera, computer vision, and other algorithms to prevent risky behaviors in real time.

Implementing chatbots, NLP, and OCR is just the first step toward automation of the insurance industry. The pandemic has virtually forced us to adopt new technologies to stay in business. This technological wave is certainly going to continue. Deep learning techniques and artificial intelligence are yet to be exploited to their full potential. The scenarios will surely advance to machines mimicking the perception, reasoning, learning, and problem solving of the human mind.

It is expected that in the next decade, insurance will shift from its current state of "detect and repair" to "predict and prevent." Users, too, are getting accustomed to using advanced technologies to enhance productivity, lower costs, improve decision-making, and increase customer satisfaction.

The future of the insurance industry will take a steep curve to achieve new heights with the implementation of various AI technologies. It will impact not only the insurance companies but also the people they insure. Let's explore some of the trends:

We are experiencing this even today. With IoT, the number of connected devices is increasing day by day. With AI, this connectivity will lead to the collection of comprehensive data. Understanding consumer behavior through this data will enable the insurance industry to come up with new product categories, more personalized pricing, and increasingly real-time service delivery.

Extended reality is an advanced form of virtual reality. It will not be necessary for the object being insured to be physically present on the spot: inspection will be done virtually with the help of AI technology after the claim is filed. It will also be easier to provide better quotes based on the safety features of the vehicle to be insured.

Data is king in AI. Collecting data from various sources and making sense of it is the essence of AI technology. Ensuring that the data is precise and accurate will help in making better business decisions. Insurance companies can use accurate data to mitigate risks and fraud even before they take place.

Artificial intelligence in insurance is all set to transform the future of the insurance industry. Appinventiv can be your reliable development partner that helps you harness the benefits of automation in the insurance sector.

With our expertise in AI software development services, we have successfully helped businesses transform their capabilities.

For instance, Appinventiv has successfully automated the banking process for a leading bank in Europe. The automation helped the bank improve accuracy by 50% and ATM service levels by 92%.

Also, with the help of conversational AI in banking, the client is able to handle over 50% of customer service requests through a chatbot, reducing manpower costs by 20%.

You can also exploit the expertise of the seasoned team at Appinventiv to take a leap into the future of insurance.

AI is both the present and the future of insurance. Leveraging the various tools of AI technology will automate insurance processing from the application stage to claim settlement in no time, with no human intervention. The cost and time saved will help the insurance industry come up with better product categories and personalized premium quotes generated from data collected from various sources.

AI in insurance is at a very nascent stage right now. It will change dramatically in the next decade.

FAQs

Q. What are the advantages of applying AI for Insurers and policyholders?

A. The benefits of applying AI for insurers are listed below-

The benefits of applying AI for policyholders are listed below-

Q. Which function of the insurance industry is expected to exploit AI in the future?

A. The impact of AI is holistic, aiming to automate processes and functions to increase efficiency and save costs and time. However, with the use of predictive analytics, the underwriting process is the one expected to adopt AI to the greatest extent.

Q. What are the emerging AI use cases for auto insurance?

A. Primarily, the following are the emerging AI use cases for auto insurance:

Sudeep Srivastava
