Archive for the ‘Artificial Intelligence’ Category

EU struggles to go from talk to action on artificial intelligence – Science Business

The EU is moving tentatively towards first-of-its-kind rules on the ways that companies can use artificial intelligence (AI), amid fears that the technology is galloping beyond regulators' grasp.

Supporters of regulation say proper human oversight is needed for a rapidly developing technology that presents new risks to individual privacy and livelihoods. Others warn that the new rules could stifle innovation with lasting economic consequences.

"We aren't Big Brother China or Big Data US. We have to find our own way," said German MEP Axel Voss, who is about to take his seat on the European Parliament's new special committee on AI.

"Having in mind that the AI tech is now of global strategic relevance, we have to be careful about over-regulating. There's competition around the world. If we would like to play a role in the future, we need to do something that's not going to the extreme," said Voss, a member of the centre-right European People's Party.

In February, the European Commission presented its AI white paper, which states that new technologies in critical sectors should be subject to legislation. It likened the current situation to "the Wild West" and said it would focus on "high-risk" cases. The debate over the paper's many elements will last through 2020 and into next year, when the EU executive will present its legislative proposal.

Researchers and industry are battling for influence over the AI policy.

"There's an incredible opportunity here to begin to tackle high-risk applications of AI. There's also this real chance to set standards for the entire world," said Haydn Belfield, research associate and academic project manager at Cambridge University's Centre for the Study of Existential Risk.

Policymakers and the public are concerned about applications such as autonomous weapons and government social scoring systems similar to those under development in China. Facial scanning software is already creeping into use in Europe, operating with little oversight.

"You don't have to be an expert in AI to see there's a really high risk to people's life and liberty from some of these new applications," said Belfield.

Big tech companies, which have made large investments in new AI applications, are wary of the EU's plans to regulate.

Google has criticised measures in the commission's AI white paper, which it says could harm the sector. Last year, the company issued its own guidance on the technology, arguing that although it comes with hazards, existing rules and self-regulation will be sufficient in the vast majority of instances.

In its response to the commission's proposal, Microsoft similarly urged the EU to rely on existing laws and regulatory frameworks as much as possible. However, the US tech company added that developers should be transparent about limitations and risks inherent in the use of any AI system. If this is not done voluntarily, it should be mandated by law, at least for high-risk use cases.

Thomas Metzinger, professor of theoretical philosophy at the University of Mainz and a member of the commission's 52-strong AI expert group, says he's close to despondency because of how long it's taking to regulate the field.

"We can have clever discussions but what is actually being done? I have long given up on having an overview of the 160 or so ethics guidelines for AI out there in the world," he said.

Vague and non-committal guidelines

Metzinger has been strongly critical of the make-up of the commission's AI advisory group, which he says is tilted towards industry interests. "I'm disappointed by what we've produced. The guidelines are completely vague and non-committal. But it's all relative. Compared to what China and US have produced, Europe has done better," he said.

Setting clear limits for AI is in step with Brussels' more hands-on approach to the digital world in recent years. The commission is also setting red lines on privacy, antitrust and harmful internet content, which has inspired tougher rules elsewhere in the world.

Some argue that this prioritising of data protection, through the EU's flagship General Data Protection Regulation (GDPR), has harmed AI growth in Europe.

The US and China account for almost all private AI investment in the world, according to Stanford University's AI Index report. The European country with the most meaningful presence in AI is the UK, which has left the bloc and has hinted that it may detach itself from EU data protection laws in the future.

"GDPR has slowed down AI development in Europe and potentially harmed it," says Sennay Ghebreab, associate professor of socially intelligent systems at the University of Amsterdam.

"If you look at medical applications of AI, doctors are not able to use this technology yet [to the fullest]. This is an opportunity missed," he said. "The dominating topics are ethics and privacy and this could lead us away from discussing the benefits that AI can bring."

"GDPR is a very good piece of legislation," said Voss. But he agrees that it hasn't found the best balance between innovation and privacy. "Because of its complexity, people are sometimes giving up, saying it's easier to go abroad. We are finding our own way on digitisation in Europe but we shouldn't put up more bureaucratic obstacles."

Catching up

Those who support AI legislation are concerned it will take too long to regulate the sectors where it is deployed.

"One highly decorated legal expert told me it would be around nine years before a law was enforceable. Can you imagine where Google DeepMind will be in five years?" said Metzinger, referring to the London lab owned by Google that is at the forefront of bringing AI to sectors like healthcare.

MEPs too are mindful of the need for speed, said Voss. "It's very clear that we can't take the time we took with the GDPR. We won't catch up with the competition if it takes such a long time," he said. From the initial consultation to implementation, GDPR took the best part of a decade to put together.

Regulation could be a "fake, misleading solution," Ghebreab warned. It's the companies that use AI, rather than the technology itself, that need to be regulated, he argues, and top-down regulation is in general unlikely to lead to community-minded AI solutions. "AI is in the hands of big companies in the US, in the hands of the government in China, and it should be in the hands of the people in Europe," Ghebreab said.

Ghebreab has been working on AI since the 1990s and has recently started a lab exploring socially minded applications, with backing from the city of Amsterdam.

As an example of how AI can help people, he points to an algorithm developed by the Swiss government and a team of researchers in the US that helps with the relocation of refugees. It aims to match refugees with regions that need their skills. "Relocation today is based on capacity rather than taking into account refugees' education or background," he said.
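The matching idea described above can be illustrated with a toy greedy assignment. The regions, skills and capacities below are invented for the sketch, and the actual Swiss/US algorithm is considerably more sophisticated:

```python
import copy

# Invented example data: each region advertises the skills it needs and
# how many placements it can accept.
REGIONS = {
    "north": {"needs": {"nursing"}, "capacity": 1},
    "south": {"needs": {"carpentry", "teaching"}, "capacity": 2},
}

def place(refugees, regions=REGIONS):
    """Greedily assign each refugee to a region that needs one of their skills."""
    regions = copy.deepcopy(regions)  # keep capacity bookkeeping local
    placements = {}
    for name, skills in refugees.items():
        for region, info in regions.items():
            if info["capacity"] > 0 and skills & info["needs"]:
                placements[name] = region
                info["capacity"] -= 1
                break
        else:
            placements[name] = None  # no skill match: fall back to capacity-based placement
    return placements

print(place({"amina": {"nursing"}, "omar": {"teaching"}}))
# {'amina': 'north', 'omar': 'south'}
```

A capacity-only system, by contrast, would ignore the `needs` sets entirely, which is the gap Ghebreab's example highlights.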

Interim solutions for AI oversight are not to everyone's taste.

"Self-regulation is fake and full of superficial promises that are hard to implement," said Metzinger.

"The number one lesson I've learned in Brussels is how contaminated the whole process is by industrial lobbying. There's a lot of ethics-washing that is slowing down the path to regulation," he said.

Metzinger is aggrieved that, of the 52 experts picked to advise the commission on AI, only four were ethicists. "Twenty-six are direct industry representatives," he said. "There were conflicts, and people including myself did not sign off on all our work packages." Workshops organised with industry lacked transparency, said Metzinger.

In response, commission spokesman Charles Manoury said the expert panel was formed on the basis of an open selection process, following an "open call for expressions of interest".

Digital Europe, which represents tech companies such as Huawei, Google, Facebook and Amazon, was also contacted for comment.

Adhering to AI standards is ultimately in companies' interests, argues Belfield. "After the techlash we've been seeing, it will help to make companies seem more trustworthy again," he said.

Developing trustworthy AI is where the EU can find its niche, according to a recent report from the Carnegie Endowment for International Peace. "Designed to alleviate potential harm as well as to permit accountability and oversight, this vision for AI-enabled technologies could set Europe apart from its global competitors," the report says.

The idea has particular thrust in France, where the government, alongside Canada, pushed for the creation of the new global forum on ethical AI development.

Public distrust is the fundamental brake on AI development, according to the UK government's Centre for Data Ethics and Innovation. "In the absence of trust, consumers are unlikely to use new technologies or share the data needed to build them, while industry will be unwilling to engage in new innovation programmes for fear of meeting opposition and experiencing reputational damage," its AI Barometer report says.

Banning AI

One idea floated by the commission earlier this year was a temporary ban on the use of facial recognition in public areas for up to five years.

There are grave concerns about the technology, which uses surveillance cameras, computer vision, and predictive imaging to keep tabs on large groups of people.

"Facial recognition is a genius technology for finding missing children but a heinous technology for profiling, propagating racism, or violating privacy," said Oren Etzioni, professor of computer science and CEO of the Allen Institute for Artificial Intelligence in Seattle.

Several state and local governments in the US have stopped law enforcement officers from using facial recognition databases. Trials of the technology in Europe have provoked a public backlash.

Privacy activists argue the technology is potentially authoritarian, because it captures images without consent. The technology can also carry racial bias: if a system is trained primarily on images of white male faces, with fewer women and people of colour, it will be less accurate for the latter groups.
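The accuracy gap that follows from skewed training data can be made concrete with a toy evaluation. The group labels and numbers below are invented purely for illustration:

```python
from collections import defaultdict

# Hypothetical match outcomes from a face-recognition system, labelled by
# demographic group. Group A dominates the (imagined) training set, so it
# is matched correctly far more often than group B.
results = (
    [("group_a", True)] * 95 + [("group_a", False)] * 5
    + [("group_b", True)] * 70 + [("group_b", False)] * 30
)

def accuracy_by_group(results):
    """Return per-group accuracy: correct matches / total attempts."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, is_correct in results:
        total[group] += 1
        correct[group] += is_correct
    return {g: correct[g] / total[g] for g in total}

print(accuracy_by_group(results))
# {'group_a': 0.95, 'group_b': 0.7}
```

Disaggregated reporting like this is exactly what auditors ask for: a single headline accuracy figure (here 82.5%) would hide the disparity entirely.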

Despite its flaws, facial recognition has potential for good, said Ghebreab, who doesn't support a moratorium. "We have to be able to show how people can benefit from it; now the narrative is how people suffer from it," he said.

Voss doesn't back a ban for particular AI applications either. "We should have some points in the law saying what you can and can't do with AI, otherwise you'll face a ban. We should not think about an [outright] ban," he said.

Metzinger favours limiting facial recognition in some contexts, but he admits "it's very difficult to tease this apart. You would still want to be able, for counter-terrorism measures, to use the technology in public spaces," he said.

The Chinese government has controversially used the tool to identify pro-democracy protesters in Hong Kong, and for racial profiling and control of Uighur Muslims. Face scans in China are used to pick out and fine jaywalkers, and citizens in Shanghai will soon have to verify their identity in pharmacies by scanning their faces.

"It comes back to whom you trust with your data," Metzinger said. "I would basically still trust the German government. I would never want to be in the hands of the Hungarian government, though."

Defence is the other big, controversial area for AI applications. The EU's white paper mentions military AI just once, in a footnote.

Some would prefer if the EU banned the development of lethal autonomous weapons altogether, though few expect this to happen.

"There is a lot we don't know. A lot is classified. But you can deduce from investment levels that there's much less happening in Europe [on military AI] than in the US and China," said Amy Ertan, cyber security researcher at the University of London.

Europe is not a player in military AI but it is taking steps to change this. The European Defence Agency is running 30 projects that include AI aspects, with more in planning, said the agency's spokeswoman Elisabeth Schoeffmann.

The case for regulation

Author and programmer Brian Christian says regulating AI is a "cat and mouse game".

"It reminds me of financial regulation, which is very difficult to write because the techniques change so quickly. By the time you pass the law, the field has moved on," he said.

Christian's new book looks at the urgent "alignment problem", where AI systems don't do what we want or what we expect. "A string of jaw-dropping breakthroughs have alternated with equally jaw-dropping disasters," he said.

Recent examples include Amazon's AI-powered recruiting system, which filtered out applications that mentioned women's colleges and showed preference for CVs containing linguistic habits more common among men, like use of the words "executed" and "captured", said Christian. After several repairs failed, engineers quietly scuttled it entirely in 2018.

Then there was the recurring issue with Google Photos labelling pictures of black people as gorillas; after a series of fixes didn't work, engineers resorted to manually deleting the "gorilla" label altogether.

Stories like these illustrate why discussions on ethical responsibility have only grown more urgent, Christian said.

"If you went to one of the major AI conferences, ethics and safety are now the most rapidly growing and dynamic subsets of the field. That's either reassuring or worrying, depending on how you view these things."

Europe's data privacy rules have helped ethics and safety move in from the fringes of AI, said Christian. "One of the big questions for AI is transparency and explainability," he said. The GDPR introduces a right to know why an AI system denied you a mortgage or a credit card, for example.

The problem, however, is that AI decisions are not always intelligible to those who create these systems, let alone to ordinary people.
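For a sense of what such an explanation could look like, a linear scoring model is the easy case: each feature's contribution to the decision can be read off directly. The feature names, weights and threshold below are invented for the sketch; real credit models, and especially the deep networks the article refers to, are far harder to unpack:

```python
# Illustrative linear credit-scoring model: score = sum(weight * value) + bias.
# All names and numbers here are made up for the example.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "late_payments": -0.8}
BIAS = 0.2
THRESHOLD = 0.0  # approve if score >= THRESHOLD

def explain_decision(applicant):
    """Return the decision plus per-feature contributions, most negative first."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values()) + BIAS
    decision = "approved" if score >= THRESHOLD else "denied"
    # Sort features by how strongly they pushed the score down.
    drivers = sorted(contributions.items(), key=lambda kv: kv[1])
    return decision, drivers

decision, drivers = explain_decision(
    {"income": 0.5, "debt_ratio": 0.9, "late_payments": 0.7}
)
print(decision)       # denied
print(drivers[0][0])  # the feature that hurt the score most: late_payments
```

The point of the passage stands: this breakdown is only trivially available because the model is linear. For a deep network there is no such direct readout, which is why the GDPR's transparency demand amounted to a research problem.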

"I heard about lawyers at AI companies who were complaining about the GDPR and how it demanded something that wasn't scientifically possible. Lawyers pleaded with regulators. The EU gave them two years' notice on a major research problem," Christian said.

"We're familiar with the idea that regulation can constrain, but here is a case where a lot of our interest in transparency and explanation was driven by a legal requirement no one knew how to meet."

See the original post here:
EU struggles to go from talk to action on artificial intelligence - Science Business

Artificial Intelligence (AI) in Education Market Advanced Technology and New Innovations by 2024 – Jewish Life News

The Global Artificial Intelligence (AI) in Education Market Size study report, which accounts for the effect of COVID-19, is an in-depth evaluation of present industrial conditions and of the overall size of the Artificial Intelligence (AI) in Education industry, estimated from 2020 to 2025. The report also provides a detailed overview of leading industry initiatives, potential market share, and business-oriented planning. The study discusses favourable factors in current industrial conditions, the industry's levels of growth and demand, and the distinct tactics and future prospects of the industry's manufacturers.

Major Players Covered in this Report are:IBM, Third Space Learning, Metacog, Jenzabar, Jellynote, Cognizant, Querium Corporation, Fishtree, Knewton, Google, DreamBox Learning, Blackboard, Nuance Communications, Cognii, Century-Tech, Osmo, Pearson, Carnegie Learning, ALEKS, Elemental Path, Quantum Adaptive Learning, Liulishuo, AWS, Microsoft, Bridge-U

Get PDF Sample Copy of the Report to understand the structure of the complete report: (Including Full TOC, List of Tables & Figures, Chart) @ https://www.marketgrowthinsight.com/sample/112161

The Artificial Intelligence (AI) in Education market study report analyses the industry's growth patterns through past research and forecasts potential prospects based on comprehensive analysis. The report provides extensive market share, growth, trend, and forecast figures for the 2020-2025 period. The study offers key information on the status of the Artificial Intelligence (AI) in Education market, making it a valuable source of advice and guidance for companies and individuals involved in the industry.

The research report concentrates on leading global players in the Artificial Intelligence (AI) in Education market, with details such as company profiles, product pictures and specifications, R&D developments, distribution and production capability, distribution networks, quality, cost, revenue and contact information. The report also discusses legal strategies and product development among the industry's leading and emerging players.

Market Segmentation:

The report is divided into major categories comprising product, distribution channel, application, and end users. Every segment is further divided into several sub-segments that are deeply analyzed by experts to offer valuable information to buyers and market players. Every segment is studied thoroughly to give buyers and stakeholders a better picture to benefit from. Information such as the highest prevailing product and the products most demanded by the application segment and end users is set out in the Artificial Intelligence (AI) in Education report.

To get Incredible Discounts on this Premium Report @ https://www.marketgrowthinsight.com/discount/112161

Regional Insights:

The Artificial Intelligence (AI) in Education market is segmented into North America, South America, Europe, Asia Pacific, and Middle East and Africa. Researchers have thoroughly studied the historical market in each region and, through extensive research, offer details on current and forecast demand there. The report also highlights the products most demanded by end users and end customers, to give producers a better understanding of product demand. This will help producers and marketing executives plan their production quantities and design marketing strategies that reach more buyers, so businesses can broaden their product portfolios and expand their global presence. The report further identifies unexplored areas in these regions, helping producers plan promotional strategies and create demand for their new and updated products, grow their customer base, and emerge as leaders in the near future.

In this study, the years considered to estimate the market size of Artificial Intelligence (AI) in Education are as follows:

Research Objectives

If You Have Any Query, Ask Our Expert @ https://www.marketgrowthinsight.com/inquiry/112161

About Us-

Market Growth Insight, a 100% subsidiary of Exltech Solutions India, is a one-stop solution for market research reports in various business categories. We serve 100+ clients with 30,000+ diverse industry reports, developed to simplify strategic decision-making on the basis of comprehensive, in-depth information established through wide-ranging analysis and the latest industry trends.

Contact Us:

Direct Line: +1 3477675477 (US); Email: [emailprotected]; Web: https://www.marketgrowthinsight.com

See the article here:
Artificial Intelligence (AI) in Education Market Advanced Technology and New Innovations by 2024 - Jewish Life News

The path to real-world artificial intelligence – TechRepublic

Experts from MIT and IBM held a webinar this week to discuss where AI technologies are today and advances that will help make their usage more practical and widespread.

Image: Sompong Rattanakunchon / Getty Images

Artificial intelligence has made significant strides in recent years, but modern AI techniques remain limited, a panel of MIT professors and the director of the MIT-IBM Watson AI Lab said during a webinar this week.

Neural networks can perform specific, well-defined tasks but they struggle in real-world situations that go beyond pattern recognition and present obstacles like limited data, reliance on self-training, and answering questions like "why" and "how" versus "what," the panel said.

The future of AI depends on enabling AI systems to do something once considered impossible: Learn by demonstrating flexibility, some semblance of reasoning, and/or by transferring knowledge from one set of tasks to another, the group said.

SEE: Robotic process automation: A cheat sheet (free PDF) (TechRepublic)

The panel discussion was moderated by David Schubmehl, a research director at IDC, and it began with a question he posed asking about the current limitations of AI and machine learning.

"The striking success right now, in particular in machine learning, is in problems that require interpretation of signals: images, speech and language," said panelist Leslie Kaelbling, a computer science and engineering professor at MIT.

For years, people tried to solve problems like detecting faces in images by directly engineering solutions, and that didn't work, she said.

"We have become good at engineering algorithms that take data and use that to derive a solution," she said. "That's been an amazing success." But it takes a lot of data and a lot of computation, so for some problems we don't yet have formulations that would let us learn from the amount of data available, Kaelbling said.

SEE: 9 super-smart problem solvers take on bias in AI, microplastics, and language lessons for chatbots (TechRepublic)

One of her areas of focus is in robotics, and it's harder to get training examples there because robots are expensive and parts break, "so we really have to be able to learn from smaller amounts of data," Kaelbling said.

Neural networks and deep learning are the "latest and greatest way to frame those sorts of problems and the successes are many," added Josh Tenenbaum, a professor of cognitive science and computation at MIT.

But when talking about general intelligence and how to get machines to understand the world there is still a huge gap, he said.

"But on the research side really exciting things are starting to happen to try to capture some steps to more general forms of intelligence [in] machines," he said. In his work, "we're seeing ways in which we can draw insights from how humans understand the world and taking small steps to put them in machines."

Although people think of AI as being synonymous with automation, it is incredibly labor intensive in a way that doesn't work for most of the problems we want to solve, noted David Cox, IBM director of the MIT-IBM Watson AI Lab.

Echoing Kaelbling, Cox said that leveraging tools like deep learning today requires huge amounts of "carefully curated, bias-balanced data" to use them well. Additionally, for most problems we are trying to solve, we don't have those "giant rivers of data" to build a dam in front of and extract some value from, Cox said.

Today, companies are more focused on solving some type of one-off problem, and even when they have big data, it's rarely curated, he said. "So most of the problems we love to solve with AI, we don't have the right tools for that."

That's because we have problems with bias and interpretability with humans using these tools and they have to understand why they are making these decisions, Cox said. "They're all barriers."

However, he said, there's enormous opportunity looking at all these different fields to chart a path forward.

That includes using deep learning, which is good for pattern recognition, to help solve difficult search problems, Tenenbaum said.

To develop intelligent agents, scientists need to use all the available tools, said Kaelbling. For example, neural networks are needed for perception, along with higher-level, more abstract types of reasoning to decide what to make for dinner or how to disperse supplies.

"The critical thing technologically is to realize the sweet spot for each piece and figure out what it is good at and not good at. Scientists need to understand the role each piece plays," she said.

The MIT and IBM AI experts also discussed a new foundational method known as neurosymbolic AI, which is the ability to combine statistical, data-driven learning of neural networks with the powerful knowledge representation and reasoning of symbolic approaches.

Moderator Schubmehl commented that having a combination of neurosymbolic AI and deep learning "might really be the holy grail" for advancing real-world AI.

Kaelbling agreed, adding that it may be not just those two techniques but include others as well.

One of the themes that emerged from the webinar is that there is a very helpful confluence of all types of AI that are now being used, said Cox. The next evolution of very practical AI is going to be understanding the science of finding things and building a system we can reason with and grow and learn from, and determine what is going to happen. "That will be when AI hits its stride," he said.


More:
The path to real-world artificial intelligence - TechRepublic

Hardbacon secures funding to develop artificial intelligence capable of predicting changes in the stock market – PRNewswire

MONTREAL, July 14, 2020 /PRNewswire/ --Hardbacon is pleased to announce that it will receive consulting services and has obtained conditional funding of $50,000 for an artificial intelligence research and development project to predict stock prices. The grant is part of the National Research Council of Canada's Industrial Research Assistance Program (NRC IRAP).

Hardbacon, a mobile budgeting and investment tracking app, is currently developing a stock rating system, which will leverage artificial intelligence to help investors pick stocks.

Ratings generated by artificial intelligence will appear in Hardbacon's mobile application, and will also be made available under license to financial institutions wishing to use these ratings or to offer them to their customers.

"Many Hardbacon users asked us to tell them what to invest in," explained Julien Brault, CEO of Hardbacon. "Until now we had refused, until one of our employees presented us with a promising academic article that he had written about the possibility of using artificial intelligence to generate predictive ratings. We are grateful that the NRC IRAP has agreed to support this project."

For more information, contact:

Julien Brault, CEO of Hardbacon; 514-250-3255; [emailprotected]

To learn more about Hardbacon, visit our website: https://hardbacon.ca/

Disclaimer:The news site hosting this press release is not associated with Hardbacon or Bacon Financial Technologies Inc. It is merely publishing a press release announcement submitted by a company, without any stated or implied endorsement of the information, product or service. Please check with a Registered Investment Adviser or Certified Financial Planner before making any investment.

About Hardbacon

Hardbacon strives to help Canadians make better financial decisions. The company, which has obtained $1.1 million in funding, markets a mobile application that enables subscribers to create a plan and a budget, and to analyze their investments. The mobile app, available in the App Store and Google Play, can link to bank and investment accounts at more than 100 Canadian financial institutions.

Press Contact:

Julien Brault; 514-250-3255; https://hardbacon.ca/

SOURCE Hardbacon


Read more here:
Hardbacon secures funding to develop artificial intelligence capable of predicting changes in the stock market - PRNewswire

Artificial Intelligence: 3 Benefits for the Insurance Industry – www.contact-centres.com

As the insurance sector competes to win market share, Henry Jinman at EBI.AI discusses three ways companies can benefit from the power of Artificial Intelligence

The UK general insurance market continues to be fiercely competitive. While the battle for repeat business keeps downward pressure on pricing, a constantly changing regulatory agenda increases costs. Whatever the industry, successful companies know that building a business based on price alone is not sustainable. Customer service is what matters most. It's a sentiment that is reflected in the latest findings of multinational professional services company Ernst & Young (EY), which claims that non-life insurance companies in particular should invest to create innovative and satisfying end-to-end customer experiences, with optimised technology that helps them become data-driven and insight-enabled in everything they do.

It's time to consider the benefits of Artificial Intelligence (AI). Through its ability to capture, analyse and learn from massive amounts of data, AI should be at the centre of every enterprise serious about creating amazing customer experiences. AI tools should also support everyone – employees, managers and customers – in asking for and receiving the information they need, whenever and wherever they need it, quickly and in engaging, natural language.

In EBI.AI's experience, companies that introduce AI solutions such as AI assistants are rewarded with multiple benefits. By reducing the number of repetitive calls, contact centres and customer service departments free their frontline staff to handle more complex and rewarding tasks. Meanwhile, scaling today's virtual AI solutions is easy, enabling managers to adapt to unexpected events and emergencies as they happen, such as the Covid-19 pandemic. Data-driven AI solutions also make formidable weapons against the common problems facing insurance managers, such as highlighting fraudulent claims and mitigating claims leakage.

Here are 3 ways AI can help the insurance industry in key areas:

1. Front-end sales – train the latest AI tools to answer the most common questions quickly, then maximise their ability to use critical customer data to offer personalised recommendations on policies and pricing. Integrate AI with sophisticated telematics (in-car sensors) or health analytics platforms to identify your most careful drivers or health-conscious clients and reward them with lower premiums so they keep coming back.

2. Product and marketing – deliver customers an exceptional experience with AI tools that are welcoming, efficient and secure. Use AI's image, video and natural language capabilities to assess and analyse claims and issue fast, accurate pay-out decisions in seconds. Then build confidence and loyalty with AI's ability to flag potential threats from scammers and hackers, keeping customers' sensitive details safe. Once these important foundations are in place, make AI an intrinsic part of your marketing toolkit: AI can propose personalised offerings based on customer needs and then swiftly identify opportunities for intelligent lead generation.

3. Customer management – AI tools guarantee round-the-clock customer service: they never sleep, go off sick or need a holiday! Virtual Customer Assistants (VCAs), for example, are a bonus to customer service departments through their ability to cross-sell, upsell and prevent agent churn. AI tools can match customers with the most qualified available agents to handle their queries or, when applied over large data sets, provide analysis of general customer sentiment over time. Maximise machine learning by adding feedback functionality to insurance bots. That way, you'll better understand client needs, improve services and deliver a highly personalised experience.
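The "sentiment over time" analysis mentioned in point 3 reduces, at its simplest, to aggregating per-message scores by period. The scores below are assumed to come from an upstream sentiment model and are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (month, sentiment_score) pairs, where each score in [-1, 1]
# was produced by a separate sentiment model for one customer message.
scored_messages = [
    ("2020-05", 0.2), ("2020-05", -0.4), ("2020-05", 0.6),
    ("2020-06", 0.5), ("2020-06", 0.7),
]

def sentiment_trend(scored):
    """Average per-message sentiment scores by month, oldest first."""
    by_month = defaultdict(list)
    for month, score in scored:
        by_month[month].append(score)
    return {m: round(mean(scores), 3) for m, scores in sorted(by_month.items())}

print(sentiment_trend(scored_messages))
# {'2020-05': 0.133, '2020-06': 0.6}
```

In production the hard part is the upstream scoring model, not this aggregation; the sketch only shows how the per-message outputs roll up into the trend a manager would look at.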

Don't rush in!

To make AI a success, follow a few golden rules. First of all, involve the right people in the company – including budget holders, the IT department and everyday users – from the very beginning. Set and manage expectations by educating your organisation about what AI can and cannot do. Be realistic when sharing timeframes for results: machine learning takes time to perfect! Also remember that AI tools thrive on good data, so build a bank of reliable data that is up to date and, above all, relevant. Finally, test AI in a real-world environment while maintaining business as usual.

Learn from real-life success stories

Follow the lead of Legal & General, General Insurance (now LV= General Insurance, part of the Allianz Group). At the beginning of this year, EBI.AI worked with the company to create SmartHelp, an AI assistant designed to enhance the company's customer service. Since that time, nearly 11% of Legal & General's customers have used SmartHelp on the web pages where it is available, usage on some pages is as high as 40%, and the virtual AI assistant regularly provides over 300 answers to thousands of the most commonly asked questions.

To find out how, download the case study.

Henry Jinman is Commercial Director at EBI.AI.

Established in 2014, EBI.AI is among the most advanced UK labs to create fully managed, Enterprise-grade AI Assistants. These assistants help clients to provide their customers with faster and better resolutions to their queries, and liberate front-line customer service agents from the dull, repetitive, and mundane.

EBI.AI selects the best AI and cloud services available from IBM, Amazon, Microsoft and others, combined with bespoke AI models to deliver its AI communication platform, called Lobster.

Combined with over 19 years of experience working with big data, analytics and systems integration, it has successfully implemented AI Assistants that now handle hundreds of thousands of conversations a year across the Transport & Travel, Property, Insurance, Public and Automotive industries.

For more information on EBI.AI visit their Website

Visit link:
Artificial Intelligence: 3 Benefits for the Insurance Industry - http://www.contact-centres.com