Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence predictions for 2020: 16 experts have their say – Verdict

2019 has seen artificial intelligence and machine learning take centre stage for many industries, with companies increasingly looking to harness the benefits of the technology for a wide range of use cases. With its advances, ethical implications and impact on humans likely to dominate conversations in the technology sector for years to come, how will AI continue to develop over the next 12 months?

We've asked experts from a range of organisations within the AI sphere to give their predictions for 2020.

In both the private and public sectors, organisations are recognising the need to develop strategies to mitigate bias in AI. With issues such as amplified prejudices in predictive crime mapping, organisations must build in checks in both AI technology itself and their people processes. One of the most effective ways to do this is to ensure data samples are robust enough to minimise subjectivity and yield trustworthy insights. Data collection cannot be too selective and should be reflective of reality, not historical biases.

In addition, teams responsible for identifying business cases and creating and deploying machine learning models should represent a rich blend of backgrounds, views, and characteristics. Organisations should also test machines for biases, train AI models to identify bias, and consider appointing an HR or ethics specialist to collaborate with data scientists, thereby ensuring cultural values are being reflected in AI projects.

Zachary Jarvinen, Head of Technology Strategy, AI and Analytics, OpenText
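Jarvinen's suggestion to "test machines for biases" can start with something as simple as comparing outcome rates across demographic groups, a check known as demographic parity. A minimal sketch follows; the data, group labels and tolerance are illustrative, not from the article:

```python
def approval_rate(outcomes, group):
    """Fraction of positive decisions the model gave one group."""
    decisions = [o["approved"] for o in outcomes if o["group"] == group]
    return sum(decisions) / len(decisions)

def parity_gap(outcomes, group_a, group_b):
    """Absolute difference in approval rates between two groups."""
    return abs(approval_rate(outcomes, group_a) - approval_rate(outcomes, group_b))

# Toy model decisions for two demographic groups.
outcomes = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

gap = parity_gap(outcomes, "A", "B")
print(f"parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
if gap > 0.1:  # illustrative tolerance, not a standard
    print("warning: outcomes differ substantially between groups")
```

A real audit would use many such metrics across larger samples, but the principle is the same: make the comparison explicit and automatic.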

A big trend for social media this year has been the rise of deepfakes, and we're only likely to see this increase in the year ahead. These are manipulated videos that are made to look real, but are actually inaccurate representations powered by sophisticated AI. This technology already has implications for political content on platforms such as Facebook. I believe we will start to see threat actors use deepfakes as a tactic for corporate cyberattacks, in a similar way to how phishing attacks operate.

Cyber crooks will see this as a money-making opportunity, as they can cause serious harm to unsuspecting employees. This means it will be vital for organisations to keep validation technology up to date. The same tools that people use to create deepfakes will be the ones used to detect them, so we may see an arms race over who can use the technology first.

Jesper Frederiksen, VP and GM EMEA, Okta

When considering high-volume, fast-turnaround hiring efforts, it's often impossible to keep every candidate in the loop. Enter highly sophisticated artificial intelligence tools, such as chatbots. More companies are now using AI programs to inform candidates quickly and efficiently on where they stand in the process, help them navigate career sites, schedule interviews and give advice. This is significantly transforming the candidate experience, enhancing engagement and elevating overall satisfaction.

Chatbots are also increasingly becoming a tool for employees who wish to apply for new roles within their organisation. Instead of trying to work up the nerve to ask HR or their boss about new opportunities, employees can interact with a chatbot that can offer details about open jobs, give skills assessments and offer career guidance.

What's more, some companies are offering 'day in the life' virtual simulations that allow candidates to see what a role would entail, which can either enhance interest or help candidates self-select out of the process. It also helps employers understand whether the candidate would be a good fit, based on their behavior during the simulation. In Korn Ferry's global survey of HR professionals, 78 percent say that in the coming year it will be vital to provide candidates with these 'day in the life'-type experiences.

Byrne Mulrooney, Chief Executive Officer, Korn Ferry RPO, Professional Search and Korn Ferry Digital


Despite fears that it will replace human employees, in 2020 AI and machine learning will increasingly be used to aid and augment them. For instance, customer service workers need to be certain they are giving customers the right advice. AI can analyse complex customer queries with high numbers of variables, then present solutions to the employee, speeding up the process and increasing employee confidence.

Lufthansa, for one, is already using this method, and, with a faster, more accurate and ultimately more satisfying customer experience acting as a significant differentiator, more will follow. Over the next three years this trend will keep accelerating, as businesses from banks to manufacturers use AI to support their employees' decisions and outperform the competition.

Felix Gerdes, Director of Digital Innovation Services at Insight UK

In 2020 we're going to see increased public demand for the demystification and democratisation of AI. There is a growing level of interest, and people are quite rightly not happy to sit back and accept that a robot or programme makes the decisions it does 'because it does', or that it's simply too complicated. They want to understand how the various forms of AI work in principle, and they want to have more of a role in determining how AI should engage in their lives, so that they don't feel powerless in the face of this new technology.

Companies need to be ready for this shift, and to welcome it. Increasing public understanding of AI, and actively seeking to hear people's hopes and concerns, is the only way to ensure that AI is both seen as a force for good for everyone in our society and, as a result, able to realise the opportunity ahead. Historically the tech industry as a whole has not been good at this; we need to change.

Teg Dosanjh, Director of Connected Living for Samsung UK and Ireland

As the next decade of the transforming transportation industry unfolds, investment in autonomous vehicle development will continue to grow dramatically, especially in the datacenter and AI infrastructure for training and validation. We'll see a significant ramp-up in autonomous driving pilot programs as part of this continued investment. Some of these will include removal of the on-board safety driver. Autonomous driving technology will be applied to a wider array of industries, such as trucking and delivery, moving goods instead of people.

Production vehicles will start to incorporate the hardware necessary for self-driving, such as centralized onboard AI compute and advanced sensor suites. These new features will help power Level 2+ AI assisted driving and lay the foundation for higher levels of autonomy. Regulatory agencies will also begin to leverage new technologies to evaluate autonomous driving capability, in particular, hardware-in-the-loop simulation for accurate and scalable validation. The progress in AV development underway now and for the next few years will be instrumental to the coming era of safer, more efficient transportation.

Danny Shapiro, Senior Director of Automotive, NVIDIA

As AI tools become easier to use, AI use cases proliferate, and AI projects are deployed, cross-functional teams are being pulled into AI projects. Data literacy will be required from employees outside traditional data teams; in fact, Gartner expects that 80% of organisations will start to roll out internal data literacy initiatives to upskill their workforce by 2020.

But training is an ongoing endeavor, and to succeed in implementing AI and ML, companies need to take a more holistic approach toward retraining their entire workforces. This may be the most difficult, but most rewarding, process for many organisations to undertake. The opportunity for teams to plug into a broader community on a regular basis to see a wide cross-section of successful AI implementations and solutions is also critical.

Retraining also means rethinking diversity. Given how important diversity is to detecting fairness and bias issues, it becomes even more critical for organisations looking to successfully implement truly useful AI models and related technologies. As we expect most AI projects to augment human tasks, incorporating the human element in a broad, inclusive manner becomes a key factor for widespread acceptance and success.

Roger Magoulas, VP of Radar at O'Reilly

The hottest trend in the industry right now is Natural Language Processing (NLP). Over the past year, a new method called BERT (Bidirectional Encoder Representations from Transformers) has emerged for designing neural networks that work with text. Now we suddenly have models that can understand the semantic meaning of what's in text, going beyond the basics. This creates a lot more opportunity for deep learning to be used more widely.

Almost every organisation has a need to read and understand text and the spoken word, whether it is dealing with customer enquiries in the contact centre, assessing social media sentiment in the marketing department, or even deciphering legal contracts or invoices. Having a model that can learn from examples and build out its vocabulary to include local colloquialisms and turns of phrase is extremely useful to a much wider range of organisations than image processing alone.

Björn Brinne, Chief AI Officer at Peltarion
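Under the hood, models like BERT represent text as vectors and compare meaning geometrically. The sketch below illustrates the idea with hand-made three-dimensional "embeddings"; a real model learns hundreds of dimensions from data, so the numbers here are purely for demonstration:

```python
import math

# Toy embedding vectors; a trained model would produce these from text.
embeddings = {
    "refund":    [0.9, 0.1, 0.0],
    "repayment": [0.8, 0.2, 0.1],
    "invoice":   [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction (same meaning)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Near-synonyms end up close together; unrelated words further apart.
print(cosine(embeddings["refund"], embeddings["repayment"]))  # high
print(cosine(embeddings["refund"], embeddings["invoice"]))    # lower
```

This geometric view is what lets a model generalise to colloquialisms and turns of phrase: new wording that lands near a known vector inherits its meaning.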

Voice assistants have established themselves as commonplace in our personal lives. But 2020 will see an increasing number of businesses turning to them to improve and personalise the customer experience.

This is because advances in AI-driven technology and natural language processing are enabling voice interactions to be translated into data. This data can be structured so that conversations can be analysed for insights.

Next year, organisations will likely begin to embrace conversational analytics to improve their chatbots and voice applications. This will ultimately result in better data-driven decisions and improved business performance.

Alberto Pan, Chief Technical Officer, Denodo
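A deliberately simplistic sketch of the conversational analytics Pan describes: turning transcribed calls into structured intent counts that can be aggregated. Real systems use NLP models rather than keyword lists, and the intents and keywords below are invented for illustration:

```python
# Hypothetical intent categories and trigger words.
INTENT_KEYWORDS = {
    "billing":      {"invoice", "charge", "refund"},
    "cancellation": {"cancel", "terminate"},
    "support":      {"broken", "error", "help"},
}

def classify(transcript):
    """Assign a transcript to the intent with the most keyword hits."""
    words = set(transcript.lower().split())
    scores = {intent: len(words & kw) for intent, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"

calls = [
    "I would like a refund for this charge",
    "please help my router is broken",
    "I want to cancel my subscription",
]

# Aggregate unstructured conversations into structured, analysable counts.
summary = {}
for call in calls:
    summary[classify(call)] = summary.get(classify(call), 0) + 1
print(summary)
```

The value is in the aggregation step: once every conversation becomes a record, the usual data-driven decision-making tools apply.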

Organisations are already drowning in data, but the flood gates are about to open even wider. IDC predicts that the world's data will grow to 175 zettabytes over the next five years. With this explosive growth comes increased complexity, making data harder than ever to manage. For many organisations already struggling, the pressure is on.

Yet the market will adjust. Over the next few years, organisations will exploit machine learning and greater automation to tackle the data deluge.

Machine learning applications are constantly improving when it comes to making predictions and taking actions based on historical trends and patterns. With its number-crunching capabilities, machine learning is the perfect solution for data management. We'll soon see it accurately predicting outages and, with time, it will be able to automate the resolution of capacity challenges. It could do this, for example, by automatically purchasing cloud storage or re-allocating volumes when it detects a workload nearing capacity.

At the same time, with recent advances in technology we should also expect to see data becoming more intelligent, self-managing and self-protecting. We'll see a new kind of automation where data is hardwired with a type of digital DNA. This data DNA will not only identify the data, but will also program it with instructions and policies.

Adding intelligence to data will allow it to understand where it can reside, who can access it, what actions are compliant and even when to delete itself. These processes can then be carried out independently, with data acting like living cells in a human body, carrying out their hardcoded instructions for the good of the business.

However, with IT increasingly able to manage itself, and data management complexities resolved, what is left for the data leaders of the business? They'll be freed from the low-value, repetitive tasks of data management and will have more time for decision-making and innovation. In this respect AI will become an invaluable tool, flagging issues experts may not have considered and giving them options, unmatched visibility and insight into their operations.

Jasmit Sagoo, Senior Director, Head of Technology UK&I at Veritas Technologies
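The capacity prediction Sagoo describes can be sketched with nothing more than a least-squares trend line over recent usage. The usage figures, capacity and units below are made up for illustration:

```python
def fit_trend(samples):
    """Least-squares slope and intercept over (day, used_gb) samples."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in samples)
             / sum((x - mean_x) ** 2 for x, _ in samples))
    return slope, mean_y - slope * mean_x

usage = [(0, 400), (1, 420), (2, 445), (3, 460), (4, 485)]  # GB used, by day
capacity_gb = 600

slope, intercept = fit_trend(usage)
days_until_full = (capacity_gb - intercept) / slope
print(f"growing ~{slope:.0f} GB/day; volume full in ~{days_until_full:.0f} days")
# A data-management system could act on this automatically, e.g.
# provision extra cloud storage or re-allocate volumes in advance.
```

Production systems would use richer models (seasonality, anomaly detection), but the shape of the automation is the same: predict the exhaustion point, then act before it arrives.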

2020 will be the year research and investment in ethics and bias in AI significantly increases. Today, business insights in enterprises are generated by AI and machine learning algorithms. However, because these algorithms are built using models and databases, bias can creep in from those who train the AI. This results in gender or racial bias, be it in mortgage applications or forecasting health problems. With increased awareness of bias in data, business leaders will demand to know how AI reaches its recommendations, to avoid making biased decisions as a business in the future.

Ashvin Kamaraju, CTO for Cloud Protection and Licensing activity at Thales

2020 will be the year of health data. Everyone agrees that smarter use of health data is essential to providing better patient care, meaning treatment that is more targeted or more cost-effective. However, navigating the thicket of consents and rules, as well as the ethical considerations, has delayed advances in the use of patient data.

There are now several different directions of travel emerging, all of which present exciting opportunities for patients, for health providers including the NHS, for digital health companies and for pharmaceutical companies.

Marcus Vass, Partner, Osborne Clarke

Artificial intelligence isn't just something debated by techies or sci-fi writers anymore; it's increasingly creeping into our collective cultural consciousness. But there's a lot of emphasis on the negative. While those big-picture questions around ethics cannot and should not be ignored, in the near term we won't be dealing with the super-AI you see in the movies.

I'm excited by the possibilities we'll see AI open up in the next couple of years, and the societal challenges it will inevitably help us to overcome. And it's happening already. One of the main applications for AI right now is driving operational efficiencies. That may not sound very exciting, but it's actually where the technology can have the biggest impact. If we can use AI to synchronise traffic lights to improve traffic flow and reduce the amount of time cars spend idling, that doesn't just make inner-city travel less of a headache for drivers; it can have a tangible impact on emissions. That's just one example. In the next few years, we'll see AI applied in new, creative ways to solve the biggest problems we're facing as a species right now, from climate change to mass urbanisation.

Dr Anya Rumyantseva, Data Scientist at Hitachi Vantara

Businesses are investing more in AI each year, as they look to use the technology to personalize customer experiences, reduce human bias and automate tasks. Yet for most organizations AI hasn't yet reached its full potential, as data is locked up in siloed systems and applications.

In 2020, we'll see organizations unlock their data using APIs, enabling them to uncover greater insights and deliver more business value. If AI is the brain, APIs and integration are the nervous system that help AI really create value in a complex, real-time context.

Ian Fairclough, VP of Services, MuleSoft

2020 is going to be a tipping point, when algorithmic decision-making AI will become more mainstream. This brings both opportunities and challenges, particularly around the explainability of AI. We currently have many black-box models where we don't know how they come to their decisions. Bad guys can leverage this and manipulate those decisions.

Using machine identities, they will be able to infiltrate the data streams that feed into AI models and manipulate them. If companies are unable to explain and inspect the decision-making behind their AI, this manipulation could go unquestioned, changing the outcomes. This could have wide-reaching impacts on everything from predictive policing to financial forecasting and market decision-making.

Kevin Bocek, Vice President, Security Strategy & Threat Intelligence at Venafi

Until now, robotic process automation (RPA) and artificial intelligence (AI) have been perceived as two separate things: RPA being task-oriented, without intelligence built in. However, as we move into 2020, AI and machine learning (ML) will become an intrinsic part of RPA, infused throughout analytics, process mining and discovery. AI will offer various functions like natural language processing (NLP) and language skills, and RPA platforms will need to be ready to accept those AI skill sets. More broadly, there will be greater adoption of RPA across industries to increase productivity and lower operating costs. Today we have over 1.7 million bots in operation with customers around the world, and this number is growing rapidly. Consequently, training in all business functions will need to evolve, so that employees know how to use automation processes and understand how to leverage RPA to focus on the more creative aspects of their jobs.

RPA is set to see adoption in all industries very quickly, across all job roles, from developers and business analysts to programme and project managers, and across all verticals, including IT, BPO, HR, education, insurance and banking. To facilitate continuous learning, companies must give employees the time and resources needed to upskill as job roles evolve, through methods such as micro-learning and just-in-time training. In the UK, companies report that highly skilled AI professionals are currently hard to find and expensive to hire, driving up the cost of adoption and slowing technological advancement. Organisations that make a conscious decision to use automation in a way that enhances employees' skills and complements their working style will significantly increase the performance benefit they see from augmentation.

James Dening, Vice President for Europe at Automation Anywhere



Tommie Experts: Ethically Educating on Artificial Intelligence at St. Thomas – University of St. Thomas Newsroom

Tommie Experts taps into the knowledge of St. Thomas faculty and staff to help us better understand topical events, trends and the world in general.

Last month, School of Engineering Dean Don Weinkauf appointed Manjeet Rege, PhD, as the director for the Center for Applied Artificial Intelligence.

Rege is a faculty member, author, mentor, AI expert, thought leader and a frequent public speaker on big data, machine learning and AI technologies. The Newsroom caught up with him to ask about the center's launch in response to a growing need to educate ethically around AI.

We're partnering with industry in a number of ways. One way is in our data science curriculum. There are electives; some students take a regular course, while others take a data science capstone project. It's optional. For students who opt for that, through partnership with industry, companies in the Twin Cities interested in embarking on an AI journey can bring several business use cases that they want to try AI out on. In an enterprise, you typically have to seek funding and convince a lot of people; in this case, we'll find a student, or a team, who will work on that industry-sponsored project. It's a win-win for all. The project will be supervised by faculty, the company gets access to emerging AI talent and gets to try out its business use case, and the students end up getting an opportunity to work on a real-world project.

Secondly, a number of companies are looking to hire talent in machine learning and AI. This is a good way for companies to access good talent. We can build relationships by sending students for internships, and even the students who work on these capstone projects become important in terms of hiring.

There are also a number of professional development offerings we'll be coming out with. We offer a mini master's program in big data and AI. Local companies can come and attend an executive seminar for a week on different aspects of AI. We'll be offering two- or three-day hands-on AI workshops for anyone within a company who would like to become an AI practitioner. If they are interested in getting in-depth knowledge, they can go through our curriculum.

We also have a speaker series in partnership with SAS.

In May we'll be hosting a data science day, with a keynote speaker and a panel of judges to review projects the data science students are working on (six of which are part of the SAS Global Student Symposium). They'll get to showcase the work they've done. That panel of judges will be drawn from local companies.

Everybody is now becoming aware that AI is ubiquitous; it is around us and here. The ship has already left the dock, so to speak. The best way to succeed at the enterprise level is to embrace this and make it a business enabler. It's important for enterprises to transform themselves into AI-first companies. Think about Google. It first defined itself as a search company, then a mobile company. Now, it's an AI-first company. That is what keeps you ahead, always.

Being aware of the problems that may arise is so important. For us to address AI biases, we have to understand how AI works. Through these multiple offerings we're hoping we can create knowledge about AI. Once we have that, we can address the issue of AI bias.

For example, Microsoft ran an experiment where it had an AI go out on the web, read the literature and learn a lot of analogies. When you went in and asked that AI questions like: man is to woman as father is to what? Mother. Perfect. Man is to computer programmer as woman is to what? Homemaker. That's unfortunate. The AI learned the stereotypes that exist in the literature it was trained on.

There have been hiring tools with gender bias, facial recognition tools that work better for lighter skin tones than darker ones, and bank loan programs biased against certain demographics. There is a lot of effort in the AI community to minimize these. Humans have bias, but when a computer exhibits it, you expect perfection. An AI system learning is like a child learning: when that AI system learned about different things from the web, and about different relationships between man and woman, the computer simply absorbed the stereotypes that already existed in the data. Ultimately an AI system serves a human; whenever it gives you certain output, we need to be aware, and go back and nudge it in the right direction.
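The analogy behaviour Rege describes is characteristic of word-embedding models, where "man is to X as woman is to ?" is answered by vector arithmetic followed by a nearest-neighbour lookup. The tiny two-dimensional vectors below are contrived to reproduce the effect, showing how biased training data becomes biased geometry:

```python
import math

# Contrived 2-D "embeddings"; real models learn hundreds of dimensions.
vecs = {
    "man":        [1.0, 0.0],
    "woman":      [-1.0, 0.0],
    "father":     [1.0, 1.0],
    "mother":     [-1.0, 1.0],
    "programmer": [1.0, 2.0],   # a biased corpus placed this near "man"
    "homemaker":  [-1.0, 2.0],  # ...and this near "woman"
}

def analogy(a, b, c):
    """a is to c as b is to ?  (answered by vec(c) - vec(a) + vec(b))"""
    target = [vc - va + vb for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    return min((w for w in vecs if w not in {a, b, c}),
               key=lambda w: math.dist(target, vecs[w]))

print(analogy("man", "woman", "father"))      # -> mother
print(analogy("man", "woman", "programmer"))  # -> homemaker: learned bias
```

Nothing in the arithmetic is prejudiced; the bias lives entirely in where the training data placed the vectors, which is why Rege's point about auditing and nudging models matters.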


Beethoven's unfinished tenth symphony to be completed by artificial intelligence – Classic FM

16 December 2019, 16:31 | Updated: 17 December 2019, 14:25

Beethoven's unfinished symphony is set to be completed by artificial intelligence, in the run-up to celebrations around the 250th anniversary of the composer's birth.

A computer is set to complete Beethoven's unfinished tenth symphony, in the most ambitious project of its kind.

Artificial intelligence has recently been used to complete Schubert's 'Unfinished' Symphony No. 8, as well as to attempt to match the playing of the revered 20th-century pianist Glenn Gould.

Beethoven famously wrote nine symphonies (you can read more here about the 'Curse of the Ninth'). But alongside his Symphony No. 9, which contains the 'Ode to Joy', there is evidence that he began writing a tenth.

Unfortunately, when the German composer died in 1827, he left only drafts and notes of the composition.


A team of musicologists and programmers has been training the artificial intelligence by playing it snippets of Beethoven's unfinished Symphony No. 10, as well as sections from other works, like his 'Eroica' Symphony. The AI is then left to improvise the rest.

Matthias Roeder, project leader and director of the Herbert von Karajan Institute, told Frankfurter Allgemeine Sonntagszeitung: "No machine has been able to do this for so long. This is unique."

"The quality of genius cannot be fully replicated, still less if you're dealing with Beethoven's late period," said Christine Siegert, head of the Beethoven Archive in Bonn and one of those managing the project.

"I think the project's goal should be to integrate Beethoven's existing musical fragments into a coherent musical flow," she told the German broadcaster Deutsche Welle. "That's difficult enough, and if this project can manage that, it will be an incredible accomplishment."


It remains to be seen, and heard, whether the newly completed composition will sound anything like Beethoven's own. But Mr Roeder has said the algorithm is making positive progress.


"The algorithm is unpredictable; it surprises us every day. It is like a small child exploring the world of Beethoven."

"But it keeps going and, at some point, the system really surprises you. That happened for the first time a few weeks ago. We're pleased that it's making such big strides."

There will also, reliable sources have confirmed, be some human involvement in the project. Although the computer will write the music, a living composer will orchestrate it for playing.

The results of the experiment will be premiered by a full symphony orchestra, in a public performance in Bonn, Beethoven's birthplace in Germany, on 28 April 2020.


Artificial Intelligence (AI) and the Seasonal Associate – AiThority

The holiday season is here and there are rafts of new associates manning registers and helping stores handle swarms of shoppers. But how are these associates finding their seasonal roles? Many retailers already use Artificial Intelligence (AI) in their recruiting systems for hiring the best and brightest seasonal help. Human resource tasks such as screening and hiring are more efficient and accurate thanks to the AI envisioned a few years ago and in operation today.

Nevertheless, landing a good employee is only the first step in making this season a winning year for shopper loyalty, conversions, and same-store sales. Once hired, every new employee instantly becomes a brand ambassador and critical resource for shoppers exploring the store, possibly for the first time. That good hire must quickly become a great ambassador or the bad news will travel fast. According to Andrew Thomas, founder of Skybell Video Doorbell, it takes roughly 40 positive customer experiences to undo the damage of a single negative review.


Great ambassadors need help getting started, and AI is about to tackle the problems of spinning up new employees.

New and existing employees serve the customer best when they are energetic, motivated and supported. Training is seldom effective in bringing out these essential human characteristics because they are an in-the-moment, every-moment responsibility. These human behaviors do not fit easily in a classroom when the real challenge occurs on the retail floor. What's needed for behavioral support is continuous attitudinal awareness, gentle encouragement, motivational nudging and a supportive buddy who is always available.

Employees represent the company best when they know the products, have clarity about the brand and get timely exposure to proven sales messages: a daunting challenge for a new employee in an industry where training time is costly and the sales floor is frequently chaotic. Retailers need to know exactly which information needs reinforcing, who needs to hear it, how much of it they can digest, when the best time to deliver it is, and in which location it makes the most educational impact.


The obvious vehicle for delivering both support and education to new employees is clear and continuous communication with veteran employees. Unfortunately, our 1950s walkie-talkies and our high-tech, heads-down smart devices do not solve the problem, nor do they fit the retail floor. The former clutters the ear with mostly irrelevant chatter, while the latter destroys both situational awareness and shopper rapport.

What's needed is a conversational platform, powered by natural language processing, that connects employees with each other or with the information available in the company's IT systems on the spot, without having to rely on a screen. Intelligent mediation within the communication platform ensures each employee gets the best information, at the right time, in the right location.

Once a conversational platform replaces the old walkie-talkies, regular mobile devices become occasional-use, specialized tools. The AI platform learns the environment and transforms the employees, measurably improving associate effectiveness and the shopper experience. Today's conversational platforms can add AI that dissects conversations and directs information to specific employees, at specific times, in specific locations. This information may be anything from the name of the customer approaching to collect an online order in the store, to a register backup call (with the opportunity to respond instantly), to an accurate technical answer to a question from an expert group anywhere in the world.

The conversations enabled between employees across the store contain the full context of what they know, and offer management insight into how they share, when they inspire, and where they perform best. The conversations contain solutions.

AI is a natural progression in the evolution of conversational platforms for mobile store team members. The platforms available today connect employees, groups and IT systems using intelligent mediation while simultaneously collecting data for measuring performance. It won't be long before AI overlays these platforms with deep analytics of employee behaviors, derivation of critical messaging, and quantum leaps in shopper experiences.



Artificial intelligence: How to measure the I in AI – TechTalks


This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

Last week, Lee Se-dol, the South Korean Go champion who lost in a historic matchup against DeepMind's artificial intelligence algorithm AlphaGo in 2016, declared his retirement from professional play.

"With the debut of AI in Go games, I've realized that I'm not at the top even if I become the number one through frantic efforts," Lee told the Yonhap news agency. "Even if I become the number one, there is an entity that cannot be defeated."

Predictably, Se-dol's comments quickly made the rounds across prominent tech publications, some of them running sensational headlines with AI-dominance themes.

Since the dawn of AI, games have been one of the main benchmarks to evaluate the efficiency of algorithms. And thanks to advances in deep learning and reinforcement learning, AI researchers are creating programs that can master very complicated games and beat the most seasoned players across the world. Uninformed analysts have been picking up on these successes to suggest that AI is becoming smarter than humans.

But at the same time, contemporary AI fails miserably at some of the most basic tasks that every human can perform.

This raises the question: does mastering a game prove anything? And if not, how can you measure the intelligence of an AI system?

Take the following example. In the picture below, you're presented with three problems and their solutions. There's also a fourth task that hasn't been solved. Can you guess the solution?

You're probably going to think it's very easy. You'll also be able to solve different variations of the same problem, with multiple walls, multiple lines, and lines of different colors, just by seeing these three examples. But currently there's no AI system, including those being developed at the most prestigious research labs, that can learn to solve such a problem from so few examples.

The above example is from "The Measure of Intelligence," a paper by François Chollet, the creator of the Keras deep learning library. Chollet published the paper a few weeks before Lee Se-dol declared his retirement. In it, he provides many important guidelines on understanding and measuring intelligence.

Ironically, Chollet's paper has not received a fraction of the attention it deserves. Unfortunately, the media is more interested in covering exciting AI news that gets clicks. The 62-page paper contains a wealth of invaluable information and is a must-read for anyone who wants to understand the state of AI beyond the hype and sensation.

But I will do my best to summarize the key recommendations Chollet makes on measuring AI systems and comparing their performance to that of human intelligence.

"The contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games," Chollet writes, adding that solely measuring skill at any given task falls short of measuring intelligence.

In fact, the obsession with optimizing AI algorithms for specific tasks has entrenched the community in narrow AI. As a result, work in AI has drifted away from the original vision of developing thinking machines that possess intelligence comparable to that of humans.

"Although we are able to engineer systems that perform extremely well on specific tasks, they still have stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers," Chollet notes in the paper.

Chollet's observations are in line with those made by other scientists on the limitations and challenges of deep learning systems. These limitations manifest themselves in many ways.

Here's an example: OpenAI's Dota-playing neural networks needed 45,000 years' worth of gameplay to reach a professional level. The AI is also limited in the number of characters it can play, and the slightest change to the game rules results in a sudden drop in its performance.

The same can be seen in other fields, such as self-driving cars. Despite millions of hours of road experience, the AI algorithms that power autonomous vehicles can make stupid mistakes, such as crashing into lane dividers or parked firetrucks.

One of the key challenges the AI community has struggled with is defining intelligence. Scientists have debated for decades over a clear definition that would allow us to evaluate AI systems and determine what is or is not intelligent.

Chollet borrows the definition of DeepMind cofounder Shane Legg and AI scientist Marcus Hutter: "Intelligence measures an agent's ability to achieve goals in a wide range of environments."

Key here are "achieve goals" and "wide range of environments." Most current AI systems are pretty good at the first part, achieving very specific goals, but bad at doing so in a wide range of environments. For instance, an AI system that can detect and classify objects in images cannot perform a related task, such as drawing images of those objects.
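For reference, Legg and Hutter also gave their definition a formal shape in their 2007 paper "Universal Intelligence": an agent's intelligence is a complexity-weighted sum of the reward it can earn across all computable environments. This is a sketch of their notation (not something Chollet's paper reproduces):

```latex
% Universal intelligence \Upsilon of an agent \pi:
% E is the set of computable environments, K(\mu) is the Kolmogorov
% complexity of environment \mu, and V^{\pi}_{\mu} is the expected
% reward \pi earns in \mu. Simpler environments carry more weight,
% but performing well in many environments is what raises the score.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

The "wide range of environments" clause is what the sum over all of E captures, and it is exactly the part narrow AI systems fail.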

Chollet then examines the two dominant approaches to creating intelligent systems: symbolic AI and machine learning.

Early generations of AI research focused on symbolic AI, which involves creating an explicit representation of knowledge and behavior in computer programs. This approach requires human engineers to meticulously write the rules that define the behavior of an AI agent.

"It was then widely accepted within the AI community that the problem of intelligence would be solved if only we could encode human skills into formal rules and encode human knowledge into explicit databases," Chollet observes.

But rather than being intelligent by themselves, these symbolic AI systems manifest the intelligence of their creators in creating complicated programs that can solve specific tasks.

The second approach, machine learning, is based on providing the AI model with data from the problem space and letting it develop its own behavior. The most successful machine learning structure so far is the artificial neural network: a complex mathematical function that can create complex mappings between inputs and outputs.

For instance, instead of manually coding the rules for detecting cancer in x-ray slides, you feed a neural network many slides annotated with their outcomes, a process called training. The network examines the data and develops a mathematical model that represents the common traits of cancer patterns. It can then process new slides and output how likely it is that each patient has cancer.
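The training loop described above can be sketched in miniature. The code below fits a tiny logistic-regression model (effectively a one-neuron network) to toy 2D points standing in for annotated slides; the data, labels, learning rate, and iteration count are all illustrative assumptions, not anything from the article or a real medical dataset:

```python
import numpy as np

# Toy stand-in for "annotated slides": 2D feature vectors with 0/1 labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)),   # class 0 samples
               rng.normal(1, 0.5, (50, 2))])   # class 1 samples
y = np.array([0] * 50 + [1] * 50)

# "Training": gradient descent on the logistic (cross-entropy) loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))         # predicted probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)        # gradient step on weights
    b -= 0.5 * np.mean(p - y)                  # gradient step on bias

# The fitted model can now score a new, unseen input.
new_point = np.array([0.9, 1.1])
prob = float(1 / (1 + np.exp(-(new_point @ w + b))))
print(prob)
```

The point of the sketch is the workflow, not the model: behavior comes from annotated examples rather than hand-written rules, which is exactly the contrast Chollet draws with symbolic AI.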

Advances in neural networks and deep learning have enabled AI scientists to tackle many tasks that were previously very difficult or impossible with classic AI, such as natural language processing, computer vision and speech recognition.

Neural network-based models, also known as connectionist AI, are named after their biological counterparts. They are based on the idea that the mind is a blank slate (tabula rasa) that turns experience (data) into behavior. Accordingly, the general trend in deep learning has become to solve problems by creating bigger neural networks and providing them with more training data to improve their accuracy.

Chollet rejects both approaches because neither has been able to create generalized AI that is as flexible and fluid as the human mind.

"We see the world through the lens of the tools we are most familiar with. Today, it is increasingly apparent that both of these views of the nature of human intelligence, either a collection of special-purpose programs or a general-purpose tabula rasa, are likely incorrect," he writes.

Truly intelligent systems should be able to develop higher-level skills that span many tasks. For instance, an AI program that masters Quake 3 should be able to play other first-person shooter games at a decent level. Unfortunately, the best that current AI systems achieve is "local generalization," a limited maneuver room within their own narrow domain.

In his paper, Chollet argues that the "generalization" or "generalization power" of any AI system is its ability to handle situations (or tasks) that differ from those it has previously encountered.

Interestingly, this is a missing component of both symbolic and connectionist AI. The former requires engineers to explicitly define its behavioral boundary and the latter requires examples that outline its problem-solving domain.

Chollet also goes further and speaks of "developer-aware generalization": the ability of an AI system to handle situations that neither the system nor its developer has encountered before.

This is the kind of flexibility you would expect from a robo-butler that could perform various chores inside a home without explicit instructions or training data on them. An example is Steve Wozniak's famous coffee test, in which a robot would enter a random house and make coffee without knowing in advance the layout of the home or the appliances it contains.

Elsewhere in the paper, Chollet makes it clear that AI systems that cheat their way toward their goal by leveraging priors (rules) and experience (data) are not intelligent. For instance, consider Stockfish, the best rule-based chess-playing program. Stockfish, an open-source project, is the result of contributions from thousands of developers who have created and fine-tuned tens of thousands of rules. A neural network-based example is AlphaZero, the multi-purpose AI that has conquered several board games by playing them millions of times against itself.

Both systems have been optimized to perform a specific task by making use of resources that are beyond the capacity of the human mind. The brightest human can't memorize tens of thousands of chess rules. Likewise, no human can play millions of chess games in a lifetime.

"Solving any given task with beyond-human level performance by leveraging either unlimited priors or unlimited data does not bring us any closer to broad AI or general AI, whether the task is chess, football, or any e-sport," Chollet notes.

This is why it's wrong to compare Deep Blue, AlphaZero, AlphaStar, or any other game-playing AI with human intelligence.

Likewise, Aristo, the program that can pass an eighth-grade science test, does not possess the same knowledge as a middle-school student. It owes its supposed scientific abilities to the huge corpora of knowledge it was trained on, not to any understanding of the world of science.

(Note: Some AI researchers, such as computer scientist Rich Sutton, believe that the true direction for artificial intelligence research should be methods that can scale with the availability of data and compute resources.)

In the paper, Chollet presents the Abstraction and Reasoning Corpus (ARC), a dataset intended to evaluate the efficiency of AI systems and compare their performance with human intelligence. ARC is a set of problem-solving tasks tailored to both AI and humans.

One of the key ideas behind ARC is to level the playing field between humans and AI. It is designed so that humans can't take advantage of their vast background knowledge of the world to outmaneuver the AI. For instance, it doesn't involve language-related problems, which AI systems have historically struggled with.

On the other hand, it's also designed to prevent the AI (and its developers) from cheating their way to success. The system does not provide access to vast amounts of training data; as in the example shown at the beginning of this article, each concept is presented with only a handful of examples.
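To make the few-shot format concrete, here is a toy task in the spirit of ARC. The grids, the hidden "flip horizontally" rule, and the one-line solver are invented for illustration and are far simpler than real ARC tasks, but the shape is the same: a few train demonstrations, one held-out test input, and a solver that must account for every demonstration before its answer counts:

```python
# A toy few-shot task: each grid is a list of rows of color indices,
# and the (hypothetical) hidden rule is "flip the grid horizontally."
task = {
    "train": [
        {"input": [[1, 0], [0, 0]], "output": [[0, 1], [0, 0]]},
        {"input": [[0, 2], [2, 0]], "output": [[2, 0], [0, 2]]},
        {"input": [[3, 0, 0]],      "output": [[0, 0, 3]]},
    ],
    "test": {"input": [[0, 0, 4], [4, 0, 0]]},
}

def flip_horizontal(grid):
    """Candidate solver: reverse every row."""
    return [row[::-1] for row in grid]

# The solver is accepted only if it reproduces every train demonstration;
# only then is it applied to the held-out test input.
if all(flip_horizontal(p["input"]) == p["output"] for p in task["train"]):
    answer = flip_horizontal(task["test"]["input"])
    print(answer)
```

A human infers the rule from three examples; the challenge Chollet poses is building a system that does the same without thousands of training samples per concept.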

The AI developers must build a system that can handle various concepts such as object cohesion, object persistence, and object influence. The AI system must also learn to perform tasks such as scaling, drawing, connecting points, rotating and translating.

Also, the test dataset, the problems meant to evaluate the intelligence of the developed system, is designed to prevent developers from solving the tasks in advance and hard-coding their solutions into the program. Optimizing for evaluation sets is a popular cheating method in data science and machine learning competitions.

According to Chollet, ARC only assesses a general form of fluid intelligence, with a focus on reasoning and abstraction. This means that the test favors program synthesis, the subfield of AI that involves generating programs that satisfy high-level specifications. This approach is in contrast with current trends in AI, which are inclined toward creating programs that are optimized for a limited set of tasks (e.g., playing a single game).

In his experiments with ARC, Chollet has found that humans can fully solve ARC tests, but current AI systems struggle with the same tasks. "To the best of our knowledge, ARC does not appear to be approachable by any existing machine learning technique (including Deep Learning), due to its focus on broad generalization and few-shot learning," Chollet notes.

While ARC is still a work in progress, it could become a promising benchmark for testing the level of progress toward human-level AI. "We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence," Chollet observes.

Read this article:

Artificial intelligence: How to measure the I in AI - TechTalks