The path to real-world artificial intelligence

Experts from MIT and IBM held a webinar this week to discuss where AI technologies stand today and the advances that will help make their use more practical and widespread.

Image: Sompong Rattanakunchon / Getty Images

Artificial intelligence has made significant strides in recent years, but modern AI techniques remain limited, a panel of MIT professors and the director of the MIT-IBM Watson AI Lab said during a webinar this week.

Neural networks can perform specific, well-defined tasks, but they struggle in real-world situations that go beyond pattern recognition and present obstacles like limited data, reliance on self-training, and answering questions like "why" and "how" rather than just "what," the panel said.

The future of AI depends on enabling AI systems to do things once considered impossible: learn flexibly, demonstrate some semblance of reasoning, and transfer knowledge from one set of tasks to another, the group said.

SEE: Robotic process automation: A cheat sheet (free PDF) (TechRepublic)

The panel discussion was moderated by David Schubmehl, a research director at IDC, who opened with a question about the current limitations of AI and machine learning.

"The striking success right now in particular, in machine learning, is in problems that require interpretation of signalsimages, speech and language," said panelist Leslie Kaelbling, a computer science and engineering professor at MIT.

For years, people tried to solve problems like detecting faces in images by directly engineering solutions, and those approaches didn't work, she said.

We have become good at engineering algorithms that take data and use it to derive a solution, she said. "That's been an amazing success." But it takes a lot of data and a lot of computation, so for some problems we don't yet have formulations that would let us learn from the amount of data available, Kaelbling said.

SEE: 9 super-smart problem solvers take on bias in AI, microplastics, and language lessons for chatbots (TechRepublic)

One of her areas of focus is robotics, where it's harder to get training examples because robots are expensive and parts break, "so we really have to be able to learn from smaller amounts of data," Kaelbling said.

Neural networks and deep learning are the "latest and greatest way to frame those sorts of problems and the successes are many," added Josh Tenenbaum, a professor of cognitive science and computation at MIT.

But when talking about general intelligence and how to get machines to understand the world there is still a huge gap, he said.

"But on the research side really exciting things are starting to happen to try to capture some steps to more general forms of intelligence [in] machines," he said. In his work, "we're seeing ways in which we can draw insights from how humans understand the world and taking small steps to put them in machines."

Although people think of AI as being synonymous with automation, it is incredibly labor-intensive in a way that doesn't work for most of the problems we want to solve, noted David Cox, IBM director of the MIT-IBM Watson AI Lab.

Echoing Kaelbling, Cox said that leveraging tools like deep learning today requires huge amounts of "carefully curated, bias-balanced data" to use them well. And for most of the problems we are trying to solve, we don't have those "giant rivers of data" to dam up and extract value from, Cox said.

Today, companies are more focused on solving some type of one-off problem, and even when they have big data, it's rarely curated, he said. "So most of the problems we love to solve with AI, we don't have the right tools for that."

On top of that, there are problems with bias and interpretability: the humans using these tools have to understand why the systems are making their decisions, Cox said. "They're all barriers."

However, he said, there's enormous opportunity in looking across all these different fields to chart a path forward.

That includes using deep learning, which is good at pattern recognition, to help solve difficult search problems, Tenenbaum said.

To develop intelligent agents, scientists need to use all the available tools, Kaelbling said. For example, neural networks are needed for perception, alongside higher-level, more abstract types of reasoning to decide, say, what to make for dinner or how to disperse supplies.

"The critical thing technologically is to realize the sweet spot for each piece and figure out what it is good at and not good at. Scientists need to understand the role each piece plays," she said.

The MIT and IBM AI experts also discussed a new foundational method known as neurosymbolic AI, which combines the statistical, data-driven learning of neural networks with the powerful knowledge representation and reasoning of symbolic approaches.
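As a rough illustration of that combination, the sketch below is hypothetical rather than drawn from any system the panel described: a neural "perception" step is reduced to a stand-in function that outputs concept probabilities, and a thin symbolic rule layer reasons over those concepts to answer a structured question.

```python
# Minimal neurosymbolic-style sketch. Assumptions: perceive() stands in for a
# trained neural classifier; the concepts, scene, and rules are invented.

def perceive(image):
    # A trained network would return probabilities for symbolic concepts it
    # detects in the image; hard-coded here to keep the example self-contained.
    return {"cube": 0.92, "sphere": 0.08, "red": 0.88, "metallic": 0.15}

def holds(facts, concept, threshold=0.5):
    # Bridge from statistical output to discrete symbols: a concept "holds"
    # if the network is sufficiently confident in it.
    return facts.get(concept, 0.0) >= threshold

def answer_query(image, query):
    # Symbolic rule layer: compose detected concepts to answer a question.
    facts = perceive(image)
    if query == "is there a red cube?":
        return holds(facts, "red") and holds(facts, "cube")
    if query == "is there a metallic sphere?":
        return holds(facts, "metallic") and holds(facts, "sphere")
    raise ValueError("unsupported query")

print(answer_query("scene.png", "is there a red cube?"))        # True
print(answer_query("scene.png", "is there a metallic sphere?")) # False
```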

Moderator Schubmehl commented that having a combination of neurosymbolic AI and deep learning "might really be the holy grail" for advancing real-world AI.

Kaelbling agreed, adding that the combination may include not just those two techniques but others as well.

One of the themes that emerged from the webinar is that there is a very helpful confluence of all the types of AI now being used, Cox said. The next evolution of very practical AI will be understanding the science of finding things and building systems we can reason with, grow and learn from, and that can determine what is going to happen. "That will be when AI hits its stride," he said.
