Artificial Intelligence May Be Just Code, But It's Our Code

There's nothing magical about artificial intelligence; it's simply code designed by fallible humans using fallible data. The magic comes from the humans working with or seeing the benefits of AI. So the questions are: Are we expecting too much from AI? To what extent should companies and their executives rely on the output delivered by AI?

This was the subject of debate at a panel hosted at the AI Summit in New York, held in early December, focusing on risks in the emerging role of AI in the financial services sector, though the discussion had wide-ranging implications across all industries. (I had the opportunity to co-chair the conference and moderate the panel.)

"We think AI is telling us something, but it's not," cautioned Rod Butters, chief technology officer for Aible. "It's just a bunch of code. It doesn't know. This is the fantasy we all fall into. Somehow we think that model embodies something. The reality is that an AI is just a statistical engine, and in a lot of cases, it's a bad statistical engine."

"With AI these days, the biggest systemic risk is the notion that artificial intelligence is artificial," said Rik Willard, founder and managing director of Agentic Group and member of the advisory board of the World Ethical Data Foundation. "It's all done by humans; it's all manifested by humans. When we look at risk versus returns, it's only as good as the financial institutions, and the regulatory frameworks around those institutions. Are we supporting the same human and economic algorithms that we set up before technology, or are we working to make those better and more inclusive?"

In addition, AI is still a relatively immature technology, said Drew Scarano, vice president of global financial services at AntWorks. "Ten years ago we weren't even talking about AI, but today it's a multi-billion-dollar industry," he said. "We might be too reliant on this technology, forgetting about the humans in the loop and how they play an integral part in complementing artificial intelligence in order to get desired results."

Another challenge is that AI systems tend to get built in relative isolation. AI is just code, and the people building these systems may have limited perspectives on its value to the business, Butters cautioned. "When we tell data scientists to go out and create a model, we're asking them to be a mind reader and a fortune teller," he said. "Those are two bad job sets; it doesn't work. The data scientist is trying to do the right thing, creating a responsible and solid model, but based on what? Ultimately, when they build a model, unless they've got this combination to create transparency, create explainability, and actually communicate that across to the business constituency at both a strategic and tactical level, who is in charge? Just creating a great model does not necessarily solve all problems."

In the process of building data models, data scientists need to understand the objectives of the enterprise, taking into account the human implications, Scarano said. "You can have an engineer build a great bridge. But if it's not going over what it's intended to, it's just a great bridge, right? I'm afraid that people in business, especially financial services, will just keep relying too much on technology. We need a holistic approach, in coexistence with humans."

Look beyond the technology and statistics of AI, and focus on what ultimately serves the customer, Scarano urged. "It's about how we complement humans with artificial intelligence to drive business, and also drive customer reality, customer success and customer satisfaction at the end of the day."

The path to AI in service of business objectives relies on the establishment of consistent frameworks that guide its development, panelists agreed. "I was raised in a fail-fast environment," said Willard. "You build code, you test, and fix what's broken. You fix it on the fly. You build it, it kind of works, you let it loose, then you refine it over time based on input to the feedback loop. However, with AI, the issue is that we put it in a position of judgment. Like in the criminal justice system, where it does a lot of harm before you get it right. In the banking system it's loan, no loan; score, no score; or credit, no credit. How do we build testing frameworks and sandboxes that have the accuracy that's necessary to be launched at scale, while doing less harm along the way?"

AI is being used for many purposes across the financial services industry, but the risk is in de-humanizing the interpersonal qualities that helped build the industry. "Today we can use AI for anything from approving a credit card to approving a mortgage to approving any kind of lending vehicle," said Scarano. "But without human intervention to be able to understand there's more to a human than a credit score, there's more to a person than getting approved or denied for a mortgage."

Customer experience is the foundation of financial services, and it needs to be front and center of all AI initiatives, with feedback loops in AI-driven systems that incorporate human input. "As we implement AI-based solutions, we need to ensure that the end users, the customers, who are consuming the product are also happy with that investment and solution as well," said Robert Magno, solutions architect with Run:AI. "It makes a lot of sense to have robots moving packages around, automated in a warehouse. But from a customer service standpoint, if a person interacting with a chatbot is getting frustrated, there needs to be a feedback loop to ensure solutions you're implementing are resonating with your customers, and they're enjoying the experience as much as you're enjoying creating the experience."
