Everyone Wants Responsible Artificial Intelligence, Few Have It Yet

With great power comes great responsibility.

As artificial intelligence continues to gain traction, there has been a rising level of discussion about responsible AI (and, closely related, ethical AI). While AI is entrusted to carry more decision-making workloads, it's still based on algorithms that respond to models and data, as my co-author Andy Thurai and I explain in a recent Harvard Business Review article. As a result, AI often misses the big picture and usually can't explain the reasoning behind its decisions. It certainly isn't ready to assume human qualities that emphasize empathy, ethics, and morality.

Is this a concern shared within the executive suites of companies deploying AI? Yes, according to a recent study of 1,000 executives published by MIT Sloan Management Review and Boston Consulting Group. However, the study finds that while most executives agree responsible AI is instrumental to mitigating the technology's risks, including issues of safety, bias, fairness, and privacy, they acknowledged a failure to prioritize it. In other words, when it comes to AI, it's damn the torpedoes and full speed ahead. Yet more attention needs to be paid to those torpedoes, which may take the form of lawsuits, regulations, and damaging decisions. At the same time, greater adherence to responsible AI may deliver tangible business benefits.

"While AI initiatives are surging, responsible AI is lagging," report the MIT-BCG survey's authors, Elizabeth M. Renieris, David Kiron, and Steven Mills. "The gap increases the possibility of failure and exposes companies to regulatory, financial, and customer satisfaction risks."

Just about everyone sees the logic in making AI more responsible: 84% believe it should be a top management priority. About half of the executives surveyed, 52%, say their companies practice some level of responsible AI. However, only 25% reported that their organization has a fully mature program; the remainder say their implementations are limited in scale and scope.

Confusion and lack of consensus over the meaning of responsible AI may be a limiting factor. Only 36% of respondents believe the term is used consistently throughout their organizations, the survey finds. The survey's authors define responsible AI as "a framework with principles, policies, tools, and processes to ensure that AI systems are developed and operated in the service of good for individuals and society while still achieving transformative business impact."

Other factors inhibiting responsible AI include a lack of responsible AI expertise and talent, training, or knowledge among staff members (54%); a lack of prioritization and attention from senior leaders (53%); and a lack of funding or resources for responsible AI initiatives (43%).

Renieris and her co-authors identified a segment of companies that are ahead of the curve with responsible AI; these leaders tend to apply responsible conduct not just to AI but across their entire suites of technologies, systems, and processes. "For these leading companies, responsible AI is less about a particular technology than the company itself," they state.

These leading companies are also seeing pronounced business benefits as a result of this attitude. Among the benefits realized since implementing responsible AI initiatives: better products and services (cited by 50%), enhanced brand differentiation (48%), and accelerated innovation (43%).

The survey's authors also offer recommendations based on the experiences of companies taking the lead with responsible AI.
