Archive for the ‘Artificial Intelligence’ Category

Artificial intelligence pays off when businesses go all in – MIT Sloan News


About 92% of large companies are achieving returns on their investments in artificial intelligence, and the same percentage are increasing their AI investments. But what does it take for startups and early-stage companies to get to this point?

That's a critical question, according to Sukwoong Choi, a postdoctoral scholar at MIT Sloan. "AI utilization is tied to startups' products and services. It's more directly relevant," he said.

In a new paper, Choi and his co-authors find that firms need to be ready to make a significant investment in AI to see any gains, because limited AI adoption doesn't contribute to revenue growth. Only when firms increase their intensity of AI adoption to at least 25%, meaning that they are using a quarter of the AI tools currently available to them, do growth rates pick up and investments in AI start to pay off.

The paper was co-authored by Yong Suk Lee, Taekyun Kim, and Wonjoon Kim.

Here are three things companies should know about investing in AI.

The researchers surveyed 160 startups and small businesses in South Korea about their use of AI technologies such as natural language processing, computer vision, and machine learning. Of the firms included, 53% were in technology-related fields (namely software, pharma, and mobile computing), and 54% had adopted AI to some degree.

The survey was administered to companies created before 2015, as these firms were founded before AI adoption generally took off in South Korea. (A footnote in the paper points to an explosion of interest in AI in the country after Go master Lee Sedol lost four of five matches to Google DeepMind's AlphaGo program in March 2016.)

Among the firms surveyed, the correlation between AI adoption and revenue growth followed a J-curve: slow and steady at first, then substantial. The turning point was an intensity of AI adoption of 25%. For firms with AI intensity below 25%, annual revenue growth was essentially zero; for firms above the 25% threshold, growth approached 24%.
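The reported threshold effect can be sketched as a simple step function. This is only a toy illustration of the two figures quoted above (roughly zero growth below 25% adoption intensity, roughly 24% above it), not the authors' actual statistical model:

```python
# Toy illustration (not the paper's regression model): annual revenue
# growth as a step function of AI adoption intensity, using the two
# figures reported in the study.

def expected_growth(ai_intensity: float) -> float:
    """Return an illustrative annual revenue growth rate (as a fraction)
    for a given AI adoption intensity between 0.0 and 1.0."""
    if not 0.0 <= ai_intensity <= 1.0:
        raise ValueError("intensity must be between 0 and 1")
    # Below the 25% threshold, growth was essentially zero; above it,
    # growth approached 24% per year.
    return 0.24 if ai_intensity >= 0.25 else 0.0

print(expected_growth(0.10))  # below threshold -> 0.0
print(expected_growth(0.40))  # above threshold -> 0.24
```

The real relationship is described as a J-curve, so a smooth function would be closer to the data; the step form simply makes the 25% turning point explicit.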

"There's a disruptive power for AI. With lower utilization, it's harder to make a profit," Choi said. "When you're in those early stages of AI adoption, you may need some time to obtain the payoff to using AI."

Several factors can influence a firm's embrace of AI, the researchers found. For example, firms that are smaller and/or were founded by CEOs with prior entrepreneurial experience are more likely to adopt AI intensively. Larger firms or spinoffs from other companies are less likely to adopt AI at that level, though lab-based spinoffs are an exception.

One of the most influential factors, though, is adoption of complementary technology: namely, big data capabilities and cloud computing. The former contributes to better AI outcomes through more mature data collection and management, while the latter provides the computational power necessary to run complex analyses. Both help firms drive growth from their investments in AI.

This finding came as little surprise to Choi and his co-authors. For decades, investing in one type of technology has driven the adoption of other technologies. Examples abound: Better operating systems led to better software, faster modems made computer networks possible, and IT infrastructure supported the growth of online selling.

"Complementary technology makes it easy to adopt new technology such as AI," Choi said. "To adopt and utilize AI effectively, and to get the payoff at earlier stages in your investment, you need the technology and the skills that go with it."

The pivotal role of complementary technology points to one key takeaway from the paper, Choi said. To support AI adoption, it's not enough to have access to the technology; you also need the infrastructure that supports it. "When you make that easily available, you can accelerate AI adoption," Choi said.

The second consideration is how closely AI is tied to a company's core product or service, he said, and how that impacts the company's research and development strategy.

Internally focused R&D helps a company build absorptive capacity (in this case, AI know-how) that positions it to more intensively adopt and use AI technology. This is helpful for firms that need to protect their proprietary algorithms as intellectual property, or for firms working with sensitive data sets they'd rather not allow a third party to process.

On the other hand, if AI is a complement to the work that a firm is doing but isn't the core focus of that work, firms can turn to external resources, Choi said. Large language models, such as OpenAI's ChatGPT, are a good example of this: They're readily available, widely used, and constantly being refined.

"It's important to ask, 'Is there a point solution for the AI work I'm trying to do?'" Choi said. "If your area of work is more systematic, then you don't necessarily need an internally focused R&D strategy. You can license something that's already available."


See the rest here:
Artificial intelligence pays off when businesses go all in - MIT Sloan News

Digital Dr. Dolittle: decoding animal conversations with artificial … – KUOW News and Information

We could be talking to animals in the next year using AI. But are we ready?

Whenever I'm out doing field work or on a hike, I've not only got my eyes wide open, but my ears too. There's a lot going on in a forest or under the sea - the sounds of nature. So many of those sounds are about communication. And some species seem more chatty than others. Birds and whales seem to have a lot more to say than bears or mountain lions.

Personally, I love to chat with ravens. I like to think that we have lovely conversations. I know I'm fooling myself, but there's something happening that might change that.

There's a tech company out of Silicon Valley that is hoping to make that dream of communicating with animals a reality. Earth Species Project is a non-profit working to develop machine learning that can decode animal language. Basically, artificial intelligence that can speak whale or monkey...or perhaps even raven?

"We are awash in meanings and signals. And what we're gonna have to do is use these brand new big telescopes of AI to discover what's been there all along," said Aza Raskin, co-founder of Earth Species Project.

So we are doing something a bit different on The Wild today - fun to mix things up now and then. For this episode I'm not outdoors among the wild creatures, but in my home studio, talking with two fascinating people about the latest developments in technology that are being created to talk to wild animals. We'll also explore the ethics of this technology... something Karen Bakker, a professor at the University of British Columbia, knows a lot about.

"We could lure every animal on the planet to their deaths with this technology, if it develops as Aza suggests it might," said Bakker.

What are the downsides to playing the role of Digital Dr. Dolittle?

Guests:

Aza Raskin, co-founder of Earth Species Project and co-founder of the Center for Humane Technology.

Karen Bakker, professor at the University of British Columbia where she researches digital innovation and environmental governance. She also leads the Smart Earth Project.

Original post:
Digital Dr. Dolittle: decoding animal conversations with artificial ... - KUOW News and Information

Podcast: Now Is the Best Time To Embrace Artificial Intelligence – Reason

In this week's The Reason Roundtable, editors Matt Welch, Katherine Mangu-Ward, Nick Gillespie, and special guest Elizabeth Nolan Brown unpack the ubiquitous sense that politicians of every stripe have abandoned a commitment to free expression. They also examine the fast evolution of artificial intelligence chatbots like ChatGPT.

0:42: Politicians choose the culture war over the First Amendment

20:04: Artificial intelligence and large language model (LLM) chatbots like ChatGPT

36:13: Weekly Listener Question

44:27: This week's cultural recommendations

Mentioned in this podcast:

"Congress Asks: Is TikTok Really 'An Extension of' the Chinese Communist Party?" by Elizabeth Nolan Brown

"TikTok Is Too Popular To Ban," by Elizabeth Nolan Brown

"Utah Law Gives Parents Full Access to Teens' Social Media," by Elizabeth Nolan Brown

"Florida's War on Drag Targets Theater's Liquor License," by Scott Shackford

"Welcoming Our New Chatbot Overlords," by Ronald Bailey

"Maybe A.I. Will Be a Threat to Governments," by Peter Suderman

"The Luddites' Veto," by Ronald Bailey

"Artificial Intelligence Will Change Jobs for the Better," by Jordan McGillis

"The Robot Revolution Is Here," by Katherine Mangu-Ward

"The Earl Weaver Case for Rand Paul's Libertarianism," by Matt Welch

"Rand Paul Tries (Again!) To Make It Harder for Police To Take Your Stuff," by Scott Shackford

Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.

Audio production by Ian Keyser

Assistant production by Hunt Beaty

Music: "Angeline," by The Brothers Steve

See the rest here:
Podcast: Now Is the Best Time To Embrace Artificial Intelligence - Reason

Artificial Intelligence will not save banks from short-sightedness – SWI swissinfo.ch in English

Banks like Credit Suisse use sophisticated models to analyse and predict risks, but too often they are ignored or bypassed by humans, says risk management expert Didier Sornette.

This content was published on March 28, 2023.


The collapse of Credit Suisse has once again exposed the high-stakes risk culture in the financial sector. The many sophisticated artificial intelligence (AI) tools used by the banking system to predict and manage risks aren't enough to save banks from failure.

According to Didier Sornette, honorary professor of entrepreneurial risks at the federal technology institute ETH Zurich, the tools aren't the problem but rather the short-sightedness of bank executives who prioritise profits.

SWI swissinfo.ch: Banks use AI models to predict risks and evaluate the performance of their investments, yet these models couldn't save Credit Suisse or Silicon Valley Bank from collapse. Why didn't they act on the predictions? And why didn't decision-makers intervene earlier?

Didier Sornette: I have made so many successful predictions in the past that were systematically ignored by managers and decision-makers. Why? Because it is so much easier to say that the crisis is an act of God and could not have been foreseen, and to wash your hands of any responsibility.

Acting on predictions means to stop the dance, in other words to take painful measures. This is why policymakers are essentially reactive, always behind the curve. It is political suicide to impose pain to embrace a problem and solve it before it explodes in your face. This is the fundamental problem of risk control.

Credit Suisse had very weak risk controls and culture for decades. Instead, business units were always left to decide what to do and therefore inevitably accumulated a portfolio of latent risks, or I'd say lots of far out-of-the-money put options [when an option has no intrinsic value]. Then, when a handful of random events occurred that were symptomatic of the fundamental lack of controls, people started to get worried. When a large US bank [Silicon Valley Bank] with $220 billion (CHF202 billion) of assets quickly went insolvent, people started to reassess their willingness to leave uninsured deposits at any poorly run bank - and voilà.

SWI: This means that risk prediction and management won't work if the problem is not solved at the systemic level?

D.S.: The policy of zero or negative interest rates is the root cause of all this. It has left these banks in positions that are vulnerable to rising rates. The huge debts of countries have also made them vulnerable. We live in a world that has become very vulnerable because of the short-sighted and irresponsible policies of the big central banks, which have not considered the long-term consequences of their "firefighting" interventions.

The shock is a systemic one, starting from Silicon Valley Bank, Signature Bank, etc., with Credit Suisse being only an episode revealing the major problem of the system: the consequences of the catastrophic policies of the central banks since 2008, which flooded the markets with easy money and led to huge excesses in financial institutions. We are now seeing some of the consequences.

SWI: What role can AI-based risk prediction play, for example, in the case of the surviving giant UBS?

D.S.: AI and mathematical models are irrelevant in the sense that (risk control) tools are useful only if there is a will to use them!

When there is a problem, many people always blame the models, the risk methods etc. This is wrong. The problems lie with humans who simply ignore models and bypass them. There were so many instances in the last 20 years. Again and again, the same kind of story repeats itself with nobody learning the lessons. So AI can't do much, because the problem is not about more "intelligence" but greed and short-sightedness.

Despite the apparent financial gains, this is probably a bad and dangerous deal for UBS. The reason is that it takes decades to create the right risk culture and they are now likely to create huge morale damage via the big headcount reductions. Additionally, no regulator will be giving them an indemnity for inherited regulatory or client Anti-Money Laundering violations from the Credit Suisse side, which we know had very weak compliance. They will have to deal with surprising problems there for years.

SWI: Could we envision a more rigorous form of oversight of the banking system by governments or even taxpayers using data collected by AI systems?

D.S.: Collecting data is not the purview of AI systems. Collecting clean and relevant data is the most difficult challenge, much more difficult than machine learning and AI techniques. Most data is noisy, incomplete, inconsistent, very costly to obtain and to manage. This requires huge investments and a long-term view that is almost always missing. Hence crises occur every five years or so.

SWI: Lately, we've been hearing more and more about behavioral finance. Is there more psychology and irrationality in the financial system than we think?

D.S.: There is greed, fear, hope and... sex. Joking aside, people in banking and finance are in general superrational when it comes to optimising their goals and getting rich. It is not irrationality, it is betting and taking big risks where the gains are privatised and the losses are socialised.

Strong regulations need to be imposed. In a sense, we need to make banking "boring" to tame the beasts that tend to destabilise the financial system by construction.

SWI: Is there a future in which machine learning can prevent the failure of "too big to fail" banks like Credit Suisse, or is that pure science fiction?

D.S.: Yes, an AI can prevent a future failure if the AI takes power and enslaves humans to follow risk management, with incentives dictated by the AI, as in many scenarios depicting the dangers of superintelligent AI. I am not kidding.

The interview was conducted in writing. It has been edited for clarity and brevity.

In compliance with the JTI standards

More: SWI swissinfo.ch certified by the Journalism Trust Initiative

Originally posted here:
Artificial Intelligence will not save banks from short-sightedness - SWI swissinfo.ch in English

Most Jobs Soon To Be Influenced By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests – Forbes

As artificial intelligence opens up and becomes democratized through platforms offering generative AI, it's likely to alter tasks within at least 80% of all jobs, a new analysis suggests. Jobs requiring college education will see the highest impacts, and in many cases, at least half of people's tasks may be affected by AI. It's extremely important to add that affected occupations will be significantly influenced or augmented by generative AI, not replaced.

Thats the word from a paper published by a team of researchers from OpenAI, OpenResearch, and the University of Pennsylvania. The researchers included Tyna Eloundou with OpenAI, Sam Manning with OpenResearch and OpenAI, Pamela Mishkin with OpenAI, and Daniel Rock, assistant professor at the University of Pennsylvania, also affiliated with OpenAI and OpenResearch.

The research looked at the potential implications of GPT (Generative Pre-trained Transformer) models and related technologies on occupations, assessing their exposure to GPT capabilities. "Our findings indicate that approximately 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, while around 19% of workers may see at least 50% of their tasks impacted," Eloundou and her colleagues estimate. The influence spans all wage levels, with higher-income jobs potentially facing greater exposure, particularly jobs requiring college degrees. At the same time, they observe, considering each job as a bundle of tasks, it would be rare to find any occupation for which AI tools could do nearly all of the work.
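The style of summary statistic quoted here can be made concrete with a small sketch. The occupation names, worker counts, and exposure fractions below are entirely made up for illustration; the paper's actual dataset and exposure ratings are not reproduced here:

```python
# Toy sketch (hypothetical data, not the paper's dataset): given each
# occupation's fraction of tasks exposed to GPTs, compute the share of
# workers with at least 10% and at least 50% of their tasks affected,
# mirroring the "80% / 19%" style of statistic the researchers report.

# (occupation, number of workers, fraction of tasks exposed)
occupations = [
    ("writer",     100, 0.70),
    ("programmer", 200, 0.55),
    ("scientist",  150, 0.20),
    ("landscaper",  50, 0.05),
]

total_workers = sum(workers for _, workers, _ in occupations)

def share_with_exposure_at_least(threshold: float) -> float:
    """Fraction of all workers whose occupation has at least
    `threshold` of its tasks exposed."""
    affected = sum(w for _, w, exposure in occupations if exposure >= threshold)
    return affected / total_workers

print(f"{share_with_exposure_at_least(0.10):.0%} of workers: >=10% of tasks exposed")
print(f"{share_with_exposure_at_least(0.50):.0%} of workers: >=50% of tasks exposed")
```

On this made-up data, 90% of workers clear the 10% threshold and 60% clear the 50% threshold; the paper's figures come from task-level exposure ratings across the full U.S. occupation taxonomy.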

The researchers base their study on GPT-4, and use the terms large language models (LLMs) and GPTs interchangeably.

Their findings suggest that programming and writing skills are more likely to be influenced by generative AI. On the other hand, occupations or tasks involving science and critical thinking skills are less likely to be influenced. Occupations that are seeing or will see a high degree of AI-based influence and augmentation (again, emphasis on influence and augment) include the following:

"GPTs are improving in capabilities over time with the ability to complete or be helpful for an increasingly complex set of tasks and use-cases," Eloundou and her co-authors point out. They caution, however, that the definition of a task is very fluid. "It is unclear to what extent occupations can be entirely broken down into tasks, and whether this approach systematically omits certain categories of skills or tasks that are tacitly required for competent performance of a job," they add. "Additionally, tasks can be composed of sub-tasks, some of which are more automatable than others."

There are more implications to AI than simply taking over tasks, of course. "While the technical capacity for GPTs to make human labor more efficient appears evident, it is important to recognize that social, economic, regulatory, and other factors will influence actual labor productivity outcomes," the team states. There will be broader implications for AI as it progresses, including its potential to augment or displace human labor, its impact on job quality, impacts on inequality, skill development, and numerous other outcomes.

Still, "accurately predicting future LLM applications remains a significant challenge, even for experts," Eloundou and her co-authors caution. "The discovery of new emergent capabilities, changes in human perception biases, and shifts in technological development can all affect the accuracy and reliability of predictions regarding the potential impact of GPTs on worker tasks and the development of GPT-powered software."

An important takeaway from this study is that generative AI (not to mention AI in all forms) is reshaping the workplace in ways that currently cannot be imagined. Yes, some occupations may eventually disappear, but those that can harness the productivity and power of AI to create new innovations and services that improve the lives of customers or people will be well-placed for the economy of the mid-to-late 2020s and beyond.

I am an author, independent researcher and speaker exploring innovation, information technology trends and markets. I served as co-chair of the AI Summit in 2021 and 2022, and have also participated in the IEEE International Conference on Edge Computing and the International SOA and Cloud Symposium series. I am also a co-author of the SOA Manifesto, which outlines the values and guiding principles of service orientation in business and IT. I also regularly contribute to Harvard Business Review and CNET on topics shaping business and technology careers.

Much of my research work is in conjunction with Forbes Insights and Unisphere Research/Information Today, Inc., covering topics such as artificial intelligence, cloud computing, digital transformation, and big data analytics.

In a previous life, I served as communications and research manager of the Administrative Management Society (AMS), an international professional association dedicated to advancing knowledge within the IT and business management fields. I am a graduate of Temple University.

Link:
Most Jobs Soon To Be Influenced By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests - Forbes