Archive for the ‘Artificial Intelligence’ Category

The Future is Now: Exploring the Importance of Artificial Intelligence – The Geopolitics

Artificial intelligence (AI) is a rapidly growing field that has captured the attention of scientists, engineers, business leaders, and policymakers worldwide. It refers to the development of computer systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, perception, and decision-making. AI has the potential to transform industries and sectors including healthcare, transportation, education, manufacturing, and finance. In this article, we will explore the importance of artificial intelligence in the future and its potential benefits and challenges.

One of the most significant advantages of artificial intelligence is its ability to automate routine and repetitive tasks, allowing humans to focus on more complex and creative work. For instance, AI-powered robots and machines can perform tasks like assembling products, packaging goods, and transporting materials with greater speed, accuracy, and efficiency than humans. This can help businesses increase productivity, reduce costs, and improve the quality of their products and services.

Another important benefit of artificial intelligence is its ability to analyze and interpret vast amounts of data, enabling organizations to gain valuable insights into customer behavior, market trends, and business operations. By using advanced algorithms and machine learning techniques, AI systems can identify patterns, correlations, and anomalies in data that would be challenging for humans to detect. This can help businesses make data-driven decisions, optimize their processes, and improve their overall performance.
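To make this concrete: the pattern- and anomaly-spotting described above is often done with off-the-shelf libraries. The snippet below is a minimal sketch using scikit-learn's IsolationForest on synthetic, purely hypothetical business metrics; it illustrates the general technique, not any particular vendor's system.

```python
# A minimal sketch of automated anomaly detection in business data.
# The feature names and the synthetic numbers are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical daily metrics: [average order value, items per order]
normal_days = rng.normal(loc=[250.0, 3.0], scale=[40.0, 0.5], size=(200, 2))
odd_days = np.array([[900.0, 1.0], [20.0, 12.0]])  # unusual behavior
data = np.vstack([normal_days, odd_days])

# Isolation forests flag points that are easy to separate from the rest.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(data)  # -1 marks an anomaly, 1 marks normal

print("Flagged rows:", np.where(labels == -1)[0])
```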

Moreover, artificial intelligence has the potential to revolutionize healthcare by improving the accuracy and efficiency of medical diagnoses, treatments, and research. For example, AI systems can analyze medical images, such as X-rays and CT scans, to detect signs of diseases like cancer, heart disease, and Alzheimer's with greater accuracy than human doctors. AI-powered chatbots and virtual assistants can also provide patients with personalized health advice, monitor their symptoms, and remind them to take their medication. Additionally, AI can help accelerate drug discovery and development by predicting the efficacy and safety of new drugs and identifying potential side effects.

In the field of education, artificial intelligence can help personalize learning and improve student outcomes by providing tailored instruction and feedback based on individual needs and preferences. For example, AI systems can analyze students' performance data and adjust their learning paths and content accordingly. AI-powered chatbots can also provide students with instant answers to their questions and feedback on their assignments. Moreover, AI can help educators develop more effective teaching strategies by providing insights into student engagement, motivation, and learning preferences.

However, along with its potential benefits, artificial intelligence also poses significant challenges and risks that need to be addressed. One of the main concerns is the potential impact of AI on employment, as automation and AI systems may replace human workers in various industries and occupations. While AI can create new job opportunities in areas like software development, data analysis, and robotics, it may also lead to job losses in other sectors, particularly those that involve routine and repetitive tasks.

Another challenge of artificial intelligence is its potential to perpetuate and amplify social biases and inequalities. AI systems are only as unbiased as the data they are trained on, and if the data contain biased or discriminatory patterns, the AI systems will replicate and reinforce them. This can lead to unfair and discriminatory outcomes in areas like hiring, lending, and law enforcement. Therefore, it is essential to ensure that AI systems are developed and deployed ethically and with diversity and inclusivity in mind.

Moreover, artificial intelligence also raises concerns about privacy, security, and accountability. AI systems often collect and process sensitive personal data, such as medical records, financial information, and social media activity, raising concerns about data breaches, identity theft, and surveillance. Additionally, AI systems may make decisions that have significant consequences for individuals and society, such as determining eligibility for loans or insurance, or recommending criminal sentences. Therefore, it is crucial to ensure that AI systems are transparent, accountable, and subject to ethical and legal oversight.

Artificial intelligence is a powerful and transformative technology that has the potential to bring significant benefits to various industries and sectors. By automating routine tasks, analyzing data, and improving decision-making, AI can help increase productivity, reduce costs, and improve the quality of products and services. In healthcare, education, and other fields, AI can improve outcomes and accelerate progress. However, AI also poses significant challenges and risks that need to be addressed, such as job displacement, bias and discrimination, privacy, security, and accountability. Therefore, it is essential to ensure that AI is developed and deployed ethically, transparently, and with diversity and inclusivity in mind. By doing so, we can harness the power of AI to create a more prosperous, equitable, and sustainable future for all.


Carl Taylor is a tech author with over 12 years of experience in the industry. He has written numerous articles on topics such as artificial intelligence, machine learning, and data science. The views and opinions expressed in this article are those of the author.

See the article here:
The Future is Now: Exploring the Importance of Artificial Intelligence - The Geopolitics

Artificial intelligence pays off when businesses go all in – MIT Sloan News


About 92% of large companies are achieving returns on their investments in artificial intelligence, and the same percentage are increasing their AI investments. But what does it take for startups and early-stage companies to get to this point?

That's a critical question, according to Sukwoong Choi, a postdoctoral scholar at MIT Sloan. "AI utilization is tied to startups' products and services. It's more directly relevant," he said.

In a new paper, Choi and his co-authors find that firms need to be ready to make a significant investment in AI to see any gains, because limited AI adoption doesn't contribute to revenue growth. Only when firms increase their intensity of AI adoption to at least 25%, meaning that they are using a quarter of the AI tools currently available to them, do growth rates pick up and investments in AI start to pay off.

The paper was co-authored by Yong Suk Lee, Taekyun Kim, and Wonjoon Kim.

Here are three things companies should know about investing in AI.

The researchers surveyed 160 startups and small businesses in South Korea about their use of AI technologies such as natural language processing, computer vision, and machine learning. Of the firms included, 53% were in technology-related fields (namely software, pharma, and mobile computing), and 54% had adopted AI to some degree.

The survey was administered to companies created before 2015, as these firms were founded before AI adoption generally took off in South Korea. (A footnote in the paper points to an explosion of interest in AI in the country after Go master Lee Sedol lost four of five matches to Google DeepMind's AlphaGo program in March 2016.)

Among the firms surveyed, the correlation between AI adoption and revenue growth followed a J-curve: slow and steady at first, then substantial. The turning point was an intensity of AI adoption of 25%. For firms with AI intensity below 25%, annual revenue growth was essentially zero; for firms above the 25% threshold, growth approached 24%.
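For readers who want the threshold stated precisely, the sketch below simply encodes the figures reported in this article: a 25% adoption-intensity cutoff, near-zero growth below it, roughly 24% above it. The function names and the example firm are hypothetical; this illustrates the reported J-curve, not the authors' statistical model.

```python
# Encodes only the figures quoted in the article: a 25% intensity
# threshold, near-zero annual growth below it, ~24% growth above it.
# Illustrative only; not the study's underlying regression model.
def adoption_intensity(tools_in_use: int, tools_available: int) -> float:
    """Share of the AI tools available to a firm that it actually uses."""
    return tools_in_use / tools_available

def reported_growth(intensity: float) -> float:
    """Annual revenue growth implied by the article's J-curve figures."""
    return 0.24 if intensity >= 0.25 else 0.0

# A hypothetical firm using 6 of the 20 AI tools available to it (30%).
firm = adoption_intensity(tools_in_use=6, tools_available=20)
print(f"intensity={firm:.0%}, reported growth={reported_growth(firm):.0%}")
```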

"There's a disruptive power for AI. With lower utilization, it's harder to make a profit," Choi said. "When you're in those early stages of AI adoption, you may need some time to obtain the payoff from using AI."

Several factors can influence a firms embrace of AI, the researchers found. For example, firms that are smaller and/or were founded by CEOs with prior entrepreneurial experience are more likely to adopt AI intensively. Larger firms or spinoffs from other companies are less likely to adopt AI at that level, though lab-based spinoffs are an exception.

One of the most influential factors, though, is adoption of complementary technology, namely big data capabilities and cloud computing. The former contributes to better AI outcomes through more mature data collection and management, while the latter provides the computational power necessary to run complex analyses. Both help firms drive growth from their investments in AI.

This finding came as little surprise to Choi and his co-authors. For decades, investing in one type of technology has driven the adoption of other technologies. Examples abound: Better operating systems led to better software, faster modems made computer networks possible, and IT infrastructure supported the growth of online selling.

"Complementary technology makes it easy to adopt new technology such as AI," Choi said. "To adopt and utilize AI effectively, and to get the payoff at earlier stages in your investment, you need the technology and the skills that go with it."

The pivotal role of complementary technology points to one key takeaway from the paper: to support AI adoption, it's not enough to have access to the technology; you also need the infrastructure that supports it. "When you make that easily available, you can accelerate AI adoption," Choi said.

The second consideration is how closely AI is tied to a company's core product or service, he said, and how that impacts the company's research and development strategy.

Internally focused R&D helps a company build absorptive capacity (in this case, AI know-how) that positions it to more intensively adopt and use AI technology. This is helpful for firms that need to protect their proprietary algorithms as intellectual property, or for firms working with sensitive data sets they'd rather not allow a third party to process.

On the other hand, if AI is a complement to the work that a firm is doing but isn't the core focus of that work, firms can turn to external resources, Choi said. Large language models, such as OpenAI's ChatGPT, are a good example of this: they're readily available, widely used, and constantly being refined.

"It's important to ask, 'Is there a point solution for the AI work I'm trying to do?'" Choi said. "If your area of work is more systematic, then you don't necessarily need an internally focused R&D strategy. You can license something that's already available."
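As a concrete example of licensing something that's already available: calling a hosted large language model takes only a few lines of code. The sketch below assumes the official openai Python package (v1 or later) and an OPENAI_API_KEY environment variable; the model name and prompt are placeholders, not a recommendation.

```python
# A minimal sketch of using an external "point solution" (a hosted LLM)
# instead of building AI capability in-house. Assumes the official
# openai Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever chat model you license
    messages=[
        {"role": "user", "content": "Summarize this customer ticket: ..."},
    ],
)
print(response.choices[0].message.content)
```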

Read next: how to prepare for the AI productivity boom

See the rest here:
Artificial intelligence pays off when businesses go all in - MIT Sloan News

Digital Dr. Dolittle: decoding animal conversations with artificial … – KUOW News and Information

We could be talking to animals in the next year using AI. But are we ready?

Whenever I'm out doing field work or on a hike, I've not only got my eyes wide open, but my ears too. There's a lot going on in a forest or under the sea - the sounds of nature. So many of those sounds are about communication. And some species seem more chatty than others. Birds and whales seem to have a lot more to say than bears or mountain lions.

Personally, I love to chat with ravens. I like to think that we have lovely conversations. I know I'm fooling myself, but there's something happening that might change that.

There's a tech company out of Silicon Valley that is hoping to make that dream of communicating with animals a reality. Earth Species Project is a non-profit working to develop machine learning that can decode animal language. Basically, artificial intelligence that can speak whale or monkey... or perhaps even raven?

"We are awash in meanings and signals. And what we're gonna have to do is use these brand new big telescopes of AI to discover what's been there all along," said Aza Raskin, co-founder of Earth Species Project.

So we are doing something a bit different on The Wild today - fun to mix things up now and then. For this episode I'm not outdoors among the wild creatures, but in my home studio, talking with two fascinating people about the latest developments in technology that are being created to talk to wild animals. We'll also explore the ethics of this technology... something Karen Bakker, a professor at the University of British Columbia, knows a lot about.

"We could lure every animal on the planet to their deaths with this technology, if it develops as Aza suggests it might," said Bakker.

What are the downsides to playing the role of Digital Dr. Dolittle?

Guests:

Aza Raskin, co-founder of Earth Species Project and co-founder of the Center for Humane Technology.

Karen Bakker, professor at the University of British Columbia where she researches digital innovation and environmental governance. She also leads the Smart Earth Project.

Original post:
Digital Dr. Dolittle: decoding animal conversations with artificial ... - KUOW News and Information

Podcast: Now Is the Best Time To Embrace Artificial Intelligence – Reason

In this week's The Reason Roundtable, editors Matt Welch, Katherine Mangu-Ward, Nick Gillespie, and special guest Elizabeth Nolan Brown unpack the ubiquitous sense that politicians of every stripe have abandoned a commitment to free expression. They also examine the fast evolution of artificial intelligence chatbots like ChatGPT.

0:42: Politicians choose the culture war over the First Amendment

20:04: Artificial intelligence and large language model (LLM) chatbots like ChatGPT

36:13: Weekly Listener Question

44:27: This week's cultural recommendations

Mentioned in this podcast:

"Congress Asks Is TikTok Really 'An Extension of' the Chinese Communist Party?" by Elizabeth Nolan Brown

"TikTok Is Too Popular To Ban," by Elizabeth Nolan Brown

"Utah Law Gives Parents Full Access to Teens' Social Media," by Elizabeth Nolan Brown

"Florida's War on Drag Targets Theater's Liquor License," by Scott Shackford

"Welcoming Our New Chatbot Overlords," by Ronald Bailey

"Maybe A.I. Will Be a ThreatTo Governments," by Peter Suderman

"The Luddites' Veto," by Ronald Bailey

"Artificial Intelligence Will Change JobsFor the Better," by Jordan McGillis

"The Robot Revolution Is Here," by Katherine Mangu-Ward

"The Earl Weaver Case for Rand Paul's Libertarianism," by Matt Welch

"Rand Paul Tries (Again!) To Make It Harder for Police To Take Your Stuff," by Scott Shackford

Send your questions to roundtable@reason.com. Be sure to include your social media handle and the correct pronunciation of your name.


Audio production by Ian Keyser

Assistant production by Hunt Beaty

Music: "Angeline," by The Brothers Steve

See the rest here:
Podcast: Now Is the Best Time To Embrace Artificial Intelligence - Reason

Artificial Intelligence will not save banks from short-sightedness – SWI swissinfo.ch in English

Banks like Credit Suisse use sophisticated models to analyse and predict risks, but too often they are ignored or bypassed by humans, says risk management expert Didier Sornette.

This content was published on March 28, 2023.


The collapse of Credit Suisse has once again exposed the high-stakes risk culture in the financial sector. The many sophisticated artificial intelligence (AI) tools used by the banking system to predict and manage risks aren't enough to save banks from failure.

According to Didier Sornette, honorary professor of entrepreneurial risks at the federal technology institute ETH Zurich, the tools aren't the problem but rather the short-sightedness of bank executives who prioritise profits.

SWI swissinfo.ch: Banks use AI models to predict risks and evaluate the performance of their investments, yet these models couldn't save Credit Suisse or Silicon Valley Bank from collapse. Why didn't they act on the predictions? And why didn't decision-makers intervene earlier?

Didier Sornette: I have made so many successful predictions in the past that were systematically ignored by managers and decision-makers. Why? Because it is so much easier to say that the crisis is an act of God and could not have been foreseen, and to wash your hands of any responsibility.

Acting on predictions means stopping the dance, in other words taking painful measures. This is why policymakers are essentially reactive, always behind the curve. It is political suicide to impose pain, to embrace a problem and solve it before it explodes in your face. This is the fundamental problem of risk control.

Credit Suisse had very weak risk controls and culture for decades. Instead, business units were always left to decide what to do and therefore inevitably accumulated a portfolio of latent risks, or, I'd say, lots of far out-of-the-money put options [options with no intrinsic value]. Then, when a handful of random events occurred that were symptomatic of the fundamental lack of controls, people started to get worried. When a large US bank [Silicon Valley Bank] with $220 billion (CHF202 billion) of assets quickly went insolvent, people started to reassess their willingness to leave uninsured deposits at any poorly run bank - and voilà.
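For readers unfamiliar with the metaphor: a put option's intrinsic value is max(strike - spot, 0), so a "far out-of-the-money" put is worth essentially nothing in calm markets and snaps into the money only when prices fall sharply. A bank sitting on latent risks behaves like the writer of such options, with losses that appear just as suddenly. A minimal sketch, with hypothetical numbers:

```python
# The put-option metaphor in code: intrinsic value is max(strike - spot, 0).
# A far out-of-the-money put looks worthless day to day, then jumps in
# value when prices crash. All numbers here are hypothetical.
def put_intrinsic_value(strike: float, spot: float) -> float:
    return max(strike - spot, 0.0)

print(put_intrinsic_value(strike=50.0, spot=100.0))  # 0.0 in calm markets
print(put_intrinsic_value(strike=50.0, spot=30.0))   # 20.0 after a crash
```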

SWI: This means that risk prediction and management won't work if the problem is not solved at the systemic level?

D.S.: The policy of zero or negative interest rates is the root cause of all this. It has left these banks with positions that are vulnerable to rising rates. The huge debts of countries have also made them vulnerable. We live in a world that has become very vulnerable because of the short-sighted and irresponsible policies of the big central banks, which have not considered the long-term consequences of their "firefighting" interventions.

The shock is a systemic one, starting from Silicon Valley Bank, Signature Bank, etc., with Credit Suisse being only an episode revealing the major problem of the system: the consequences of the catastrophic policies of the central banks since 2008, which flooded the markets with easy money and led to huge excesses in financial institutions. We are now seeing some of the consequences.

SWI: What role can AI-based risk prediction play, for example, in the case of the surviving giant UBS?

D.S.: AI and mathematical models are irrelevant in the sense that (risk control) tools are useful only if there is a will to use them!

When there is a problem, many people always blame the models, the risk methods, etc. This is wrong. The problems lie with humans who simply ignore models and bypass them. There were so many instances in the last 20 years. Again and again, the same kind of story repeats itself, with nobody learning the lessons. So AI can't do much, because the problem is not about more "intelligence" but greed and short-sightedness.

Despite the apparent financial gains, this is probably a bad and dangerous deal for UBS. The reason is that it takes decades to create the right risk culture, and they are now likely to create huge morale damage via the big headcount reductions. Additionally, no regulator will be giving them an indemnity for inherited regulatory or client Anti-Money Laundering violations from the Credit Suisse side, which we know had very weak compliance. They will have to deal with surprising problems there for years.

SWI: Could we envision a more rigorous form of oversight of the banking system by governments or even taxpayers using data collected by AI systems?

D.S.: Collecting data is not the purview of AI systems. Collecting clean and relevant data is the most difficult challenge, much more difficult than machine learning and AI techniques. Most data is noisy, incomplete, inconsistent, very costly to obtain and to manage. This requires huge investments and a long-term view that is almost always missing. Hence crises occur every five years or so.

SWI: Lately, we've been hearing more and more about behavioral finance. Is there more psychology and irrationality in the financial system than we think?

D.S.: There is greed, fear, hope and... sex. Joking aside, people in banking and finance are in general super-rational when it comes to optimising their goals and getting rich. It is not irrationality; it is betting and taking big risks where the gains are privatised and the losses are socialised.

Strong regulations need to be imposed. In a sense, we need to make "banking boring" to tame the beasts that tend to destabilise the financial system by construction.

SWI: Is there a future in which machine learning can prevent the failure of "too big to fail" banks like Credit Suisse, or is that pure science fiction?

D.S.: Yes, an AI can prevent a future failure if the AI takes power and enslaves humans to follow its risk management, with incentives dictated by the AI, as in many scenarios depicting the dangers of superintelligent AI. I am not kidding.

The interview was conducted in writing. It has been edited for clarity and brevity.


Originally posted here:
Artificial Intelligence will not save banks from short-sightedness - SWI swissinfo.ch in English