Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence History, How It Embeds Bias, Displaces … – Democracy Now!

If you think Democracy Now!'s reporting is a critical line of defense against war, climate catastrophe and fascism, please make your donation of $10 or more right now. Today a generous donor will DOUBLE your donation, which means it'll go twice as far to support our independent journalism. When Democracy Now! covers war or gun violence, we're not brought to you by the weapons manufacturers. When we cover the climate emergency, our reporting isn't sponsored by the oil, gas, coal or nuclear companies. Democracy Now! is funded by you, and that's why we're counting on your donation to keep us going. Please give today. Every dollar makes a difference; in fact, it gets doubled! Thank you so much. -Amy Goodman

Go here to see the original:
Artificial Intelligence History, How It Embeds Bias, Displaces ... - Democracy Now!

ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments – The…

Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.

And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.

I've been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street's past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.

In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.

Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index like the S&P 500 and that of the stocks it's composed of.
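As a rough illustration, the core of an index-arbitrage check can be sketched in a few lines. The prices, weights and threshold below are invented for the example; real strategies also account for futures pricing, dividends and transaction costs.

```python
def index_arb_signal(index_price, component_prices, weights, threshold=0.5):
    """Compare a quoted index level with the level implied by its
    component stocks; return a trade direction if they diverge."""
    implied = sum(p * w for p, w in zip(component_prices, weights))
    spread = index_price - implied
    if spread > threshold:
        return "sell_index_buy_components"   # index looks rich vs. components
    if spread < -threshold:
        return "buy_index_sell_components"   # index looks cheap vs. components
    return "no_trade"

# Toy example: three stocks with fixed index weights (implied level = 105.0).
prices = [100.0, 50.0, 200.0]
weights = [0.5, 0.3, 0.2]
print(index_arb_signal(106.0, prices, weights))  # sell_index_buy_components
```

The 1980s versions were conceptually this simple; the sophistication came later, in the data feeds and the speed at which such comparisons could be made.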

As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways, on which over a trillion dollars' worth of assets change hands every day, causing market volatility to increase dramatically.

Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.

In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.

Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to more sophisticated automations with much more advanced technology: High-frequency trading.

HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders, which bought and sold baskets of securities over time to take advantage of an arbitrage opportunity (a difference in the price of similar securities that can be exploited for profit), high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.

These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.

Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.
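The sentiment-scoring idea can be sketched with a toy word list. The word lists and the scoring rule here are invented for illustration only; production systems use trained language models rather than hand-made dictionaries.

```python
# Minimal sketch of headline sentiment scoring with a hand-made word list.
POSITIVE = {"beats", "surges", "record", "growth", "upgrade"}
NEGATIVE = {"misses", "plunges", "lawsuit", "recall", "downgrade"}

def sentiment_score(headline):
    """Return (#positive - #negative words) / #words, in [-1, 1]."""
    words = headline.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(len(words), 1)

print(sentiment_score("Company beats estimates, stock surges on record growth"))
print(sentiment_score("Regulator lawsuit triggers downgrade"))
```

A trading system would feed scores like these, aggregated over many headlines and posts, into its buy/sell logic alongside price data.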

These AI-based, high-frequency traders operate very differently than people do.

The human brain is slow, inaccurate and forgetful. It is incapable of the quick, high-precision floating-point arithmetic needed to analyze huge volumes of data and identify trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and limitless capability for analyzing large volumes of data in split milliseconds.

And so, just like most technologies, HFT provides several benefits to stock markets.

These traders typically buy and sell assets at prices very close to the market price, which means they don't charge investors high fees. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.

High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.

But speed and efficiency can also cause harm.

HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.

Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes, erasing and then restoring about $1 trillion in market value.

Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility (a measure of how rapidly and unpredictably prices move up and down) increased significantly after the introduction of HFT.

The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.
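To make the volatility measure concrete, here is a minimal sketch of realized volatility as the standard deviation of period-over-period returns. The two price series are made up; they only show that choppier prices score higher.

```python
import statistics

def realized_volatility(prices):
    """Standard deviation of simple period-over-period returns."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return statistics.stdev(returns)

calm   = [100, 100.1, 100.2, 100.1, 100.3, 100.2]
choppy = [100, 103.0, 98.5, 104.0, 97.0, 102.0]
print(realized_volatility(calm) < realized_volatility(choppy))  # True
```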

In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That's because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.

This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
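The same-side risk can be shown with a toy simulation. The signal values, thresholds and decision rule below are all invented for the example; the point is only that identical rules produce identical reactions to the same news.

```python
def trade_decision(signal, threshold):
    """Buy on a strong positive signal, sell on a strong negative one."""
    if signal > threshold:
        return "buy"
    if signal < -threshold:
        return "sell"
    return "hold"

def market_sides(news_signal, thresholds):
    """One decision per trader; each trader has its own trigger threshold."""
    return [trade_decision(news_signal, t) for t in thresholds]

identical = [0.5] * 10                 # every trader runs the same algorithm
diverse = [0.2, 0.4, 0.7, 0.9, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2]

# Strong negative news (-1.0): identical algorithms all sell at once,
# while diverse thresholds leave some traders on the other side.
print(market_sides(-1.0, identical).count("sell"))  # 10
print(market_sides(-1.0, diverse).count("sell"))    # 4
```

When the first count hits the full population, there is no one left to take the other side of the trade, which is the failure mode described above.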

That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.

In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone's deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.

Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.

Since decisions made by the generative AI-powered chatbot are based on past training data, there would be a similarity in the decisions suggested by the chatbot. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.

This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.

In addition, since market crashes are relatively rare, there isn't much data on them. Since generative AIs depend on training data to learn, their lack of knowledge about crashes could make crashes more likely to happen.

For now, at least, it seems most banks won't be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.

But I strongly believe banks will eventually embrace generative AI, once they resolve concerns they have with it. The potential gains are too significant to pass up, and there's a risk of being left behind by rivals.

But the risks to financial markets, the global economy and everyone are also great, so I hope they tread carefully.

Here is the original post:
ChatGPT-powered Wall Street: The benefits and perils of using artificial intelligence to trade stocks and other financial instruments - The...

ChatGPT as ‘educative artificial intelligence’ – Phys.org

This article has been reviewed according to ScienceX's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

With the advent of artificial intelligence (AI), several aspects of our lives have become more efficient and easier to navigate. One of the latest AI-based technologies is a user-friendly chatbot, ChatGPT, which is growing in popularity owing to its many applications, including in the field of education.

ChatGPT uses algorithms to generate text similar to that generated by a human, within seconds. With its correct and responsible use, it could be used to answer questions, source information, write essays, summarize documents, compose code, and much more. By extension, ChatGPT could transform education drastically by creating virtual tutors, providing personalized learning, and enhancing AI literacy among teachers and students.

However, ChatGPT, or any AI-based technology capable of creating content in education, must be approached with caution.

Recently, a research team including Dr. Weipeng Yang, Assistant Professor at the Education University of Hong Kong, and Ms. Jiahong Su from the University of Hong Kong, proposed a theoretical framework known as 'IDEE' for guiding AI use in education (also referred to as 'educative AI').

In their study, which was published in the ECNU Review of Education on April 19, 2023, the team also identified the benefits and challenges of using educative AI and provided recommendations for future educative AI research and policies. Dr. Yang remarks, "We developed the IDEE framework to guide the integration of generative artificial intelligence into educational activities. Our practical examples show how educative AI can be used to improve teaching and learning processes."

The IDEE framework for educative AI includes a four-step process. 'I' stands for identifying the desired outcomes and objectives, 'D' stands for determining the appropriate level of automation, the first 'E' stands for ensuring that ethical considerations are met, and the second 'E' stands for evaluating the effectiveness of the application. For instance, the researchers tested the IDEE framework for using ChatGPT as a virtual coach for early childhood teachers by providing quick responses to teachers during classroom observations.
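The four steps can be written out as a plain-data checklist. The field names and the `review_plan` helper below are my own shorthand for illustration, not part of the published framework.

```python
# The four IDEE steps as (step, description) pairs.
IDEE_STEPS = [
    ("Identify", "desired outcomes and objectives"),
    ("Determine", "appropriate level of automation"),
    ("Ensure", "ethical considerations are met"),
    ("Evaluate", "effectiveness of the application"),
]

def review_plan(plan):
    """Report which IDEE steps a lesson plan has documented.

    `plan` is a dict mapping lowercase step names to notes.
    """
    return {step: step.lower() in plan for step, _ in IDEE_STEPS}

plan = {"identify": "virtual coaching for teachers", "evaluate": "follow-up survey"}
print(review_plan(plan))  # flags the missing Determine and Ensure steps
```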

They found that ChatGPT can provide a more personalized and interactive learning experience for students that is tailored to their individual needs. It can also improve teaching models, assessment systems, and make education more enjoyable. Furthermore, it can help save teachers' time and energy by providing answers to students' questions, encourage teachers to reflect more on educational content, and provide useful teaching suggestions.

Notably, mainstream ChatGPT use for educational purposes raises many concerns including issues of costs, ethics, and safety. Real-world applications of ChatGPT require significant investments with respect to hardware, software, maintenance, and support, which may not be affordable for many educational institutions.

In fact, the unregulated use of ChatGPT could lead students to access inaccurate or dangerous information. ChatGPT could also be wrongfully used to collect sensitive information about students without their knowledge or consent. Unfortunately, AI models are only as good as the data used to train them. Hence, low quality data that is not representative of all student cohorts can generate erroneous, unreliable, and discriminatory AI responses.

Since ChatGPT and other educative AI tools are still emerging technologies, understanding their effectiveness in education warrants further research. Accordingly, the researchers offer recommendations for future work on educative AI. First, there is a dire need for more contextual research on using AI in different educational settings. Second, there should be an in-depth exploration of the ethical and social implications of educative AI.

Third, the integration of AI into educational practices must involve teachers who are regularly trained in the use of generative AI. Finally, there should be policies and regulations for monitoring the use of educative AI to ensure responsible, unbiased, and equal technological access for all students.

Dr. Yang says, "While we acknowledge the benefits of educative AI, we also recognize the limitations and existing gaps in this field. We hope that our framework can stimulate more interest and empirical research to fill these gaps and promote widespread application of AI in education."

More information: Jiahong Su et al, Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education, ECNU Review of Education (2023). DOI: 10.1177/20965311231168423

Provided by Cactus Communications

Continued here:
ChatGPT as 'educative artificial intelligence' - Phys.org

What is the Message from Artificial Intelligence | Butler Snow LLP … – JD Supra

When announcing the much publicized $125 million fine against JP Morgan for violating recordkeeping rules, the U.S. Securities and Exchange Commission (SEC) Chair stated that financial institutions did not act as if they got the message regarding unauthorized or unwatched written communications.[1] The Commodity Futures Trading Commission (CFTC) has also fined banks and brokerages billions of dollars for not saving communications.[2] For a long time, businesses had to worry about saving their own authorized communications, but more recently, the SEC has discovered that employees are purposefully discussing business on separate communications channels to avoid company eyes.[3]

As the fines become stiffer for current authorized and unauthorized messaging platforms, the race to develop artificial intelligence, including an AI-powered chatbox, is in full swing.[4] Early tests of AI chat tools show that AI can produce some offensive and crazy responses as they ingest a vast amount of text from the internet.[5] What will federal regulators make of these messages? It is even predicted that workers in the finance industry could be replaced by AI in the areas of financial advising, trading, accounting, and investment banking.[6] In announcing the fine against JP Morgan, SEC Chair Gary Gensler stated: "As technology changes, it's even more important that registrants ensure that their communications are appropriately recorded and are not conducted outside of official channels in order to avoid market oversight."[7]

Much has been said about the Department of Justice's announced changes to its criminal enforcement policies.[8] Kenneth A. Polite, Jr., the Assistant Attorney General of the Criminal Division, emphasized that business-related electronic data and communications must be preserved and accessible, or negative consequences will flow. This includes communications on personal devices, communication platforms, or third-party messaging applications. AI-generated messages or information will most likely also be considered business-related electronic data. It is unclear how erroneous or false information from AI will be viewed. But one thing is for sure: the SEC and other agencies will continue to investigate financial firms and whether they have gotten the message or not. Regulated firms need to invest time and resources in developing communications and retention policies now, with an eye toward the future of AI.

[1] SEC will reward cooperation where firms mess up with unauthorized communication, Resources, Jennie Clarke, November 17, 2022, http://www.globalrelay.com/sec-will-reward-cooperation-where-firms-mess-up-with-unauthorized-communications.

[2] 16 Wall Street firms fined $1.8B for using private text apps, lying about it, Computerworld, Lucas Mearian, September 28, 2022, http://www.computerworld.com/article/3675289/16-wall-street-firms-fined-18b-for-using-private-text-apps-lying-about-it.

[3] Id.

[4] Microsoft's new AI chatbox has been saying some crazy and unhinged things, NPR, Bobby Allyn, March 2, 2023, http://www.npr.org/2023/03/02/1159895892/ai-microsoft-bing-chatbox.

[5] Id.

[6] ChatGPT may be coming for our jobs. Here are the 10 roles that AI is most likely to replace, Business Insider, Aaron Mok and Jacob Zinkula, April 9, 2023, http://www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-ai-labor-trends-2023-02.

[7] JPMorgan Admits to Widespread Recordkeeping Failures and Agrees to Pay $125 Million Penalty to Resolve SEC Charges, U.S. Securities and Exchange Commission Press Release 2021-262, December 17, 2021, http://www.sec.gov/news/press-release/2021-262.

[8] DOJ Takes Stance on Messaging Apps, Details Comp Reforms, Law 360, Stewart Bishop, March 3, 2023, http://www.law360.com/articles/1582115/doj-takes-stance-on-messaging-apps-details-comp-reforms.

Read the original here:
What is the Message from Artificial Intelligence | Butler Snow LLP ... - JD Supra

Approaching artificial intelligence: How Purdue is leading the … – Purdue University

WEST LAFAYETTE, Ind. - A technology with the potential to transform all aspects of everyday life is shaping the next pursuit at Purdue University. With the programs, research and expertise highlighted below, Purdue is guiding the advancement of artificial intelligence. If you have any questions about Purdue's work in AI or would like to speak to a Purdue expert, contact Brian Huchel, bhuchel@purdue.edu.

AI presents both unique opportunities and unique challenges. How can we use this technology as a tool? Researcher Javier Gomez-Lavin, assistant professor in the Department of Philosophy, shares the work that needs to be done in habits, rules and regulations surrounding those who work with AI.

Hear researcher Aniket Bera explain more about his groundbreaking work to bring human behavior into AI and what sparked his interest in the technology. In this interview, Bera touches on the importance of technology in human emotion and the goal of his research lab.

Is AI trustworthy? Hear Purdue University in Indianapolis researcher Arjan Durresi explain how making AI safe and easy to understand for the everyday user requires treating the development of the technology like the development of a child.

AI is touching almost every subject, discipline and career field as it evolves. In human resources, the technology has already been used as a selection tool in job interviews. Professor Sang Eun Woo explains how we can turn this use of AI as a selection tool into a professional development tool.

How will AI influence writing and education? Harry Denny, professor of English and director of Purdue's On-Campus Writing Lab, answers how ChatGPT and other AI programs may be integrated into the classroom experience.

The rise of ChatGPT has created concerns over security. Professor Saurabh Bagchi shares the reality of cybersecurity concerns and how this technology could be used to strengthen the security of our computing systems.

How Purdue is helping design artificial intelligence, raise trust in it

WISH-TV Indianapolis

Purdue University professor working to help robots better work with humans

WGN-TV Chicago

Original post:
Approaching artificial intelligence: How Purdue is leading the ... - Purdue University