Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence voice coach shows promise in treating depression, anxiety: Study – Organiser

The findings of a recent pilot study headed by academics from the University of Illinois Chicago suggest that artificial intelligence may be a beneficial aid in the treatment of mental illness.

The study, which was the first to test an AI voice-based virtual coach for behavioural therapy, found changes in patients' brain activity along with improved depression and anxiety symptoms after using Lumen, an AI voice assistant that delivered a form of psychotherapy. The UIC team says the results, which are published in the journal Translational Psychiatry, offer encouraging evidence that virtual therapy can play a role in filling the gaps in mental health care, where waitlists and disparities in access are often hurdles that patients, particularly those from vulnerable communities, must overcome to receive treatment.

"We've had an incredible explosion of need, especially in the wake of COVID, with soaring rates of anxiety and depression and not enough practitioners," said Dr Olusola A Ajilore, UIC professor of psychiatry and co-first author of the paper. "This kind of technology may serve as a bridge. It's not meant to be a replacement for traditional therapy, but it may be an important stop-gap before somebody can seek treatment."

Lumen, which operates as a skill in the Amazon Alexa application, was developed by Ajilore and study senior author Dr Jun Ma, the Beth and George Vitoux Professor of Medicine at UIC, along with collaborators at Washington University in St. Louis and Pennsylvania State University, with the support of a $2 million grant from the National Institute of Mental Health.

The UIC researchers recruited over 60 patients for the clinical study exploring the application's effect on mild-to-moderate depression and anxiety symptoms, and on activity in brain areas previously shown to be associated with the benefits of problem-solving therapy.

Two-thirds of the patients used Lumen on a study-provided iPad for eight problem-solving therapy sessions, with the rest serving as a waitlist control receiving no intervention.

After the intervention, study participants using the Lumen app showed decreased scores for depression, anxiety and psychological distress compared with the control group. The Lumen group also showed improvements in problem-solving skills that correlated with increased activity in the dorsolateral prefrontal cortex, a brain area associated with cognitive control. Promising results for women and underrepresented populations also were found.

"It's about changing the way people think about problems and how to address them, and not being emotionally overwhelmed," Jun Ma said. "It's a pragmatic and patient-driven behavior therapy that's well established, which makes it a good fit for delivery using voice-based technology."

A larger trial comparing the use of Lumen with both a control group on a waitlist and patients receiving human-coached problem-solving therapy is currently being conducted by the researchers. They stress that the virtual coach doesn't need to perform better than a human therapist to fill a desperate need in the mental health system.

"The way we should think about digital mental health services is not for these apps to replace humans, but rather to recognise what a gap we have between supply and demand, and then find novel, effective and safe ways to deliver treatments to individuals who otherwise do not have access, to fill that gap," Jun Ma said.

(with inputs from ANI)

See the original post:
Artificial Intelligence voice coach shows promise in treating depression, anxiety: Study - Organiser

BofA’s analysts say artificial intelligence (AI) is a ‘baby bubble’ for … – Investing.com

The highlight of the stock market in 2023 has been "The Magnificent Seven", i.e. the surge in shares of the mega-cap tech stocks, Bank of America analysts write in their regular weekly column.

The Big 7 monopolistic U.S. Tech stocks - Apple (NASDAQ:AAPL), Microsoft (NASDAQ:MSFT), Google (NASDAQ:GOOGL), Amazon (NASDAQ:AMZN), Nvidia (NASDAQ:NVDA), Meta Platforms (NASDAQ:META), and Tesla (NASDAQ:TSLA) - are up 61% year-to-date. The group trades on 30x PE vs 17x for the rest of the S&P 500.

A similar situation can also be observed in Europe where a group of 7 luxury stocks trades on 36x vs the rest of Stoxx 600 trading on 12x PE.

One of the key drivers of the tech rally in 2023 has been the artificial intelligence (AI) frenzy and the popularity of generative AI tools, like OpenAI's ChatGPT. The analysts say AI is in a "baby bubble" so far.

"Bubbles in right things (e.g. Internet) & wrong things (e.g. housing) always started by easy money, always ended by rate hikes," the analysts said in a note on Friday.

They say the Fed funds rate at 6%, not at 3%, could be the "pain trade" for the next 12 months as the market continues to expect rate cuts in the second half of 2023. In the near term, the S&P 500 extending its rally to 4400 could be another "pain trade".

"We still fade SPX 4.2k: $220 EPS + 20x PE + 200bps Fed cuts = as good as it gets; but clients so bored of bears," the analysts added.

As far as flows are concerned, $25.1 billion went to cash and $5.6 billion to bonds in the week to Wednesday.

More here:
BofA's analysts say artificial intelligence (AI) is a 'baby bubble' for ... - Investing.com

The benefits and perils of using artificial intelligence to trade stocks and other financial instruments – Tech Xplore

Artificial Intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do.

And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives.

I've been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street's past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making.

In the early 1980s, fueled by advancements in technology and financial innovations such as derivatives, institutional investors began using computer programs to execute trades based on predefined rules and algorithms. This helped them complete large trades quickly and efficiently.

Back then, these algorithms were relatively simple and were primarily used for so-called index arbitrage, which involves trying to profit from discrepancies between the price of a stock index, such as the S&P 500, and the prices of the stocks it's composed of.
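To make the index-arbitrage idea concrete, here is a minimal Python sketch of the kind of check such an algorithm performs. The tickers, prices, weights and threshold are invented for illustration; real index-arbitrage systems trade futures against baskets of hundreds of stocks.

```python
# Toy index-arbitrage check: compare a traded index-product price with the
# "fair value" implied by its constituent stocks. All numbers are invented.

constituents = {            # ticker: (price, index weight)
    "AAA": (100.0, 0.50),
    "BBB": (50.0, 0.30),
    "CCC": (25.0, 0.20),
}

def fair_value(components):
    """Weighted sum of constituent prices (a simplified index level)."""
    return sum(price * weight for price, weight in components.values())

def arbitrage_signal(index_price, components, threshold=0.25):
    """Flag a trade when the index price drifts away from its parts."""
    spread = index_price - fair_value(components)
    if spread < -threshold:
        return "buy index / sell stocks"   # index looks cheap vs its parts
    if spread > threshold:
        return "sell index / buy stocks"   # index looks rich vs its parts
    return "no trade"

print(arbitrage_signal(69.0, constituents))   # fair value here is 70.0
```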

As technology advanced and more data became available, this kind of program trading became increasingly sophisticated, with algorithms able to analyze complex market data and execute trades based on a wide range of factors. These program traders continued to grow in number on the largely unregulated trading freeways, on which over a trillion dollars' worth of assets change hands every day, causing market volatility to increase dramatically.

Eventually this resulted in the massive stock market crash in 1987 known as Black Monday. The Dow Jones Industrial Average suffered what was at the time the biggest percentage drop in its history, and the pain spread throughout the globe.

In response, regulatory authorities implemented a number of measures to restrict the use of program trading, including circuit breakers that halt trading when there are significant market swings and other limits. But despite these measures, program trading continued to grow in popularity in the years following the crash.

Fast forward 15 years, to 2002, when the New York Stock Exchange introduced a fully automated trading system. As a result, program traders gave way to more sophisticated automations with much more advanced technology: High-frequency trading.

HFT uses computer programs to analyze market data and execute trades at extremely high speeds. Unlike program traders that bought and sold baskets of securities over time to take advantage of an arbitrage opportunity (a difference in price of similar securities that can be exploited for profit), high-frequency traders use powerful computers and high-speed networks to analyze market data and execute trades at lightning-fast speeds. High-frequency traders can conduct trades in approximately one 64-millionth of a second, compared with the several seconds it took traders in the 1980s.

These trades are typically very short term in nature and may involve buying and selling the same security multiple times in a matter of nanoseconds. AI algorithms analyze large amounts of data in real time and identify patterns and trends that are not immediately apparent to human traders. This helps traders make better decisions and execute trades at a faster pace than would be possible manually.

Another important application of AI in HFT is natural language processing, which involves analyzing and interpreting human language data such as news articles and social media posts. By analyzing this data, traders can gain valuable insights into market sentiment and adjust their trading strategies accordingly.
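As a rough sketch of the idea, and nothing like the trained language models real trading desks use, a headline-sentiment scorer might look like the toy below; the word lists and headlines are purely illustrative.

```python
# Deliberately simple "news sentiment" scoring: count positive and negative
# words in each headline. Real systems use trained language models; the word
# lists and headlines below are invented for illustration only.

POSITIVE = {"beats", "surges", "record", "upgrade", "growth"}
NEGATIVE = {"misses", "plunges", "lawsuit", "downgrade", "recall"}

def headline_score(headline: str) -> int:
    """Positive score suggests bullish news, negative suggests bearish."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Acme Corp beats estimates as cloud growth hits record",
    "Regulator opens lawsuit against Acme over product recall",
]

for h in headlines:
    print(headline_score(h), h)
```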

These AI-based, high-frequency traders operate very differently than people do.

The human brain is slow, inaccurate and forgetful. It is incapable of the quick, high-precision floating-point arithmetic needed to analyze huge volumes of data and identify trade signals. Computers are millions of times faster, with essentially infallible memory, perfect attention and limitless capability for analyzing large volumes of data in split milliseconds.

And, so, just like most technologies, HFT provides several benefits to stock markets.

These traders typically buy and sell assets at prices very close to the market price, which means they don't charge investors high fees. This helps ensure that there are always buyers and sellers in the market, which in turn helps to stabilize prices and reduce the potential for sudden price swings.

High-frequency trading can also help to reduce the impact of market inefficiencies by quickly identifying and exploiting mispricing in the market. For example, HFT algorithms can detect when a particular stock is undervalued or overvalued and execute trades to take advantage of these discrepancies. By doing so, this kind of trading can help to correct market inefficiencies and ensure that assets are priced more accurately.

But speed and efficiency can also cause harm.

HFT algorithms can react so quickly to news events and other market signals that they can cause sudden spikes or drops in asset prices.

Additionally, HFT financial firms are able to use their speed and technology to gain an unfair advantage over other traders, further distorting market signals. The volatility created by these extremely sophisticated AI-powered trading beasts led to the so-called flash crash in May 2010, when stocks plunged and then recovered in a matter of minutes, erasing and then restoring about $1 trillion in market value.

Since then, volatile markets have become the new normal. In 2016 research, two co-authors and I found that volatility, a measure of how rapidly and unpredictably prices move up and down, increased significantly after the introduction of HFT.
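For readers who want the measure spelled out, one standard way to quantify volatility is the standard deviation of daily returns, annualised. The price series below is invented, and the 2016 study may use a different estimator.

```python
# One common volatility measure: standard deviation of daily returns,
# annualised over ~252 trading days. The price series is invented.

import math

prices = [100.0, 101.5, 99.8, 102.3, 101.0, 103.8, 100.9]

returns = [(p1 - p0) / p0 for p0, p1 in zip(prices, prices[1:])]
mean = sum(returns) / len(returns)
variance = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
daily_vol = math.sqrt(variance)
annualised_vol = daily_vol * math.sqrt(252)

print(f"daily volatility: {daily_vol:.2%}, annualised: {annualised_vol:.2%}")
```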

The speed and efficiency with which high-frequency traders analyze the data mean that even a small change in market conditions can trigger a large number of trades, leading to sudden price swings and increased volatility.

In addition, research I published with several other colleagues in 2021 shows that most high-frequency traders use similar algorithms, which increases the risk of market failure. That's because as the number of these traders increases in the marketplace, the similarity in these algorithms can lead to similar trading decisions.

This means that all of the high-frequency traders might trade on the same side of the market if their algorithms release similar trading signals. That is, they all might try to sell in case of negative news or buy in case of positive news. If there is no one to take the other side of the trade, markets can fail.
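A toy simulation captures the risk, under the simplifying assumption that each trader buys whenever its signal clears a threshold; the trader count, noise level and thresholds are all invented.

```python
# Toy simulation of the "same algorithms, same side of the market" risk.
# Every trader sees a noisy version of the same negative news. With identical
# decision rules, almost nobody is left to take the buy side of the trade.

import random

random.seed(1)
N_TRADERS = 1000
news = -0.5                        # one piece of clearly negative news

def decide(signal, threshold):
    return "buy" if signal > threshold else "sell"

# Identical algorithms: every trader uses the same threshold of 0.0.
same_algo = [decide(news + random.gauss(0, 0.1), 0.0) for _ in range(N_TRADERS)]

# Diverse algorithms: each trader draws its own threshold.
diverse = [decide(news + random.gauss(0, 0.1), random.gauss(-0.5, 0.5))
           for _ in range(N_TRADERS)]

print("identical algorithms:", same_algo.count("buy"), "willing buyers")
print("diverse algorithms:  ", diverse.count("buy"), "willing buyers")
```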

That brings us to a new world of ChatGPT-powered trading algorithms and similar programs. They could take the problem of too many traders on the same side of a deal and make it even worse.

In general, humans, left to their own devices, will tend to make a diverse range of decisions. But if everyone's deriving their decisions from a similar artificial intelligence, this can limit the diversity of opinion.

Consider an extreme, nonfinancial situation in which everyone depends on ChatGPT to decide on the best computer to buy. Consumers are already very prone to herding behavior, in which they tend to buy the same products and models. For example, reviews on Yelp, Amazon and so on motivate consumers to pick among a few top choices.

Since decisions made by the generative AI-powered chatbot are based on past training data, there would be a similarity in the decisions suggested by the chatbot. It is highly likely that ChatGPT would suggest the same brand and model to everyone. This might take herding to a whole new level and could lead to shortages in certain products and services as well as severe price spikes.

This becomes more problematic when the AI making the decisions is informed by biased and incorrect information. AI algorithms can reinforce existing biases when systems are trained on biased, old or limited data sets. And ChatGPT and similar tools have been criticized for making factual errors.

In addition, since market crashes are relatively rare, there isn't much data on them. Because generative AIs depend on training data to learn, their lack of knowledge about crashes could make such crashes more likely to happen.

For now, at least, it seems most banks won't be allowing their employees to take advantage of ChatGPT and similar tools. Citigroup, Bank of America, Goldman Sachs and several other lenders have already banned their use on trading-room floors, citing privacy concerns.

But I strongly believe banks will eventually embrace generative AI, once they resolve the concerns they have with it. The potential gains are too significant to pass up, and there's a risk of being left behind by rivals.

But the risks to financial markets, the global economy and everyone are also great, so I hope they tread carefully.

Read this article:
The benefits and perils of using artificial intelligence to trade stocks and other financial instruments - Tech Xplore

Opinion: Soon, artificial intelligence will be running companies rise … – The Globe and Mail

A company's use of AI needs to align with its vision, mission and values and be based on a set of transparent and ethical principles and policies. Dado Ruvic/Reuters

Ian Robertson is the chief executive officer of strategic shareholder advisory and governance firm Kingsdale Advisors Inc.

Artificial Intelligence is bound to be the central engine of a fourth industrial revolution and is on the verge of playing a crucial role in the management and oversight of companies.

Some may be surprised to learn the use of artificial governance intelligence is already actively applied in boardrooms and corporate decision-making processes, such as due diligence of mergers and acquisitions, profiling investors, auditing annual reports, validating new business opportunities, analyzing and optimizing procurement, sales, marketing, and other corporate matters.

Most businesses are already utilizing some form of AI, algorithms and various platforms, such as ChatGPT. International organizations, governments, businesses, scientific and legal communities are racing to establish new regulations, laws, policies, ethical codes and privacy requirements as AI continues to evolve at a rapid pace while current legal and regulatory frameworks are lagging and becoming obsolete.

Against this backdrop it is important shareholders and boards start considering these issues, too, especially as it relates to augmenting or supplanting the role of corporate directors. Is your company ready for the rise of the robo-director?

In 2014, Hong Kong-based venture capital group Deep Knowledge Ventures appointed an algorithm named VITAL (Validating Investment Tool for Advancing Life Sciences) to its board of directors. VITAL was given the same right as the corporation's human directors to vote on whether the firm should invest in a specific company or not. Since then, VITAL has been widely acknowledged as the world's first robo-director, and other companies, such as software provider Tietoevry and Salesforce, have followed suit in employing AI in the boardroom.

The World Economic Forum has reported that by 2026, corporate governance will have undergone a robotization process on a massive scale. Momentum in computational power, breakthroughs in AI technology and advanced digitalization will inevitably lead to more established support for corporate directors using AI in their roles, if not their full replacement by autonomous systems. The result is that human directors sharing their decision-making powers with robo-directors will have become the new normal.

As the legal and regulatory landscape races to keep pace, companies need to forecast the compliance obligations that govern AI systems, and boards will need to adjust to new corporate laws. In Canada, several coming federal and provincial privacy law reforms will affect the use of AI in business operations. The proposed federal Bill C-27, if passed, would implement Canada's first artificial intelligence legislation, the Artificial Intelligence and Data Act (AIDA), which could come into effect in 2025. Current corporate law is not adapted to artificial governance intelligence and will have to cope with new and complex legal questions once the use of AI as a support tool for, or replacement of, human directors increases.

There are some key questions directors and shareholders alike should be considering: How do current legal strategies apply to robo-directors? How, and by whom, will fiduciary duties be executed? Financial compensation and pay-for-performance will be of no use to robo-directors, so who is being compensated and held accountable behind the scenes for programming and controlling the robo-director? What are the needs and limitations of a robo-director, and which roles of a traditional director should be ring-fenced from them?

The use of AI provides opportunities and potential threats, both requiring strong risk and governance frameworks. The board is accountable legally and ethically for the use of AI within the company and its impact on employees, customers and shareholders, including third-party products which may embed AI technologies.

The use of AI needs to align with the company's vision, mission and values; be based on a set of safe, transparent and ethical principles and policies; and be rigorously monitored to ensure compliance with data privacy rules. Codes of conduct and ethics need to be updated to include an AI governance framework and ensure there is no bias in data-setting and decision-making. Companies should consider appointing an executive who will be responsible for AI governance and provide strategic insights to the board.

See the original post:
Opinion: Soon, artificial intelligence will be running companies rise ... - The Globe and Mail

Siri co-founder Tom Gruber helped bring AI into the mainstream. Here’s why he’s worried about how fast AI is growing – ABC News

Tom Gruber speaks in a soft and deep American drawl. Passionate and methodical, he reflects on the moment he and two colleagues created Siri, Apple's virtual assistant, the high point of his 40-year career in Silicon Valley's pursuit of artificial intelligence.

"Around 2007-2008, we had everything in place to bring real artificial intelligence into everyone's hand, and that was thrilling.

"Siri was very playful. And that was by design," he declares with a wide grin and a laugh almost like a proud dad.

"Now it's used roughly a billion times a day. That's a lot of use. It's on 2 billion devices. It is absolutely woven into everyday life."

But what Mr Gruber and long-time colleagues working on artificial intelligence (AI) have seen in the past 18 months has scared them.

"There's something different this time," he says.

"And that something different is that the amount of capabilities that were just uncovered in the last year or two that has surprised the people who were building them, and surpassed all of our expectations at the pace to which these things were uncovered."

ChatGPT, produced by the Microsoft-funded OpenAI company, is the most well-known of all the new "generative" AI chatbots that have been released.

Trained on the knowledge of the internet and then released to be tested on the public, this new AI has spread at a record pace.

In Australia, it is already causing disruption. Schools and unis have both embraced and banned it. Workplaces are using it for shortcuts and efficiencies, raising questions about whether it will abolish some jobs. IBM's CEO has already said about 8,000 jobs could be replaced by AI and automation.

Microsoft told 7.30 this week that "real-world experience and feedback is critical and can't be fully replicated in a lab".

But Mr Gruber, and thousands of AI industry scientists, coders and researchers, want testing on the public to stop until a framework is put in place to, in his words, "keep this technology safe and on the side of humans".

"What they're doing by releasing it [to the world] is they're getting data from humans about the kind of stuff that normal humans would do when they get their hands on such a model. And they're like, learning by trial and error," Mr Gruber tells 7.30.

"There's a problem with that model that's called a human trial without consent.

Toby Walsh is the chief scientist at UNSW's new AI Institute. He says another part of the concern is the rate at which ChatGPT is being adopted.

"Even small harms, multiplied by a billion people, could be cause for significant concern," he says.

"ChatGPT was used by a million people in five days, [and] 100 million people at the end of the first month.

"Now, six months later, it's in the hands of a billion people. We've never had technologies before where you could roll them out so quickly."

Here's the rub: the new AI models are really good at being fake humans in text. They're also really good at creating fake images, and even the voices of real people.

If you don't want to pay a model for a photo shoot, or you want a contract written quickly without a lawyer, AI is a tool that's at your disposal.

But the new AI apps are also great for fraudsters and those who want to manipulate public perceptions.

CBC, Canada's public broadcaster, reported that police are investigating cases where AI-trained fake voices were used to scam money from parents who believed they were speaking to their children.

"Just to clarify it only takes a few seconds now to clone someone's voice I could ring up your answer phone, record your voice, and speak just like you," Mr Walsh says.

Mr Gruber is scared by how new AI can "pretend to be human really well".

"Humans are already pretty gullible," he says.

"I mean, a lot of people would talk to Siri for a while, mostly for entertainment purposes.

"But there are a lot of people who get sucked into such a thing. And that's one of the really big risks.

"We haven't even seen the beginning of all the ways people can use this amazing piece of technology to amplify acts of mischief.

"If we can't believe our senses, and use our inherited ability to detect whether that thing is fake or real, then a lot of things that we do as a society start to unravel."

AI is not like computer-coded programs whose lines of script can be checked and corrected one by one.

"They're closer to [being] organic," Connor Leahy says.

The London-based coder is at the beginning of his career and already the CEO of his own company Conjecture, which aims to create "safe AI" and is funded to the tune of millions of dollars by venture capitalists and former tech success stories like the creator of Skype.

Mr Leahy's definition of safe AI is "AI that truly does what we want it to do, and that we can rely on them to not do things we don't want them to do".

Sounds simple enough until he describes the current AI apps.

"They are complete mystery boxes, black boxes, as we would say in technical terms," he says.

"There's all these kinds of weirdness that we don't understand, even with, for example, relatively simple image recognition systems, which have existed for quite a while.

"They have these problems, which are called adversarial examples.

"And what this means is that you can completely confuse the system by just changing a single pixel in an image;you just change one pixel and suddenly [the system] thinks that a dog is an ostrich.

"This is very strange. And we don't really know why this happens. And we don't really know how to fix it."

This "black box"has led OpenAI to develop a tool to help identify which parts of its AI system are responsible for its behaviours.

William Saunders, the interpretability team manager at OpenAI, told industry site TechCrunch: "We want to really be able to know that we can trust what the model is doing, and the answer that it produces."

Each large language model he's referring to is a neural network. And each individual neuron makes decisions based on the information it receives, a bit like the human brain. That neuron then sends its answer to the rest of the network.
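In code, the "individual neuron" idea looks something like the sketch below: a weighted sum of inputs passed through a squashing function. The weights and inputs are invented, and real models chain billions of these together.

```python
# Minimal sketch of one artificial neuron: weigh the incoming signals,
# squash the total through a sigmoid, and pass the result onward.
# Inputs and weights are invented for illustration.

import math

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # activation between 0 and 1

incoming = [0.2, 0.8, -0.5]              # signals from the previous layer
activation = neuron(incoming, weights=[1.5, -0.3, 0.9], bias=0.1)
print(f"this neuron's output, sent to the rest of the network: {activation:.3f}")
```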

OpenAI says their tool could only "confidently" explain the behaviour of just 1,000 neurons out of a total of 307,200 neurons in its GPT-2 system. That's two generations back.

Meanwhile, GPT-4 has an estimated trillion neurons.

Ironically, OpenAI is using GPT-4 to run its tests on GPT-2, which underscores the point that it has released something into the world it barely understands.

Science fiction writer Isaac Asimov famously wrote the three laws of robotics, the first of which Mr Gruber expands upon: robots should do no harm to humans, nor cause harm to happen through inaction.

That doesn't apply to AI at the moment because it's not a law it can understand at a conceptual level, "because the AI bot or the language model doesn't have human values engineered into it".

It's a big word calculator.

"It's only been trained to solve this astonishingly simple game of cards [in which each card is a word]," Mr Gruber says.

"It plays this game where it puts a card down and then guesses what the next word is. And then, once it figures that out, you know, OK, it guesses the next word. And so on."

Those words are what it calls a response to a question asked by a human.

"It plays the game a trillion, trilliontimes an astonishing amount of scale and computation, and a very simple operation. And that's the only thing it's told to do."

The new generation of AI can mimic human language very effectively but it cannot feel empathy for humans.

"They are a very blunt instrument we don't know how to make them care about us," Mr Leahy says.

"This is similar [to] how potentially a human sociopath understands that doing mean things is mean, but they don't care. This is the problem that we currently face with AI."

This is all happening now, not in some doomsday future scenario on a Hollywood scale where sentient AI takes over the world.

It's no wonder, then, that so many in the industry are calling for help.

Tech insiders are now calling for their wings to be clipped, even in America, where it is almost unheard of for US corporations to ask to be regulated.

But that is precisely what happened this week, when the head of OpenAI, Sam Altman, appeared before the US Congress.

In stunning testimony, the 38-year-old declared: "If this technology goes wrong, it can go quite wrong; we want to be vocal about that, we want to work with the government to prevent that from happening."

Mr Altman has been quite open about his fears, telling ABC America in an interview earlier this year he is "a little bit" scared of AI's capabilities, before adding that if he said he wasn't, he shouldn't be trusted.

Mr Leahy is also outspoken.

"There is currently more regulation on selling a sandwich to the public than there is to building completely novel powerful AI systems with unknown capabilities and intelligence and releasing them to the general public widely onto the internet, you know, accessible by API to interface with any tools they want," he said.

"The government at the moment has no regulation in place whatsoever about this."

The challenge now is how fast safeguards can be installed and whether they are effective.

"It's kind of like a sense of futurists' whack-a-mole," Mr Leahy told 7.30.

"It's not that there's one specific way things go wrong, and only one way, how unleashing intelligent, autonomous, powerful systems onto the internet that we cannot control and we do not understand ... there's billions of ways this could go wrong."

Read more:
Siri co-founder Tom Gruber helped bring AI into the mainstream. Here's why he's worried about how fast AI is growing - ABC News