Archive for the ‘Artificial Intelligence’ Category

AI caramba, those neural networks are power-hungry: Counting the environmental cost of artificial intelligence – The Register

Feature The next time you ask Alexa to turn off your bedroom lights or make a computer write dodgy code, spare a thought for the planet. The back-end mechanics that make it all possible take up a lot of power, and these systems are getting hungrier.

Artificial intelligence began to gain traction in mainstream computing just over a decade ago when we worked out how to make GPUs handle the underlying calculations at scale. Now there's a machine learning algorithm for everything, but while the world marvels at the applications, some researchers are worried about the environmental expense.

One of the most frequently quoted papers on this topic, from the University of Massachusetts, analysed the training costs of AI models, including Google's BERT natural language processing model. It found that the carbon emissions from training BERT on GPUs were roughly the same as those of a trans-American jet flight.

Kate Saenko, associate professor of computer science at Boston University, worries that we're not doing enough to make AI more energy efficient. "The general trend in AI is going in the wrong direction for power consumption," she warns. "It's getting more expensive in terms of power to train the newer models."

The trend is exponential. Researchers associated with OpenAI wrote that the computing used to train the average model increases by a factor of 10 each year.

Most AI these days is based on machine learning (ML). This uses a neural network: a collection of nodes organised in layers, with each node connected to nodes in the next layer. Each of these connections has a score known as a parameter or weight.

The neural network takes an input (such as a picture of a hotdog) and runs it through those layers, each of which uses its parameters to produce an output. The final output is a judgement about the data (for example, was the original input a picture of a hotdog or not?).
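
To make that concrete, here is a minimal sketch of such a layered network in PyTorch (the framework, layer sizes and names are assumptions for illustration, not details from the article): an image goes in, passes through each layer's weights in turn, and a hotdog-or-not judgement comes out.

# Illustrative sketch only: a tiny layered classifier in PyTorch.
# Layer sizes are arbitrary; a real image model would be far larger.
import torch
import torch.nn as nn

class HotdogNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),                 # flatten a 3x64x64 image into one long vector
            nn.Linear(3 * 64 * 64, 128),  # each Linear layer holds weights (parameters)
            nn.ReLU(),
            nn.Linear(128, 32),
            nn.ReLU(),
            nn.Linear(32, 2),             # two outputs: hotdog / not hotdog
        )

    def forward(self, x):
        return self.layers(x)             # run the input through each layer in turn

model = HotdogNet()
image = torch.rand(1, 3, 64, 64)          # stand-in for a photo of a (possible) hotdog
logits = model(image)
# With untrained, random weights this judgement is meaningless; training fixes that.
print("hotdog" if logits.argmax(dim=1).item() == 0 else "not hotdog")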

Those weights don't come preconfigured. You have to calculate them. You do that by showing the network lots of labelled pictures of hot dogs and not hot dogs. You keep training it until the parameters are optimised, which means that they spit out the correct judgement for each piece of data as often as possible. The more accurate the model, the better it will be when making judgements about new data.
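
As a rough illustration of that training process, again assuming PyTorch and using random tensors in place of real labelled photos, the weights are nudged after every batch until the judgements come out right as often as possible.

# Illustrative sketch of supervised training: labelled examples in, adjusted weights out.
# The data here is random noise standing in for labelled hotdog photos.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64), nn.ReLU(), nn.Linear(64, 2))
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(256, 3, 64, 64)       # pretend photos
labels = torch.randint(0, 2, (256,))      # 1 = hotdog, 0 = not hotdog (made up)

for epoch in range(5):                    # each pass over the data refines the weights
    for i in range(0, 256, 32):           # mini-batches of 32 examples
        batch_x, batch_y = images[i:i + 32], labels[i:i + 32]
        optimiser.zero_grad()
        loss = loss_fn(model(batch_x), batch_y)   # how wrong were the judgements?
        loss.backward()                   # work out how each parameter should change
        optimiser.step()                  # nudge the parameters in that direction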

You don't just train an AI model once. You keep doing it, adjusting various aspects of the neural network each time to maximise the right answers. These aspects are called hyperparameters, and they include variables such as the number of neurons in each layer and the number of layers in each network. A lot of that tuning is trial and error, which can mean many training passes. Chewing through all that data is already expensive enough, but doing it repeatedly uses even more electrons.
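
A toy way to see why the cost multiplies: every combination of hyperparameters means another full training run. The grid below is hypothetical, and train_model is a placeholder rather than a real function, but the arithmetic is the point.

# Hypothetical illustration of why hyperparameter tuning multiplies training cost:
# every combination below would mean one more full training run.
from itertools import product

layer_counts = [2, 4, 8]
neurons_per_layer = [128, 256, 512]
learning_rates = [0.1, 0.01, 0.001]

configs = list(product(layer_counts, neurons_per_layer, learning_rates))
print(f"{len(configs)} separate training runs for even this modest grid")  # 27 runs

# for layers, neurons, lr in configs:
#     train_model(layers, neurons, lr)   # placeholder: each call repeats the full training loop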

The reason that the models are taking more power to train is that researchers are throwing more data at them to produce more accurate results, explains Lukas Biewald. He's the CEO of Weights and Biases, a company that helps AI researchers organise the training data for all these models while monitoring their compute usage.

"What's alarming about about it is that it seems like for every factor of 10 that you increase the scale of your model training, you get a better model," he says.

Yes, but the model's accuracy doesn't increase by a factor of 10. Jesse Dodge, postdoctoral researcher at the Allen Institute for AI and co-author of a paper called Green AI, notes studies pointing to the diminishing returns of throwing more data at a neural network.

So why do it?

"There's a long tail of things to learn," he explains. ML algorithms can train on the most commonly-seen data, but the edge cases the confusing examples that rarely come up are harder to optimise for.

Our hotdog recognition system might be fine until some clown comes along in a hotdog costume, or it sees a picture of a hotdog-shaped van. A language processing model might be able to understand 95 per cent of what people say, but wouldn't it be great if it could handle exotic words that hardly anyone uses? More importantly, your autonomous vehicle must be able to stop in dangerous conditions that rarely ever arise.

"A common thing that we see in machine learning is that it takes exponentially more and more data to get out into that long tail," Dodge says.

Piling on all this data doesn't just slurp power on the compute side, points out Saenko; it also burdens other parts of the computing infrastructure. "The larger the data, the more overhead," she says. "Even transferring the data from the hard drive to the GPU memory is power intensive."

There are various attempts to mitigate this problem. It starts at the data centre level, where hyperscalers are doing their best to switch to renewables so that they can at least hammer their servers responsibly.

Another option is to take a more calculated approach when tweaking your hyperparameters. Weights and Biases offers a "hyperparameter sweep" service that uses Bayesian algorithms to narrow the field of potential changes with each training pass. It also offers an "early stopping" algorithm which halts a training pass early on if the optimisation isn't panning out.
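
For flavour, a sweep like that is usually described declaratively. The sketch below reflects my understanding of the public wandb sweeps API, with a Bayesian search method and a Hyperband-style early-termination rule; the project name, metric and toy objective are placeholders rather than anything taken from the article.

# Hedged sketch of a Bayesian hyperparameter sweep with early stopping via the wandb API.
# All names and values are placeholders; the "model" is a fake objective for demonstration.
import wandb

sweep_config = {
    "method": "bayes",                                         # narrow the field each pass
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "layers": {"values": [2, 4, 8]},
    },
    "early_terminate": {"type": "hyperband", "min_iter": 3},   # halt unpromising runs early
}

def train():
    run = wandb.init()                                         # one call = one training pass
    cfg = run.config
    for epoch in range(10):
        # Stand-in for a real validation loss, so the sweep has something to optimise.
        fake_val_loss = (cfg.learning_rate - 0.01) ** 2 + abs(cfg.layers - 4) * 0.01 + 0.1 / (epoch + 1)
        run.log({"val_loss": fake_val_loss})
    run.finish()

sweep_id = wandb.sweep(sweep_config, project="hotdog-demo")    # placeholder project name
wandb.agent(sweep_id, function=train, count=20)                # run at most 20 passes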

Not all approaches involve fancy hardware and software footwork. Some are just about sharing. Dodge points out that researchers could amortise the carbon cost of their model training by sharing the end result. Trained models released in the public domain can be used without retraining, but people don't take enough advantage of that.

"In the AI community, we often train models and then don't release them," he says. "Or the next people that want to build on our work just rerun the experiments that we did."

Those trained models can also be fine-tuned with additional data, enabling people to tweak existing optimisations for new applications without retraining the entire model from scratch.
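
As a generic illustration of that fine-tuning pattern (a standard torchvision example, not a workflow described in the article), you can download a shared pre-trained network, freeze its already-trained layers and retrain only a small new head on your own data.

# Hedged sketch of fine-tuning a shared, pre-trained model instead of training from scratch.
# The two-class head and the random stand-in data are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # reuse shared, trained weights

for param in model.parameters():
    param.requires_grad = False                 # freeze the expensive, already-trained layers

model.fc = nn.Linear(model.fc.in_features, 2)   # new two-class head: hotdog / not hotdog

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)      # only the new head is trained
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(8, 3, 224, 224)             # stand-in for your own labelled photos
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()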

Making training more efficient only tackles one part of the problem, and it isn't the most important part. The other side of the AI story is inference. This is when a computer runs new data through a trained model to evaluate it, recognising hotdogs it has never seen before. It still takes power, and the rapid adoption of AI is making it more of a problem. Every time you ask Siri how to cook rice properly, it uses inference power in the cloud.
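
Inference itself is just the forward pass with no weight updates, which is why each query is far cheaper than a training run yet still adds up across millions of requests. A minimal sketch, again assuming PyTorch and an untrained stand-in model:

# Hedged sketch of inference: run new data through an already-trained model.
# No gradients and no weight updates; just the forward pass, repeated for every query.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()                                   # switch off training-only behaviour

new_image = torch.rand(1, 3, 64, 64)           # a hotdog the model has never seen before
with torch.no_grad():                          # skip gradient bookkeeping: cheaper, but not free
    logits = model(new_image)

print("hotdog" if logits.argmax(dim=1).item() == 0 else "not hotdog")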

One way to reduce model size is to cut down the number of parameters. AI models often use vast numbers of weights in a neural network because data scientists aren't sure which ones will be most useful. Saenko and her colleagues have researched reducing the number of parameters using a concept they call shape shifter networks, which share some of the parameters in the final model.

"You might train a much bigger network and then distil it into a smaller one so that you can deploy a smaller network and save computation and deployment at inference time," she says.

Companies are also working on hardware innovations to cope with this increased inference load. Google's Tensor Processing Units (TPUs) are tailored to handle both training and inference more efficiently, for example.

Solving the inference problem is especially tricky because we don't know where a lot of it will happen in the long term. The move to edge computing could see more inference jobs happening in lower-footprint devices rather than in the cloud. The trick there is to make the models small enough and to introduce hardware advances that will help to make local AI computation more cost-effective.

"How much do companies care about running their inference on smaller devices rather than in the cloud on GPUs?" Saenko muses. "There is not yet that much AI running standalone on edge devices to really give us some clear impetus to figure out a good strategy for that."

Still, there is movement. Apple and Qualcomm have already produced tailored silicon for inference on smart phones, and startups are becoming increasingly innovative in anticipation of edge-based inference. For example, semiconductor startup Mythic launched an AI processor focused on edge-based AI that uses analogue circuitry and in-memory computing to save power. It's targeting applications including object detection and depth estimation, which could see the chips turn up in everything from factories to surveillance cameras.

As companies grapple with whether to infer at the edge, the problem of making AI more energy efficient in the cloud remains. The key lies in resolving two opposing forces: on the one hand, everyone wants more energy efficient computing. On the other, researchers constantly strive for more accuracy.

Dodge notes that most academic AI papers today focus on the latter. Accuracy is winning out as companies strive to beat each other with better models, agrees Saenko. "It might take a lot of compute but it's worthwhile for people to claim that one or two percent improvement," she says.

She would like to see more researchers publish data on the power consumption of their models. This might inspire competition to drive efficiencies up and costs down.

The stakes may be more than just environmental, warns Biewald; they could be political too. What happens if computing consumption continues to go up by a factor of 10 each year?

"You have to buy the energy to train these models, and the only people that can realistically afford that will be Google and Microsoft and the 100 biggest corporations," he posits.

If we start seeing a growing inequality gap in AI research, with corporate interests out in front, carbon emissions could be the least of our worries.

See the article here:
AI caramba, those neural networks are power-hungry: Counting the environmental cost of artificial intelligence - The Register

DMALINK partners with Axyon AI to add deep learning artificial intelligence to its platform tech stack – PRNewswire

LONDON, Sept. 13, 2021 /PRNewswire/ -- DMALINK, the emerging markets foreign exchange-focused institutional ECN, brings the FX market into the heart of the 4th industrial revolution.

The firm today announced its partnership with Axyon AI to enable the first ever use of Deep Learning artificial intelligence to dynamically manage liquidity, detect market and order anomalies and create smart algos for trade execution in the fiat FX space.

Axyon AI is a leading European FinTech company with expertise in Deep Learning/AI for asset management and trading firms. Axyon AI has successful products in several financial use-cases, from security selection and asset allocation to anomaly detection in option pricing.

Manu Choudhary, Chief Executive Officer of DMALINK, says: "In spite of the pace of innovation within the e-FX space, liquidity management, anomaly detection and algos have been left behind by advances in deep learning AI technology. The ability for Axyon AI's deep learning technology to leverage insights in a fraction of the time of a human-driven equivalent provides opportunities for the procurement and analysis of unique data to dynamically manage liquidity, risk and trade execution for the first time."

Axyon AI's technology will combine with DMALINK's ECN infrastructure to radically modernise FX trading. For the buy side, deep learning models will considerably improve the quality of order fills. For the sell side, the application will ensure a positive yield curve. The deep learning technology will also detect market anomalies in spot FX allowing DMALINK participants access to one of the most powerful risk management tools developed in the e-FX space. Smart algos, dynamically created by the AI will instantly adjust trade execution as a function of the market dynamics.

Daniele Grassi, Chief Executive Officer of Axyon AI, says: "We believe that deep learning has just begun to transform financial markets, increasing efficiency and improving risk management. Our partnership with DMALINK will be a driver of this paradigm shift for the FX trading industry."

About DMALINK

DMALINK is an independent electronic trading, analytics and market data venue for institutional FX traders globally. All liquidity pools are proactively constructed across key emerging markets. Platform participants benefit from advanced order analytics data, granular reporting and benchmarked execution services, ensuring price transparency for all.

About Axyon AI

Axyon AI is a leading player in deep learning, the newest area of machine learning artificial intelligence, for time series forecasting. Axyon AI partners with asset managers and hedge funds to deliver consistently high-performing end-to-end AI powered quantitative insights and investment strategies.

For media enquiries, please contact:

Media Room, DMALINK, Tel: +44 (0) 20 7117 2517

SOURCE DMALINK

Continued here:
DMALINK partners with Axyon AI to add deep learning artificial intelligence to its platform tech stack - PRNewswire

Why robotics and artificial intelligence will be bigger than the discovery of the New World | Column – Tampa Bay Times

Having spent more than 25 years working with industry partners to educate and prepare the future workforce, I am not surprised to see that Florida has experienced growth in the technology sector.

Across the nation, the U.S. Bureau of Labor Statistics estimates that computer and information technology occupations are projected to grow 11 percent from 2019 to 2029, much faster than the average for all occupations. Additionally, demand for skilled professionals in robotics and artificial intelligence is growing. The World Economic Forum estimates that while 85 million jobs will be displaced, 97 million new jobs will be created across 26 countries by 2025 due to the growth of artificial intelligence technology.

From my conversations with industry leaders to the research and data I've studied, all signs lead me to believe that robotics and artificial intelligence will be a significant economic driver, surpassing the impact of Christopher Columbus' exploration of the New World in 1492.

While Columbus used sophisticated technology that was highly advanced for his time, he was still required to convince Queen Isabella that his trip and tools had value. His technology included the compass, maps, and charts that helped him navigate what many considered a nearly unthinkable journey.

Today, few in our modern world need to be convinced that computing and other advanced technologies, including robotics and artificial intelligence, have value.

While some people certainly fear technology will impact us negatively with the loss of jobs or human touch, others see technologies like robotic surgery or manufacturing as protections that can help heal people faster or make work more effective. Today, robots are largely sophisticated tools that are as amazing and mind-boggling as the compass and quadrant were in Columbus' time.

While Columbus' trip changed the world, it took hundreds of years for its impact to be understood and capitalized upon. Robotics, as a field of practice and study, will rapidly change the future for graduates, and for all of us, with new technologies being employed each year.

The idea of a robot may bring to mind images of Commander Data from Star Trek, or more frighteningly, the robots featured in The Terminator, but the field of robotics is much broader than those perceptions.

According to the Institute of Electrical and Electronics Engineers, there are many types of robots, from those used in aerospace, consumer products, disaster response, drones, autonomous vehicles, and exoskeletons to industrial and medical robots, among others. In 2019, a report from Oxford Economics revealed that the number of robots in use worldwide had multiplied three-fold over the past two decades, to 2.25 million. In many cases, robots are simply machines that are programmed to perform tasks or take actions. They are able to do things in anticipation of needs, based on artificial intelligence coding.

A final point to consider is the impact on the economy. After Columbus' journey, trade between nations became prevalent and a new economic system was born. Likewise, demand for robotics and artificial intelligence technology will grow and create new efficiencies. PricewaterhouseCoopers' Global Artificial Intelligence Study predicts that by 2030, the growth of artificial intelligence will lead to an estimated $15.7 trillion, or 26 percent, increase in global gross domestic product.

Demand for robotics engineers and technicians also will grow, given the need for designing and maintaining robots. There also will be strong demand for application developers for robotic systems and solutions. So, while some fear that robots and artificial intelligence will take away jobs from humans, these technologies will create many more jobs and careers.

With what I know today, if I could go back and change my college major, I would select robotics. There are many opportunities in this growing field. It is multidisciplinary, creative, impactful, and would allow me to innovate. It is and will be the next big discovery in our world.

Jeffrey D. Senese, PhD, is the president of Saint Leo University, a private, nonprofit Catholic university based in Pasco County, FL. Saint Leo is the largest Benedictine Catholic university in the world, educating more than 18,000 students each year. This fall, the university is launching a bachelor's degree in robotics and artificial intelligence and opening a new college dedicated to the growing field.

Excerpt from:
Why robotics and artificial intelligence will be bigger than the discovery of the New World | Column - Tampa Bay Times

Current uses, emerging applications, and clinical integration of artificial intelligence in neuroradiology – DocWire News

This article was originally published here

Rev Neurosci. 2021 Sep 10. doi: 10.1515/revneuro-2021-0101. Online ahead of print.

ABSTRACT

Artificial intelligence (AI) is a branch of computer science with a variety of subfields and techniques, exploited to serve as a deductive tool that performs tasks originally requiring human cognition. AI tools and its subdomains are being incorporated into healthcare delivery for the improvement of medical data interpretation encompassing clinical management, diagnostics, and prognostic outcomes. In the field of neuroradiology, AI manifested through deep machine learning and convolutional neural networks (CNNs) has demonstrated incredible accuracy in identifying pathology and aiding in diagnosis and prognostication in several areas of neurology and neurosurgery. In this literature review, we survey the available clinical data highlighting the utilization of AI in the field of neuroradiology across multiple neurological and neurosurgical subspecialties. In addition, we discuss the emerging role of AI in neuroradiology, its strengths and limitations, as well as future needs in strengthening its role in clinical practice. Our review evaluated data across several subspecialties of neurology and neurosurgery including vascular neurology, spinal pathology, traumatic brain injury (TBI), neuro-oncology, multiple sclerosis, Alzheimer's disease, and epilepsy. AI has established a strong presence within the realm of neuroradiology as a successful and largely supportive technology aiding in the interpretation, diagnosis, and even prognostication of various pathologies. More research is warranted to establish its full scientific validity and determine its maximum potential to aid in optimizing and providing the most accurate imaging interpretation.

PMID:34506699 | DOI:10.1515/revneuro-2021-0101

Read the original:
Current uses, emerging applications, and clinical integration of artificial intelligence in neuroradiology - DocWire News

Artificial Intelligence in Film Industry is Sophisticating Production – Analytics Insight

Artificial intelligence in filmmaking might sound futuristic, but we are already there. The technology is making a significant impact on film production.

Today, most of the top-performing movies in the visual effects category use machine learning and AI in their production. Major pictures like The Irishman and Avengers: Endgame are no different.

It won't be a surprise if the next movie you watch is written by AI, performed by robots, and animated and rendered by a deep learning algorithm.

But why do we need artificial intelligence in filmmaking? In today's fast-moving world, everything relies on technology. Integrating artificial intelligence and related technologies into film production will help studios create movies faster and generate more income. Employing these technologies will also ease almost every task in the film industry.

Writing scripts

Here, artificial intelligence writes the story. Humans can imagine and script amazing stories, but they can't be sure those stories will perform well in theatres. Fortunately, AI can. Machine learning algorithms are fed large amounts of movie data, which they analyse to come up with unique scripts that audiences love.

Simplifying pre-production

Pre-production is an important but stressful task. However, AI can help streamline the processes involved. It can plan schedules around the availability of actors and crew, and find apt locations that go well with the storyline.

Character making

Graphics and visual effects never fail to steal people's hearts. Digital Domain applied machine learning technologies to design amazing fictional characters like Thanos in Avengers: Infinity War.

Subtitle creation

Global media and publishing companies have to make their content suitable for viewers in different regions to consume. To deliver video content with subtitles in multiple languages, production houses can use AI-based technologies like natural language generation (NLG) and natural language processing (NLP).

Movie Promotion

To help make a movie a box-office success, AI can be leveraged in the promotion process. AI algorithms can be used to evaluate the viewer base, the excitement surrounding the movie, and the popularity of the actors around the world.

Movie editing

In editing feature-length movies, AI supports film editors. With facial recognition technology, an AI algorithm can recognize the key characters and sort certain scenes for human editors. By getting the first draft done quickly, editors can focus on scenes featuring the main plot of the script.
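
For a sense of how that character-spotting step might look in practice, here is a hedged sketch using the open-source face_recognition library; the file names are placeholders, and real editing pipelines are considerably more involved.

# Hedged sketch: flag frames in which a known actor appears, using the
# face_recognition library. File names are placeholders.
import face_recognition

# One reference photo of the lead actor (placeholder path).
reference = face_recognition.load_image_file("lead_actor.jpg")
known_encoding = face_recognition.face_encodings(reference)[0]

# A frame pulled from a scene (placeholder path).
frame = face_recognition.load_image_file("scene_0421_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    if face_recognition.compare_faces([known_encoding], encoding)[0]:
        print("Lead actor appears in this frame; tag the scene for the editor.")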


About Analytics Insight

Analytics Insight is an influential platform dedicated to insights, trends, and opinions from the world of data-driven technologies. It monitors developments, recognition, and achievements made by Artificial Intelligence, Big Data and Analytics companies across the globe.

Read more:
Artificial Intelligence in Film Industry is Sophisticating Production - Analytics Insight