Archive for the ‘Artificial Intelligence’ Category

Unlocking the power of data with artificial intelligence – TechRadar

Data is the lifeblood of business: it drives innovation and enhances competitiveness. However, its importance was brought to the fore by the pandemic, as lockdowns and social distancing drove digital transformation like never before.

About the author

Andrew Brown, General Manager, Technology Group, IBM United Kingdom & Ireland.

Forward-thinking businesses have started to grasp the importance of their data; they understand the consequences of not fully mobilizing it, but many are still at the start of their journey.

Even the best organizations are failing to extract the maximum benefit from their data while keeping it safe. This is where artificial intelligence (AI) comes into play: it can help enterprises with their data in three fundamental ways.

First, without the right tools it is impossible to unlock data's hidden value. For that to happen, businesses need to deploy AI because of its ability to analyze complex datasets and produce actionable insights. These can significantly enhance business agility and improve the foresight of enterprises of all sizes.

The success of any move to adopt AI will depend on a robust IT infrastructure being in place. Transforming data into useful information is only possible with this solid foundation, which in turn allows advanced AI applications to extract the real value locked inside the data.

During the first wave of the pandemic, IBM worked with The Royal Marsden, a world-leading cancer hospital, to launch an AI virtual assistant to alleviate some of the pressures and uncertainty for staff associated with COVID-19. The system depended on fast access to trusted information from a variety of sources, such as the hospital's official policy handbook as well as data from NHS England. By tapping into these rich knowledge sources, staff were able to get quicker answers to workplace queries while the HR team had more time to handle complex requests.

Another issue is that far too many businesses simply don't know how much data they own. Split up into silos, it can be impossible to gain a clear view of not only what data is available but also where it resides. Removing this bottleneck can also be achieved through the implementation of AI. This is important because incomplete data will result in incomplete insights.

Businesses should prioritize making all data sources as simple and accessible as possible. Cloud computing technologies, such as hybrid data management, have a vital role to play here. Adoption makes it possible to manage all data types across multiple sources and locations, effectively breaking down these silos and removing a major barrier to AI adoption.

IBM has partnered with Wimbledon for more than 30 years, helping the world's leading tennis tournament get the most from its data. Tapping into a wealth of new and archived footage, player data and historical records, fans can now benefit from personalized recommendations and highlights reels. Created through a rules-based recommendation engine integrated across Wimbledon's digital platforms, this personalized content allows fans to track their favorite players through the tournament as well as receive suggestions on emerging talent to follow.

This is all made possible by the hybrid cloud: the data spans a combination of on-premises systems, private clouds, and public cloud. Breaking down these silos has allowed Wimbledon to innovate at pace to attract new global audiences.

While extracting value from data is undoubtedly beneficial for organizations, it also creates risks. Criminals are increasingly aware of the potential to exploit vulnerabilities to disrupt operations or cause reputational issues through leaking sensitive data. The threat landscape is evolving and rising data breach costs are a growing problem for businesses in the wake of the rapid technology shifts triggered by the pandemic.

Over the last year, businesses were forced to adapt their technology approaches quickly, with many companies encouraging or requiring employees to work from home; 60% of organizations moved further into cloud-based activities during the pandemic.

According to the latest annual Cost of a Data Breach report, conducted by Ponemon Institute and analyzed by IBM Security, serious security incidents now cost UK-based organizations an average of $4.67 million (around £3.4 million) per incident, the highest cost in the 17-year history of the report. This is higher than the global average of $4.24 million per incident, highlighting the importance of protecting data for British businesses.

AI has a role to play here, and the study revealed encouraging signs about the impact of intelligent and automated security tools. While data breach costs reached a record high over the past year, the report also showed positive signs about the impact of modern cybersecurity tactics, such as AI and automation which may pay off by reducing the cost of these incidents further down the line.

The adoption of AI and security analytics was among the top five mitigating factors shown to reduce the cost of a breach. On average, organizations with a fully deployed security automation strategy faced data breach costs of less than half of those with no automation technology in place.

The sector in which a business operates also has a direct impact on the overall cost of a security breach. The report identified that the average cost of each compromised record containing sensitive data was highest for UK organizations in Services (£191 per record), Financial (£188) and Pharmaceuticals (£147). This highlights how quickly the costs of a breach can escalate if a large number of records are compromised.

The Cost of a Data Breach report highlights a number of trends and best practices that were consistent with an effective response to security incidents. These can be adopted by organizations of all types and sizes and can form the basis of a data management and governance strategy:

1. Invest in security orchestration, automation and response (SOAR). Security AI and automation significantly reduce the time to identify and respond to a data breach. By deploying SOAR solutions alongside your existing security tools, it's possible to accelerate incident response and reduce the overall costs associated with breaches.

2. Adopt a zero trust security model to help prevent unauthorized access to sensitive data. Organizations with mature zero trust deployments have far lower breach costs than those without. As businesses move to remote working and hybrid cloud environments, a zero trust strategy can help protect data by only making it accessible in the right context.

3. Stress test incident response plans to improve resilience. Forming an Incident Response team, developing a plan and putting it to the test are crucial steps to responding quickly and effectively to attacks.

4. Invest in governance, risk management and compliance. Evaluating risk and tracking compliance can help quantify the cost of a potential breach in real terms. In turn this can expedite the decision-making process and resource allocation.

5. Protect sensitive data in the cloud using policy and encryption. Data classification schemas and retention policies should help minimize the volume of sensitive information that is vulnerable to a breach. Advanced data encryption techniques should be deployed for everything that remains, as in the sketch following this list.
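
To make the encryption recommendation concrete, here is a minimal sketch using the Python cryptography library's Fernet interface. The record contents and in-process key handling are illustrative assumptions only; a production deployment would keep keys in a dedicated key-management service, not next to the data.

```python
from cryptography.fernet import Fernet

# Illustrative only: in production the key would live in a managed key store
# (e.g. a cloud KMS), never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"customer_id=1042,card_last4=4242"   # stand-in for a sensitive record
token = fernet.encrypt(record)                 # ciphertext written to cloud storage
print(token)

# Only holders of the key can recover the plaintext.
assert fernet.decrypt(token) == record
```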

So how should a business bring its AI strategy to life? First, organizations must ensure their infrastructure is equipped to handle all the data, processing and performance requirements needed to effectively run AI. If you use your existing storage arrangement without modernizing it, you greatly increase your risk of failure. A hybrid cloud implementation is likely to be the best solution in most instances as it offers the optimum flexibility.

Enterprises should also directly embed AI into their data management and security systems, which should have clearly defined data policies to ensure appropriate levels of access and resilience. The data management system and the data architecture should be optimized for added agility and ease of operation.

A fully featured AI implementation doesn't just aggregate data and perform large-scale analytics, it also enhances security and governance. Together they enable companies to create valuable business insights that fuel innovation. AI will also help ensure that data is used more efficiently and minimize data duplication. But above all, properly managed data is the lifeblood of the enterprise: a resource that needs to be identified and protected. Only then can companies start to climb the AI ladder.

Museum Of Wild And Newfangled Art’s Opening Exhibition Curated By Artificial Intelligence – Broadway World

The Museum of Wild and Newfangled Art (mowna) will open their final show of the year "This Show is Curated by a Machine" on September 23, 2021.

The artificial intelligence-curated exhibition opens with a talk on the development of the AI model, followed by a Q&A with the AI team: IV (Ivan Pravdin) and museum co-founders cari ann shim sham* and Joey Zaza. "This Show is Curated by a Machine" runs from September 23, 2021 through January 31, 2022. Tickets bought prior to opening day, September 23, include entrance to the AI talk and are available at: https://www.mowna.org/museum/this-show-is-curated-by-a-machine

Earlier this year, The Whitney Museum of American Art commissioned and exhibited the work "The Next Biennial Should Be Curated by a Machine" for their online artport. In response, the Museum of Wild and Newfangled Art has designed an artificial intelligence curator that will not only redefine how we look at curation and AI but will also underscore the need to move forward with AI curation in an ethical way.

The artificial intelligence model was trained on image sets from various sources, including the Museum of Modern Art, the Art Institute of Chicago, and the mowna Biennial submissions, an exhibit of around 88 International Artists from 44 countries.

"Curation is very subjective. It's my hope through the development of an AI curator that we can allow for equity and diversity, and eliminate some biases," says cari ann shim sham*.

Artists in the show include Alice Prum, a London based artist whose work explores the invisible relationships between space, the body, and technology. Bridget DeFranco is an east coast media artist working against the high-stimulation nature of the screen. Avideh Salmanpour is an Iranian artist whose paintings explore the bewilderment of contemporary man and the attempt to find a new way.

The artificial intelligence curator was created by multiple artists. IV is a post-contemporary artist working with various artificial intelligence and neural networking techniques. cari ann shim sham* is the co-founder and curator of mowna, a wild artist working at the intersection of dance and technology, and an associate arts professor of dance and technology at NYU Tisch School of the Arts. Joey Zaza is the co-founder and curator of mowna, and works in photography, software, video, sound, and installation. They combined forces to explore the potential of using artificial intelligence in art curation. The team's initial thoughts, strategies and questions in the development of the AI model can be found on mowna's blog.

Human curation is also included alongside the AI curation for "This Show is Curated by a Machine" to offer a comparison. Text written by the team will explain why they think the AI did or did not choose each work. The show marks the successful completion of phase one of mowna's AI model, which ranks and curates a show using image-based files. mowna will release a paper with its phase-one research and findings to the public. With this data, the team will enter phase two of development, extending the AI's ability to curate sound and video files.

"This Show is Curated by a Machine" will be installed and available for viewing on September 23, 2021 and marks the third online art exhibition by mowna. The second, the 2021 mowna Biennial, showcases art of all mediums and focuses on exhibiting art that might have otherwise gone unseen due to gaps in the post-pandemic art world. It is currently still on exhibit until September 22, 2021 and can be viewed on the mowna website. Tickets are a sliding scale of pay-what-you-wish.

mowna makes it a priority to showcase a broad range of art and is committed to diversity in every way. It provides an international online platform for the most timely, diverse, and preeminent artists. At the center of the constantly changing and expanding art world, mowna showcases a mixture of the familiar and unfamiliar. Members will have the opportunity to see artists who have been curated by the MoMA or the Whitney alongside artists available only on mowna.

As the global landscape shifts towards a more technological way of being, mowna is there to meet the needs of an ever-changing art world. The Museum of Wild and Newfangled Art was formed to feature the newest art developments and make art of all mediums accessible to everyone. And it unmistakably builds on that foundation with the upcoming exhibition "This Show is Curated by a Machine".

For more information on current and upcoming exhibitions and events, please visit mowna's new pages on Instagram and Facebook as well as the museum's official website.

Federal Court Rules That Artificial Intelligence Cannot Be An Inventor Under The Patent Act – JD Supra

Although this blog typically focuses on decisions in intellectual property and/or antitrust cases that are pending in, or originated in, the United States District Court for the District of Delaware or are before the Federal Circuit, every now and then a decision from another federal trial or appellate court is significant enough to warrant going beyond those usual boundaries. The recent decision by The Honorable Leonie M. Brinkema of the United States District Court for the Eastern District of Virginia in Thaler v. Hirshfeld et al., Civil Action No. 1:20-cv-903-LMB (E.D. Va. September 2, 2021), is such a decision.

In Thaler, the Court confronted, analyzed and answered the question of whether an artificial intelligence machine can be an "inventor" under the Patent Act. Id. at *1. After analyzing the plain statutory language of the Patent Act and Federal Circuit authority, the Court held that the clear answer is no. Id. In reaching its holding, the Court found that Congress intended to limit the definition of "inventor" to natural persons, meaning humans, not artificial intelligence. Id. at *17. The Court noted that "[a]s technology evolves, there may come a time when artificial intelligence reaches a level of sophistication such that it might satisfy accepted meanings of inventorship. But that time has not yet arrived, and, if it does, it will be up to Congress to decide how, if at all, it wants to expand the scope of patent law." Id. at *17-18.

A copy of the Memorandum Opinion is attached.

AI caramba, those neural networks are power-hungry: Counting the environmental cost of artificial intelligence – The Register

Feature The next time you ask Alexa to turn off your bedroom lights or make a computer write dodgy code, spare a thought for the planet. The back-end mechanics that make it all possible take up a lot of power, and these systems are getting hungrier.

Artificial intelligence began to gain traction in mainstream computing just over a decade ago when we worked out how to make GPUs handle the underlying calculations at scale. Now there's a machine learning algorithm for everything, but while the world marvels at the applications, some researchers are worried about the environmental expense.

One of the most frequently quoted papers on this topic, from the University of Massachusetts, analysed the training costs of AI models including Google's BERT natural language processing model. It found that the carbon emissions from training BERT on a GPU were roughly the same as those of a trans-American jet flight.

Kate Saenko, associate professor of computer science at Boston University, worries that we're not doing enough to make AI more energy efficient. "The general trend in AI is going in the wrong direction for power consumption," she warns. "It's getting more expensive in terms of power to train the newer models."

The trend is exponential. Researchers associated with OpenAI wrote that the computing used to train the average model increases by a factor of 10 each year.

Most AI these days is based on machine learning (ML). This uses a neural network, which is a collection of nodes arranged in layers. Each node has connections to nodes in the next layer, and each of these connections has a score known as a parameter or weight.

The neural network takes an input (such as a picture of a hotdog) and runs it through the layers of the network, each of which uses its parameters to produce an output. The final output is a judgement about the data (for example, was the original input a picture of a hotdog or not?).
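
As a rough illustration of that structure, the sketch below runs a made-up input through a tiny two-layer network in Python with NumPy. The layer sizes, random weights and hotdog framing are illustrative assumptions, not drawn from any real model.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative weights ("parameters") for a tiny two-layer network.
# In practice these are learned during training, not chosen by hand.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 16))   # input layer -> hidden layer
W2 = rng.normal(size=(16, 1))    # hidden layer -> output

def forward(image_features):
    """Run a flattened input through the layers to get a judgement."""
    hidden = relu(image_features @ W1)   # first layer applies its weights
    score = sigmoid(hidden @ W2)         # final layer produces the output
    return score                         # e.g. probability the input is a hotdog

x = rng.normal(size=(64,))               # stand-in for a flattened image
print(f"hotdog probability: {forward(x).item():.3f}")
```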

Those weights don't come preconfigured. You have to calculate them. You do that by showing the network lots of labelled pictures of hot dogs and not hot dogs. You keep training it until the parameters are optimised, which means that they spit out the correct judgement for each piece of data as often as possible. The more accurate the model, the better it will be when making judgements about new data.

You don't just train an AI model once. You keep doing it, adjusting various aspects of the neural network each time to maximise the right answers. These aspects are called hyperparameters, and they include variables such as the number of neurons in each layer and the number of layers in each network. A lot of that tuning is trial and error, which can mean many training passes. Chewing through all that data is already expensive enough, but doing it repeatedly uses even more electrons.
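
A minimal sketch of that trial-and-error tuning loop makes the cost visible. The hyperparameter grid and the placeholder train_and_evaluate() below are hypothetical, standing in for a full training pass each time they are called.

```python
import itertools

# Hypothetical hyperparameter grid; real searches often cover far more options.
hidden_layers = [2, 4, 8]
neurons_per_layer = [64, 128, 256]
learning_rates = [1e-2, 1e-3]

def train_and_evaluate(layers, neurons, lr):
    """Placeholder for one full training pass; returns a made-up validation accuracy."""
    return 0.8 + 0.01 * layers - 10 * lr   # stand-in number, not a real measurement

best_config, best_accuracy = None, 0.0
for layers, neurons, lr in itertools.product(hidden_layers, neurons_per_layer, learning_rates):
    accuracy = train_and_evaluate(layers, neurons, lr)   # one full training pass
    if accuracy > best_accuracy:
        best_config, best_accuracy = (layers, neurons, lr), accuracy

# Even this tiny grid means 3 x 3 x 2 = 18 complete training runs,
# which is why tuning multiplies the energy cost of training.
print(best_config, best_accuracy)
```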

The reason that the models are taking more power to train is that researchers are throwing more data at them to produce more accurate results, explains Lukas Biewald. He's the CEO of Weights and Biases, a company that helps AI researchers organise the training data for all these models while monitoring their compute usage.

"What's alarming about about it is that it seems like for every factor of 10 that you increase the scale of your model training, you get a better model," he says.

Yes, but the model's accuracy doesn't increase by a factor of 10. Jesse Dodge, postdoctoral researcher at the Allen Institute for AI and co-author of a paper called Green AI, notes studies pointing to the diminishing returns of throwing more data at a neural network.

So why do it?

"There's a long tail of things to learn," he explains. ML algorithms can train on the most commonly-seen data, but the edge cases the confusing examples that rarely come up are harder to optimise for.

Our hotdog recognition system might be fine until some clown comes along in a hotdog costume, or it sees a picture of a hotdog-shaped van. A language processing model might be able to understand 95 per cent of what people say, but wouldn't it be great if it could handle exotic words that hardly anyone uses? More importantly, your autonomous vehicle must be able to stop in dangerous conditions that rarely ever arise.

"A common thing that we see in machine learning is that it takes exponentially more and more data to get out into that long tail," Dodge says.

Piling on all this data doesn't just slurp power on the compute side, points out Saenko; it also burdens other parts of the computing infrastructure. "The larger the data, the more overhead," she says. "Even transferring the data from the hard drive to the GPU memory is power intensive."

There are various attempts to mitigate this problem. It starts at the data centre level, where hyperscalers are doing their best to switch to renewables so that they can at least hammer their servers responsibly.

Another mitigation is to take a more calculated approach when tweaking your hyperparameters. Weights and Biases offers a "hyperparameter sweep" service that uses Bayesian algorithms to narrow the field of potential changes with each training pass. It also offers an "early stopping" algorithm which halts a training pass early on if the optimisation isn't panning out.
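
The article doesn't give implementation details, but the Weights and Biases Python client does expose sweeps configured along these lines. The sketch below is a hypothetical configuration: the project name, parameter ranges and stub train() are invented for illustration, combining a Bayesian search with Hyperband-style early termination.

```python
import wandb

def train():
    # Stub training function: a real one would build and fit a model,
    # logging its validation loss so the sweep can compare runs.
    run = wandb.init()
    config = run.config
    fake_val_loss = config.learning_rate * config.layers   # placeholder metric
    wandb.log({"val_loss": fake_val_loss})
    run.finish()

sweep_config = {
    "method": "bayes",                                   # Bayesian search narrows the field each pass
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "layers": {"values": [2, 4, 8]},
    },
    # Stop unpromising runs early instead of training them to completion.
    "early_terminate": {"type": "hyperband", "min_iter": 3},
}

sweep_id = wandb.sweep(sweep_config, project="power-aware-tuning")  # project name is illustrative
wandb.agent(sweep_id, function=train, count=10)                     # 10 trials instead of a full grid
```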

Not all approaches involve fancy hardware and software footwork. Some are just about sharing. Dodge points out that researchers could amortise the carbon cost of their model training by sharing the end result. Trained models released in the public domain can be used without retraining, but people don't take enough advantage of that.

"In the AI community, we often train models and then don't release them," he says. "Or the next people that want to build on our work just rerun the experiments that we did."

Those trained models can also be fine-tuned with additional data, enabling people to tweak existing optimisations for new applications without retraining the entire model from scratch.
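
As a sketch of that fine-tuning pattern, the PyTorch snippet below starts from a publicly released torchvision ResNet-18, freezes its pretrained weights and trains only a small new classification head. The two-class task and the dummy batch are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a publicly released, already-trained model instead of training from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so only the new task-specific head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer for a new two-class task (e.g. hotdog / not hotdog).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the small new layer is optimised, keeping the additional training
# (and energy) cost far below a full retraining run.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative update step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```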

Making training more efficient only tackles one part of the problem, and it isn't the most important part. The other side of the AI story is inference. This is when a computer runs new data through a trained model to evaluate it, recognising hotdogs it has never seen before. It still takes power, and the rapid adoption of AI is making it more of a problem. Every time you ask Siri how to cook rice properly, it uses inference power in the cloud.

One way to reduce model size is to cut down the number of parameters. AI models often use vast numbers of weights in a neural network because data scientists aren't sure which ones will be most useful. Saenko and her colleagues have researched reducing the number of parameters using a concept they call shape shifter networks, which share some of the parameters in the final model.

"You might train a much bigger network and then distil it into a smaller one so that you can deploy a smaller network and save computation and deployment at inference time," she says.

Companies are also working on hardware innovations to cope with this increased inference load. Google's Tensor Processing Units (TPUs) are tailored to handle both training and inference more efficiently, for example.

Solving the inference problem is especially tricky because we don't know where a lot of it will happen in the long term. The move to edge computing could see more inference jobs happening in lower-footprint devices rather than in the cloud. The trick there is to make the models small enough and to introduce hardware advances that will help to make local AI computation more cost-effective.

"How much do companies care about running their inference on smaller devices rather than in the cloud on GPUs?" Saenko muses. "There is not yet that much AI running standalone on edge devices to really give us some clear impetus to figure out a good strategy for that."

Still, there is movement. Apple and Qualcomm have already produced tailored silicon for inference on smart phones, and startups are becoming increasingly innovative in anticipation of edge-based inference. For example, semiconductor startup Mythic launched an AI processor focused on edge-based AI that uses analogue circuitry and in-memory computing to save power. It's targeting applications including object detection and depth estimation, which could see the chips turn up in everything from factories to surveillance cameras.

As companies grapple with whether to infer at the edge, the problem of making AI more energy efficient in the cloud remains. The key lies in resolving two opposing forces: on the one hand, everyone wants more energy efficient computing. On the other, researchers constantly strive for more accuracy.

Dodge notes that most academic AI papers today focus on the latter. Accuracy is winning out as companies strive to beat each other with better models, agrees Saenko. "It might take a lot of compute but it's worthwhile for people to claim that one or two percent improvement," she says.

She would like to see more researchers publish data on the power consumption of their models. This might inspire competition to drive efficiencies up and costs down.

The stakes may be more than just environmental, warns Biewald; they could be political too. What happens if computing consumption continues to go up by a factor of 10 each year?

"You have to buy the energy to train these models, and the only people that can realistically afford that will be Google and Microsoft and the 100 biggest corporations," he posits.

If we start seeing a growing inequality gap in AI research, with corporate interests out in front, carbon emissions could be the least of our worries.

DMALINK partners with Axyon AI to add deep learning artificial intelligence to its platform tech stack – PRNewswire

LONDON, Sept. 13, 2021 /PRNewswire/ -- DMALINK, the emerging-markets foreign exchange-focused institutional ECN, brings the FX market into the heart of the fourth industrial revolution.

The firm today announced its partnership with Axyon AI to enable the first-ever use of deep learning artificial intelligence to dynamically manage liquidity, detect market and order anomalies and create smart algos for trade execution in the fiat FX space.

Axyon AI is a leading European FinTech company with expertise in Deep Learning/AI for asset management and trading firms. Axyon AI has successful products in several financial use-cases, from security selection and asset allocation to anomaly detection in option pricing.

Manu Choudhary, Chief Executive Officer of DMALINK, says: "In spite of the pace of innovation within the e-FX space, liquidity management, anomaly detection and algos have been left behind by advances in deep learning AI technology. The ability for Axyon AI's deep learning technology to leverage insights in a fraction of the time of a human-driven equivalent provides opportunities for the procurement and analysis of unique data to dynamically manage liquidity, risk and trade execution for the first time."

Axyon AI's technology will combine with DMALINK's ECN infrastructure to radically modernise FX trading. For the buy side, deep learning models will considerably improve the quality of order fills. For the sell side, the application will ensure a positive yield curve. The deep learning technology will also detect market anomalies in spot FX, allowing DMALINK participants access to one of the most powerful risk management tools developed in the e-FX space. Smart algos, dynamically created by the AI, will instantly adjust trade execution as a function of market dynamics.

Daniele Grassi, Chief Executive Officer of Axyon AI, says: "We believe that deep learning has just begun to transform financial markets, increasing efficiency and improving risk management. Our partnership with DMALINK will be a driver of this paradigm shift for the FX trading industry."

About DMALINK

DMALINK is an independent electronic trading, analytics and market data venue for institutional FX traders globally. All liquidity pools are proactively constructed across key emerging markets. Platform participants benefit from advanced order analytics, granular reporting and benchmarked execution services, ensuring price transparency for all participants.

About Axyon AI

Axyon AI is a leading player in deep learning, the newest area of machine learning artificial intelligence, for time series forecasting. Axyon AI partners with asset managers and hedge funds to deliver consistently high-performing end-to-end AI powered quantitative insights and investment strategies.

For media enquiries, please contact:

Media Room, DMALINK, Tel: +44 (0) 20 7117 2517

SOURCE DMALINK
