Archive for the ‘AlphaGo’ Category

Cyber attacks on AI a problem for the future – Verdict

Although the use of artificial intelligence (AI) in security is accelerating, its increased use could actually invite cyber attacks on AI itself (including adversarial attacks) across varied systems, devices and applications.

Recent advances in algorithms (Google's AlphaGo, OpenAI's GPT-3) and increasing computing power have accelerated AI across a number of potential applications and use cases.

Use cases span automotive (computer vision and conversational platforms), consumer electronics (virtual assistants, authentication via facial recognition such as Apple's FaceID), and ecommerce and retail (voice-enabled shopping assistants, personalized shopping). Based on GlobalData forecasts, the total AI market (including software, hardware and services) is demonstrating strong growth and will be worth $383.3bn in 2030, having grown at a compound annual growth rate (CAGR) of 21.4% from $81.3bn in 2022. As a result, these use cases will spread across a wide range of consumer and business settings.
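As a quick sanity check, the growth rate implied by the two GlobalData endpoint figures can be reproduced directly (2022 to 2030 is eight compounding periods):

```python
# Verify the implied compound annual growth rate (CAGR) from the
# GlobalData endpoint figures quoted above.
start_value = 81.3    # total AI market in 2022, $bn
end_value = 383.3     # forecast for 2030, $bn
years = 2030 - 2022   # eight compounding periods

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # → roughly 21.4%
```

The result matches the 21.4% CAGR cited in the forecast.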

Within cybersecurity, much of the discussion focuses on how AI can be used to increase cyber resiliency, simplify processes, and perform human functions. AI, together with automation and analytics, enables managed security providers to ingest data from multiple feeds, react more quickly to real threats, and apply automation to incident response more broadly.

AI in cybersecurity is also seen as a long-term answer to the resourcing problem, while in the short term providing a stopgap by streamlining human functions across Security Operations Centers (SOCs). This could be achieved, for example, through technology components such as Extended Detection and Response (XDR), which detects sophisticated threats with AI, and Security Orchestration, Automation and Response (SOAR) platforms, which use machine learning (ML) to provide incident-handling guidance based on past actions and historical data.
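As an illustration of the kind of ML-assisted guidance described here, a SOAR-style recommendation can be sketched as a similarity lookup over historical incidents. This is a minimal sketch, not any vendor's product; all incident indicators, playbook names and data below are hypothetical:

```python
# Hypothetical sketch: recommend a response playbook for a new security
# incident by matching it against historical incidents.
from collections import Counter

# Hypothetical history: (indicator set, playbook that resolved the incident)
HISTORY = [
    ({"phishing", "credential-theft"}, "reset-credentials"),
    ({"phishing", "malware-attachment"}, "quarantine-mailbox"),
    ({"lateral-movement", "credential-theft"}, "isolate-host"),
    ({"malware-attachment", "lateral-movement"}, "isolate-host"),
]

def jaccard(a, b):
    """Similarity between two sets of incident indicators."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_playbook(indicators, k=3):
    """Vote among the k most similar past incidents."""
    ranked = sorted(HISTORY, key=lambda h: jaccard(indicators, h[0]), reverse=True)
    votes = Counter(playbook for _, playbook in ranked[:k])
    return votes.most_common(1)[0][0]

print(suggest_playbook({"phishing", "credential-theft"}))
```

Real platforms use far richer features and models; the point is only that guidance is derived from past actions and historical data rather than hand-written rules.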

On the flip side, the increased use of AI in all applications (including cybersecurity) increases the chances of attacks on the AI/ML models themselves in varied systems, devices and applications. Adversarial attacks on AI could cause models to misinterpret information. There are many use cases where this could occur; examples include the iPhone's FaceID feature, which uses neural networks to recognize faces. Here there is potential for attacks to happen through the AI models themselves, bypassing the security layers.

Cybersecurity products where AI is implemented are also a target, as AI in cybersecurity entails acquiring data sets over time, and these data sets are vulnerable to attack. Other examples include algorithm theft in autonomous vehicles, predictive maintenance algorithms in sectors like oil & gas and utilities that could be subject to state-sponsored attacks, identification breaches in video surveillance, and medical misdiagnosis in healthcare.

The discussion of countering attacks on AI will gain momentum over the next two years as AI use cases increase. Regulation around AI security will also build momentum and put frameworks in place to address cyber attacks on AI.

As an example, current regulatory activity at a vertical level includes the European Telecommunications Standards Institute (ETSI) Industry Specification Group for telecoms, which is focusing on utilizing AI to enhance security and on securing AI against attacks.

The financial sector as a whole is in its infancy in terms of setting and implementing AI regulatory frameworks. There have been developments in Europe, however: the European Commission has published a comprehensive set of proposals for the AI Act, though its security component is limited.

The lack of guidance and regulation currently leaves a number of vertical sectors like Finance and Utilities vulnerable.

However, as more AI regulatory frameworks in the context of security are introduced, this could pave the way for managed services aimed specifically at addressing attacks on AI. Service propositions could entail examining risk-management profiles, laying down security layers around vulnerability assessments, and integrating MLOps and SIEM/SOAR environments more closely.

Visit link:
Cyber attacks on AI a problem for the future - Verdict

Taming AI to the benefit of humans – Asia News Network – asianews.network

May 19, 2023

BEIJING – For decades, artificial intelligence (AI) has captivated humanity as an enigmatic and elusive entity, often depicted in sci-fi films. Will it emerge as a benevolent angel, devotedly serving mankind, or a malevolent demon, poised to seize control and annihilate humanity?

Previous sci-fi movies featuring AI often portray evil-minded enemies set on destroying humanity, such as The Terminator, The Matrix and Blade Runner. Experts, including late British theoretical physicist Stephen Hawking and Tesla CEO Elon Musk, have expressed concern about the potential risks of AI, with Hawking warning that it could lead to the end of the human race. These tech gurus understand the limitations of human intelligence when compared to rapidly evolving technologies like supercomputers, Big Data and cloud computing, and fear that AI will soon become too powerful to control.

In March 2016, AlphaGo, a computer program developed by Google DeepMind, decisively beat Lee Sedol, a 9-dan Korean professional Go player, with a score of 4-1. This historic event marked the first time a machine had defeated a top human professional at Go, widely considered one of the most complex and challenging games in the world. In May 2017, AlphaGo went on to crush Ke Jie, China's then-top Go player, 3-0. The victories shattered skepticism about AI's capabilities and instilled a sense of awe and fear in many. This sentiment was further reinforced when Master, the updated version of AlphaGo, achieved an unprecedented 60-game winning streak, beating dozens of top-notch players from China, South Korea and Japan and driving human players to despair.

These victories sparked widespread interest and debate about the potential of AI and its impact on society. Some saw it as a triumph of human ingenuity and technological progress, while others expressed concern about the implications for employment, privacy and ethics. Overall, AlphaGo's dominance in Go signaled a turning point in the history of AI and became a reminder of the power and potential of this rapidly evolving field.

If AlphaGo was an AI prodigy that impressed humans with its exceptional abilities, then ChatGPT, which made its debut earlier this year along with its more powerful successor, has left humans both awestruck with admiration and fearful of its potential negative impact.

GPT, or Generative Pre-trained Transformer, is a language model with the ability to generate human-like responses to text prompts, making it seem as if you are having a conversation with a human. GPT-3, a recent version of the model, has 175 billion parameters, making it one of the largest language models to date. Some have claimed that it has passed the Turing test.

Indisputably, AI has the potential to revolutionize many industries, from healthcare and education to finance, manufacturing and transportation, by providing more accurate diagnoses, reducing accidents and analyzing large amounts of data. It is anticipated that AI's rapid development will bring immeasurable benefits to humans.

Yet, history has shown us that major technological advancements can be a double-edged sword, capable of bringing both benefits and drawbacks. For instance, the discovery of nuclear energy has led to the creation of nuclear weapons, which have caused immense destruction and loss of life. Similarly, the widespread use of social media has revolutionized communication, but it has also led to the spread of misinformation and cyberbullying.

Despite their impressive performance, the latest versions of GPT and its Chinese counterparts, such as Baidu's Wenxin Yiyan, are not entirely reliable or trustworthy due to fatal bugs. When I requested specific metrical poems by famous ancient Chinese poets, these seemingly omniscient chatbots would display fake works cobbled together from their databases instead of authentic ones. Even when I corrected them, they would continue to provide incorrect answers without acknowledging their ignorance. Until this bug is resolved, these chatbots cannot be considered a reliable tool.

Furthermore, AI has advanced in image and sound generation through deep learning and neural networks, including generative adversarial networks (GANs) for realistic images and videos and text-to-speech algorithms for human-like speech. However, without strict monitoring, these advancements could be abused for criminal purposes, such as deepfake videos of people saying or doing things they never did, leading to the spread of false information or defamation.

It has been discovered that AI is already being used for criminal purposes. On April 25, the internet security police in Pingliang City, Gansu Province, uncovered an article claiming that nine people had died in a train collision that morning. Further investigation revealed that the news was entirely false. The perpetrator, a man named Hong, had used ChatGPT and other AI products to generate a large volume of fake news for illegal profit. Hong's use of AI tools allowed him to quickly search for and edit previous popular news stories, making them appear authentic and facilitating the spread of false information. In this case, AI played a significant role in the commission of the crime.

Due to the potential risks that AI poses to human society, many institutions worldwide have imposed bans or restrictions on GPT usage, citing security risks and plagiarism concerns. Some countries have also requested that GPT meet specific requirements, such as the European Union's proposed regulations, which mandate that AI systems be transparent, explainable and subject to human oversight.

China has always prioritized ensuring the safety, reliability and controllability of AI to better empower global sustainable development. In its January 2023 Position Paper on Strengthening Ethical Governance of Artificial Intelligence, China actively advocates the concepts of "people-oriented" AI and "AI for good".

In conclusion, while AI is undoubtedly critical to technological and social advancement, it must be tamed to serve humankind as a law-abiding and people-oriented assistant, rather than a deceitful and rebellious troublemaker. Ethics must take precedence, and legislation should establish regulations and accountability mechanisms for AI. An international consensus and concerted action are necessary to prevent AI from endangering human society.

Read the original:
Taming AI to the benefit of humans - Asia News Network - asianews.network

Evolutionary reinforcement learning promises further advances in … – EurekAlert

Image: Key research areas in evolutionary reinforcement learning.

Credit: Hui Bai et al.

Evolutionary reinforcement learning is an exciting frontier in machine learning, combining the strengths of two distinct approaches: reinforcement learning and evolutionary computation. In evolutionary reinforcement learning, an intelligent agent learns optimal strategies by actively exploring different approaches and receiving rewards for successful performance. This innovative paradigm combines reinforcement learning's trial-and-error learning with evolutionary algorithms' ability to mimic natural selection, resulting in a powerful methodology for artificial intelligence development that promises breakthroughs in various domains.

A groundbreaking review article on evolutionary reinforcement learning was published Apr. 21 in Intelligent Computing, a Science Partner Journal. It sheds light on the latest advancements in the integration of evolutionary computation with reinforcement learning and presents a comprehensive survey of state-of-the-art methods.

Reinforcement learning, a subfield of machine learning, focuses on developing algorithms that learn to make decisions based on feedback from the environment. Remarkable examples of successful reinforcement learning include AlphaGo and, more recently, Google DeepMind robots that play soccer. However, reinforcement learning still faces several challenges, including the exploration and exploitation trade-off, reward design, generalization and credit assignment.

Evolutionary computation, which emulates the process of natural evolution to solve problems, offers a potential solution to the problems of reinforcement learning. By combining these two approaches, researchers created the field of evolutionary reinforcement learning.
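The evolutionary side of this combination can be sketched with a toy evolution strategy: a policy's parameters are treated as a genome, and selection plus mutation push the population toward higher "episode return". This is an illustrative sketch only, not a method from the survey; the task, parameters and return function below are made up (a real setup would evaluate policies in an RL environment):

```python
import random

# Toy "policy": a parameter vector. The "episode return" rewards vectors
# close to a hidden target, standing in for a real RL environment rollout.
TARGET = [0.5, -1.2, 2.0]

def episode_return(policy):
    return -sum((p - t) ** 2 for p, t in zip(policy, TARGET))

def evolve(pop_size=20, generations=100, sigma=0.1, elite=5):
    random.seed(0)  # reproducible run
    population = [[random.uniform(-3, 3) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the policies with the highest return (elitism).
        population.sort(key=episode_return, reverse=True)
        parents = population[:elite]
        # Variation: Gaussian mutation of randomly chosen parents.
        children = [
            [p + random.gauss(0, sigma) for p in random.choice(parents)]
            for _ in range(pop_size - elite)
        ]
        population = parents + children
    return max(population, key=episode_return)

best = evolve()
print([round(p, 1) for p in best])  # should land near TARGET
```

No gradient of the return is ever computed, which is why such methods tolerate rare or misleading rewards, and why they need many evaluations, the computational cost the review highlights.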

Evolutionary reinforcement learning encompasses six key research areas, illustrated in the figure above.

Evolutionary reinforcement learning can solve complex reinforcement learning tasks, even in scenarios with rare or misleading rewards, but it requires significant computational resources. There is a growing need for more efficient methods, including improvements in encoding, sampling, search operators, algorithmic frameworks and evaluation.

While evolutionary reinforcement learning has shown promising results in addressing challenging reinforcement learning problems, further advancements are still possible. By enhancing its computational efficiency and exploring new benchmarks, platforms and applications, researchers in the field of evolutionary reinforcement learning can make evolutionary methods even more effective and useful for solving complex reinforcement learning tasks.

Intelligent Computing

Evolutionary Reinforcement Learning: A Survey

21-Apr-2023

Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

Go here to see the original:
Evolutionary reinforcement learning promises further advances in ... - EurekAlert

Commentary: AI’s successes – and problems – stem from our own … – CNA

The reason why machines are now able to do things that we, their makers, do not fully understand is that they have become capable of learning from experience. AlphaGo became so good by playing more games of Go than a human could fit into a lifetime. Likewise, no human could read as many books as ChatGPT has absorbed.

It's important to understand that machines have become intelligent without thinking in a human way. This realisation alone can greatly reduce confusion, and therefore anxiety.

Intelligence is not exclusively a human ability, as any biologist will tell you, and our specific brand of it is neither its pinnacle nor its destination. It may be difficult to accept for some, but intelligence has more to do with chickens crossing the road safely than with writing poetry.

In other words, we should not necessarily expect machine intelligence to evolve towards some form of consciousness. Intelligence is the ability to do the right thing in unfamiliar situations, and this can be found in machines, for example, those that recommend a new book to a user.

If we want to understand how to handle AI, we can return to a crisis that hit the industry in the late 1980s, when many researchers were still trying to mimic what we thought humans do. For example, they were trying to understand the rules of language or human reasoning, to program them into machines.

Go here to see the original:
Commentary: AI's successes - and problems - stem from our own ... - CNA

Machine anxiety: How to reduce confusion and fear about AI technology – Thaiger

In the 19th century, computing pioneer Ada Lovelace wrote that a machine "can only do whatever we know how to order it to perform", little knowing that by 2023 AI technology such as the chatbot ChatGPT would be holding conversations, solving riddles, and even passing legal and medical exams. This development is eliciting both excitement and concern about the potential implications of these new machines.

The ability of AI to learn from experience is the driving force behind its newfound capabilities. AlphaGo, a program designed to play and improve at the board game Go, defeated its creators using strategies they couldn't explain after playing countless games. Similarly, ChatGPT has processed far more books than any human could ever hope to read.

However, it is essential to understand that the intelligence exhibited by machines is not the same as human intelligence. Different species exhibit diverse forms of intelligence without necessarily evolving towards consciousness. For example, an AI can recommend a new book to a user without any need for consciousness.

The obstacles encountered while trying to program machines using human-like language or reasoning led to the development of statistical language models, the first successful example of which was crafted by Frederick Jelinek at IBM. This approach rapidly spread to other areas, leading to data being harvested from the web and focusing AI on observing user behaviour.
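The statistical approach can be illustrated with a toy bigram model: instead of hand-coded grammar rules, the model simply counts which word follows which in a corpus and predicts by relative frequency. This is a minimal sketch with a made-up corpus, vastly simpler than Jelinek-era models, let alone modern ones:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the large text collections
# that statistical language models are trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each other word.
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def predict_next(word):
    """Most frequent continuation in the corpus - no grammar rules at all."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat", the word that most often follows "the"
```

The same idea, counting regularities in data rather than encoding rules, is what scaled from these early models to today's large language models.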

While technology has progressed significantly, there are concerns about fair decision-making and the collection of personal data. The delegation of significant decisions to AI systems has also led to tragic outcomes, such as the case of 14-year-old Molly Russell, whose death was partially blamed on harmful algorithms showing her damaging content.

Addressing these problems will require robust legislation to keep pace with AI advancements. A meaningful dialogue on what society expects from AI is essential, drawing input from a diverse range of scholars and grounded in the technical reality of what has been built rather than baseless doomsday scenarios.

Nello Cristianini is a Professor of Artificial Intelligence at the University of Bath. This commentary first appeared on The Conversation, reports Channel News Asia.


Go here to read the rest:
Machine anxiety: How to reduce confusion and fear about AI technology - Thaiger