Archive for the ‘Alphago’ Category

How AI is Mishandled to Become a Cybersecurity Risk | eWEEK – eWeek

The rapid evolution of artificial intelligence algorithms has turned this technology into an element of critical business processes. The caveat is that there is a lack of transparency in the design and practical application of these algorithms, so they can be turned to very different purposes - benign or malicious.

Whereas infosec specialists use AI for benign purposes, threat actors mishandle it to orchestrate real-world attacks. At this point, it is hard to say for sure who is winning. The current state of the balance between offense and defense via machine learning algorithms has yet to be evaluated.

There is also a security principles gap regarding the design, implementation and management of AI solutions. Completely new tools are required to secure AI-based processes and thereby mitigate serious security risks.

The global race to develop advanced AI algorithms is accelerating non-stop. The goal is to create a system in which AI can solve complex problems (e.g., decision-making, visual recognition and speech recognition) and flexibly adapt to circumstances. These will be self-contained machines that can think without human assistance. This is a somewhat distant future of AI, however.

At this point, AI algorithms cover limited areas but already demonstrate certain advantages over humans, saving analysis time and producing predictions. The four main vectors of AI development are speech and language processing, computer vision, pattern recognition, and reasoning and optimization.

Huge investments are flowing into AI research and development along with machine learning methods. Global AI spending in 2019 amounted to $37.5 billion, and it is predicted to reach a whopping $97.9 billion by 2023. China and the U.S. dominate the worldwide funding of AI development.

Transportation, manufacturing, finance, commerce, health care, big-data processing, robotics, analytics and many more sectors will be optimized in the next five to 10 years with the ubiquitous adoption of AI technologies and workflows.

With reinforcement learning in its toolkit, AI can play into attackers' hands by paving the way for all-new and highly effective attack vectors. For instance, the AlphaGo algorithm has given rise to fundamentally new tactics and strategies in the famous Chinese board game Go. If mishandled, such mechanisms can lead to disruptive consequences.

Let us list the main advantages of the first generation of offensive tools based on AI:

At the same time, AI can help infosec experts to identify and mitigate risks and threats, predict attack vectors and stay one step ahead of criminals. Furthermore, it is worth keeping in mind that a human being is behind any AI algorithm and its practical application vectors.

Let us try to outline the balance between attacking and defending via AI. The main stages of an AI-based attack are as follows:

Now, let us provide an example of how AI can be leveraged in defense:

The expanding range of attack vectors is only one of the current problems related to AI. Attackers can manipulate AI algorithms to their advantage by modifying the code and abusing it at a completely different level.

AI also plays a significant role in creating deepfakes. Images, audio and video materials fraudulently processed with AI algorithms can wreak information havoc, making it difficult to distinguish truth from lies.

To summarize, here are the main challenges and systemic risks associated with AI technology, as well as the possible solutions:

The current evolution of security tools: The infosec community needs to focus on AI-based defense tools. We must understand that there will be an incessant battle between the evolution of AI attack models and AI defenses. Enhancing the defenses will push attack methods forward, and therefore this cyber-arms race should be kept within the realms of common sense. Coordinated action by all members of the ecosystem will be crucial to eliminating risks.

Operations security (OPSEC): A security breach or AI failure in one part of the ecosystem could potentially affect its other components. Cooperative approaches to operations security will be required to ensure that the ecosystem is resilient to the escalating AI threat. Information sharing among participants will play a crucial role in activities such as detecting threats in AI algorithms.

Building defense capabilities: The evolution of AI can turn some parts of the ecosystem into low-hanging fruit for attackers. Unless cooperative action is taken to build a collective AI defense, the entire system's stability could be undermined. It is important to encourage the development of defensive technologies at the nation-state level. AI skills, education, and communication will be essential.

Secure algorithms: As industries become increasingly dependent on machine learning technology, it is critical to ensure its integrity and keep AI algorithms unbiased. At this point, approaches to concepts such as ethics, competitiveness, and code-readability of AI algorithms have not yet been fully developed.

Algorithm developers can be held liable for catastrophic errors in decisions made by AI. Consequently, it is necessary to come up with secure AI development principles and standards that are accepted not only in the academic environment and among developers, but also at the highest international level.

These principles should include secure design (tamper-proof and readable code), operational management (traceability and rigid version control) and incident management (developer responsibility for maintaining integrity).

David Balaban is a computer security researcher with over 17 years of experience in malware analysis and antivirus software evaluation. He runs MacSecurity.net and Privacy-PC.com projects that present expert opinions on contemporary information security matters, including social engineering, malware, penetration testing, threat intelligence, online privacy, and white hat hacking. Mr. Balaban has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.

Read more from the original source:
How AI is Mishandled to Become a Cybersecurity Risk | eWEEK - eWeek

AI and the future of gaming – Pocket Gamer.Biz

Charlotte Murphy is a freelance writer who loves writing about all things AI and how it's revolutionising the world in unexpected ways.

There are few industries that cut as close to the edge in next-gen technology as the gaming industry.

AI in gaming was formerly held back by the limitations of commercial computers and games consoles, but soon, this will no longer be an issue.

Game designers and programmers are scaling new heights in their quest for ever more sophisticated and engaging gameplay, and AI's more advanced iterations will likely become integral to the games of the future.

We're now reaching a point where powerful AI can be freed from the confines of supercomputers and deployed to the masses - this will change gaming forever.

The origins of AI in gaming

You have to look back to the 90s to discover the origins of AI's intersection with gaming.

IBM's Deep Blue supercomputer beat Garry Kasparov, one of the world's greatest chess grandmasters, in a six-game match in 1997.

Kasparov was stunned by the humanistic touch IBM's AI imparted on the game, leading to accusations of cheating. But for many onlookers worldwide, this signified a watershed moment - a moment of realisation that AI could now outsmart humanity.

In 2016, Google DeepMind's AlphaGo computer beat some of the world's best Go players at the abstract strategy game, which has roughly 2.1 x 10^170 potential board configurations - more than the number of atoms in the observable universe.

Today, the immense power of AI is being integrated into games on a commercial scale.

The games of the future will offer near-infinite combinations of situations, scenarios, levels and landscapes as well as life-like NPCs and endless customisation.

AI in gaming today

AI has played a prominent role in gaming since the start, but newer games have employed AI in increasingly innovative ways.

A vast combination of programming and software engineering techniques, ranging from deep learning and neural networks to anomaly detection, Monte Carlo modelling and finite-state machine programming, has been employed to make gaming more complex than ever.
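As a toy illustration of the simplest item on that list, here is a minimal finite-state machine for an enemy that patrols, chases and attacks. The state names and distance thresholds are invented for the example, not taken from any shipped game.

```python
# Minimal finite-state machine for a game enemy: patrol -> chase -> attack.
# State names and thresholds are illustrative, not from any real game.

def next_state(state, distance_to_player):
    if state == "patrol":
        return "chase" if distance_to_player < 20 else "patrol"
    if state == "chase":
        if distance_to_player < 2:
            return "attack"
        return "patrol" if distance_to_player > 30 else "chase"
    if state == "attack":
        return "attack" if distance_to_player < 2 else "chase"
    raise ValueError(f"unknown state: {state}")

state = "patrol"
for distance in [25, 18, 10, 1.5, 1.0, 5, 40]:   # simulated player distances
    state = next_state(state, distance)
    print(distance, "->", state)
```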

For example, No Man's Sky's tagline and USP was that it enabled players to explore a near-infinite host of planets (in practice more like 18 quintillion - the figure is debated, but the point remains).

It manages this via a generative machine learning algorithm that creates new planets as you explore the universe, layering them with a diverse array of randomised flora and fauna.
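Hello Games has not published its generator, but the core idea of deterministic, seed-driven world generation can be sketched in a few lines of Python; the attribute names and value ranges below are purely illustrative.

```python
import random

# Derive a planet deterministically from its coordinates: the same seed
# always yields the same world, so nothing needs to be stored in advance.
def generate_planet(x, y, z):
    rng = random.Random(f"{x},{y},{z}")     # seed derived from coordinates
    return {
        "radius_km": rng.uniform(1000, 8000),
        "biome": rng.choice(["lush", "barren", "frozen", "toxic"]),
        "fauna_species": rng.randint(0, 40),
        "has_water": rng.random() < 0.6,
    }

print(generate_planet(12, -7, 3))   # identical output on every run
print(generate_planet(12, -7, 3))
```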

Dark Souls is another great example of how AI is already used in gaming. FromSoftware programmed some of its notoriously merciless and difficult-to-beat bosses (Kalameet and The Nameless King, amongst many others) to predict human error and react in advance.

These bosses have a good idea of your next move before you even play it. That makes them extremely tough to beat using a formulaic, planned approach.

These are just two examples of how AI is already keeping us glued to our screens when it comes to games. But there's a lot more to come.

AI in gaming in the future

AI in the future will be used to generate near-infinite in-game variables.

These relate to three main areas:

Pathfinding

Pathfinding is the process of getting from A to B. The game landscape is the central pathfinding element: AI will generate the landscape, or game world, as you progress through the game.

This enables the landscape to respond to anything from your moves and playing style to your in-game decisions, appearance, behaviour and technique.
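Classic pathfinding itself - getting from A to B around obstacles - is well-trodden ground. A minimal breadth-first search over a toy grid, sketched below, captures the basic problem that richer, AI-generated landscapes build on; the grid and coordinates are invented for the example.

```python
from collections import deque

# Breadth-first search on a small grid: 0 = walkable, 1 = wall.
# Returns the list of cells from start to goal, or None if unreachable.
def find_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(find_path(grid, (0, 0), (2, 3)))
```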

Decision-making

Decision-making has always been a key component of games (Knights of the Old Republic, anyone?!)

With AI, the influence your decisions have on the game will be much more granular.

Consider Red Dead Redemption 2: the behaviour and interactions of NPCs are influenced by minuscule variables such as the type of hat you're wearing and whether or not your clothes are stained with blood.

The entire game world could be manipulated based on your decisions as millions of factors work together in a gigantic matrix of possibilities. The chains between cause and effect could become extremely sophisticated.

As the popular analogy for chaos theory's Butterfly Effect goes: one small beat of a butterfly's wings could cause a hurricane on the other side of the world.
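None of Rockstar's actual code is public, but the flavour of such variable-driven reactions can be sketched with a few invented rules; the variable names, weights and responses below are hypothetical.

```python
# Illustrative only: how small player variables might feed an NPC's reaction.
# Variable names, weights and responses are invented, not from any real game.

def npc_reaction(player):
    suspicion = 0.0
    if player.get("clothes_bloodstained"):
        suspicion += 0.6
    if player.get("hat") == "outlaw":
        suspicion += 0.2
    if player.get("weapon_drawn"):
        suspicion += 0.5
    if suspicion >= 0.8:
        return "flee and alert the sheriff"
    if suspicion >= 0.3:
        return "keep a wary distance"
    return "greet politely"

print(npc_reaction({"clothes_bloodstained": True, "hat": "outlaw"}))
print(npc_reaction({"hat": "plain"}))
```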

NPCs - emotion in gaming

Consider this: what if those AI NPCs actually felt emotion, perhaps even in a similar way to how we do?

Versu, a game created back in 2013, breathed emotional life into AI-generated characters.

In this remarkably complex and intriguing storytelling game, characters are programmed to have emotional states that interact with each other as the story unfolds. Some stories generated in Versu even surprised its creator, Richard Evans (DeepMind researcher and AI lead on The Sims 3).

At Expressive Intelligence Studio, an experimental AI programming group, AI characters are even programmed with life-like memories.

Their emotions are influenced by the events the characters remember from their childhoods and upbringing, their current emotional state, and even the songs they hear in-game.
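As a rough sketch of the idea - and only that - an NPC whose mood is a simple function of remembered events might look like the following; the event names and weights are invented for illustration.

```python
# Toy sketch: an NPC whose current mood is derived from remembered events.
# Event names and weights are invented for illustration.
from dataclasses import dataclass, field

EVENT_WEIGHTS = {"was_helped": 0.4, "was_robbed": -0.7, "heard_sad_song": -0.2}

@dataclass
class NPC:
    name: str
    memories: list = field(default_factory=list)

    def remember(self, event):
        self.memories.append(event)

    def mood(self):
        score = sum(EVENT_WEIGHTS.get(e, 0.0) for e in self.memories)
        return "happy" if score > 0.2 else "sad" if score < -0.2 else "neutral"

npc = NPC("Ada")
npc.remember("was_helped")
npc.remember("heard_sad_song")
print(npc.name, "feels", npc.mood())
```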

Whilst most of these concepts are confined to academic exercise and experimentation for now, there may come a time when NPC game characters roam their worlds thinking in some of the same ways we do.

The ultimate result will be games that live and breathe organically, with characters that remember, think and feel like humans.

Whilst the ethical considerations of this are another story altogether, the gaming industry is certainly on the cusp of an AI-powered revolution.

Follow this link:
AI and the future of gaming - Pocket Gamer.Biz

System on Chips And The Modern Day Motherboards – Analytics India Magazine

The SoC is the new motherboard.

Data centres are no longer betting on one-size-fits-all compute. Decades of homogeneous compute strategies are being disrupted by the need to optimise. Modern-day data centres are embracing purpose-built System on Chip (SoC) designs to gain more control over peak performance, power consumption and scalability. Thus, customisation of chips has become the go-to solution for many cloud providers. Companies like Google Cloud, especially, are doubling down on this front.

Google introduced the Tensor Processing Unit (TPU) back in 2015. Today TPUs power services such as real-time voice search, photo object recognition and interactive language translation. TPUs drive DeepMind's powerful AlphaGo algorithms, which outclassed the world's best Go player and were later applied to chess and shogi. Today, TPUs have the power to process over 100 million photos a day. Most importantly, TPUs are also used for Google's search results. The search giant even unveiled OpenTitan, the first open-source silicon root-of-trust project. The company's custom hardware solutions range from SSDs and hard drives to network switches and network interface cards, often built in deep collaboration with external partners.

Workloads demand even deeper integration into the underlying hardware.

Just like on a motherboard, CPUs and TPUs come from different sources. A Google data centre consists of thousands of server machines connected to a local network. Google designs custom chips, including a hardware security chip currently being deployed on both servers and peripherals. According to Google Cloud, these chips allow them to securely identify and authenticate legitimate Google devices at the hardware level.

According to the team at GCP, computing at Google is at a critical inflection point. Instead of integrating components on a motherboard, Google is focusing more on SoC designs, where multiple functions sit on the same chip or on multiple chips inside one package. The company even claimed that the SoC is the modern-day motherboard.

To date, writes Amin Vahdat of GCP, the motherboard has been the integration point, where CPUs, networking, storage devices, custom accelerators and memory, all from different vendors, are blended into an optimised system. However, cloud providers, especially companies like Google Cloud and AWS that own large data centres, gravitate towards deeper integration into the underlying hardware to gain higher performance at lower power consumption.

According to ARM (whose acquisition by NVIDIA was recently announced), renewed interest in design freedom and system optimisation has led to higher compute utilisation, improved performance-to-power ratios, and the ability to get more out of a physical data centre.

For example, AWS Graviton2 instances, using the Arm Neoverse N1 platform, deliver up to 40 percent better price-performance than the previous x86-based instances at a 20 percent lower price. Silicon solutions such as Ampere's Altra are designed to deliver the performance-per-watt, flexibility and scalability their customers demand.

The capabilities of cloud instances rely on the underlying architectures and microarchitectures that power the hardware.

Amazon made its silicon ambitions obvious as early as 2015, when it acquired Israel-based Annapurna Labs, known for networking-focused Arm SoCs. Amazon leveraged Annapurna Labs' tech to build a custom Arm server-grade chip, Graviton2. After its release, Graviton2 locked horns with Intel and AMD, the data centre chip industry's major players. While the Graviton2 instance offered 64 physical cores, AMD and Intel could manage only 32 physical cores.

Last year, AWS also launched the custom-built AWS Inferentia chips as part of its hardware-specialisation push. Inferentia's performance convinced AWS to deploy the chips for its popular Alexa services, which require state-of-the-art ML for speech processing and other tasks.

Amazon's EC2 Inf1 instances are now powered by AWS Inferentia chips, which can deliver up to 30% higher throughput and up to 45% lower cost per inference. Amazon EC2 F1 instances, meanwhile, use FPGAs to enable delivery of custom hardware acceleration. F1 instances are easy to program, come with an FPGA Developer AMI and support hardware-level development on the cloud. Target applications that can benefit from F1 acceleration include genomics, search/analytics, image and video processing, network security, electronic design automation (EDA), image and file compression, and big-data analytics.

Source: AWS
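For readers who want to experiment with these instance families, requesting one is an ordinary EC2 API call. A minimal boto3 sketch is shown below; the AMI ID and region are placeholders, and availability and pricing depend on your own account.

```python
import boto3

# Minimal sketch: launch one Inf1 instance for ML inference experiments.
# The AMI ID and region below are placeholders; substitute a Deep Learning
# AMI that exists in your own account and region.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="inf1.xlarge",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```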

Following AWS Inferentia's success in providing customers with high-performance ML inference at the lowest cost in the cloud, AWS is launching Trainium to cover the training side that Inferentia does not address. The Trainium chip is specifically optimised for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing and recommendation engines.

A performance comparison by AnandTech shows how the cloud providers can ditch the legacy chip makers, thanks to ARM's licensing provisions. Even Microsoft is reportedly building an ARM-based processor for Azure data centres. Apart from that custom chip, which is still under wraps, Microsoft has also had a shot at silicon success: it collaborated with AMD, Intel and Qualcomm Technologies to announce the Microsoft Pluton security processor. The Pluton design builds security directly into the CPU.

To overcome the challenges and realise the opportunities presented by semiconductor densities and capabilities, cloud companies will look into System-on-a-Chip (SoC) design methodologies that incorporate pre-designed components, also called SoC Intellectual Property (SoC-IP), which can then be integrated into their own designs. SoCs incorporate processors that allow customisation both in the layers of software and in the hardware around the processors, which is why even Google Cloud is bullish on this approach. The company even roped in Intel veteran Uri Frank to lead its server chip design efforts. According to Amin Vahdat, VP at GCP, SoCs offer many orders of magnitude better performance at greatly reduced power and cost compared to assembling individual ASICs on a motherboard. The future of cloud infrastructure is bright, and it's changing fast, said Vahdat.

View post:
System on Chips And The Modern Day Motherboards - Analytics India Magazine

BOOK REVIEW: Genius Makers, by Cade Metz - the tribal war in AI – Business Day

A guide to an intellectual counter-revolution that is already transforming the world


01 April 2021 - 05:10 John Thornhill

It may not be on the level of the Montagues and the Capulets, or the Sharks and the Jets, but in the world of geeks the rivalry is about as intense as it gets. For decades, two competing tribes of artificial intelligence (AI) experts have been furiously duelling with each other in research labs and conference halls around the world. But rather than swords or switchblades, they have wielded nothing more threatening than mathematical models and computer code.

On one side, the connectionist tribe believes that computers can learn behaviour in the same way humans do, by processing a vast array of interconnected calculations. On the other, the symbolists argue that machines can only follow discrete rules. The machine's instructions are contained in specific symbols, such as digits and letters...

The rest is here:
BOOK REVIEW: Genius Makers, by Cade Metz - the tribal war in AI - Business Day

Reinforcement learning: The next great AI tech moving from the lab to the real world – VentureBeat


Reinforcement learning (RL) is a powerful type of artificial intelligence technology that can be used to learn strategies to optimally control large, complex systems such as manufacturing plants, traffic control systems (road/train/aircraft), financial portfolios, robots, etc. It is currently transitioning from research labs to highly impactful, real-world applications. For example, self-driving car companies like Wayve and Waymo are using reinforcement learning to develop the control systems for their cars.

AI systems that are typically used in industry perform pattern recognition to make a prediction. For instance, they may recognize patterns in images to detect faces (face detection), or recognize patterns in sales data to predict a change in demand (demand forecasting), and so on. Reinforcement learning methods, on the other hand, are used to make optimal decisions or take optimal actions in applications where there is a feedback loop. An example where both traditional AI methods and RL may be used, but for different purposes, will make the distinction clearer.

Say we are using AI to help operate a manufacturing plant. Pattern recognition may be used for quality assurance, where the AI system uses images and scans of the finished product to detect any imperfections or flaws. An RL system, on the other hand, would compute and execute the strategy for controlling the manufacturing process itself (by, for example, deciding which lines to run, controlling machines/robots, deciding which product to manufacture, and so on). The RL system will also try to ensure that the strategy is optimal in that it maximizes some metric of interest such as the output volume while maintaining a certain level of product quality. The problem of computing the optimal control strategy, which RL solves, is very difficult for some subtle reasons (often much more difficult than pattern recognition).
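To make that feedback loop concrete, here is a deliberately over-simplified sketch of the kind of environment an RL controller would interact with in a plant-control setting. The plant model, actions and reward are invented for illustration and stand in for a real simulator.

```python
import random

# A heavily simplified, invented plant model exposing the RL interface:
# the controller picks an action, the plant returns a new state and a reward.
class ToyPlant:
    def __init__(self):
        self.backlog = 10                      # units waiting to be produced

    def step(self, action):                    # action: how many lines to run (0-3)
        produced = min(self.backlog, action * random.randint(1, 3))
        self.backlog = max(0, self.backlog - produced) + random.randint(0, 2)
        reward = produced - 0.5 * action       # output volume minus running cost
        return self.backlog, reward            # next state, reward

plant = ToyPlant()
total = 0.0
for _ in range(20):
    action = random.randint(0, 3)              # stand-in for a learned policy
    state, reward = plant.step(action)
    total += reward
print("total reward of a random policy:", round(total, 1))
```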

In computing the optimal strategy, or policy in RL parlance, the main challenge an RL algorithm faces is the so-called temporal credit assignment problem. That is, the impact of an action (e.g. run line 1 on Wednesday) in a given system state (e.g. the current output level of machines, how busy each line is, etc.) on the overall performance (e.g. total output volume) is not known until after (potentially) a long time. To make matters worse, the overall performance also depends on all the actions taken subsequent to the action being evaluated. Together, this implies that, when a candidate policy is executed for evaluation, it is difficult to know which actions were the good ones and which were the bad ones; in other words, it is very difficult to assign credit to the different actions appropriately. The large number of potential system states in these complex problems further exacerbates the situation via the dreaded curse of dimensionality. A good way to get an intuition for how an RL system solves all these problems at the same time is to look at the recent spectacular successes they have had in the lab.
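One standard way RL algorithms tackle credit assignment is the temporal-difference update behind Q-learning, which discounts future rewards and propagates them backwards through earlier states. The bare-bones sketch below uses a toy chain of states and an invented reward, purely to show how a payoff that arrives only at the end gradually credits earlier actions.

```python
# Bare-bones Q-learning on a toy chain of states, showing how a reward that
# arrives only at the end is gradually propagated back to earlier actions.
states, actions = range(4), ["run_line_1", "run_line_2"]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.5, 0.9                      # learning rate, discount factor

def reward(state, action):
    # Only the final transition pays off, so early actions get no direct signal.
    return 10.0 if state == 2 and action == "run_line_1" else 0.0

for _ in range(50):                          # repeated passes through the chain
    for s in range(3):
        a = max(actions, key=lambda act: Q[(s, act)])        # greedy action
        best_next = max(Q[(s + 1, act)] for act in actions)
        Q[(s, a)] += alpha * (reward(s, a) + gamma * best_next - Q[(s, a)])

for s in states:
    print(s, {a: round(Q[(s, a)], 2) for a in actions})
```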

Many of the recent, prominent demonstrations of the power of RL come from applying it to board games and video games. The first RL system to impress the global AI community was able to learn to outplay humans in different Atari games when given as input only the images on screen and the scores received by playing the game. This was created in 2013 by London-based AI research lab DeepMind (now part of Alphabet Inc.). The same lab later created a series of RL systems (or agents), starting with the AlphaGo agent, which were able to defeat the top players in the world in the board game Go. These impressive feats, which occurred between 2015 and 2017, took the world by storm because Go is a very complex game, with millions of fans and players around the world, that requires intricate, long-term strategic thinking involving both local and global board configurations.

Subsequently, DeepMind and the AI research lab OpenAI have released systems for playing the video games StarCraft and Dota 2 that can defeat the top human players around the world. These games are challenging because they require strategic thinking, resource management, and control and coordination of multiple entities within the game.

All the agents mentioned above were trained by letting the RL algorithm play the games many, many times (e.g. millions or more) and learning which policies work and which do not against different kinds of opponents and players. The large number of trials was possible because these were all games running on a computer. In determining the usefulness of various policies, the RL algorithm often employed a complex mix of ideas. These include hill climbing in policy space, playing against itself, running internal leagues amongst candidate policies, using policies played by humans as a starting point, and properly balancing exploration of the policy space against exploitation of the good policies found so far. Roughly speaking, the large number of trials enabled exploring many different game states that could plausibly be reached, while the complex evaluation methods enabled the AI system to determine which actions are useful in the long term, under plausible plays of the games, in these different states.
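One small ingredient of that mix - balancing exploration against exploitation - is often implemented as an epsilon-greedy rule. The sketch below uses invented action names and value estimates purely to show the mechanism.

```python
import random

# Epsilon-greedy action selection: one of the simplest ways to balance
# exploring the policy space against exploiting what already looks good.
def choose_action(q_values, epsilon=0.1):
    if random.random() < epsilon:                 # explore: random action
        return random.choice(list(q_values))
    return max(q_values, key=q_values.get)        # exploit: best-known action

# Invented value estimates for three candidate moves.
q_values = {"open_with_corner": 0.62, "open_with_centre": 0.57, "pass": 0.10}
picks = [choose_action(q_values) for _ in range(1000)]
print({a: picks.count(a) for a in q_values})      # mostly the best move, some exploration
```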

A key blocker to using these algorithms in the real world is that it is not possible to run millions of trials. Fortunately, a workaround immediately suggests itself: first, create a computer simulation of the application (a manufacturing plant simulation, a market simulation, etc.), then learn the optimal policy in the simulation using RL algorithms, and finally adapt the learned optimal policy to the real world by running it a few times and tweaking some parameters. Famously, in a very compelling 2019 demo, OpenAI showed the effectiveness of this approach by training a robot arm to solve the Rubik's cube puzzle one-handed.
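The sim-to-real recipe can be sketched end to end on a toy problem: learn a single controller gain by random search in a simulator whose friction is randomised each episode (a crude stand-in for domain randomization), then evaluate the result on a fixed "real" system. Everything here - the dynamics, the parameter ranges, the search method - is invented for illustration; real pipelines use full RL libraries and far more trials.

```python
import random

# Toy sim-to-real sketch: learn one gain for a 1-D "reach the target"
# controller in a randomised simulator, then check it on a fixed "real" system.

def rollout(gain, friction):
    pos, target = 0.0, 1.0
    for _ in range(50):
        pos += gain * (target - pos) * (1.0 - friction)
    return -abs(target - pos)                       # reward: closeness to target

def train_in_simulation(trials=500):
    best_gain, best_score = None, float("-inf")
    for _ in range(trials):
        gain = random.uniform(0.0, 1.0)
        # Domain randomization: vary friction so the learned gain is robust.
        score = sum(rollout(gain, random.uniform(0.0, 0.4)) for _ in range(10))
        if score > best_score:
            best_gain, best_score = gain, score
    return best_gain

gain = train_in_simulation()
print("learned gain:", round(gain, 2))
print("reward on the 'real' system:", round(rollout(gain, friction=0.25), 4))
```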

For this approach to work, your simulation has to represent the underlying problem with a high degree of accuracy. The problem you're trying to solve also has to be closed in a certain sense: there cannot be arbitrary or unseen external effects that may impact the performance of the system. For example, the OpenAI solution would not work if the simulated robot arm was too different from the real robot arm, or if there were attempts to knock the Rubik's cube out of the real robot arm (though it may naturally be, or be explicitly trained to be, robust to certain kinds of obstructions and interferences).

These limitations will sound acceptable to most people. However, in real applications it is tricky to properly circumscribe the competence of an RL system, and this can lead to unpleasant surprises. In our earlier manufacturing plant example, if a machine is replaced with one that is a lot faster or slower, it may change the plant dynamics enough that it becomes necessary to retrain the RL system. Again, this is not unreasonable for any automated controller, but stakeholders may have far loftier expectations from a system that is artificially intelligent, and such expectations will need to be managed.

Regardless, at this point in time, the future of reinforcement learning in the real world does seem very bright. There are many startups offering reinforcement learning products for controlling manufacturing robots (Covariant, Osaro, Luffy), managing production schedules (Instadeep), enterprise decision making (Secondmind), logistics (Dorabot), circuit design (Instadeep), controlling autonomous cars (Wayve, Waymo, Five AI), controlling drones (Amazon), running hedge funds (Piit.ai), and many other applications that are beyond the reach of pattern recognition based AI systems.

Each of the Big Tech companies has made heavy investments in RL research; for example, Google acquired DeepMind for a reported £400 million (approx. $525 million) in 2014. So it is reasonable to assume that RL is either already in use internally at these companies or is in the pipeline, but they're keeping the details pretty quiet for competitive-advantage reasons.

We should expect to see some hiccups as promising applications for RL falter, but it will likely claim its place as a technology to reckon with in the near future.

M M Hassan Mahmud is a Senior AI and Machine Learning Technologist at Digital Catapult, with a background in machine learning within academia and industry.

Original post:
Reinforcement learning: The next great AI tech moving from the lab to the real world - VentureBeat