Archive for the ‘Alphago’ Category

Creative AI: Best Examples of Artificial Intelligence In Photography, Art, Writing, Music and Gaming – TechTheLead

What is creativity? If you look it up in the dictionary, it's the use of imagination or original ideas to create something. Naturally, it has always been the prerogative of humans, beings who can dream big and visualize concepts. Lately, though, researchers have been arguing that creativity is not a characteristic of humans alone but of artificial intelligence, too. AIs may not daydream as we do, but in specific contexts, with key information at hand, they can write, paint, invent games, or beat us at them. These are some of the most extraordinary examples of creative AI.

To an AI, creativity isn't so much the power of imagination as the power of creation. Artificial intelligence uses past data to model a certain event or environment, learning from it just enough to generate new things. Most of the time, neural networks (often convolutional ones) are fed immense amounts of data and left to train with those examples as starting points. The algorithms must find the patterns in the input before they can generate new examples that would be plausible in a given context.
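To make that learn-the-patterns-then-generate loop concrete, here is a toy sketch in Python. It uses a simple Markov chain instead of a neural network, trained on a made-up one-line corpus, so it is a conceptual stand-in rather than any of the real systems described below:

```python
import random
from collections import defaultdict

# Minimal "creative" text generator: learn word-transition patterns
# from a corpus, then sample new, plausible-looking sequences.
corpus = "the moon sings / the sea dreams / the moon dreams of the sea"
words = corpus.split()

# Build a Markov model: each word maps to the words observed after it.
model = defaultdict(list)
for current, following in zip(words, words[1:]):
    model[current].append(following)

# Generate a new "poem" by walking the learned transitions.
word = random.choice(words)
poem = [word]
for _ in range(10):
    if word not in model:
        break
    word = random.choice(model[word])
    poem.append(word)
print(" ".join(poem))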

So, can a machine really be creative? The answer is definitely yes, if you accept the definition of creativity proposed by science. There are dozens of examples in this vein, some more successful than others, ranging from an AI that writes poems or stories to neural networks that can coin names or give new life to old photos.

Take, for instance, the Deep Nostalgia AI that turns old photos into video animations. Imagine a reverse iPhone Live Photo: instead of picking the best frame from a short video, the program takes a vintage portrait photo and puts it into motion.

Another way to give life to old pictures is through colorization. One team trained an AI to fill in the blanks in a way that lets you actually see Abraham Lincoln, Albert Einstein, Frida Kahlo, or Bertrand Russell's true colors. The results are amazing!

Truly mind-blowing is the following GAN effort: NVIDIA managed to produce new individuals, entirely new human faces, starting from the photos of real people. Just look at them and tell me if you can spot who is computer-generated and who is an actual person!

And AI hasn't stopped there. One AI, fed with pictures of more than 1,000 classical sculptures, managed to produce Dio, a unique sculpture. Ironically, Dio was built from the remains of the computer used to generate it.

I'm not kidding. After trying their hand at retouching, colorizing and even creating portraits from scratch, AI programs were put to writing. Writing what? Pickup lines, poems, love songs, and even horror stories.

In this case, getting it just right wasn't the main goal. In fact, the teams training the neural networks probably hoped for a good laugh at most.

What did they get?

Quirky ways to flirt, for one. Then, two-line poems written after a thorough study of over 20 million words of 19th-century poetry. Last, but not least, a love song that nobody should have to listen to: Downtiration, Tender Love. It's cringey, at best.

At the opposite end of the spectrum is this love poem from a machine trained by researchers at Microsoft and professors at Kyoto University. After being trained on thousands of images juxtaposed with human-written descriptions and poems about each image, the AI wrote a decent piece that could pass as avant-garde.

The most popular AI writer by far, however, is Shelley. This time it was MIT that gave an AI the power of storytelling, and stories it wrote: from random snippets based on what it had learned to contributions to a given text. It all culminated with Shelley breaking the fourth wall and inviting users on Twitter to help her write better and more.

Writing horror stories may seem a fun, sometimes easy task. But defending your intentions to humans? That's not for the faint of heart. Luckily, GPT-3, OpenAI's powerful new language generator, lacks a heart and was able to address humanity in a deeply moving essay.

Going from poetry to music is a piece of cake. So, researchers leveled up and gave AI the task of composing lyrics and even entire music albums.

One of them generated scary music in time for Halloween. The uncanny resemblance of this AI-generated playlist to horror movie soundtracks has an explanation: MIT trained the neural network on thousands of songs from cult scary movies to produce a new sound. Scary or just unnatural? Listen here and let me know!

Another AI was trained to come up with a new Taylor Swift song. To manage that, the neural network was trained on a massive amount of her lyrics. Unfortunately, its creation wasn't able to pass for a TSwift song.

Taylor Swift wasn't the only singer AI tried to replace on stage. Eminem and Kanye came up next, although in their case it was more of a deepfake situation. Both artists changed lanes and started rapping about female power. Check out their collab here!

Finally, this AI went above and beyond with its music skills. It helped an artist compose and produce an entire music album. No need for a crew!

Have you heard of the God of Go? You must have. AlphaGo is the strongest Go player in history and, surprise, surprise, it's not human. DeepMind, the Alphabet subsidiary in charge of artificial intelligence development, is the creator of the most powerful AI Go player. Its Zero version proved that it doesn't even need to observe humans before challenging a Go player. In fact, it learned purely by playing against itself, then took on its predecessor, already a worldwide champion, and beat it! Want to find out more about this extraordinary program? Read about it here!
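To give a flavour of the self-play idea, here is a toy sketch: a tabular agent that learns tic-tac-toe entirely by playing against itself, with no human examples. This is a drastic simplification for illustration, not AlphaGo Zero's actual pipeline of Monte Carlo tree search guided by deep networks:

```python
import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe. The agent plays both sides,
# and after each game it nudges the value of every move made by the
# winner up and every move made by the loser down.
WINS = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def outcome(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]                       # "X" or "O" wins
    return "draw" if " " not in board else None   # game still running -> None

Q = defaultdict(float)                            # (state, move) -> value estimate
EPSILON, ALPHA = 0.1, 0.5                         # exploration rate, learning rate

def choose(state, moves):
    if random.random() < EPSILON:                 # sometimes explore...
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(state, m)])  # ...otherwise exploit

for episode in range(50_000):
    board, player, history = [" "] * 9, "X", []
    while True:
        state = "".join(board)
        move = choose(state, [i for i, s in enumerate(board) if s == " "])
        history.append((state, move, player))
        board[move] = player
        result = outcome(board)
        if result:
            for s, m, p in history:               # credit every move in the game
                reward = 0.0 if result == "draw" else (1.0 if p == result else -1.0)
                Q[(s, m)] += ALPHA * (reward - Q[(s, m)])
            break
        player = "O" if player == "X" else "X"

print(f"learned value estimates for {len(Q)} state-action pairs")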

Defeating humans at their own game is satisfying in itself, but coming up with a whole new game? Well, that's awe-inspiring. A neural network trained on data from over 400 existing sports created Speedgate, and its logo!

For those who prefer more static hobbies, a knitting AI could come in handy. InverseKnit showed it could do a more than decent job with fingerless gloves made from a very specific type of acrylic yarn, but it would be easy to train it on more materials. In the end, the researchers would like to make InverseKnit available to the public.

Now, if you think knitting is the weirdest job an AI could have, think again. One machine learning program simulated the voice and facial features of a well-known Chinese TV anchor, making the case for a non-stop TV host.

In a different corner of the world, an advertising competition in Belgium put an AI judge on its panel to pick the winning campaign. Surprisingly, the AI made the same choice as the human judges, proving its worth.

Finally, an AI took it upon itself to name adoptable kittens. Sure, some of its suggestions were downright terrifying, like Warning Signs, Bones of The Master, and Kill All Humans, but the AI did manage to find some ingenious ones.

That goes to show that artificial intelligence doesn't need to be inventive to be creative. For AI, the sky is the limit, as long as humans help fill the gaps.

Read more:
Creative AI: Best Examples of Artificial Intelligence In Photography, Art, Writing, Music and Gaming - TechTheLead

How AI is Mishandled to Become a Cybersecurity Risk | eWEEK – eWeek

The rapid evolution of artificial intelligence algorithms has turned this technology into an element of critical business processes. The caveat is a lack of transparency in the design and practical application of these algorithms, which means they can be put to malicious purposes as easily as benign ones.

Whereas infosec specialists use AI for benign purposes, threat actors mishandle it to orchestrate real-world attacks. At this point, it is hard to say for sure who is winning: the current balance between offense and defense via machine learning algorithms has yet to be evaluated.

There is also a security principles gap regarding the design, implementation and management of AI solutions. Completely new tools are required to secure AI-based processes and thereby mitigate serious security risks.

The global race to develop advanced AI algorithms is accelerating non-stop. The goal is to create a system in which AI can solve complex problems (e.g., decision-making, visual recognition and speech recognition) and flexibly adapt to circumstances. These will be self-contained machines that can think without human assistance. This is a somewhat distant future of AI, however.

At this point, AI algorithms cover limited areas yet already demonstrate certain advantages over humans, saving analysis time and forming predictions. The four main vectors of AI development are speech and language processing, computer vision, pattern recognition, and reasoning and optimization.

Huge investments are flowing into AI research and development along with machine learning methods. Global AI spending in 2019 amounted to $37.5 billion, and it is predicted to reach a whopping $97.9 billion by 2023. China and the U.S. dominate the worldwide funding of AI development.

Transportation, manufacturing, finance, commerce, health care, big-data processing, robotics, analytics and many more sectors will be optimized in the next five to 10 years with the ubiquitous adoption of AI technologies and workflows.

With reinforcement learning in its toolkit, AI can play into attackers' hands by paving the way for all-new and highly effective attack vectors. For instance, the AlphaGo algorithm has given rise to fundamentally new tactics and strategies in the famous Chinese board game Go. If mishandled, such mechanisms can lead to disruptive consequences.

Let us list the main advantages of the first generation of offensive tools based on AI:

At the same time, AI can help infosec experts identify and mitigate risks and threats, predict attack vectors and stay one step ahead of criminals. Furthermore, it is worth keeping in mind that a human being is behind every AI algorithm and the ways it is applied.

Let us try to outline the balance between attacking and defending via AI. The main stages of an AI-based attack are as follows:

Now, let us provide an example of how AI can be leveraged in defense:
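A common defensive pattern is anomaly detection over activity logs. The sketch below is a hedged illustration rather than a production pipeline: it uses scikit-learn's IsolationForest on invented login features to flag behaviour that deviates from a learned baseline:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-login features: [hour_of_day, bytes_sent_mb, failed_attempts]
baseline = np.array([
    [9, 1.2, 0], [10, 0.8, 1], [11, 2.1, 0], [14, 1.5, 0],
    [15, 0.9, 0], [16, 1.1, 1], [9, 1.4, 0], [10, 1.0, 0],
])

# Fit a model of "normal" behaviour on historical activity.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(baseline)

# Score new events: -1 flags an anomaly worth an analyst's attention.
new_events = np.array([[3, 40.0, 7], [10, 1.1, 0]])
print(detector.predict(new_events))  # e.g. [-1  1]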

The expanding range of attack vectors is only one of the current problems related to AI. Attackers can manipulate AI algorithms to their advantage by modifying the code and abusing it at a completely different level.

AI also plays a significant role in creating deepfakes. Images, audio and video materials fraudulently processed with AI algorithms can wreak informational havoc, making it difficult to distinguish the truth from the lies.

To summarize, here are the main challenges and systemic risks associated with AI technology, as well as the possible solutions:

The current evolution of security tools: The infosec community needs to focus on AI-based defense tools. We must understand that there will be an incessant battle between the evolution of AI attack models and AI defenses. Enhancing the defenses will push the attack methods forward, so this cyber-arms race should be kept within the realms of common sense. Coordinated action by all members of the ecosystem will be crucial to eliminating risks.

Operations security (OPSEC): A security breach or AI failure in one part of the ecosystem could potentially affect its other components. Cooperative approaches to operations security will be required to ensure that the ecosystem is resilient to the escalating AI threat. Information sharing among participants will play a crucial role in activities such as detecting threats in AI algorithms.

Building defense capabilities: The evolution of AI can turn some parts of the ecosystem into low-hanging fruit for attackers. Unless cooperative action is taken to build a collective AI defense, the entire system's stability could be undermined. It is important to encourage the development of defensive technologies at the nation-state level. AI skills, education, and communication will be essential.

Secure algorithms: As industries become increasingly dependent on machine learning technology, it is critical to ensure its integrity and keep AI algorithms unbiased. At this point, approaches to concepts such as ethics, competitiveness, and code-readability of AI algorithms have not yet been fully developed.

Algorithm developers can be held liable for catastrophic errors in decisions made by AI. Consequently, it is necessary to come up with secure AI development principles and standards that are accepted not only in the academic environment and among developers, but also at the highest international level.

These principles should include secure design (tamper-proof and readable code), operational management (traceability and rigid version control) and incident management (developer responsibility for maintaining integrity).

David Balaban is a computer security researcher with over 17 years of experience in malware analysis and antivirus software evaluation. He runs MacSecurity.net and Privacy-PC.com projects that present expert opinions on contemporary information security matters, including social engineering, malware, penetration testing, threat intelligence, online privacy, and white hat hacking. Mr. Balaban has a strong malware troubleshooting background, with a recent focus on ransomware countermeasures.

Read more from the original source:
How AI is Mishandled to Become a Cybersecurity Risk | eWEEK - eWeek

AI and the future of gaming – Pocket Gamer.Biz

Charlotte Murphy is a freelance writer who loves writing about all things AI and how it's revolutionising the world in unexpected ways.

There are few industries that cut as close to the edge in next-gen technology as the gaming industry.

AI in gaming was formerly held back by the limitations of commercial computers and games consoles, but soon, this will no longer be an issue.

Game designers and programmers are scaling new heights in their quest for increasingly sophisticated and engaging gameplay, and AI's more advanced iterations will likely become integral to the games of the future.

We're now reaching a point where powerful AI can be released from the confines of supercomputers and deployed to the masses - this will change gaming forever.

The origins of AI in gaming

You have to look back to the 90s to discover the origins of AI's intersection with gaming.

IBM's Deep Blue supercomputer beat Garry Kasparov, one of the world's greatest chess grandmasters, in a six-game match in 1997.

Kasparov was stunned by the humanistic touch IBM's AI imparted on the game, leading to accusations of cheating. But for many onlookers worldwide, this signified a watershed moment - a moment of realisation that AI could now outsmart humanity.

In 2016, Google DeepMind's AlphaGo computer beat one of the world's best players of Go - an abstract strategy game with roughly 2.1 × 10¹⁷⁰ possible board positions, more than the number of atoms in the observable universe.

Today, the immense power of AI is being integrated into games on a commercial scale.

The games of the future will offer near-infinite combinations of situations, scenarios, levels and landscapes as well as life-like NPCs and endless customisation.

AI in gaming today

AI has played a prominent role in gaming since the start, but newer games have employed it in increasingly innovative ways.

A vast combination of programming and software engineering techniques, ranging from deep learning and neural networks to anomaly detection, Monte Carlo modelling and finite-state machine programming, has been employed to make gaming more complex than ever.

For example, No Man's Sky's tagline and USP was that it enabled players to explore a near-infinite host of planets (more precisely, around 18 quintillion - the figure is debated, but the point remains).

It manages this via a generative machine learning algorithm that creates new planets as you explore the universe, layering them with a diverse array of randomised flora and fauna.
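A heavily simplified, hypothetical sketch of that idea is deterministic procedural generation, where a planet's ID seeds all of its attributes; this illustrates the concept only and is not Hello Games' actual algorithm:

```python
import random

# Hypothetical procedural planet generator: the same planet ID always
# yields the same world, so an effectively infinite universe never has
# to be stored -- it is recomputed on demand.
BIOMES = ["lush", "barren", "frozen", "toxic", "volcanic"]
FAUNA = ["none", "sparse", "abundant"]

def generate_planet(planet_id: int) -> dict:
    rng = random.Random(planet_id)  # seed the RNG with the planet's ID
    return {
        "id": planet_id,
        "biome": rng.choice(BIOMES),
        "fauna": rng.choice(FAUNA),
        "gravity": round(rng.uniform(0.3, 2.5), 2),
        "moons": rng.randint(0, 4),
    }

# The same ID always regenerates the identical planet.
assert generate_planet(42) == generate_planet(42)
print(generate_planet(42))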

Dark Souls is another great example of how AI is already used in gaming. FromSoftware programmed some of its notoriously merciless and difficult-to-beat bosses (Kalameet and The Nameless King, amongst many others) to predict human error and react in advance.

These bosses have a good idea of your next move before you even make it. That makes them extremely tough to beat using a formulaic, planned approach.

These are just two examples of how AI is already keeping us glued to our screens when it comes to games. But there's a lot more to come.

AI in gaming in the future

AI in the future will be used to generate near-infinite in-game variables.

These relate to three main areas:

Pathfinding

Pathfinding is the process of getting from A to B. The game world is the main arena for pathfinding, and in future titles AI will generate that landscape, or game world, as you progress through the game.

This enables the landscape to feed back on anything from your moves and playing style to your in-game decisions, appearance, behaviour and technique.
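The classic half of the problem is well understood. Here is a minimal sketch of breadth-first search on a tile grid, a toy stand-in for the A* variants real engines ship:

```python
from collections import deque

# Breadth-first search over a tile grid: 0 = walkable, 1 = wall.
GRID = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def find_path(start, goal):
    queue = deque([start])
    came_from = {start: None}          # remembers how each tile was reached
    while queue:
        current = queue.popleft()
        if current == goal:            # reconstruct the path backwards
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in [(r+1, c), (r-1, c), (r, c+1), (r, c-1)]:
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                queue.append((nr, nc))
    return None  # no route exists

print(find_path((0, 0), (3, 3)))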

Decision-making

Decision-making has always been a key component of games (Knights of the Old Republic, anyone?!)

With AI, the influence your decisions have on the game will be much more granular.

Consider Red Dead Redemption 2: the behaviour and interactions of NPCs are influenced by minuscule variables such as the type of hat you're wearing and whether or not your clothes are stained with blood.

The entire game world could be manipulated based on your decisions as millions of factors work together in a gigantic matrix of possibilities. The chains between cause and effect could become extremely sophisticated.

As the popular analogy for chaos theory's Butterfly Effect goes: one small beat of a butterfly's wings could cause a hurricane on the other side of the world.
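A drastically simplified, hypothetical sketch of such granular decision-making is a utility score computed from small player-state variables; every variable name and weight below is invented for illustration:

```python
# Hypothetical NPC reaction model: tiny player-state variables nudge a
# single "hostility" score, which then picks the NPC's behaviour.
def npc_reaction(player: dict) -> str:
    hostility = 0.0
    if player.get("bloodstained_clothes"):
        hostility += 0.5                       # visible blood alarms townsfolk
    if player.get("hat") == "bandit":
        hostility += 0.3                       # outfit choice matters too
    hostility -= 0.2 * player.get("local_reputation", 0)

    if hostility > 0.6:
        return "flee and alert the sheriff"
    if hostility > 0.2:
        return "keep a wary distance"
    return "greet the player warmly"

print(npc_reaction({"bloodstained_clothes": True, "hat": "bandit",
                    "local_reputation": 0}))   # -> "flee and alert the sheriff"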

NPCs - emotion in gaming

Consider this: what if those AI NPCs actually felt emotion, perhaps even in a similar way to how we do?

Versu, a game created back in 2013, breathed emotional life into AI-generated characters.

In this remarkably complex and intriguing storytelling game, characters are programmed with emotional states that interact with each other as the story unfolds. Some stories generated in Versu even surprised its creator, Richard Evans (a DeepMind researcher and the AI lead on The Sims 3).

At Expressive Intelligence Studio, an experimental AI programming group, AI characters are even programmed with life-like memories.

A character's emotions are influenced by the events it remembers from its childhood, its upbringing, its current emotional state, and even the songs it hears in-game.
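As a purely illustrative toy model of that idea (the class, thresholds and events below are all invented), an NPC might fold remembered events into a persistent mood that later colours its behaviour:

```python
from dataclasses import dataclass, field

# Toy emotional-memory model: events are remembered and shift a mood
# value that later dialogue choices consult.
@dataclass
class NPC:
    name: str
    mood: float = 0.0                    # -1.0 despairing .. +1.0 joyful
    memories: list = field(default_factory=list)

    def experience(self, event: str, impact: float):
        self.memories.append(event)      # the event is remembered...
        self.mood = max(-1.0, min(1.0, self.mood + impact))  # ...and felt

    def greet(self) -> str:
        if self.mood > 0.3:
            return f"{self.name} beams: 'Wonderful to see you!'"
        if self.mood < -0.3:
            return f"{self.name} mutters: 'Leave me be...'"
        return f"{self.name} nods politely."

villager = NPC("Mira")
villager.experience("heard a sad song at the tavern", -0.4)
print(villager.greet())                  # the mood now colours the dialogue
print(villager.memories)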

Whilst most of these concepts are confined to academic exercise and experimentation for now, there may come a time when NPC game characters roam their worlds thinking in some of the same ways we do.

The ultimate result will be games that live and breathe organically, with characters that remember, think, feel and act like humans.

Whilst the ethical considerations of this are another story altogether, the gaming industry is certainly on the cusp of an AI-powered revolution.

Follow this link:
AI and the future of gaming - Pocket Gamer.Biz

System on Chips And The Modern Day Motherboards – Analytics India Magazine

The SoC is the new motherboard.

Data centres are no longer betting on one-size-fits-all compute. Decades of homogeneous compute strategies are being disrupted by the need to optimise. Modern-day data centres are embracing purpose-built System on Chip (SoC) designs to gain more control over peak performance, power consumption and scalability. Customisation of chips has thus become the go-to solution for many cloud providers, and companies like Google Cloud especially are doubling down on this front.

Google introduced the Tensor Processing Unit (TPU) back in 2015. Today, TPUs power services such as real-time voice search, photo object recognition and interactive language translation. TPUs drive DeepMind's powerful AlphaGo algorithms, which outclassed the world's best Go player and were later used for chess and shogi. Today, TPUs have the power to process over 100 million photos a day. Most importantly, TPUs are also used for Google's search results. The search giant even unveiled OpenTitan, the first open-source silicon root-of-trust project. The company's custom hardware solutions range from SSDs and hard drives to network switches and network interface cards, often built in deep collaboration with external partners.
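For a sense of how developers target such accelerators in practice, here is a minimal, hedged sketch of placing a Keras model on a Cloud TPU via TensorFlow's distribution API; the empty TPU address and the toy model are placeholder assumptions, not Google's internal setup:

```python
import tensorflow as tf

# Connect to a Cloud TPU; the empty address resolves the environment's
# attached TPU (e.g. on a TPU VM). Placeholder -- adjust per deployment.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Variables created under this scope are replicated across TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(...) would now shard each batch across the TPU's cores.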

Workloads demand even deeper integration into the underlying hardware.

Just like on a motherboard, CPUs and TPUs come from different sources. A Google data centre consists of thousands of server machines connected to a local network. Google designs custom chips, including a hardware security chip currently being deployed on both servers and peripherals. According to Google Cloud, these chips allow them to securely identify and authenticate legitimate Google devices at the hardware level.

According to the team at GCP, computing at Google is at a critical inflection point. Instead of integrating components on a motherboard, Google is focusing more on SoC designs, where multiple functions sit on the same chip or on multiple chips inside one package. The company has even claimed that the SoC is the modern-day motherboard.

To date, writes Amin Vahdat of GCP, the motherboard has been the integration point, where CPUs, networking, storage devices, custom accelerators and memory, all from different vendors, are blended into an optimised system. However, cloud providers that own large data centres, especially companies like Google Cloud and AWS, gravitate towards deeper integration in the underlying hardware to gain higher performance at lower power consumption.

According to Arm, whose acquisition by NVIDIA was recently announced, renewed interest in design freedom and system optimisation has led to higher compute utilisation, improved performance-power ratios, and the ability to get more out of a physical data centre.

For example, AWS Graviton2 instances, using the Arm Neoverse N1 platform, deliver up to 40 percent better price-performance than the previous x86-based instances at a 20 percent lower price. Silicon solutions such as Ampere's Altra are designed to deliver the performance-per-watt, flexibility and scalability their customers demand.

The capabilities of cloud instances rely on the underlying architectures and microarchitectures that power the hardware.

Amazon made its silicon ambitions obvious as early as 2015, when it acquired Israel-based Annapurna Labs, known for networking-focused Arm SoCs. Amazon leveraged Annapurna Labs' technology to build a custom Arm server-grade chip, Graviton2. After its release, Graviton2 locked horns with Intel and AMD, the data centre chip industry's major players: while the Graviton2 instance offered 64 physical cores, AMD or Intel could manage only 32.

Last year, AWS also launched custom-built AWS Inferentia chips in the hardware specialisation department. Inferentia's performance convinced AWS to deploy it for the popular Alexa services, which require state-of-the-art ML for speech processing and other tasks.

Amazon's popular EC2 instances are now powered by AWS Inferentia chips that can deliver up to 30% higher throughput and up to 45% lower cost per inference. Amazon EC2 F1 instances, meanwhile, use FPGAs to deliver custom hardware acceleration. F1 instances are easy to program, come with an FPGA Developer AMI and support hardware-level development in the cloud. Target applications that can benefit from F1 acceleration include genomics, search/analytics, image and video processing, network security, electronic design automation (EDA), image and file compression, and big-data analytics.

Following AWS Inferentia's success in providing customers with high-performance ML inference at the lowest cost in the cloud, AWS is launching Trainium to address the shortcomings of Inferentia. The Trainium chip is specifically optimised for deep learning training workloads in applications including image classification, semantic search, translation, voice recognition, natural language processing and recommendation engines.

A performance comparison by AnandTech shows how cloud providers can ditch the legacy chip makers, thanks to Arm's licensing provisions. Even Microsoft is reportedly building an Arm-based processor for Azure data centres. Apart from custom chips that are still under wraps, Microsoft has already had a shot at silicon success: it collaborated with AMD, Intel and Qualcomm Technologies to announce the Microsoft Pluton security processor, a design that builds security directly into the CPU.

To overcome the challenges and realise the opportunities presented by semiconductor densities and capabilities, cloud companies will look to System-on-a-Chip (SoC) design methodologies that incorporate pre-designed components, also called SoC Intellectual Property (SoC-IP), which can then be integrated with their own algorithms. Because SoCs incorporate processors that allow customisation in the layers of software as well as in the hardware around the processors, even Google Cloud is bullish on the approach; it roped in Intel veteran Uri Frank to lead its server chip design efforts. According to Amin Vahdat, VP at GCP, SoCs offer many orders of magnitude better performance with greatly reduced power and cost compared to assembling individual ASICs on a motherboard. The future of cloud infrastructure is bright, and it's changing fast, said Vahdat.

View post:
System on Chips And The Modern Day Motherboards - Analytics India Magazine

BOOK REVIEW: Genius Makers, by Cade Metz – the tribal war in AI – Business Day

A guide to an intellectual counter-revolution that is already transforming the world

01 April 2021 - 05:10 John Thornhill

It may not be on the level of the Montagues and the Capulets, or the Sharks and the Jets, but in the world of geeks the rivalry is about as intense as it gets. For decades, two competing tribes of artificial intelligence (AI) experts have been furiously duelling with each other in research labs and conference halls around the world. But rather than swords or switchblades, they have wielded nothing more threatening than mathematical models and computer code.

On one side, the connectionist tribe believes that computers can learn behaviour in the same way humans do, by processing a vast array of interconnected calculations. On the other, the symbolists argue that machines can only follow discrete rules, with the machine's instructions contained in specific symbols, such as digits and letters...

The rest is here:
BOOK REVIEW: Genius Makers, by Cade Metz – the tribal war in AI - Business Day