Archive for the ‘Alphazero’ Category

Facebook develops AI algorithm that learns to play poker on the fly – VentureBeat

Facebook researchers have developed a general AI framework called Recursive Belief-based Learning (ReBeL) that they say achieves better-than-human performance in heads-up, no-limit Texas hold'em poker while using less domain knowledge than any prior poker AI. They assert that ReBeL is a step toward developing universal techniques for multi-agent interactions: in other words, general algorithms that can be deployed in large-scale, multi-agent settings. Potential applications run the gamut from auctions, negotiations, and cybersecurity to self-driving cars and trucks.

Combining reinforcement learning with search at AI model training and test time has led to a number of advances. Reinforcement learning is where agents learn to achieve goals by maximizing rewards, while search is the process of navigating from a start state to a goal state. For example, DeepMind's AlphaZero employed reinforcement learning and search to achieve state-of-the-art performance in the board games chess, shogi, and Go. But the combined approach suffers a performance penalty when applied to imperfect-information games like poker (or even rock-paper-scissors), because it makes a number of assumptions that don't hold in these scenarios. The value of any given action depends on the probability that it's chosen and, more generally, on the entire play strategy.
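To see why, consider rock-paper-scissors: a fixed action has no single value; its worth depends entirely on the opponent's mixed strategy. Here is a minimal sketch in Python (illustrative numbers only, not from the paper) that makes the point concrete:

```python
# Payoff for us for (our_move, their_move): win = +1, loss = -1, tie = 0.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(ours, theirs):
    if ours == theirs:
        return 0
    return 1 if BEATS[ours] == theirs else -1

def expected_value(our_move, opponent_strategy):
    """Expected payoff of a pure action against a mixed strategy."""
    return sum(p * payoff(our_move, theirs)
               for theirs, p in opponent_strategy.items())

uniform = {"rock": 1/3, "paper": 1/3, "scissors": 1/3}
paper_heavy = {"rock": 0.1, "paper": 0.8, "scissors": 0.1}

print(expected_value("rock", uniform))      # 0.0: rock is fine here
print(expected_value("rock", paper_heavy))  # -0.7: the same move is now bad
```

In a perfect-information game a position has one well-defined value; here the value of "rock" swings from neutral to badly losing purely as a function of the opponent's policy.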

The Facebook researchers propose that ReBeL offers a fix. ReBeL builds on work in which the notion of "game state" is expanded to include the agents' belief about what state they might be in, based on common knowledge and the policies of other agents. ReBeL trains two AI models, a value network and a policy network, for the states through self-play reinforcement learning. It uses both models for search during self-play. The result is a simple, flexible algorithm the researchers claim is capable of defeating top human players at large-scale, two-player imperfect-information games.
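As a rough illustration of the two-model setup, here is what a value network and a policy network over encoded belief states might look like in PyTorch. This is a hedged sketch, not Facebook's actual architecture; the layer sizes and the `belief_dim` and `num_actions` values are invented for the example:

```python
import torch
import torch.nn as nn

belief_dim, num_actions = 128, 10  # hypothetical sizes, chosen for the sketch

# Value network: maps an encoded public belief state (PBS) to a value.
value_net = nn.Sequential(
    nn.Linear(belief_dim, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

# Policy network: maps an encoded PBS to a distribution over actions.
policy_net = nn.Sequential(
    nn.Linear(belief_dim, 256), nn.ReLU(),
    nn.Linear(256, num_actions), nn.Softmax(dim=-1),
)

pbs = torch.randn(1, belief_dim)          # stand-in for a real encoded PBS
value, policy = value_net(pbs), policy_net(pbs)
print(value.item(), policy.sum().item())  # one value; probs sum to 1.0
```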

At a high level, ReBeL operates on public belief states rather than world states (i.e., the state of a game). Public belief states (PBSs) generalize the notion of state value to imperfect-information games like poker; a PBS is a common-knowledge probability distribution over a finite sequence of possible actions and states, also called a history. (Probability distributions are specialized functions that give the probabilities of occurrence of different possible outcomes.) In perfect-information games, PBSs can be distilled down to histories, which in two-player zero-sum games effectively distill to world states. A PBS in poker is the array of decisions a player could make and their outcomes given a particular hand, a pot, and chips.
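As a toy illustration of the idea (deliberately simplified, not ReBeL's actual representation), a PBS can be modeled as the public action history plus, for each player, a common-knowledge probability distribution over their possible private states:

```python
from dataclasses import dataclass

@dataclass
class PublicBeliefState:
    """Toy PBS: public history plus per-player beliefs over private hands."""
    public_history: tuple   # actions every player has observed
    beliefs: list           # per player: {private hand: probability}

pbs = PublicBeliefState(
    public_history=("check", "bet_100"),
    beliefs=[
        {"AA": 0.02, "KQs": 0.05, "72o": 0.01},  # truncated toy distribution
        {"AA": 0.01, "KQs": 0.04, "72o": 0.02},
    ],
)
```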


ReBeL generates a subgame at the start of each game that's identical to the original game, except it's rooted at an initial PBS. The algorithm solves the subgame by running iterations of an equilibrium-finding algorithm and using the trained value network to approximate values on every iteration. Through reinforcement learning, the values are discovered and added as training examples for the value network, and the policies in the subgame are optionally added as examples for the policy network. The process then repeats, with the PBS becoming the new subgame root, until accuracy reaches a certain threshold.
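Putting those pieces together, the loop just described might be sketched as follows. This is hedged pseudocode in Python: the game interface, `solve_iteration`, and `sample_leaf_pbs` are hypothetical stand-ins supplied by the caller, not ReBeL's real API:

```python
def rebel_self_play(game, solve_iteration, sample_leaf_pbs,
                    value_net, replay_buffer, num_solver_iters=1000):
    """Hedged sketch of the loop described above.

    The game interface, solve_iteration (one step of a CFR-style
    equilibrium finder), and sample_leaf_pbs are hypothetical stand-ins
    supplied by the caller, not ReBeL's real API.
    """
    pbs = game.initial_public_belief_state()
    while not game.is_terminal(pbs):
        # Solve the subgame rooted at the current PBS, using value_net to
        # approximate the values of subgame leaves on every iteration.
        policy = None
        for _ in range(num_solver_iters):
            policy = solve_iteration(pbs, policy, value_net)
        # The discovered root value becomes a value-network training
        # example; the subgame policy can optionally train the policy net.
        replay_buffer.add(pbs, policy.root_value, policy)
        # A leaf PBS of the solved subgame becomes the next subgame root.
        pbs = sample_leaf_pbs(pbs, policy)
```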

In experiments, the researchers benchmarked ReBeL on games of heads-up no-limit Texas hold'em poker, Liar's Dice, and turn endgame hold'em, which is a variant of no-limit hold'em in which both players check or call for the first two of four betting rounds. The team used up to 128 PCs with eight graphics cards each to generate simulated game data, and they randomized the bet and stack sizes (from 5,000 to 25,000 chips) during training. ReBeL was trained on the full game and had $20,000 to bet against its opponent in endgame hold'em.

The researchers report that against Dong Kim, who's ranked as one of the best heads-up poker players in the world, ReBeL played at a rate faster than two seconds per hand across 7,500 hands and never needed more than five seconds for a decision. In aggregate, they said it scored 165 (with a standard deviation of 69) thousandths of a big blind (forced bet) per game against the humans it played, compared with Facebook's previous poker-playing system, Libratus, which maxed out at 147 thousandths.
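For readers unfamiliar with the metric: thousandths of a big blind per game (mbb/g) normalizes chip winnings by the forced bet and the number of hands played. A quick sketch of the conversion, using invented chip totals rather than figures from the experiments:

```python
def mbb_per_game(total_chips_won, big_blind, num_hands):
    """Convert raw chip winnings into milli-big-blinds per game (mbb/g)."""
    return total_chips_won / big_blind / num_hands * 1000

# Invented chip totals, not numbers from the ReBeL experiments: winning
# 123,750 chips over 7,500 hands at a 100-chip big blind works out to
# the 165 mbb/g figure quoted above.
print(mbb_per_game(123_750, big_blind=100, num_hands=7_500))  # -> 165.0
```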

For fear of enabling cheating, the Facebook team decided against releasing the ReBeL codebase for poker. Instead, they open-sourced their implementation for Liar's Dice, which they say is also easier to understand and can be more easily adjusted. "We believe it makes the game more suitable as a domain for research," they wrote in a preprint paper. While AI algorithms already exist that can achieve superhuman performance in poker, these algorithms generally assume that participants have a certain number of chips or use certain bet sizes. Retraining the algorithms to account for arbitrary chip stacks or unanticipated bet sizes requires more computation than is feasible in real time. However, ReBeL can compute a policy for arbitrary stack sizes and arbitrary bet sizes in seconds.

See the original post:
Facebook develops AI algorithm that learns to play poker on the fly - VentureBeat

AlphaZero | Papers With Code

Convex Regularization in Monte-Carlo Tree Search. Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen. 2020-07-01
Aligning Superhuman AI and Human Behavior: Chess as a Model System. Reid McIlroy-Young, Siddhartha Sen, Jon Kleinberg, Ashton Anderson. 2020-06-02
Think Too Fast Nor Too Slow: The Computational Trade-off Between Planning and Reinforcement Learning. Thomas M. Moerland, Anna Deichler, Simone Baldi, Joost Broekens, Catholijn M. Jonker. 2020-05-15
Neural Machine Translation with Monte-Carlo Tree Search. Jerrod Parker, Jerry Zikun Chen. 2020-04-27
Warm-Start AlphaZero Self-Play Search Enhancements. Hui Wang, Mike Preuss, Aske Plaat. 2020-04-26
Accelerating and Improving AlphaZero Using Population Based Training. Ti-Rong Wu, Ting-Han Wei, I-Chen Wu. 2020-03-13
Learning to Resolve Alliance Dilemmas in Many-Player Zero-Sum Games. Edward Hughes, Thomas W. Anthony, Tom Eccles, Joel Z. Leibo, David Balduzzi, Yoram Bachrach. 2020-02-27
Polygames: Improved Zero Learning. Tristan Cazenave, Yen-Chi Chen, Guan-Wei Chen, Shi-Yu Chen, Xian-Dong Chiu, Julien Dehos, Maria Elsa, Qucheng Gong, Hengyuan Hu, Vasil Khalidov, Cheng-Ling Li, Hsin-I Lin, Yu-Jin Lin, Xavier Martinet, Vegard Mella, Jeremy Rapin, Baptiste Roziere, Gabriel Synnaeve, Fabien Teytaud, Olivier Teytaud, Shi-Cheng Ye, Yi-Jun Ye, Shi-Jim Yen, Sergey Zagoruyko. 2020-01-27
Three-Head Neural Network Architecture for AlphaZero Learning. Anonymous. 2020-01-01
Self-Play Learning Without a Reward Metric. Dan Schmidt, Nick Moran, Jonathan S. Rosenfeld, Jonathan Rosenthal, Jonathan Yedidia. 2019-12-16
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model. Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy Lillicrap, David Silver. 2019-11-19 (ranked #1 for Atari games on Atari 2600 Robotank)
Multiplayer AlphaZero. Nick Petosa, Tucker Balch. 2019-10-29
Exploring the Performance of Deep Residual Networks in Crazyhouse Chess. Sun-Yu Gordon Chi. 2019-08-25
Performing Deep Recurrent Double Q-Learning for Atari Games. Felipe Moreno-Vera. 2019-08-16
Multiple Policy Value Monte Carlo Tree Search. Li-Cheng Lan, Wei Li, Ting-Han Wei, I-Chen Wu. 2019-05-31
Learning Compositional Neural Programs with Recursive Tree Search and Planning. Thomas Pierrot, Guillaume Ligner, Scott Reed, Olivier Sigaud, Nicolas Perrin, Alexandre Laterre, David Kas, Karim Beguir, Nando de Freitas. 2019-05-30
Deep Policies for Width-Based Planning in Pixel Domains. Miquel Junyent, Anders Jonsson, Vicenç Gómez. 2019-04-12
Improved Reinforcement Learning with Curriculum. Joseph West, Frederic Maire, Cameron Browne, Simon Denman. 2019-03-29
Hyper-Parameter Sweep on AlphaZero General. Hui Wang, Michael Emmerich, Mike Preuss, Aske Plaat. 2019-03-19
α-Rank: Multi-Agent Evaluation by Evolution. Shayegan Omidshafiei, Christos Papadimitriou, Georgios Piliouras, Karl Tuyls, Mark Rowland, Jean-Baptiste Lespiau, Wojciech M. Czarnecki, Marc Lanctot, Julien Perolat, Remi Munos. 2019-03-04
Accelerating Self-Play Learning in Go. David J. Wu. 2019-02-27
ELF OpenGo: An Analysis and Open Reimplementation of AlphaZero. Yuandong Tian, Jerry Ma, Qucheng Gong, Shubho Sengupta, Zhuoyuan Chen, James Pinkerton, C. Lawrence Zitnick. 2019-02-12
The Entropy of Artificial Intelligence and a Case Study of AlphaZero from Shannon's Perspective. Bo Zhang, Bin Chen, Jin-lin Peng. 2018-12-14
Assessing the Potential of Classical Q-learning in General Game Playing. Hui Wang, Michael Emmerich, Aske Plaat. 2018-10-14
ExIt-OOS: Towards Learning from Planning in Imperfect Information Games. Andy Kitchen, Michela Benedetti. 2018-08-30
Ranked Reward: Enabling Self-Play Reinforcement Learning for Combinatorial Optimization. Alexandre Laterre, Yunguan Fu, Mohamed Khalil Jabri, Alain-Sam Cohen, David Kas, Karl Hajjar, Torbjorn S. Dahl, Amine Kerkeni, Karim Beguir. 2018-07-04
Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm. David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, Demis Hassabis. 2017-12-05 (ranked #1 for the game of shogi on Elo ratings)

Here is the original post:
AlphaZero | Papers With Code

AlphaZero learns to play the game at the highest level

A group of scientists from DeepMind and University College London has developed an artificial intelligence able to teach itself to play, and improve at, three challenging board games. In their work, published in the journal Science, the researchers describe their new system and explain why they believe it is a big step toward the development of future AI systems.

Twenty years have passed since the supercomputer Deep Blue defeated world chess champion Garry Kasparov and showed the world how far computation in the field of AI had advanced. Since then, computers have become smarter and today beat people at games such as chess, shogi, and Go. However, each of these programs was tuned specifically to master a single game. In their new work, the researchers describe the creation of an artificial intelligence that is not only good at several games but teaches itself how to improve.

The new system, called AlphaZero, is a reinforcement-learning system; that is, it learns by repeatedly playing a game and learning from its experience, which is, of course, very similar to how people learn. Given a basic set of rules, the computer plays the game against itself; it does not even need partners. It plays against itself many times, noting good and winning moves, and over time it gets better and better, surpassing not only people but also other AI systems designed for board games. The system also uses a technique called Monte Carlo tree search; the combination of the two technologies is what allows the system to learn how to improve at a game. The scientists tested the program's strength after providing it with large computing capacity: 5,000 tensor processing units paired with a large supercomputer.
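To give a flavor of the tree-search half, here is a minimal, generic sketch of the PUCT selection rule commonly associated with AlphaZero-style Monte Carlo tree search. It is a textbook rendering in Python, not DeepMind's code, and the `c_puct` constant is an arbitrary choice:

```python
import math

class Node:
    """One node of a Monte Carlo search tree."""
    def __init__(self, prior):
        self.prior = prior        # policy-network probability for this move
        self.visit_count = 0
        self.value_sum = 0.0
        self.children = {}        # move -> Node

    def value(self):
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node, c_puct=1.5):
    """PUCT rule: balance the learned prior against observed values."""
    total_visits = sum(c.visit_count for c in node.children.values())
    def score(child):
        exploration = (c_puct * child.prior
                       * math.sqrt(total_visits) / (1 + child.visit_count))
        return child.value() + exploration
    return max(node.children.items(), key=lambda kv: score(kv[1]))
```

Rarely visited moves with a high prior get a large exploration bonus; heavily visited moves are judged mostly by the values the search has actually observed.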

So far, AlphaZero has mastered chess, shogi, and Go; the next step will be popular video games. As for performance, AlphaZero beat the legendary AlphaGo after just 30 hours of training, for example.

When do you think the explosion of artificial intelligence will come? Tell us.

See the article here:
AlphaZero learns to play the game at the highest level

Who Are The 8 Best U.S. Chess Players Ever? – Chess.com

On July 4, the day the United States of America celebrates its independence, let's take a look at the best chess players in American history.

The United States has long produced top chess talent, with some of the game's finest players, authors, and theoreticians calling the U.S. home.

In recent years, the U.S. has been a force on the international chess scene, and its "big three" grandmasters are staples at the world's top tournaments. The United States had a world-championship contender in 2018, with GM Fabiano Caruana coming up just short against the world champion, GM Magnus Carlsen.

Caruana obviously makes the list of the best-ever U.S. players, but where does he rank? And who is ahead of him?

There are many ways to make a "best-of-all-time" list. Your selections will be different from mine. I am using peak playing strength as my primary metric, not overall career achievement, because I am most interested in the best possible chess produced by each American on this list.

Peak rating: 2763

Gata Kamsky is a true chess prodigy. He became a strong grandmaster at age 16 and reached his peak in the 1990s. His career pinnacle came in the 1996 FIDE world championship bracket, where he made the finals but dropped the championship match against the reigning FIDE world champion, GM Anatoly Karpov.

Kamsky was born in the Soviet Union but moved to the United States early in his career. Kamsky won the U.S. chess championship five times (1991, 2010, 2011, 2013, and 2014), cementing his status as an American chess legend.

Here is a 22-year-old Kamsky beating the super-GM Nigel Short in 26 moves.

Peak rating: 2768

Even with much recent success, Leinier Dominguez Perez remains an underrated American chess talent.

Dominguez Perez officially became an American chess player less than two years ago, in December 2018, when he transferred federations to the United States. Before that, he was the five-time Cuban chess champion.

His career peak was likely his sole first place in the 2013 FIDE Grand Prix leg in Greece, finishing ahead of 11 other super-GMs, including three others on this list.

Dominguez Perez's attacking prowess was on full display in 2014, when he practically wiped future compatriot GM Wesley So's kingside off the board in this brutal miniature.

Peak rating: 2811 (estimated by Edo)

It's not a stretch to call Paul Morphy the father of American chess.

A true prodigy, Morphy was not just a chess force at an early age. His game was also about 100 years ahead of its time in terms of style and even tactical strength.

GM Bobby Fischer called Morphy "the most accurate player who ever lived," which should tell you something because many chess fans give that title instead to Fischer.

Morphy's game peaked quite early, and the apex was his European tour in 1858 at age 21. Morphy pretty much destroyed every strong player the European continent could throw at him, and by the time he returned to the United States, he was recognized as the unofficial world champion.

Morphy retired from competitive chess a year later to begin his law practice, never returning to the game before his death at age 47.

Morphy is the author of arguably the most famous chess game ever played, an exhibition against the Duke of Brunswick and Count Isouard at an opera house in Paris. If you're going to show a chess beginner one game, use this one.

Peak rating: 2816

Hikaru Nakamura, while quite a formidable traditional chess force, is truly a chess player of the modern age.

Nakamura has made his mark as unquestionably the best American blitz chess player ever, and also the best American online chess player ever. Since most chess games in 2020 are both played online and at fast time controls, these are fairly important arenas.

Nakamura has also established a tremendous following on the live-streaming site Twitch and was called "the grandmaster who got Twitch hooked on chess" by Wired magazine. On Chess.com, Nakamura has won the two most recent editions of the Speed Chess Championship (2018-2019).

Of course, Nakamura has enjoyed solid over-the-board success as well, winning the U.S. championship five times.

No game quite captures the modern, fun, and online-friendly nature of Nakamura's style like his thorough trolling of the computer engine Crafty back in 2007, when Crafty was one of the world's strongest engines and Nakamura was just 20 years old.

Peak rating: 2822

Wesley So transferred to the United States federation six years ago, and since then he has established himself as one of the world's best players.

So is 26 years old, and it's reasonable to think that his chess peak is just getting started. So's style of play is precise and safe; he rarely gets himself into trouble. This less-risky approach has been cited (mostly unfairly) as evidence that So is not an exciting chess player.

That argument went right out the window last November when So destroyed the classical world chess champion, Carlsen, in the finals of the first FIDE world Fischer random chess championship. So ran up the score, winning the match 13.5-2.5, putting to rest any doubts of his brilliance and creativity.

In this famous game against the top Chinese GM Ding Liren, So answers any lingering questions you might have about whether three pieces are better than a queen.

Peak rating: 2844

Fabiano Caruana is currently at the top of his career and sits just 28 rating points behind Carlsen on the live list. Caruana and Carlsen are the only players above 2800. The pair fought a close battle in the 2018 world chess championship, with Carlsen needing the tiebreaks to retain his title.

Caruana is still in contention for the next world championship whenever that process resumes, with the American one game off the lead of the 2020 candidates' tournament at the time of its postponement halfway through the schedule.

Caruana's chess highlight reel is too extensive to fully appreciate in this space. He won the U.S. chess championship on his first try in 2016, and he was the four-time Italian chess champion before transferring to the U.S. federation.

Why pick a draw for Caruana's showcase game, when all the other players get wins?

This game against Carlsen in the 2018 world chess championship represents the peak of chess on two levels. On the surface, you have the tremendous underdog Caruana outplaying and pressuring the world champion Carlsen, who was lucky to escape with the draw and maintain an even match.

On a deeper level, there is a beautiful and inscrutable endgame lurking in this game that astounded everyone who analyzed it. The chess supercomputer "Sesse" found a forced checkmate for Caruana in 30 moves in real time, as millions watched the game around the world. The legendary former world champion GM Garry Kasparov said no human could ever spot the win. Yet it was in there, on the board, as surely as the 64 squares themselves.

I still get goosebumps playing over this endgame.

Peak rating: 2785

Bobby Fischer stands as the most legendary U.S. chess player ever and is universally considered one of the three greatest world champions, along with Carlsen and Kasparov.

Fischer was responsible for a renaissance in American chess in the 1970s as he racked up ridiculous winning streaks on his way to the world title over GM Boris Spassky in 1972. Fischer elevated the game of chess to geopolitical philosophy, representing American individualism against the Soviet chess machine.

The most striking aspect of Fischer's chess was how far ahead he was of his competition. His peak rating of 2785, earned before the considerable rating inflation of the 50 years since, would place him near the top of the chess world even today.

Computer studies have confirmed Fischer's strength and accuracy as other-worldly for his time. His style was universal, elegant, and, above all, accurate. His fierce competitive spirit is something the computer engines can't measure; Fischer had one of the strongest wills to win in chess history.

Fischer's career was cut short by disagreements with chess organizers along with mental and physical health problems. Nonetheless, in the short time he spent at the top of the game, he changed it forever with the millions of American players he inspired.

Almost as a side note, Fischer invented Fischer random chess (Chess960), which is considered one of the most creative chess variants. Fischer also held a patent for a chess clock with an increment, which is the time control preferred by many players today.

The below game, one of the most famous in chess history, shows the stunning chess clarity possessed by Fischer even as young as age 13 when he eviscerated a leading American chess master, Donald Byrne.

Peak rating: 3500+

I can already see the objections in the comment section. But the headline in this article said "chess players," not chess humans, and I am a big fan of non-human chess.

AlphaZero is an artificial intelligence project that plays chess. Given just the rules of the game, AlphaZero taught itself to play chess to superhuman levels in mere hours using machine-learning techniques.

It stormed onto the chess scene in late 2017 when its operators released the results of a 100-game match with Stockfish, the traditional champion chess engine.

AlphaZero plays chess differently from most computers, possessing an almost-intuitive understanding of the game and handling many positions in a beautiful, human-like manner. Of course, AlphaZero is stronger than any human, but if you played through its games you'd think it had a distinct personality. Maybe it does.

AlphaZero inspired a whole wave of neural-network chess engines, including the international open-source project Lc0, which currently sits second behind Stockfish on the computer ratings list. The machine-learning approach pioneered by AlphaZero transformed the scientific basis of computer chess, and it will be the neural-network engines that evolve the game to its next levels, wherever that may be.

Is AlphaZero American? AlphaZero runs on American TPUs. The project's inventor, the AI company DeepMind, is headquartered in the United Kingdom, but the company has been owned by an American corporation (Google/Alphabet) since before there was an AlphaZero.

If George Washington was born a British subject but can still be considered a founding father of the United States, we can extend that same leeway to AlphaZero, especially on the American day of independence from Great Britain.

Of course, there are many other American chess engines, most of them far stronger than the human players on this list, but here they are collectively represented by the intrepid AlphaZero, which changed computer chess forever.

I'll never forget where I was when I saw this game by AlphaZero against the reigning top computer engine Stockfish, and if you care about the evolution of chess, you might not either.

Who do you think are the top chess players in American history? Let us know in the comments.

Read the original post:
Who Are The 8 Best U.S. Chess Players Ever? - Chess.com

Super-Resolution: Why is it good and how can you incorporate it? – Display Daily

Welcome to Part 2 of Bitmovin's Video Tech Deep Dive series: Super-Resolution with Machine Learning. Before you get started, I highly recommend that you read Super Resolution: What's the buzz and why does it matter?. But if you would prefer to jump directly into it, here is a quick summary:

The focus of this series of blog posts will be on machine learning-based super-resolution.

In this post, we will examine:

Super-resolution, machine learning (ML), and video upscaling are a match made in heaven. These three factors coming together is the reason behind the current popularity of machine-learning-based super-resolution applications. In this section, we will see why.

The concept of super-resolution has existed since the 1980s. The basic idea behind super-resolution was (and continues to be) to intelligently combine non-redundant information from multiple related low-resolution images to generate a single high-resolution image.

A classic early application was recovering license-plate information from several low-resolution images.

Several low-resolution snapshots of a moving car provide non-redundant but related information. Super-resolution uses this related non-redundancy to create higher-resolution images, which can be useful for recovering information such as a license plate or the driver's identity [Source].
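A heavily simplified sketch of that classic multi-frame idea follows, in Python with NumPy. It assumes the sub-pixel shifts between frames are already known (estimating them is the hard part in practice), then upscales, aligns, and averages, so that non-redundant samples contribute detail no single frame contains:

```python
import numpy as np

def naive_multiframe_sr(frames, shifts, scale=2):
    """Toy shift-and-add super-resolution.

    frames: list of HxW grayscale arrays of the same scene
    shifts: per-frame (dy, dx) sub-pixel offsets, assumed already estimated
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    for frame, (dy, dx) in zip(frames, shifts):
        # Crude nearest-neighbor upsampling to the high-resolution grid.
        up = np.kron(frame, np.ones((scale, scale)))
        # Align: a sub-pixel shift becomes a whole-pixel shift after scaling.
        up = np.roll(up, (round(dy * scale), round(dx * scale)), axis=(0, 1))
        acc += up
    return acc / len(frames)
```

Real systems replace each step with something far more careful (registration, deblurring, regularized reconstruction), but the core principle of pooling shifted observations is the same.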

But the recent wave of interest in super-resolution has been primarily driven by ML.

So, why ML and what changed now?

ML, in essence, is about learning the intelligence for a well-defined problem. With the right architecture and enough data, ML can be significantly more intelligent than a human-defined solution (at least in that narrow domain). We saw this demonstrated stunningly in the case of AlphaZero (for chess) and AlphaGo (for the board game Go).

Super-resolution is a well-defined problem, and one could reasonably argue that ML would be a natural fit to solve it. With that motivation, early theoretical solutions were already proposed in the literature.

But exorbitant computational requirements and unresolved fundamental complexities kept the practical applications of ML-based super-resolution at bay.

However, in the last few years, there were two major developments:

These developments have led to a resurgence of ML-based super-resolution methods.

It should be mentioned that ML-based super-resolution is a versatile hammer that can drive many nails. It has wide applications, ranging from medical imaging and remote sensing to astronomical observations, among others. But as mentioned in Part 1 of this series, we will focus on how the ML super-resolution hammer can nail the problem of video upscaling.

The last missing puzzle piece in this arc of the story is video upscaling.

When you think about it, video upscaling is almost a perfect nail for the ML-based super-resolution hammer.

Video provides the core features needed for ML-based super-resolution. Namely:

The convergence of these three factors is why we are witnessing a huge uptick in the research in this area, and also the first practical applications in the field of ML-super-resolution-powered video upscaling.

I have provided a historical timeline and the factors that led to ML-super-resolution-powered video upscaling. But it might still not be clear why it is superior to traditional methods (bilinear, bicubic, Lanczos, among others). In this section, I will offer a simplified explanation to provide an intuitive understanding.

The superior performance boils down to the fact that the algorithm understands the nature of the content it is upsampling and tunes itself to upsample that content in the best way possible. This is in contrast to traditional methods, where there is no tuning: the same formula is applied without any consideration of the nature of the content.
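As an intuition pump, here is roughly what a small learned upscaler looks like: an SRCNN-style network in PyTorch that refines a bicubically upscaled image. This is a sketch with textbook layer sizes, not Bitmovin's production model; the point is that the convolution weights are learned from content, which is exactly the tuning that a fixed formula lacks:

```python
import torch.nn as nn

class TinySRNet(nn.Module):
    """SRCNN-style refiner: takes a bicubically upscaled image and learns
    to sharpen it, instead of applying one fixed interpolation formula."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),            # non-linear mapping
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, upscaled):  # upscaled: (N, 3, H, W) bicubic output
        return upscaled + self.body(upscaled)  # predict a residual correction
```

Trained on anime, the weights settle on anime-friendly edges and gradients; trained on film grain, they settle on something quite different. Bicubic, by construction, cannot make that distinction.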

One could say that:

ML-based super-resolution is to upsampling what Per-Title is to encoding.

In Per-Title, we use different encoding recipes for different pieces of content. In a similar way, ML-based super-resolution uses different upsampling recipes for different pieces of content.

The recipes can adapt themselves at both the:

Hopefully, by now, you are already excited about the possibilities of this idea. In this section, I would like to provide some suggestions on how you can incorporate this idea into your own video workflows and the potential benefits you might expect from it.

Broadly speaking, a video processing workflow typically involves three steps:

Typically, there is a heavy emphasis on the encoding block for visual quality optimizations (Per-Title, 3-Pass, Codec-Configuration, among others).

But the other two (often overlooked) blocks are just as important when it comes to visual quality optimization. In this instance, upsampling is a preprocessing step, and by choosing the right upsampling method, such as super-resolution, one can improve the visual quality of the entire workflow, sometimes significantly more than the other blocks can provide.

In Part 3 of this series, we will delve more deeply into this: we will quantify how much quality improvement one can expect from tuning the pre-processing block with super-resolution, using some real-life examples.

(This section is primarily meant for advanced readers who understand what Per-Title, VMAF, and convex hull mean. Please feel free to skip it.)

As explained earlier, there are broadly three blocks in a video workflow. Roughly speaking, they work independently. But if we are smart about the design, we can extract synergies that would otherwise not have existed and use them to improve the overall video pipeline.

One illustrative example is how Per-Title can work in conjunction with super-resolution. This idea is depicted in the following figure.

VMAF vs. bitrate convex hulls of video content. Green = 360p, red = 720p, blue = 1080p. BC: bicubic; SR: super-resolution.

In the above figure, for the illustrated bitrate, the choice is clear when using the traditional method: we pick the 720p rendition. But when using super-resolution, the choice is less clear. We could either pick

The choice is determined by the complexity-versus-quality tradeoff that we are willing to make.

The takeaway message is that the two blocks work together synergistically to give the Per-Title algorithm more options and flexibility to work with. Overall, a higher number of options translates to better overall results.
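A toy sketch of what "more options for the Per-Title algorithm" means in code, using made-up (bitrate, VMAF) points that a real system would measure per title: once super-resolution renditions join the candidate set, the selector simply has more convex-hull points to choose from at any given bitrate:

```python
def pick_rendition(candidates, bitrate_budget_kbps):
    """Pick the highest-quality rendition that fits the bitrate budget.

    candidates: list of (name, bitrate_kbps, vmaf) tuples.
    """
    feasible = [c for c in candidates if c[1] <= bitrate_budget_kbps]
    return max(feasible, key=lambda c: c[2]) if feasible else None

# Illustrative numbers only: adding SR renditions widens the choice.
candidates = [
    ("360p-bicubic", 700, 71.0), ("360p-SR", 700, 78.5),
    ("720p-bicubic", 1500, 82.0), ("720p-SR", 1500, 88.0),
    ("1080p-bicubic", 3000, 93.0),
]
print(pick_rendition(candidates, bitrate_budget_kbps=1600))
# -> ('720p-SR', 1500, 88.0)
```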

This is just one illustrative example, but within your own video workflows, you could identify regions where super-resolution can work synergistically and improve the overall performance.

If your entire video catalog is a specific kind of content (anime, for example), and you want to do a targeted upsample of that content, then without doubt ML super-resolution is the way to go!

In fact, that is what many companies already do. This specific trend will only accelerate in the future, especially considering the popularity of consumer 4K TVs.

Visual quality enhancements, synergies, and targeted upsampling are some ideas for how you can incorporate super-resolution into your video workflows.

Super-resolution applied to targeted content such as anime [Source]

We continued the story from Part 1. We learned that:

In the follow-up, Part 3 of this series, we will look at how to do practical deployments, tools to use, and some real-life results.

This article was originally published as a blog post on the Bitmovin website by Adithyan Ilangovan and is re-published here with kind permission.

Originally posted here:
Super-Resolution: Why is it good and how can you incorporate it? - Display Daily