Archive for the ‘Alphago’ Category

Lessons learned from Day 3 of the TFT Reckoning World Championship – Upcomer

Day 3 wrapped up Group Stage play at the Teamfight Tactics Reckoning World Championship with the conclusion of Group B. Many heavy hitters were present in Group B, but the major stories came from the underdog players and regions. With the finals now set, here are the lessons learned from Day 3.

Your 8 TFT Reckoning Championship Finalists!

@shircaneTFT @DeliciousMilkGG @eschatv Zixingche @sealcune @nukotod qituX Huanmie pic.twitter.com/PtSYCm9jy1

Teamfight Tactics (@TFT) October 3, 2021

China entered the TFT Reckoning World Championship with something to prove. After receiving the most Worlds spots last season, they failed to bring any of their six players into the final. This year they received fewer invites but were still tied for the most with Europe. All eyes were on China once again as many did not think they deserved to bring four players to Worlds this season. After Day 3, the critics have been silenced.

After Zixingche became the first Chinese representative to qualify for the finals since Juanzi and Alphago did it back in TFT: Galaxies two seasons ago, China came into Day 3 looking to show it had more than one player capable of winning the championship.

And after a thrilling five-game series, the two Chinese players in Group B managed to grab the final qualifier spots. China's first seed, qituX, qualified for the finals fairly easily. He managed to hit a three-star Karma to win the first game and followed it up by winning Game 3 with a Vel'Koz carry, defeating a three-star Lucian in the final round. Even though he only managed top four in two of the five games, those wins qualified him in third place.

Huanmie had the reverse results of qituX. In the two games qituX won, Huanmie placed bottom four. But in the other three games, Huanmie placed in the top four, including a first-place finish in Game 5 which secured the fourth and final spot. China now has three players in the top eight, the most of any region.

The three most talked-about regions at the World Championship were South Korea, Europe, and North America. All three were among the favorites to win Worlds. Korea especially came into Worlds trying to defend its world title. After Day 3, a new world champion region will be crowned.

After a poor performance by Ddudu in Group B, Korea will not have a player in the top-eight finals. But Korea isn't the only major region to disappoint. Scipaeus, the EU representative in Group B, had a terrible Day 3. He managed to grab only 19 points in total. Heading into Game 5, he was already mathematically eliminated from top-four contention.

The hope for NA rested on Robinsongz. After a poor start in the first two games, Robinsongz came roaring back with back-to-back top-three finishes in the third and fourth games. But although Robinsongz was in control of his own destiny, the game had other plans. After unfortunate matchmaking and low-rolls, Robinsongz bowed out in sixth place, missing out on the top four.

These three regions held a combined 10 of the 20 spots available at Worlds. Together they claimed only two spots in the top-eight finals, with EU and NA taking one each.

Outside of China, the lesser-known regions have been a major reason the other regions are underperforming. After Escha became the first player at the TFT Reckoning World Championship to qualify for the finals, Nukomaru followed shortly after, marking the first time both of the super-minor regions have made the top-eight finals. Japan and Oceania were the only two regions to send a single player to Worlds; now both of them have qualified for the finals. Both Nukomaru and Escha even had to play in the Play-In Stage, where they finished first and second, showing that they belonged.

But Nukomaru wasn't the only wildcard-region player to impress on Day 3. Latin America's SMbappe put on a show during Group B. After garnering fame for his first-or-eighth playstyle in the Play-In Stage, SMbappe didn't play that drastically in Group B. Instead, he managed to play better than anyone else. In a tightly contested lobby, with a single point separating first and fourth, SMbappe came out on top, giving LATAM its first-ever competitor in the top-eight finals at TFT Worlds.

With SMbappe's qualification, all four players who qualified for the main event through the Play-In Stage are now in the top-eight finals. This may be the first season in which the TFT championship goes to a wildcard region.

Continue reading here:
Lessons learned from Day 3 of the TFT Reckoning World Championship - Upcomer

Top AI Cities to Know Across the Globe in Race Towards Advancement – Analytics Insight

The capabilities of artificial intelligence are transforming industries, strengthening data, talent, and financial power and pushing the technological revolution to new heights. As the saying goes, data has become today's wealth, and the potential of AI cannot be ignored under any circumstances. Let's look at the top AI cities in the world that are doing well in the race toward advancement.

It is not a surprise to see China's capital city, Beijing, topping the list of AI cities in the world. In 2017, China set out to secure the position of world leader in AI by 2030, itself a big step toward taking artificial intelligence to the next level. The country is also aiming for its AI industry to exceed US$150 billion in the coming decade. Beijing hosts Google's first AI research lab in China and leading institutions such as Tsinghua and Peking Universities, and with more than 1,000 AI companies, the capital has become important for AI development.

Austin has so many tech companies that it is called Silicon Hills, and it is one of the AI cities in the world striving for advancement in the current artificial intelligence space. Austin is home to several big AI companies such as SparkCognition and Hypergiant, while giants like Apple and Facebook have also played a major part in its growth. Apple has confirmed plans for its next US$1 billion campus in the city, and Facebook recently announced that Austin is its third-largest hub in the US.

When we talk about Silicon Valley, AI comes into the picture too. San Francisco is also referred to as a center of innovation. Even though it covers a small geographical area of approximately 50 square miles, it accommodates more than 2,000 companies. Well-known universities such as Stanford and UC Berkeley also lend their support to the city's artificial intelligence development, making San Francisco one of the leading AI cities in the world.

In 2019, the UK government outlined its AI Sector Deal, highlighting plans to lead the AI revolution. A central aim is to raise total research and development investment to 2.4% of GDP by 2027. London, the capital city, is home to many artificial intelligence companies, making it one of the top AI cities in the world; these include AlphaGo creator DeepMind, Mindtrace, Kwiziq, Cleo, SwiftKey, and Babylon Health. London will also soon be home to an Alphabet campus that will accommodate about 7,000 staff. The London Datastore offers over 800 datasets that are used by over 50,000 researchers and companies per month.

New York has about 7,000 high-tech companies; the city's diverse economy and proximity to the European market are attracting talent to the Big Apple. It is one of the AI cities in the world making space for tech giants such as Apple, Facebook, and Amazon.

Toronto is one of the AI cities in the world, with many tech companies such as Nvidia, Thomson Reuters, Samsung, General Motors, and Amazon working in the space of cloud computing and engineering. Recently, Google, Accenture, and Nvidia have also partnered to back the Vector Institute, which is completely dedicated to the development of AI.

AI Singapore is a national programme to build an artificial intelligence ecosystem across industries for the nation. The National University of Singapore has also been driving this change through research, innovation, and technology. The country's government was also among the first to publish a governance framework addressing AI's ethical dilemmas. Singapore is one of the top AI cities in the world.

Read more here:
Top AI Cities to Know Across the Globe in Race Towards Advancement - Analytics Insight

DeepMind Takes On The Rain – iProgrammer

DeepMind has proved once again the outstanding prowess of neural networks. Working with the UK Met Office, it has developed a deep-learning tool that can accurately predict the likelihood of rain in the next 90 minutes, one of weather forecasting's toughest challenges.

Climate change is bringing an ever-increasing number of catastrophic weather events, such as the devastating floods in Germany, Belgium and the Netherlands in July 2021 that claimed almost 200 lives with more than 700 injured. DeepMind's new tool, DGMR, standing for Deep Generative Model of Rain, which can accurately predict where, when and how much rain will fall in the next 1-2 hours, could provide vital information to assist emergency services in this type of scenario.

DGMR is used for nowcasting, the term for forecasting rain and other precipitation within the next 1-2 hours based on the most recent high-resolution radar data.
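
To make the task concrete, here is a minimal, purely illustrative sketch of the nowcasting interface in Python: a stack of recent radar frames goes in, an ensemble of possible future frames comes out. The "persistence" generator below (repeat the last frame) is a placeholder stand-in, not DeepMind's model; DGMR replaces it with a learned deep generative model that samples realistic, spatially coherent futures.

```python
import numpy as np

# Hypothetical nowcasting interface (not DeepMind's actual DGMR code).
# Input: the most recent radar frames; output: an ensemble of future frames.

def persistence_nowcast(past_frames: np.ndarray, lead_steps: int, n_samples: int) -> np.ndarray:
    """Trivial 'persistence' baseline: assume the last observed radar frame simply repeats.
    past_frames has shape (T, H, W) of rainfall intensities.
    Returns an array of shape (n_samples, lead_steps, H, W).
    A generative nowcaster such as DGMR would instead sample diverse,
    spatially coherent future frames conditioned on past_frames."""
    last = past_frames[-1]
    one_sample = np.repeat(last[None, :, :], lead_steps, axis=0)
    return np.repeat(one_sample[None, ...], n_samples, axis=0)

# Example: four observed 5-minute radar frames -> 18 future steps (90 minutes).
past = np.random.rand(4, 256, 256).astype(np.float32)
forecast = persistence_nowcast(past, lead_steps=18, n_samples=8)
print(forecast.shape)  # (8, 18, 256, 256)
```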

In a paper published in Nature with open access, the 20-person nowcasting team claimed:

"Using a systematic evaluation by more than 50 expert meteorologists, we show that our generative model ranked first for its accuracy and usefulness in 89% of cases against two competitive methods".

This illustration compares DGMR to the two alternatives, PySTEPS and UNet.

A heavy precipitation event in April 2019 over the eastern US (Target is the observed radar). The generative approach DGMR balances intensity and extent of precipitation compared to an advection approach (PySTEPS), the intensities of which are often too high, and does not blur like deterministic deep learning methods (UNet).

The practical applicability of DeepMind's DGMR shows that it is making good on its undertaking to build on its experience of using deep learning to play games (recall the triumph of AlphaGo) and tackle real-world problems. We have already reported on its contributions to quantum chemistry and to protein folding, and now it has added meteorology to its growing list of skills.

Nowcasting the Next Hour of Rain (DeepMind blog)

Skilful precipitation nowcasting using deep generative models of radar (Nature)

Why AlphaGo Changes Everything

David Silver Awarded 2019 ACM Prize In Computing

AlphaFold Reads The DNA

AlphaFold Solves Fundamental Biology Problem

AlphaFold DeepMind's Protein Structure Breakthrough

DeepMind Solves Quantum Chemistry


Continued here:
DeepMind Takes On The Rain - iProgrammer

Meet the Computer Scientist Overseeing Columbia’s $1 Billion Research Portfolio – Columbia University

Q. How is AI changing the way research is done? What does that mean for Columbia?

A. In traditional computing, people write programs. In machine learning, people feed the computer data, and the computer itself writes the program; it learns the program from data. The term machine learning is germane here. The machine learns the rules on its own. Because the machine, not the human, is writing the program, the program is not easily interpretable to us. In the case of deep learning, the most successful machine-learning technique to date, we don't really understand the science of how it works or why it's so successful. It's an example of applications coming ahead of theory.
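
As a toy illustration of that distinction (an editorial example, not one from the interview), compare a hand-written rule with one recovered from data: here a temperature-conversion "program" is learned by fitting a line to a handful of examples.

```python
import numpy as np

# Traditional computing: a person writes the rule explicitly.
def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0

# Machine learning: the computer infers the same rule from example data.
celsius = np.array([-40.0, 0.0, 20.0, 37.0, 100.0])
fahrenheit = np.array([-40.0, 32.0, 68.0, 98.6, 212.0])

slope, intercept = np.polyfit(celsius, fahrenheit, deg=1)  # fit a line to the examples
print(round(slope, 3), round(intercept, 3))   # ~1.8 and ~32.0, recovered from data alone
print(slope * 25.0 + intercept)               # the learned "program" applied to new input
```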

These tools are already in our daily lives. AI systems recommend movies and books, respond to our voice commands, and translate web pages from one language to another. AI also adds to our repertoire of scientific methods. In medicine, deep-learning models are processing medical scans faster than humans and catching warning signs that even the experts sometimes miss. And they don't get tired! In astronomy, they're analyzing images from telescopes and space probes to make new discoveries about our universe. In climate modeling, they're helping to reduce the uncertainty around climate change and its impacts.

These tools are accelerating science, and I expect the trend to continue. AI holds great promise for the social sciences, too. At Microsoft, I saw how bringing economists together with machine learning experts helped the company better forecast sales of some products.

Q. What are you most proud of accomplishing at the Data Science Institute?

A. Creating bridges. Everything I did was about building collaboration across schools and disciplines. The Data Science Institute connected a lot of dots across campuses and beyond Columbia's gates. When people from different perspectives and areas of expertise come together, sparks fly. Through data science, researchers and educators asked questions they never would have thought to ask, let alone answer.

I also feel good about creating the Trustworthy AI initiative to investigate some of machine learning's unintended consequences. Our goal is to find out whether the AI systems making decisions about people's lives can be trusted: Do I really have cancer? Is the moving object in front of my car a ball or a child? Will the bank approve my loan? It turns out that it's hard to formally define the properties of trustworthiness, let alone prove and guarantee that an AI system has any of them.

Q. Columbia Engineering and the Data Science Institute built the IBM Center on Blockchain and Data Transparency under your tenure. And Columbia continues to court corporate funders. Why is industry collaboration so vital?

A. In certain areas of research, AI especially, industry is ahead. They have the data, which is mostly proprietary consumer data. They also have vast amounts of computing power. Amazon, Microsoft, and Google have nearly limitless computing power through their cloud infrastructure. They have GPU clusters academia could never afford. I see enormous potential for collaboration. If faculty could gain access to data and compute, they could validate their algorithms at scale and identify new research directions.

It's a mutually beneficial relationship. Industry looks to academia for new ideas and talent. Academia looks to industry for real-world problems to solve, and opportunities to scale solutions. It's an important way to broaden our impact.

Q. Youve held leadership roles in academia, industry, and the federal government. What skills allowed you to succeed in such different cultures?

A. To be able to listen and learn. To know what you don't know, and to surround yourself with superb talent.

Go here to read the rest:
Meet the Computer Scientist Overseeing Columbia's $1 Billion Research Portfolio - Columbia University

DeepMind aims to marry deep learning and classic algorithms – VentureBeat


Will deep learning really live up to its promise? We don't actually know. But if it's going to, it will have to assimilate how classical computer science algorithms work. This is what DeepMind is working on, and its success is important to the eventual uptake of neural networks in wider commercial applications.

Founded in 2010 with the goal of creating AGI (artificial general intelligence, a general-purpose AI that truly mimics human intelligence), DeepMind is at the forefront of AI research. The company is also backed by industry heavyweights like Elon Musk and Peter Thiel.

Acquired by Google in 2014, DeepMind has made headlines for projects such as AlphaGo, a program that beat the world champion at the game of Go in a five-game match, and AlphaFold, which found a solution to a 50-year-old grand challenge in biology.

Now DeepMind has set its sights on another grand challenge: bridging the worlds of deep learning and classical computer science to enable deep learning to do everything. If successful, this approach could revolutionize AI and software as we know them.

Petar Veličković is a senior research scientist at DeepMind. His entry into computer science came through algorithmic reasoning and algorithmic thinking using classical algorithms. Since he started doing deep learning research, he has wanted to reconcile deep learning with the classical algorithms that initially got him excited about computer science.

Meanwhile, Charles Blundell is a research lead at DeepMind who is interested in getting neural networks to make much better use of the huge quantities of data they're exposed to. Examples include getting a network to tell us what it doesn't know, to learn much more quickly, or to exceed expectations.

When Veličković met Blundell at DeepMind, something new was born: a line of research that goes by the name of Neural Algorithmic Reasoning (NAR), after a position paper the duo recently published.

NAR traces the roots of the fields it touches upon and branches out to collaborations with other researchers. And unlike much pie-in-the-sky research, NAR has some early results and applications to show for itself.

Veličković was in many ways the person who kickstarted the algorithmic reasoning direction at DeepMind. With his background in both classical algorithms and deep learning, he realized that there is a strong complementarity between the two of them. What one of these methods tends to do really well, the other one doesn't do that well, and vice versa.

"Usually when you see these kinds of patterns, it's a good indicator that if you can do anything to bring them a little bit closer together, then you could end up with an awesome way to fuse the best of both worlds, and make some really strong advances," Veličković said.

When Veličković joined DeepMind, Blundell said, their early conversations were a lot of fun because they have very similar backgrounds. They both share a background in theoretical computer science. Today, they both work a lot with machine learning, in which a fundamental question for a long time has been how to generalize: how do you work beyond the data examples you've seen?

"Algorithms are a really good example of something we all use every day," Blundell noted. In fact, he added, there aren't many algorithms out there. If you look at standard computer science textbooks, there are maybe 50 or 60 algorithms that you learn as an undergraduate. And everything people use to connect over the internet, for example, uses just a subset of those.

"There's this very nice basis for very rich computation that we already know about, but it's completely different from the things we're learning. So when Petar and I started talking about this, we saw clearly there's a nice fusion that we can make here between these two fields that has actually been unexplored so far," Blundell said.

The key thesis of NAR research is that algorithms possess fundamentally different qualities to deep learning methods. And this suggests that if deep learning methods were better able to mimic algorithms, then generalization of the sort seen with algorithms would become possible with deep learning.

To approach the topic for this article, we asked Blundell and Veličković to lay out the defining properties of classical computer science algorithms compared to deep learning models. Figuring out the ways in which algorithms and deep learning models are different is a good start if the goal is to reconcile them.

For starters, Blundell said, algorithms in most cases don't change. Algorithms are composed of a fixed set of rules that are executed on some input, and usually good algorithms have well-known properties. For any kind of input the algorithm gets, it gives a sensible output, in a reasonable amount of time. You can usually change the size of the input and the algorithm keeps working.

The other thing you can do with algorithms is plug them together. The reason algorithms can be strung together is the guarantee they have: given some kind of input, they only produce a certain kind of output. And that means we can connect algorithms, feeding one algorithm's output into another's input and building a whole stack.
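
A small, hedged sketch of what that contract looks like in ordinary code: sorting guarantees an ordered output for any comparable input of any size, and binary search consumes exactly that guarantee, so the two compose safely.

```python
from bisect import bisect_left

# Sorting guarantees a nondecreasing output for any list of comparable items,
# regardless of its length; binary search requires exactly that guarantee.
# Because the contract is explicit, the two algorithms compose safely.

def contains(items, target):
    ordered = sorted(items)                # algorithm 1: output guaranteed sorted
    i = bisect_left(ordered, target)       # algorithm 2: relies on that guarantee
    return i < len(ordered) and ordered[i] == target

print(contains([9, 3, 7, 1, 5], 7))        # True
print(contains(list(range(10_000)), -1))   # False; same code works at any input size
```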

"People have been looking at running algorithms in deep learning for a while, and it's always been quite difficult," Blundell said. As trying out simple tasks is a good way to debug things, Blundell referred to a trivial example: the input copy task, an algorithm whose output is just a copy of its input.

It turns out that this is harder than expected for deep learning. You can learn to do this up to a certain length, but if you increase the length of the input past that point, things start breaking down. If you train a network on the numbers 1-10 and test it on the numbers 1-1,000, many networks will not generalize.

Blundell explained, "They won't have learned the core idea, which is you just need to copy the input to the output. And as you make the process more complicated, as you can imagine, it gets worse. So if you think about sorting through various graph algorithms, actually the generalization is far worse if you just train a network to simulate an algorithm in a very naive fashion."
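
The copy-task failure Blundell describes is easy to reproduce in miniature. The sketch below is an illustrative setup, not DeepMind's experiment: the GRU architecture, sequence lengths, and hyperparameters are arbitrary choices. It trains a small recurrent network to echo digit sequences of length 1-10 and then evaluates it on length 50, where accuracy typically drops off.

```python
import torch
import torch.nn as nn

# Illustrative copy-task experiment: the model reads a sequence of digits,
# then must reproduce it during a blank "write" phase.
# Train on lengths 1-10, evaluate on length 50; hyperparameters are arbitrary.
VOCAB = 11                                   # digits 1-10 plus 0 used as the blank symbol

class CopyGRU(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, VOCAB)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

def copy_batch(n, length):
    seq = torch.randint(1, VOCAB, (n, length))
    blank = torch.zeros(n, length, dtype=torch.long)
    x = torch.cat([seq, blank], dim=1)       # read phase, then an equal-length write phase
    y = torch.cat([blank, seq], dim=1)       # target: echo the sequence during the write phase
    return x, y

model = CopyGRU()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(3000):                     # train only on short sequences
    x, y = copy_batch(64, int(torch.randint(1, 11, (1,))))
    loss = loss_fn(model(x).reshape(-1, VOCAB), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    for length in (10, 50):                  # in-distribution length vs. a longer one
        x, y = copy_batch(512, length)
        pred = model(x).argmax(-1)
        mask = y != 0                        # score only the positions that echo the input
        acc = (pred[mask] == y[mask]).float().mean().item()
        print(f"length {length}: copy accuracy {acc:.2f}")
```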

Fortunately, it's not all bad news.

"[T]here's something very nice about algorithms, which is that they're basically simulations. You can generate a lot of data, and that makes them very amenable to being learned by deep neural networks," he said. "But it requires us to think from the deep learning side. What changes do we need to make there so that these algorithms can be well represented and actually learned in a robust fashion?"

Of course, answering that question is far from simple.

"When using deep learning, usually there isn't a very strong guarantee on what the output is going to be. So you might say that the output is a number between zero and one, and you can guarantee that, but you couldn't guarantee something more structural," Blundell explained. For example, you can't guarantee that if you show a neural network a picture of a cat and then a different picture of a cat, it will definitely be classified as a cat.

With algorithms, you could develop guarantees that this wouldn't happen. This is partly because the kind of problems algorithms are applied to are more amenable to these kinds of guarantees. So if a problem is amenable to these guarantees, then maybe we can bring classical algorithmic tasks across into deep neural networks, allowing these kinds of guarantees for the neural networks.

Those guarantees usually concern generalization: the size of the inputs, the kinds of inputs you have, and their outcomes that generalize over types. For example, if you have a sorting algorithm, you can sort a list of numbers, but you could also sort anything you can define an ordering for, such as letters and words. However, that's not the kind of thing we see at the moment with deep neural networks.
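
That kind of type-level generality is trivial to see with a classical routine: the same sorting algorithm handles anything for which an ordering is defined.

```python
# Sorting is defined for anything with an ordering, not just numbers.
print(sorted([31, 4, 15, 9, 2]))                       # numbers
print(sorted(["pear", "apple", "fig"]))                # words, lexicographic order
print(sorted([(2, "b"), (1, "z"), (1, "a")]))          # tuples, compared element-wise
print(sorted(["pear", "fig", "apple"], key=len))       # any custom ordering via a key
```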

Another difference, which Veličković noted, is that algorithmic computation can usually be expressed as pseudocode that explains how you go from your inputs to your outputs. This makes algorithms trivially interpretable. And because they operate over these abstractified inputs that conform to some preconditions and postconditions, it's much easier to reason theoretically about them.

"That also makes it much easier to find connections between different problems that you might not see otherwise," Veličković added. He cited the example of MaxFlow and MinCut as two problems that are seemingly quite different, but where the solution of one is necessarily the solution to the other. That's not obvious unless you study it from a very abstract lens.
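
The MaxFlow/MinCut connection can be checked directly on a toy network. The sketch below uses the networkx library on a small example graph of our own (illustrative capacities, not taken from the article): the maximum-flow value and the minimum-cut capacity come out equal, as the duality theorem guarantees.

```python
import networkx as nx

# Max-flow / min-cut duality on a tiny example network (illustrative values).
G = nx.DiGraph()
edges = [("s", "a", 3), ("s", "b", 2), ("a", "b", 1),
         ("a", "t", 2), ("b", "t", 3)]
for u, v, cap in edges:
    G.add_edge(u, v, capacity=cap)

flow_value, _ = nx.maximum_flow(G, "s", "t")
cut_value, (reachable, non_reachable) = nx.minimum_cut(G, "s", "t")

print(flow_value, cut_value)        # the two values coincide (5 and 5)
print(reachable, non_reachable)     # one side of a minimum cut
```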

"There's a lot of benefits to this kind of elegance and constraints, but it's also the potential shortcoming of algorithms," Veličković said. "That's because if you want to make your inputs conform to these stringent preconditions, what this means is that if data that comes from the real world is even a tiny bit perturbed and doesn't conform to the preconditions, I'm going to lose a lot of information before I can massage it into the algorithm."

He said that obviously makes the classical algorithm method suboptimal, because even if the algorithm gives you a perfect solution, it might give you a perfect solution in an environment that doesn't make sense. Therefore, the solutions are not going to be something you can use. On the other hand, he explained, deep learning is designed to rapidly ingest lots of raw data at scale and pick up interesting rules in the raw data, without any real strong constraints.

This makes it remarkably powerful in noisy scenarios: you can perturb your inputs and your neural network will still be reasonably applicable. For classical algorithms, that may not be the case. "And that's also another reason why we might want to find this awesome middle ground where we might be able to guarantee something about our data, but not require that data to be constrained to, say, tiny scalars when the complexity of the real world might be much larger," Veličković said.

Another point to consider is where algorithms come from. "Usually what happens is you find very clever theoretical scientists, you explain your problem, and they think really hard about it," Blundell said. Then the experts go away and map the problem onto a more abstract version that drives an algorithm. The experts then present their algorithm for this class of problems, which they promise will execute in a specified amount of time and provide the right answer. However, because the mapping from the real-world problem to the abstract space on which the algorithm is derived isn't always exact, Blundell said, it requires a bit of an inductive leap.

With machine learning, it's the opposite, as ML just looks at the data. It doesn't really map onto some abstract space, but it does solve the problem based on what you tell it.

What Blundell and Veličković are trying to do is get somewhere in between those two extremes, where you have something that's a bit more structured but still fits the data, and doesn't necessarily require a human in the loop. That way you don't need to think so hard as a computer scientist. This approach is valuable because real-world problems are often not exactly mapped onto the problems we have algorithms for; and even for the things we do have algorithms for, we have to abstract the problems. Another challenge is how to come up with new algorithms that significantly outperform existing algorithms that have the same sort of guarantees.

"When humans sit down to write a program, it's very easy to get something that's really slow (for example, something that has exponential execution time)," Blundell noted. Neural networks are the opposite. As he put it, they're extremely lazy, which is a very desirable property for coming up with new algorithms.

There are people who have looked at networks that can adapt their demands and computation time. In deep learning, how one designs the network architecture has a huge impact on how well it works. "There's a strong connection between how much processing you do and how much computation time is spent and what kind of architecture you come up with: they're intimately linked," Blundell said.

Veličković noted that one thing people sometimes do when solving natural problems with algorithms is try to push them into a framework they've come up with that is nice and abstract. As a result, they may make the problem more complex than it needs to be.

"The traveling [salesperson], for example, is an NP-complete problem, and we don't know of any polynomial-time algorithm for it. However, there exists a prediction that's 100% correct for the traveling [salesperson], for all the towns in Sweden, all the towns in Germany, all the towns in the USA. And that's because geographically occurring data actually has nicer properties than any possible graph you could feed into traveling [salesperson]," Veličković said.
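
A tiny experiment hints at why geometric instances are friendlier than arbitrary graphs. This is only an illustration with random points, not the country-scale results Veličković refers to: on small Euclidean instances, even a greedy nearest-neighbour tour usually lands close to the brute-force optimum.

```python
import itertools, math, random

# Illustrative only: compare an exact tour with a greedy heuristic on a tiny
# Euclidean instance. Geometric data is far more structured than an arbitrary graph.
random.seed(0)
towns = [(random.random(), random.random()) for _ in range(8)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(order):
    return sum(dist(towns[order[i]], towns[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Exact answer: try every tour starting from town 0 (feasible only for tiny n).
best = min(tour_length((0,) + p) for p in itertools.permutations(range(1, len(towns))))

# Nearest-neighbour heuristic: always move to the closest unvisited town.
unvisited, tour = set(range(1, len(towns))), [0]
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(towns[tour[-1]], towns[j]))
    tour.append(nxt); unvisited.remove(nxt)

print(f"optimal: {best:.3f}  nearest-neighbour: {tour_length(tour):.3f}")
```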

Before delving into NAR specifics, we felt a naive question was in order: Why deep learning? Why go for a generalization framework specifically applied to deep learning algorithms and not just any machine learning algorithm?

The DeepMind duo wants to design solutions that operate over the true raw complexity of the real world. So far, the best solution for processing large amounts of naturally occurring data at scale is deep neural networks, Veličković emphasized.

Blundell noted that neural networks have much richer representations of the data than classical algorithms do. "Even inside a large model class that's very rich and complicated, we find that we need to push the boundaries even further than that to be able to execute algorithms reliably. It's a sort of empirical science that we're looking at. And I just don't think that as you get richer and richer decision trees, they can start to do some of this process," he said.

Blundell then elaborated on the limits of decision trees.

"We know that decision trees are basically a trick: if this, then that. What's missing from that is recursion, or iteration, the ability to loop over things multiple times. In neural networks, for a long time people have understood that there's a relationship between iteration, recursion, and the current neural networks. In graph neural networks, the same sort of processing arises again; the message passing you see there is again something very natural," he said.
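
For readers unfamiliar with the term, here is a bare-bones sketch of message passing on a toy graph, with a fixed mean-aggregation update standing in for the learned functions a real graph neural network would use.

```python
import numpy as np

# A bare-bones round of message passing (the core idea behind graph neural
# networks): every node aggregates its neighbours' feature vectors and updates
# its own state. Real GNNs use learned weights; here the "update" is a fixed
# mean-aggregation followed by a nonlinearity, purely to show the mechanics.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # a small undirected graph
n, d = 4, 8
rng = np.random.default_rng(0)
h = rng.normal(size=(n, d))                         # initial node features

adj = np.zeros((n, n))
for u, v in edges:
    adj[u, v] = adj[v, u] = 1.0

for step in range(3):                               # iterate: this is the "looping" decision trees lack
    messages = adj @ h                              # sum of neighbour features per node
    degrees = adj.sum(axis=1, keepdims=True)
    h = np.tanh((messages / degrees + h) / 2.0)     # combine with own state, then squash

print(h.shape)   # (4, 8): updated node representations after three rounds
```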

Ultimately, Blundell is excited about the potential to go further.

"If you think about object-oriented programming, where you send messages between classes of objects, you can see it's exactly analogous, and you can build very complicated interaction diagrams and those can then be mapped into graph neural networks. So it's from the internal structure that you get a richness that, it seems, might be powerful enough to learn algorithms you wouldn't necessarily get with more traditional machine learning methods," Blundell explained.

Go here to read the rest:
DeepMind aims to marry deep learning and classic algorithms - VentureBeat