Archive for the ‘Alphago’ Category

KataGo Distributed Training

About This Run

KataGo is a strong open-source self-play-trained Go engine, with many improvements to accelerate learning (arXiv paper and further techniques since). It can predict score and territory, play handicap games reasonably, and handle many board sizes and rules all with the same neural net.

This site hosts KataGo's first public distributed training run! With the help of volunteers, we are attempting to resume training from the end of KataGo's previous official run ("g170"), which ended in June 2020, and see how much further we can go. If you would like to contribute, see below!

If you simply want to run KataGo, the latest releases are here, and you can download the latest networks from here. You very likely want a GUI as well, because the engine alone is command-line-only. Some possible GUIs include KaTrain, Lizzie, and q5Go; more can be found by searching online.

Contributors are much appreciated! If you'd like to contribute your spare GPU cycles to generate training data for the run, the steps are:

First, create an account on this site, picking a username and secure password. Make sure to verify your email so that the site considers your account fully active. Note: the username you pick will be publicly visible in statistics and on the games you contribute.

Then pick one of the following methods.

Likely easiest method, for a home desktop computer:

Command line method: if running on a remote server, or have already set up KataGo for other things, or if you want a command line that will work in the background without any GUI, or want slightly more flexibility to configure things:

Either way, once some games are finished, you can view the results at https://katagotraining.org/contributions/ - scroll down and find your username! If anything looks unusual or buggy about the games, or KataGo is behaving weirdly on your machine, please let us know, so we can avoid uploading and training on bad data. Or, if you encounter any error messages, feel free to ask for help on KataGo's GitHub or the Discord chat.

For advanced users, instead of downloading a release, you can also build it from source. If you do so, use the stable branch, NOT the master branch. The example config can be found in cpp/configs/contribute_example.cfg.
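As a rough sketch of what such a config might contain (the field names below are illustrative assumptions, not copied from the project; the authoritative version is cpp/configs/contribute_example.cfg on the stable branch):

```ini
# Hypothetical excerpt -- consult contribute_example.cfg for the real
# field names and defaults before using.
serverUrl = https://katagotraining.org/
username = YOUR_USERNAME        ; the account created on this site
password = YOUR_PASSWORD
maxSimultaneousGames = 8        ; tune down for smaller GPUs
```

The key idea is that the client authenticates with your site account and self-play games are tuned to your hardware.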

And if you're interested in contributing to development via coding, or have a cool idea for a tool, check out KataGo's GitHub, this website's GitHub, and/or the Discord chat where various devs hang out. If you want to test a change that affects the distributed client and you need a test server to experiment with modified versions of KataGo, one is available at test.katagodistributed.org; contact lightvector or tychota on Discord for a testing account.

In the last week, 75 distinct users have uploaded 7,507,800 rows of training data, 147,272 new training games, and 3,006 new rating games.

In the last 24h, 44 distinct users have uploaded 942,623 rows of training data, 18,387 new training games, and 381 new rating games.

Look up and view games for this run here.

Latest network: kata1-b40c256-s10452530432-d2547930297

Strongest confidently-rated network: kata1-b40c256-s10336005120-d2519775087


Continue reading here:
KataGo Distributed Training

Artificial intelligence is smart, but does it play well with others? – MIT News

When it comes to games such as chess or Go, artificial intelligence (AI) programs have far surpassed the best players in the world. These "superhuman" AIs are unmatched competitors, but perhaps harder than competing against humans is collaborating with them. Can the same technology get along with people?

In a new study, MIT Lincoln Laboratory researchers sought to find out how well humans could play the cooperative card game Hanabi with an advanced AI model trained to excel at playing with teammates it has never met before. In single-blind experiments, participants played two series of the game: one with the AI agent as their teammate, and the other with a rule-based agent, a bot manually programmed to play in a predefined way.

The results surprised the researchers. Not only were the scores no better with the AI teammate than with the rule-based agent, but humans consistently hated playing with their AI teammate. They found it to be unpredictable, unreliable, and untrustworthy, and felt negatively even when the team scored well. A paper detailing this study has been accepted to the 2021 Conference on Neural Information Processing Systems (NeurIPS).

"It really highlights the nuanced distinction between creating AI that performs objectively well and creating AI that is subjectively trusted or preferred," says Ross Allen, co-author of the paper and a researcher in the Artificial Intelligence Technology Group. "It may seem those things are so close that there's not really daylight between them, but this study showed that those are actually two separate problems. We need to work on disentangling those."

Humans hating their AI teammates could be of concern for researchers designing this technology to one day work with humans on real challenges like defending from missiles or performing complex surgery. This dynamic, called teaming intelligence, is a next frontier in AI research, and it uses a particular kind of AI called reinforcement learning.

A reinforcement learning AI is not told which actions to take, but instead discovers which actions yield the most numerical "reward" by trying out scenarios again and again. It is this technology that has yielded the superhuman chess and Go players. Unlike rule-based algorithms, these AIs aren't programmed to follow "if/then" statements, because the possible outcomes of the human tasks they're slated to tackle, like driving a car, are far too many to code.
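The trial-and-error loop described here can be seen in miniature in a multi-armed bandit, the simplest reinforcement-learning setting. The sketch below (with made-up payout values, unrelated to the study) shows an epsilon-greedy agent discovering which arm pays best purely from numerical reward:

```python
import random

random.seed(0)

# True average payout of each arm -- hidden from the agent.
true_payouts = [0.2, 0.5, 0.8]

estimates = [0.0] * len(true_payouts)  # agent's learned value per arm
counts = [0] * len(true_payouts)

for step in range(5000):
    # Explore occasionally; otherwise exploit the best-looking arm.
    if random.random() < 0.1:
        arm = random.randrange(len(true_payouts))
    else:
        arm = max(range(len(estimates)), key=lambda a: estimates[a])
    # The only feedback is a noisy numerical reward, never a rule.
    reward = 1.0 if random.random() < true_payouts[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

best = max(range(len(estimates)), key=lambda a: estimates[a])
```

After enough trials the agent's estimates converge toward the hidden payouts, and it settles on the best arm without ever being told the rules.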

"Reinforcement learning is a much more general-purpose way of developing AI. If you can train it to learn how to play the game of chess, that agent won't necessarily go drive a car. But you can use the same algorithms to train a different agent to drive a car, given the right data," Allen says. "The sky's the limit in what it could, in theory, do."

Bad hints, bad plays

Today, researchers are using Hanabi to test the performance of reinforcement learning models developed for collaboration, in much the same way that chess has served as a benchmark for testing competitive AI for decades.

The game of Hanabi is akin to a multiplayer form of Solitaire. Players work together to stack cards of the same suit in order. However, players may not view their own cards, only the cards that their teammates hold. Each player is strictly limited in what they can communicate to their teammates to get them to pick the best card from their own hand to stack next.
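The information constraint described above can be sketched in a few lines. This is a toy model with made-up suit and rank sets, not a full Hanabi implementation (real Hanabi has five suits of ranks 1-5, plus hint and life tokens):

```python
import random

SUITS = ["red", "blue", "green"]
RANKS = [1, 2, 3]

def deal(num_players=2, hand_size=3):
    deck = [(s, r) for s in SUITS for r in RANKS]
    random.shuffle(deck)
    return [[deck.pop() for _ in range(hand_size)] for _ in range(num_players)]

def visible_to(player, hands):
    """Each player sees every hand except their own."""
    return {i: hand for i, hand in enumerate(hands) if i != player}

def hint(hand, attribute, value):
    """A hint may only point out the positions matching one suit OR one rank."""
    idx = 0 if attribute == "suit" else 1
    return [pos for pos, card in enumerate(hand) if card[idx] == value]

hands = deal()
assert 0 not in visible_to(0, hands)   # player 0 cannot see their own hand
matches = hint(hands[1], "rank", 1)    # tell player 1 where their 1s sit
```

The asymmetry is the whole game: you reason about your own hidden hand only through the coarse hints teammates choose to give.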

The Lincoln Laboratory researchers did not develop either the AI or rule-based agents used in this experiment. Both agents represent the best in their fields for Hanabi performance. In fact, when the AI model was previously paired with an AI teammate it had never played with before, the team achieved the highest-ever score for Hanabi play between two unknown AI agents.

"That was an important result," Allen says. "We thought, if these AI that have never met before can come together and play really well, then we should be able to bring humans that also know how to play very well together with the AI, and they'll also do very well. That's why we thought the AI team would objectively play better, and also why we thought that humans would prefer it, because generally we'll like something better if we do well."

Neither of those expectations came true. Objectively, there was no statistical difference in the scores between the AI and the rule-based agent. Subjectively, all 29 participants reported in surveys a clear preference toward the rule-based teammate. The participants were not informed which agent they were playing with for which games.

"One participant said that they were so stressed out at the bad play from the AI agent that they actually got a headache," says Jaime Pena, a researcher in the AI Technology and Systems Group and an author on the paper. "Another said that they thought the rule-based agent was dumb but workable, whereas the AI agent showed that it understood the rules, but that its moves were not cohesive with what a team looks like. To them, it was giving bad hints, making bad plays."

Inhuman creativity

This perception of AI making "bad plays" links to surprising behavior researchers have observed previously in reinforcement learning work. For example, in 2016, when DeepMind's AlphaGo first defeated one of the world's best Go players, one of the most widely praised moves it made was move 37 in game 2, a move so unusual that human commentators thought it was a mistake. Later analysis revealed that the move was actually extremely well-calculated, and it was described as genius.

Such moves might be praised when an AI opponent performs them, but they're less likely to be celebrated in a team setting. The Lincoln Laboratory researchers found that strange or seemingly illogical moves were the worst offenders in breaking humans' trust in their AI teammate in these closely coupled teams. Such moves not only diminished players' perception of how well they and their AI teammate worked together, but also how much they wanted to work with the AI at all, especially when any potential payoff wasn't immediately obvious.

"There was a lot of commentary about giving up, comments like 'I hate working with this thing,'" adds Hosea Siu, also an author of the paper and a researcher in the Control and Autonomous Systems Engineering Group.

Participants who rated themselves as Hanabi experts, which the majority of players in this study did, more often gave up on the AI player. Siu finds this concerning for AI developers, because key users of this technology will likely be domain experts.

"Let's say you train up a super-smart AI guidance assistant for a missile defense scenario. You aren't handing it off to a trainee; you're handing it off to your experts on your ships who have been doing this for 25 years. So, if there is a strong expert bias against it in gaming scenarios, it's likely going to show up in real-world ops," he adds.

Squishy humans

The researchers note that the AI used in this study wasn't developed for human preference. But that's part of the problem: not many are. Like most collaborative AI models, this model was designed to score as high as possible, and its success has been benchmarked by its objective performance.

If researchers don't focus on the question of subjective human preference, "then we won't create AI that humans actually want to use," Allen says. "It's easier to work on AI that improves a very clean number. It's much harder to work on AI that works in this mushier world of human preferences."

Solving this harder problem is the goal of the MeRLin (Mission-Ready Reinforcement Learning) project, under which this experiment was funded by Lincoln Laboratory's Technology Office, in collaboration with the U.S. Air Force Artificial Intelligence Accelerator and the MIT Department of Electrical Engineering and Computer Science. The project is studying what has prevented collaborative AI technology from leaping out of the game space and into messier reality.

The researchers think that the ability for the AI to explain its actions will engender trust. This will be the focus of their work for the next year.

"You can imagine we rerun the experiment, but after the fact (and this is much easier said than done) the human could ask, 'Why did you do that move? I didn't understand it.' If the AI could provide some insight into what they thought was going to happen based on their actions, then our hypothesis is that humans would say, 'Oh, weird way of thinking about it, but I get it now,' and they'd trust it. Our results would totally change, even though we didn't change the underlying decision-making of the AI," Allen says.

Like a huddle after a game, this kind of exchange is often what helps humans build camaraderie and cooperation as a team.

"Maybe it's also a staffing bias. Most AI teams don't have people who want to work on these squishy humans and their soft problems," Siu adds, laughing. "It's people who want to do math and optimization. And that's the basis, but that's not enough."

Mastering a game such as Hanabi between AI and humans could open up a universe of possibilities for teaming intelligence in the future. But until researchers can close the gap between how well an AI performs and how much a human likes it, the technology may well remain at machine versus human.

See the original post:
Artificial intelligence is smart, but does it play well with others? - MIT News

Opinion| The United States and the assassination of Iranian nuclear scientist – Daily News Egypt

The New York Times has revealed interesting details about the assassination of Iranian nuclear scientist Mohsen Fakhrizadeh, saying that it was carried out with a new weapon equipped with artificial intelligence and multiple cameras operating via satellite.

The newspaper pointed out that the assassination was carried out by a killer robot capable of firing 600 rounds per minute, without agents on the ground. The newspaper also added that its information in this regard was based on interviews with American, Israeli, and Iranian officials, including two intelligence officials familiar with the planning and implementation of the operation.

According to an intelligence official familiar with the plan, Israel chose an advanced model of the Belgian-made FN MAG machine gun linked to an advanced smart robot. It was smuggled into Iran in pieces at different times, then secretly reassembled there. The robot was built to fit in the bed of a pickup truck, and cameras were installed in multiple directions on the vehicle to give the operators a complete picture not only of the target and his security detail, but of the surrounding environment.

In the end, the car was rigged with explosives so that it could be detonated remotely after the operation and blown into small pieces, in order to destroy all evidence. The newspaper pointed out that the assassination of Fakhrizadeh took less than a minute, during which only 15 bullets were fired. The satellite-linked camera installed in the car sent images directly to the operation's headquarters.

What happened is not a science-fiction scene from a Hollywood movie; it is a fact that we must deal with in the future, along with the unprecedented risks and challenges it poses to the overall security scene. The end of the world will come at the hands of smart robots. If you believe some AI observers, we are in a race towards what is known as the technological singularity, a hypothetical point at which AI systems outperform our human intelligence and continue to evolve themselves beyond all our expectations. But if that happens, which is of course an unlikely assumption, what will happen to us?

Over the past few months, a number of high-profile figures, such as Elon Musk and Bill Gates, have warned that we should worry more about the potentially dangerous consequences of superintelligent AI systems. Yet they have already invested their money in projects they consider important in this context: Musk, like many billionaires, supports OpenAI, a non-profit organization dedicated to developing artificial intelligence that serves and benefits humanity in general.

A recent study conducted by researchers from Oxford University in Britain and Yale University in the United States revealed a 50% chance that artificial intelligence will outperform human intelligence in all areas within 45 years, and that it is expected to be able to take over all human jobs within 120 years. The study's results do not rule out this happening even earlier.

According to the study, machines will outperform humans at translating languages by 2024, writing academic articles by 2026, driving trucks by 2027, working in retail by 2031, writing a bestselling book by 2049, and performing surgery by 2053.

The study also stressed that artificial intelligence is rapidly improving its capabilities and increasingly proving itself in areas historically controlled by humans; for example, the Google-owned AlphaGo programme recently defeated the world's top player in the ancient Chinese game of Go. In the same vein, the study also expects self-driving technology to replace millions of taxi drivers.

A few days ago, the United Nations High Commissioner for Human Rights, Michelle Bachelet, stressed the urgent need to halt the sale and use of artificial intelligence systems that pose a grave threat to human rights until appropriate safeguards are in place. She also called for banning artificial intelligence applications that cannot be used in line with international human rights law. So the reality is more dangerous than we think, and perhaps the disaster is closer than we think.

See the rest here:
Opinion| The United States and the assassination of Iranian nuclear scientist - Daily News Egypt

Lessons learned from Day 3 of the TFT Reckoning World Championship – Upcomer

Day 3 wrapped up Group Stage play at the TeamFight Tactics Reckoning World Championship with the conclusion of Group B. Many heavy hitters were present in Group B, but the major stories came from the underdog players and regions. With the finals now set, here are the lessons learned from Day 3.

Your 8 TFT Reckoning Championship Finalists!

@shircaneTFT @DeliciousMilkGG @eschatv Zixingche @sealcune @nukotod qituX Huanmie pic.twitter.com/PtSYCm9jy1

Teamfight Tactics (@TFT) October 3, 2021

China entered the TFT Reckoning World Championship with something to prove. After receiving the most Worlds spots last season, they failed to bring any of their six players into the final. This year they received fewer invites but were still tied for the most with Europe. All eyes were on China once again as many did not think they deserved to bring four players to Worlds this season. After Day 3, the critics have been silenced.

After Zixingche became the first Chinese representative to qualify for the finals since Juanzi and Alphago did it back in TFT: Galaxies two seasons ago, China came into Day 3 to show it had more than one player capable of winning the championship.

But after a thrilling five-game series, the two Chinese players in Group B managed to grab the final qualifier spots. China's first seed, qituX, qualified for the finals fairly easily. He managed to hit a three-star Karma to win the first game and followed it up by winning Game 3 with a Vel'Koz carry, defeating a three-star Lucian in the final round. Even though he only managed a top-four finish in two of the five games, those wins qualified him in third place.

Huanmie had the reverse results of qituX. In the two games qituX won, Huanmie placed bottom four. But in the other three games, Huanmie placed in the top four, including a first-place finish in Game 5, which secured the fourth and final spot. China now has three players in the top eight, the most of any region.

The three most talked-about regions at the World Championship were South Korea, Europe, and North America. All three were among the favorites to win Worlds. Korea especially came into Worlds trying to defend its world title. After Day 3, a new region will be crowned World champion.

After a poor performance by Ddudu in Group B, Korea will not have a player in the top-eight finals. But Korea isn't the only major region to disappoint. Scipaeus, the EU rep in Group B, had a terrible Day 3, managing to grab a total of only 19 points. Heading into Game 5, he was already mathematically eliminated from top-four contention.

The hope for NA rested on Robinsongz. After a poor start in the first two games, Robinsongz came roaring back with back-to-back top-three finishes in the third and fourth games. But though Robinsongz was in control of his own destiny, the game had other plans. After unfortunate matchmaking and low rolls, Robinsongz bowed out in sixth place, missing out on the top four.

These three regions held a combined 10 of the 20 spots available at Worlds. Combined, they have only two spots in the top-eight finals, with EU and NA claiming one each.

The lesser-known regions have been a major reason why all the regions outside of China are underperforming. After Escha became the first player at the TFT Reckoning World Championship to qualify for the finals, Nukomaru followed shortly after, marking the first time both of the super-minor regions have made the top-eight finals. Japan and Oceania were the only two regions to send a single player to Worlds; now both of them have qualified for the finals. Nukomaru and Escha even had to play in the Play-In Stage, where they finished first and second, showing that they belonged.

But Nukomaru wasn't the only wildcard-region player to impress on Day 3. Latin America's SMbappe put on a show during Group B. After garnering fame for his first-or-eighth playstyle in the Play-In Stage, SMbappe didn't play that drastically in Group B. Instead, he managed to play better than anyone else. In a tightly contested lobby, with a single point separating first and fourth, SMbappe came out on top, giving LATAM its first-ever competitor in the top-eight finals at TFT Worlds.

With SMbappe's qualification, all four players who qualified for the main event through the Play-In Stage are now in the top-eight finals. This may be the first season where the TFT championship is brought back to a wildcard region.

Continue reading here:
Lessons learned from Day 3 of the TFT Reckoning World Championship - Upcomer

Top AI Cities to Know Across the Globe in Race Towards Advancement – Analytics Insight

The capabilities of artificial intelligence are transforming industries, strengthening data, personnel, and financial power and pushing the technological revolution to new heights. As data has become today's wealth, the potential of AI cannot be ignored under any circumstances. Let's look at the top AI cities in the world that are doing pretty well in the race towards advancement.

It is no surprise to see China's capital city, Beijing, topping the list of AI cities in the world. In 2017, China set itself the goal of becoming the world leader in AI by 2030, itself a big step towards taking artificial intelligence to the next level. The country is also aiming to grow its AI industry to more than US$150 billion over the next decade. Beijing hosts Google's first AI research lab in China and leading institutions such as Tsinghua and Peking Universities, and with more than 1,000 AI companies, the capital city has become important for AI development.

Austin has so many tech companies that it is called Silicon Hills, and it is one of the AI cities in the world thriving in the current artificial intelligence space. Austin is home to several big tech companies such as SparkCognition, Hypergiant, and many more. Giants like Apple and Facebook have also been playing a major part in contributing to the growth: Apple has confirmed plans for a US$1 billion campus there, and Facebook recently announced Austin as its third-largest hub in the US.

When we talk about Silicon Valley, AI comes into the picture too. San Francisco is also referred to as the center of innovation. Even though it has a small geographical area, it accommodates more than 2,000 companies in approximately 50 square miles. Well-known universities such as Stanford and UC Berkeley also lend their support to the city's artificial intelligence development, making San Francisco one of the top AI cities in the world.

Earlier in 2019, the UK government outlined its AI Sector Deal, highlighting plans to lead the AI revolution. The deal aims to raise total research and development investment to 2.4% of GDP by 2027. London, the capital city, is home to many artificial intelligence companies, making it one of the top AI cities in the world; these include AlphaGo creator DeepMind, Mindtrace, Kwiziq, Cleo, SwiftKey, and Babylon Health. London is also soon to be home to an Alphabet campus accommodating about 7,000 staff. The London Datastore has over 800 datasets, used by over 50,000 researchers and companies per month.

New York has about 7,000 high-tech companies, and the city's diverse economy and proximity to the European market are attracting talent to the Big Apple. It is one of the AI cities in the world making space for tech companies such as Apple, Facebook, and Amazon.

Toronto is one of the AI cities in the world, with many tech companies such as Nvidia, Thomson Reuters, Samsung, General Motors, and Amazon working in cloud computing and engineering. Google, Accenture, and Nvidia have also partnered to back the Vector Institute, which is completely dedicated to the development of AI.

AI Singapore is a national programme spanning industries to create an artificial intelligence ecosystem for the nation. The National University of Singapore has also been driving the change through research, innovation, and technology. The country's government was also among the first to introduce a framework for addressing the ethical dilemmas of AI. Singapore is one of the top AI cities in the world.

Read more here:
Top AI Cities to Know Across the Globe in Race Towards Advancement - Analytics Insight